10.2 CSV & Text files
The two workhorse functions for reading text files (a.k.a. flat files) are read_csv() and read_table(). They both use the same parsing code to intelligently convert tabular data into a DataFrame object. See the cookbook for some advanced strategies.
10.2.1 Parsing options
read_csv() and read_table() accept the following arguments:
10.2.1.1 Basic
- filepath_or_buffer : various
  Either a path to a file (a str, pathlib.Path, or py._path.local.LocalPath), a URL (including http, ftp, and S3 locations), or any object with a read() method (such as an open file or StringIO).
- sep : str, defaults to ',' for read_csv(), '\t' for read_table()
  Delimiter to use. If sep is None, the parser will try to automatically determine it. Separators longer than 1 character and different from '\s+' will be interpreted as regular expressions, will force use of the Python parsing engine, and will ignore quotes in the data. Regex example: '\\r\\t'.
- delimiter : str, default None
  Alternative argument name for sep.
- delim_whitespace : boolean, default False
  Specifies whether or not whitespace (e.g. ' ' or '\t') will be used as the delimiter. Equivalent to setting sep='\s+'. If this option is set to True, nothing should be passed in for the delimiter parameter.
  New in version 0.18.1: support for the Python parser.
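As a quick illustration of the separator options above, here is a minimal sketch (with made-up data): sep=r'\s+' and delim_whitespace=True are equivalent ways to read whitespace-delimited data, while other multi-character separators are treated as regular expressions and force the Python engine.

import pandas as pd
from io import StringIO

data = 'a  b  c\n1  2  3\n4  5  6'

# Two equivalent ways to read whitespace-delimited data.
df1 = pd.read_csv(StringIO(data), sep=r'\s+')
df2 = pd.read_csv(StringIO(data), delim_whitespace=True)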
10.2.1.2 Column and Index Locations and Names
- header : int or list of ints, default 'infer'
  Row number(s) to use as the column names, and the start of the data. Default behavior is as if header=0 if no names are passed, otherwise as if header=None. Explicitly pass header=0 to be able to replace existing names. The header can be a list of ints that specify row locations for a multi-index on the columns, e.g. [0,1,3]. Intervening rows that are not specified will be skipped (e.g. 2 in this example is skipped). Note that this parameter ignores commented lines and empty lines if skip_blank_lines=True, so header=0 denotes the first line of data rather than the first line of the file.
- names : array-like, default None
  List of column names to use. If the file contains no header row, then you should explicitly pass header=None. Duplicates in this list are not allowed unless mangle_dupe_cols=True, which is the default.
- index_col : int or sequence or False, default None
  Column to use as the row labels of the DataFrame. If a sequence is given, a MultiIndex is used. If you have a malformed file with delimiters at the end of each line, you might consider index_col=False to force pandas to not use the first column as the index (row names).
- usecols : array-like, default None
  Return a subset of the columns. All elements in this array must either be positional (i.e. integer indices into the document columns) or strings that correspond to column names provided either by the user in names or inferred from the document header row(s). For example, a valid usecols parameter would be [0, 1, 2] or ['foo', 'bar', 'baz']. Using this parameter results in much faster parsing time and lower memory usage.
- as_recarray : boolean, default False
  DEPRECATED: this argument will be removed in a future version. Please call pd.read_csv(...).to_records() instead.
  Return a NumPy recarray instead of a DataFrame after parsing the data. If set to True, this option takes precedence over the squeeze parameter. In addition, as row indices are not available in such a format, the index_col parameter will be ignored.
- squeeze : boolean, default False
  If the parsed data only contains one column then return a Series.
- prefix : str, default None
  Prefix to add to column numbers when no header, e.g. 'X' for X0, X1, ...
- mangle_dupe_cols : boolean, default True
  Duplicate columns will be specified as 'X', 'X.1', ..., 'X.N', rather than 'X', ..., 'X'. Passing in False will cause data to be overwritten if there are duplicate names in the columns.
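For example, a minimal sketch of the prefix option described above, with made-up data: when the file has no header row, prefix='X' turns the generated column names 0, 1, 2 into X0, X1, X2.

import pandas as pd
from io import StringIO

data = '1,2,3\n4,5,6'

# No header row in the data: generate column names X0, X1, X2.
df = pd.read_csv(StringIO(data), header=None, prefix='X')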
10.2.1.3 General Parsing Configuration
- dtype : Type name or dict of column -> type, default None
  Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32} (unsupported with engine='python'). Use str or object to preserve and not interpret dtype.
- engine : {'c', 'python'}
  Parser engine to use. The C engine is faster while the Python engine is currently more feature-complete.
- converters : dict, default None
  Dict of functions for converting values in certain columns. Keys can either be integers or column labels.
- true_values : list, default None
  Values to consider as True.
- false_values : list, default None
  Values to consider as False.
- skipinitialspace : boolean, default False
  Skip spaces after delimiter.
- skiprows : list-like or integer, default None
  Line numbers to skip (0-indexed) or number of lines to skip (int) at the start of the file.
- skipfooter : int, default 0
  Number of lines at bottom of file to skip (unsupported with engine='c').
- skip_footer : int, default 0
  DEPRECATED: use the skipfooter parameter instead, as they are identical.
- nrows : int, default None
  Number of rows of file to read. Useful for reading pieces of large files.
- low_memory : boolean, default True
  Internally process the file in chunks, resulting in lower memory use while parsing, but possibly mixed type inference. To ensure no mixed types either set False, or specify the type with the dtype parameter. Note that the entire file is read into a single DataFrame regardless; use the chunksize or iterator parameter to return the data in chunks. (Only valid with C parser)
- buffer_lines : int, default None
  DEPRECATED: this argument will be removed in a future version because its value is not respected by the parser.
- compact_ints : boolean, default False
  DEPRECATED: this argument will be removed in a future version.
  If compact_ints is True, then for any column that is of integer dtype, the parser will attempt to cast it as the smallest integer dtype possible, either signed or unsigned depending on the specification from the use_unsigned parameter.
- use_unsigned : boolean, default False
  DEPRECATED: this argument will be removed in a future version.
  If integer columns are being compacted (i.e. compact_ints=True), specify whether the column should be compacted to the smallest signed or unsigned integer dtype.
- memory_map : boolean, default False
  If a filepath is provided for filepath_or_buffer, map the file object directly onto memory and access the data directly from there. Using this option can improve performance because there is no longer any I/O overhead.
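For example, skiprows and nrows can be combined to read just a slice of a file; a minimal sketch with made-up data:

import pandas as pd
from io import StringIO

data = 'generated by some tool\na,b,c\n1,2,3\n4,5,6\n7,8,9'

# Skip the one-line preamble, then read only the first two data rows.
df = pd.read_csv(StringIO(data), skiprows=1, nrows=2)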
10.2.1.4 NA and Missing Data Handling
- na_values : scalar, str, list-like, or dict, default None
  Additional strings to recognize as NA/NaN. If a dict is passed, specific per-column NA values. By default the following values are interpreted as NaN: '-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A', 'N/A', 'NA', '#NA', 'NULL', 'NaN', '-NaN', 'nan', '-nan', ''.
- keep_default_na : boolean, default True
  If na_values are specified and keep_default_na is False the default NaN values are overridden, otherwise they're appended to.
- na_filter : boolean, default True
  Detect missing value markers (empty strings and the value of na_values). In data without any NAs, passing na_filter=False can improve the performance of reading a large file.
- verbose : boolean, default False
  Indicate number of NA values placed in non-numeric columns.
- skip_blank_lines : boolean, default True
  If True, skip over blank lines rather than interpreting as NaN values.
10.2.1.5 Datetime Handling
- parse_dates : boolean or list of ints or names or list of lists or dict, default False
  - If True -> try parsing the index.
  - If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column.
  - If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column.
  - If {'foo' : [1, 3]} -> parse columns 1, 3 as a date and call the result 'foo'.
  A fast-path exists for iso8601-formatted dates.
- infer_datetime_format : boolean, default False
  If True and parse_dates is enabled for a column, attempt to infer the datetime format to speed up the processing.
- keep_date_col : boolean, default False
  If True and parse_dates specifies combining multiple columns then keep the original columns.
- date_parser : function, default None
  Function to use for converting a sequence of string columns to an array of datetime instances. The default uses dateutil.parser.parser to do the conversion. Pandas will try to call date_parser in three different ways, advancing to the next if an exception occurs: 1) pass one or more arrays (as defined by parse_dates) as arguments; 2) concatenate (row-wise) the string values from the columns defined by parse_dates into a single array and pass that; and 3) call date_parser once for each row using one or more strings (corresponding to the columns defined by parse_dates) as arguments.
- dayfirst : boolean, default False
  DD/MM format dates, international and European format.
10.2.1.6 Iteration
- iterator : boolean, default False
  Return TextFileReader object for iteration or getting chunks with get_chunk().
- chunksize : int, default None
  Return TextFileReader object for iteration. See iterating and chunking below.
10.2.1.7 Quoting, Compression, and File Format
- compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'
  For on-the-fly decompression of on-disk data. If 'infer', then use gzip, bz2, zip, or xz if filepath_or_buffer is a string ending in '.gz', '.bz2', '.zip', or '.xz', respectively, and no decompression otherwise. If using 'zip', the ZIP file must contain only one data file to be read in. Set to None for no decompression.
  New in version 0.18.1: support for 'zip' and 'xz' compression.
- thousands : str, default None
  Thousands separator.
- decimal : str, default '.'
  Character to recognize as decimal point. E.g. use ',' for European data.
- float_precision : string, default None
  Specifies which converter the C engine should use for floating-point values. The options are None for the ordinary converter, high for the high-precision converter, and round_trip for the round-trip converter.
- lineterminator : str (length 1), default None
  Character to break file into lines. Only valid with C parser.
- quotechar : str (length 1)
  The character used to denote the start and end of a quoted item. Quoted items can include the delimiter and it will be ignored.
- quoting : int or csv.QUOTE_* instance, default 0
  Control field quoting behavior per csv.QUOTE_* constants. Use one of QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
- doublequote : boolean, default True
  When quotechar is specified and quoting is not QUOTE_NONE, indicate whether or not to interpret two consecutive quotechar elements inside a field as a single quotechar element.
- escapechar : str (length 1), default None
  One-character string used to escape the delimiter when quoting is QUOTE_NONE.
- comment : str, default None
  Indicates that the remainder of the line should not be parsed. If found at the beginning of a line, the line will be ignored altogether. This parameter must be a single character. Like empty lines (as long as skip_blank_lines=True), fully commented lines are ignored by the parameter header but not by skiprows. For example, if comment='#', parsing '#empty\na,b,c\n1,2,3' with header=0 will result in 'a,b,c' being treated as the header.
- encoding : str, default None
  Encoding to use for UTF when reading/writing (e.g. 'utf-8'). See the list of Python standard encodings.
- dialect : str or csv.Dialect instance, default None
  If None, defaults to the Excel dialect. Ignored if sep is longer than 1 character. See the csv.Dialect documentation for more details.
- tupleize_cols : boolean, default False
  Leave a list of tuples on columns as is (default is to convert to a MultiIndex on the columns).
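Compressed files need nothing beyond the compression keyword, or simply a recognizable file extension. A minimal sketch, using a hypothetical file name tmp.csv.gz:

import gzip

import pandas as pd

# Write a small gzip-compressed CSV file.
with gzip.open('tmp.csv.gz', 'wt') as f:
    f.write('a,b,c\n1,2,3\n4,5,6')

# The '.gz' extension lets compression='infer' (the default) decompress it.
df = pd.read_csv('tmp.csv.gz')
# Or be explicit:
df = pd.read_csv('tmp.csv.gz', compression='gzip')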
10.2.1.8 Error Handling
- error_bad_lines : boolean, default True
  Lines with too many fields (e.g. a csv line with too many commas) will by default cause an exception to be raised, and no DataFrame will be returned. If False, then these "bad lines" will be dropped from the DataFrame that is returned (only valid with C parser). See bad lines below.
- warn_bad_lines : boolean, default True
  If error_bad_lines is False, and warn_bad_lines is True, a warning for each "bad line" will be output (only valid with C parser).
Consider a typical CSV file containing, in this case, some time series data:
In [1]: print(open('foo.csv').read())
date,A,B,C
20090101,a,1,2
20090102,b,3,4
20090103,c,4,5
The default for read_csv is to create a DataFrame with simple numbered rows:
In [2]: pd.read_csv('foo.csv')
Out[2]:
date A B C
0 20090101 a 1 2
1 20090102 b 3 4
2 20090103 c 4 5
In the case of indexed data, you can pass the column number or column name you wish to use as the index:
In [3]: pd.read_csv('foo.csv', index_col=0)
Out[3]:
A B C
date
20090101 a 1 2
20090102 b 3 4
20090103 c 4 5
In [4]: pd.read_csv('foo.csv', index_col='date')
Out[4]:
A B C
date
20090101 a 1 2
20090102 b 3 4
20090103 c 4 5
You can also use a list of columns to create a hierarchical index:
In [5]: pd.read_csv('foo.csv', index_col=[0, 'A'])
Out[5]:
B C
date A
20090101 a 1 2
20090102 b 3 4
20090103 c 4 5
The dialect keyword gives greater flexibility in specifying the file format. By default it uses the Excel dialect, but you can specify either the dialect name or a csv.Dialect instance.
Suppose you had data with unenclosed quotes:
In [6]: print(data)
label1,label2,label3
index1,"a,c,e
index2,b,d,f
By default, read_csv
uses the Excel dialect and treats the double quote as
the quote character, which causes it to fail when it finds a newline before it
finds the closing double quote.
We can get around this by using the dialect keyword:
In [7]: dia = csv.excel()
In [8]: dia.quoting = csv.QUOTE_NONE
In [9]: pd.read_csv(StringIO(data), dialect=dia)
Out[9]:
label1 label2 label3
index1 "a c e
index2 b d f
All of the dialect options can be specified separately by keyword arguments:
In [10]: data = 'a,b,c~1,2,3~4,5,6'
In [11]: pd.read_csv(StringIO(data), lineterminator='~')
Out[11]:
a b c
0 1 2 3
1 4 5 6
Another common dialect option is skipinitialspace, to skip any whitespace after a delimiter:
In [12]: data = 'a, b, c\n1, 2, 3\n4, 5, 6'
In [13]: print(data)
a, b, c
1, 2, 3
4, 5, 6
In [14]: pd.read_csv(StringIO(data), skipinitialspace=True)
Out[14]:
a b c
0 1 2 3
1 4 5 6
The parsers make every attempt to "do the right thing" without being fragile. Type inference is a significant part of this: if a column can be coerced to integer dtype without altering the contents, the parser will do so. Any non-numeric columns will come through as object dtype, as with the rest of pandas objects.
10.2.2 Specifying column data types
Starting with v0.10, you can indicate the data type for the whole DataFrame or individual columns:
In [15]: data = 'a,b,c\n1,2,3\n4,5,6\n7,8,9'
In [16]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [17]: df = pd.read_csv(StringIO(data), dtype=object)
In [18]: df
Out[18]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
In [19]: df['a'][0]
Out[19]: '1'
In [20]: df = pd.read_csv(StringIO(data), dtype={'b': object, 'c': np.float64})
In [21]: df.dtypes
Out[21]:
a int64
b object
c float64
dtype: object
Fortunately, pandas offers more than one way to ensure that your column(s) contain only one dtype. If you're unfamiliar with these concepts, you can see here to learn more about dtypes, and here to learn more about object conversion in pandas.
For instance, you can use the converters argument of read_csv():
In [22]: data = "col_1\n1\n2\n'A'\n4.22"
In [23]: df = pd.read_csv(StringIO(data), converters={'col_1':str})
In [24]: df
Out[24]:
col_1
0 1
1 2
2 'A'
3 4.22
In [25]: df['col_1'].apply(type).value_counts()
Out[25]:
<type 'str'> 4
Name: col_1, dtype: int64
Or you can use the to_numeric() function to coerce the dtypes after reading in the data,
In [26]: df2 = pd.read_csv(StringIO(data))
In [27]: df2['col_1'] = pd.to_numeric(df2['col_1'], errors='coerce')
In [28]: df2
Out[28]:
col_1
0 1.00
1 2.00
2 NaN
3 4.22
In [29]: df2['col_1'].apply(type).value_counts()
Out[29]:
<type 'float'> 4
Name: col_1, dtype: int64
which would convert all valid parsing to floats, leaving the invalid parsing as NaN.
Ultimately, how you deal with reading in columns containing mixed dtypes depends on your specific needs. In the case above, if you wanted to NaN out the data anomalies, then to_numeric() is probably your best option. However, if you wanted all of the data to be coerced, no matter the type, then using the converters argument of read_csv() would certainly be worth trying.
Note
The dtype option is currently only supported by the C engine. Specifying dtype with an engine other than 'c' raises a ValueError.
Note
In some cases, reading in abnormal data with columns containing mixed dtypes will result in an inconsistent dataset. If you rely on pandas to infer the dtypes of your columns, the parsing engine will go and infer the dtypes for different chunks of the data, rather than the whole dataset at once. Consequently, you can end up with column(s) with mixed dtypes. For example,
In [30]: df = pd.DataFrame({'col_1':range(500000) + ['a', 'b'] + range(500000)})
In [31]: df.to_csv('foo')
In [32]: mixed_df = pd.read_csv('foo')
In [33]: mixed_df['col_1'].apply(type).value_counts()
Out[33]:
<type 'int'> 737858
<type 'str'> 262144
Name: col_1, dtype: int64
In [34]: mixed_df['col_1'].dtype
Out[34]: dtype('O')
This results in mixed_df containing an int dtype for certain chunks of the column, and str for others, due to the mixed dtypes in the data that was read in. It is important to note that the overall column will be marked with a dtype of object, which is used for columns with mixed dtypes.
10.2.3 Naming and Using Columns
10.2.3.1 Handling column names
A file may or may not have a header row. pandas assumes the first row should be used as the column names:
In [35]: data = 'a,b,c\n1,2,3\n4,5,6\n7,8,9'
In [36]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [37]: pd.read_csv(StringIO(data))
Out[37]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
By specifying the names
argument in conjunction with header
you can
indicate other names to use and whether or not to throw away the header row (if
any):
In [38]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [39]: pd.read_csv(StringIO(data), names=['foo', 'bar', 'baz'], header=0)
Out[39]:
foo bar baz
0 1 2 3
1 4 5 6
2 7 8 9
In [40]: pd.read_csv(StringIO(data), names=['foo', 'bar', 'baz'], header=None)
Out[40]:
foo bar baz
0 a b c
1 1 2 3
2 4 5 6
3 7 8 9
If the header is in a row other than the first, pass the row number to
header
. This will skip the preceding rows:
In [41]: data = 'skip this skip it\na,b,c\n1,2,3\n4,5,6\n7,8,9'
In [42]: pd.read_csv(StringIO(data), header=1)
Out[42]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
10.2.4 Duplicate names parsing
If the file or header contains duplicate names, pandas by default will deduplicate these names so as to prevent data overwrite:
In [43]: data = 'a,b,a\n0,1,2\n3,4,5'
In [44]: pd.read_csv(StringIO(data))
Out[44]:
a b a.1
0 0 1 2
1 3 4 5
There is no more duplicate data because mangle_dupe_cols=True by default, which modifies a series of duplicate columns 'X', ..., 'X' to become 'X', 'X.1', ..., 'X.N'. If mangle_dupe_cols=False, duplicate data can arise:
In [2]: data = 'a,b,a\n0,1,2\n3,4,5'
In [3]: pd.read_csv(StringIO(data), mangle_dupe_cols=False)
Out[3]:
a b a
0 2 1 2
1 5 4 5
To prevent users from encountering this problem with duplicate data, a ValueError
exception is raised if mangle_dupe_cols != True
:
In [2]: data = 'a,b,a\n0,1,2\n3,4,5'
In [3]: pd.read_csv(StringIO(data), mangle_dupe_cols=False)
...
ValueError: Setting mangle_dupe_cols=False is not supported yet
10.2.4.1 Filtering columns (usecols)
The usecols
argument allows you to select any subset of the columns in a
file, either using the column names or position numbers:
In [45]: data = 'a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz'
In [46]: pd.read_csv(StringIO(data))
Out[46]:
a b c d
0 1 2 3 foo
1 4 5 6 bar
2 7 8 9 baz
In [47]: pd.read_csv(StringIO(data), usecols=['b', 'd'])
Out[47]:
b d
0 2 foo
1 5 bar
2 8 baz
In [48]: pd.read_csv(StringIO(data), usecols=[0, 2, 3])
Out[48]:
a c d
0 1 3 foo
1 4 6 bar
2 7 9 baz
10.2.5 Comments and Empty Lines
10.2.5.1 Ignoring line comments and empty lines
If the comment
parameter is specified, then completely commented lines will
be ignored. By default, completely blank lines will be ignored as well. Both of
these are API changes introduced in version 0.15.
In [49]: data = '\na,b,c\n \n# commented line\n1,2,3\n\n4,5,6'
In [50]: print(data)
a,b,c
# commented line
1,2,3
4,5,6
In [51]: pd.read_csv(StringIO(data), comment='#')
Out[51]:
a b c
0 1 2 3
1 4 5 6
If skip_blank_lines=False
, then read_csv
will not ignore blank lines:
In [52]: data = 'a,b,c\n\n1,2,3\n\n\n4,5,6'
In [53]: pd.read_csv(StringIO(data), skip_blank_lines=False)
Out[53]:
a b c
0 NaN NaN NaN
1 1.0 2.0 3.0
2 NaN NaN NaN
3 NaN NaN NaN
4 4.0 5.0 6.0
Warning
The presence of ignored lines might create ambiguities involving line numbers;
the parameter header
uses row numbers (ignoring commented/empty
lines), while skiprows
uses line numbers (including commented/empty lines):
In [54]: data = '#comment\na,b,c\nA,B,C\n1,2,3'
In [55]: pd.read_csv(StringIO(data), comment='#', header=1)
Out[55]:
A B C
0 1 2 3
In [56]: data = 'A,B,C\n#comment\na,b,c\n1,2,3'
In [57]: pd.read_csv(StringIO(data), comment='#', skiprows=2)
Out[57]:
a b c
0 1 2 3
If both header
and skiprows
are specified, header
will be
relative to the end of skiprows
. For example:
In [58]: data = '# empty\n# second empty line\n# third empty' \
....:         'line\nX,Y,Z\n1,2,3\nA,B,C\n1,2.,4.\n5.,NaN,10.0'
In [59]: print(data)
# empty
# second empty line
# third emptyline
X,Y,Z
1,2,3
A,B,C
1,2.,4.
5.,NaN,10.0
In [60]: pd.read_csv(StringIO(data), comment='#', skiprows=4, header=1)
Out[60]:
A B C
0 1.0 2.0 4.0
1 5.0 NaN 10.0
10.2.5.2 Comments
Sometimes comments or meta data may be included in a file:
In [61]: print(open('tmp.csv').read())
ID,level,category
Patient1,123000,x # really unpleasant
Patient2,23000,y # wouldn't take his medicine
Patient3,1234018,z # awesome
By default, the parser includes the comments in the output:
In [62]: df = pd.read_csv('tmp.csv')
In [63]: df
Out[63]:
ID level category
0 Patient1 123000 x # really unpleasant
1 Patient2 23000 y # wouldn't take his medicine
2 Patient3 1234018 z # awesome
We can suppress the comments using the comment
keyword:
In [64]: df = pd.read_csv('tmp.csv', comment='#')
In [65]: df
Out[65]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
10.2.6 Dealing with Unicode Data
The encoding
argument should be used for encoded unicode data, which will
result in byte strings being decoded to unicode in the result:
In [66]: data = b'word,length\nTr\xc3\xa4umen,7\nGr\xc3\xbc\xc3\x9fe,5'.decode('utf8').encode('latin-1')
In [67]: df = pd.read_csv(BytesIO(data), encoding='latin-1')
In [68]: df
Out[68]:
word length
0 Träumen 7
1 Grüße 5
In [69]: df['word'][1]
Out[69]: u'Gr\xfc\xdfe'
Some formats which encode all characters as multiple bytes, like UTF-16, won't parse correctly at all without specifying the encoding. See the full list of Python standard encodings.
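For example, a minimal sketch of reading UTF-16 data (here the bytes are constructed inline; in practice they would come from a file written by another application):

from io import BytesIO

import pandas as pd

# Encode sample data as UTF-16; without encoding='utf-16' the multi-byte
# data would not parse correctly.
data = 'word,length\nTräumen,7\nGrüße,5'.encode('utf-16')
df = pd.read_csv(BytesIO(data), encoding='utf-16')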
10.2.7 Index columns and trailing delimiters
If a file has one more column of data than the number of column names, the first column will be used as the DataFrame’s row names:
In [70]: data = 'a,b,c\n4,apple,bat,5.7\n8,orange,cow,10'
In [71]: pd.read_csv(StringIO(data))
Out[71]:
a b c
4 apple bat 5.7
8 orange cow 10.0
In [72]: data = 'index,a,b,c\n4,apple,bat,5.7\n8,orange,cow,10'
In [73]: pd.read_csv(StringIO(data), index_col=0)
Out[73]:
a b c
index
4 apple bat 5.7
8 orange cow 10.0
Ordinarily, you can achieve this behavior using the index_col
option.
There are some exception cases when a file has been prepared with delimiters at
the end of each data line, confusing the parser. To explicitly disable the
index column inference and discard the last column, pass index_col=False
:
In [74]: data = 'a,b,c\n4,apple,bat,\n8,orange,cow,'
In [75]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,
In [76]: pd.read_csv(StringIO(data))
Out[76]:
a b c
4 apple bat NaN
8 orange cow NaN
In [77]: pd.read_csv(StringIO(data), index_col=False)
Out[77]:
a b c
0 4 apple bat
1 8 orange cow
10.2.8 Date Handling
10.2.8.1 Specifying Date Columns
To better facilitate working with datetime data, read_csv()
and
read_table()
use the keyword arguments parse_dates
and date_parser
to allow users to specify a variety of columns and date/time formats to turn the
input text data into datetime
objects.
The simplest case is to just pass in parse_dates=True
:
# Use a column as an index, and parse it as dates.
In [78]: df = pd.read_csv('foo.csv', index_col=0, parse_dates=True)
In [79]: df
Out[79]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
# These are python datetime objects
In [80]: df.index
Out[80]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype='datetime64[ns]', name=u'date', freq=None)
It is often the case that we may want to store date and time data separately, or store various date fields separately. The parse_dates keyword can be used to specify a combination of columns to parse the dates and/or times from.
You can specify a list of column lists to parse_dates; the resulting date columns will be prepended to the output (so as to not affect the existing column order) and the new column names will be the concatenation of the component column names:
In [81]: print(open('tmp.csv').read())
KORD,19990127, 19:00:00, 18:56:00, 0.8100
KORD,19990127, 20:00:00, 19:56:00, 0.0100
KORD,19990127, 21:00:00, 20:56:00, -0.5900
KORD,19990127, 21:00:00, 21:18:00, -0.9900
KORD,19990127, 22:00:00, 21:56:00, -0.5900
KORD,19990127, 23:00:00, 22:56:00, -0.5900
In [82]: df = pd.read_csv('tmp.csv', header=None, parse_dates=[[1, 2], [1, 3]])
In [83]: df
Out[83]:
1_2 1_3 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
By default the parser removes the component date columns, but you can choose
to retain them via the keep_date_col
keyword:
In [84]: df = pd.read_csv('tmp.csv', header=None, parse_dates=[[1, 2], [1, 3]],
....: keep_date_col=True)
....:
In [85]: df
Out[85]:
1_2 1_3 0 1 2 \
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 19990127 19:00:00
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 19990127 20:00:00
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD 19990127 21:00:00
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD 19990127 21:00:00
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD 19990127 22:00:00
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD 19990127 23:00:00
3 4
0 18:56:00 0.81
1 19:56:00 0.01
2 20:56:00 -0.59
3 21:18:00 -0.99
4 21:56:00 -0.59
5 22:56:00 -0.59
Note that if you wish to combine multiple columns into a single date column, a
nested list must be used. In other words, parse_dates=[1, 2]
indicates that
the second and third columns should each be parsed as separate date columns
while parse_dates=[[1, 2]]
means the two columns should be parsed into a
single column.
You can also use a dict to specify custom names for the combined date columns:
In [86]: date_spec = {'nominal': [1, 2], 'actual': [1, 3]}
In [87]: df = pd.read_csv('tmp.csv', header=None, parse_dates=date_spec)
In [88]: df
Out[88]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
It is important to remember that if multiple text columns are to be parsed into a single date column, then a new column is prepended to the data. The index_col specification is based off of this new set of columns rather than the original data columns:
In [89]: date_spec = {'nominal': [1, 2], 'actual': [1, 3]}
In [90]: df = pd.read_csv('tmp.csv', header=None, parse_dates=date_spec,
....: index_col=0) #index is the nominal column
....:
In [91]: df
Out[91]:
actual 0 4
nominal
1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
Note
read_csv has a fast path for parsing datetime strings in ISO 8601 format, e.g. "2000-01-01T00:01:02+00:00" and similar variations. If you can arrange for your data to store datetimes in this format, load times will be significantly faster; speed-ups of ~20x have been observed.
Note
When passing a dict as the parse_dates argument, the order of the columns prepended is not guaranteed, because dict objects do not impose an ordering on their keys. On Python 2.7+ you may use collections.OrderedDict instead of a regular dict if this matters to you. Because of this, when using a dict for parse_dates in conjunction with the index_col argument, it's best to specify index_col as a column label rather than as an index on the resulting frame.
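A minimal sketch of the workaround suggested in the note above, reusing the tmp.csv file from this section:

import collections

import pandas as pd

# An OrderedDict keeps the prepended date columns in a predictable order,
# and index_col refers to the new column by label rather than by position.
date_spec = collections.OrderedDict([('nominal', [1, 2]), ('actual', [1, 3])])
df = pd.read_csv('tmp.csv', header=None, parse_dates=date_spec,
                 index_col='nominal')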
10.2.8.2 Date Parsing Functions
Finally, the parser allows you to specify a custom date_parser
function to
take full advantage of the flexibility of the date parsing API:
In [92]: import pandas.io.date_converters as conv
In [93]: df = pd.read_csv('tmp.csv', header=None, parse_dates=date_spec,
....: date_parser=conv.parse_date_time)
....:
In [94]: df
Out[94]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
Pandas will try to call the date_parser function in three different ways. If an exception is raised, the next one is tried:

- date_parser is first called with one or more arrays as arguments, as defined using parse_dates (e.g., date_parser(['2013', '2013'], ['1', '2']))
- If #1 fails, date_parser is called with all the columns concatenated row-wise into a single array (e.g., date_parser(['2013 1', '2013 2']))
- If #2 fails, date_parser is called once for every row with one or more string arguments from the columns indicated with parse_dates (e.g., date_parser('2013', '1') for the first row, date_parser('2013', '2') for the second, etc.)
Note that performance-wise, you should try these methods of parsing dates in order:

- Try to infer the format using infer_datetime_format=True (see section below)
- If you know the format, use pd.to_datetime(): date_parser=lambda x: pd.to_datetime(x, format=...) (see the sketch below)
- If you have a really non-standard format, use a custom date_parser function. For optimal performance, this should be vectorized, i.e., it should accept arrays as arguments.
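For instance, here is a sketch of the second option applied to the foo.csv file used earlier in this section; the format string '%Y%m%d' is an assumption based on its YYYYMMDD dates:

import pandas as pd

# Parse the index column with an explicit format instead of dateutil.
df = pd.read_csv('foo.csv', index_col=0, parse_dates=True,
                 date_parser=lambda x: pd.to_datetime(x, format='%Y%m%d'))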
You can explore the date parsing functionality in date_converters.py
and
add your own. We would love to turn this module into a community supported set
of date/time parsers. To get you started, date_converters.py
contains
functions to parse dual date and time columns, year/month/day columns,
and year/month/day/hour/minute/second columns. It also contains a
generic_parser
function so you can curry it with a function that deals with
a single date rather than the entire array.
10.2.8.3 Inferring Datetime Format
If you have parse_dates
enabled for some or all of your columns, and your
datetime strings are all formatted the same way, you may get a large speed
up by setting infer_datetime_format=True
. If set, pandas will attempt
to guess the format of your datetime strings, and then use a faster means
of parsing the strings; parsing speed-ups of 5-10x have been observed. pandas will fall back to the usual parsing if either the format cannot be guessed or the format that was guessed cannot properly parse the entire column of strings. So in general, infer_datetime_format should not have any negative consequences if enabled.
Here are some examples of datetime strings that can be guessed (all representing December 30th, 2011 at 00:00:00):
- “20111230”
- “2011/12/30”
- “20111230 00:00:00”
- “12/30/2011 00:00:00”
- “30/Dec/2011 00:00:00”
- “30/December/2011 00:00:00”
infer_datetime_format
is sensitive to dayfirst
. With
dayfirst=True
, it will guess “01/12/2011” to be December 1st. With
dayfirst=False
(default) it will guess “01/12/2011” to be January 12th.
# Try to infer the format for the index column
In [95]: df = pd.read_csv('foo.csv', index_col=0, parse_dates=True,
....: infer_datetime_format=True)
....:
In [96]: df
Out[96]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
10.2.8.4 International Date Formats
While US date formats tend to be MM/DD/YYYY, many international formats use
DD/MM/YYYY instead. For convenience, a dayfirst
keyword is provided:
In [97]: print(open('tmp.csv').read())
date,value,cat
1/6/2000,5,a
2/6/2000,10,b
3/6/2000,15,c
In [98]: pd.read_csv('tmp.csv', parse_dates=[0])
Out[98]:
date value cat
0 2000-01-06 5 a
1 2000-02-06 10 b
2 2000-03-06 15 c
In [99]: pd.read_csv('tmp.csv', dayfirst=True, parse_dates=[0])
Out[99]:
date value cat
0 2000-06-01 5 a
1 2000-06-02 10 b
2 2000-06-03 15 c
10.2.9 Specifying method for floating-point conversion
The parameter float_precision
can be specified in order to use
a specific floating-point converter during parsing with the C engine.
The options are the ordinary converter, the high-precision converter, and
the round-trip converter (which is guaranteed to round-trip values after
writing to a file). For example:
In [100]: val = '0.3066101993807095471566981359501369297504425048828125'
In [101]: data = 'a,b,c\n1,2,{0}'.format(val)
In [102]: abs(pd.read_csv(StringIO(data), engine='c', float_precision=None)['c'][0] - float(val))
Out[102]: 1.1102230246251565e-16
In [103]: abs(pd.read_csv(StringIO(data), engine='c', float_precision='high')['c'][0] - float(val))
Out[103]: 5.5511151231257827e-17
In [104]: abs(pd.read_csv(StringIO(data), engine='c', float_precision='round_trip')['c'][0] - float(val))
Out[104]: 0.0
10.2.10 Thousand Separators
For large numbers that have been written with a thousands separator, you can set the thousands keyword to a string of length 1 so that integers will be parsed correctly.
By default, numbers with a thousands separator will be parsed as strings:
In [105]: print(open('tmp.csv').read())
ID|level|category
Patient1|123,000|x
Patient2|23,000|y
Patient3|1,234,018|z
In [106]: df = pd.read_csv('tmp.csv', sep='|')
In [107]: df
Out[107]:
ID level category
0 Patient1 123,000 x
1 Patient2 23,000 y
2 Patient3 1,234,018 z
In [108]: df.level.dtype
Out[108]: dtype('O')
The thousands keyword allows integers to be parsed correctly:
In [109]: print(open('tmp.csv').read())
ID|level|category
Patient1|123,000|x
Patient2|23,000|y
Patient3|1,234,018|z
In [110]: df = pd.read_csv('tmp.csv', sep='|', thousands=',')
In [111]: df
Out[111]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
In [112]: df.level.dtype
Out[112]: dtype('int64')
10.2.11 NA Values
To control which values are parsed as missing values (which are signified by NaN), specify a string in na_values. If you specify a list of strings, then all values in it are considered to be missing values. If you specify a number (a float, like 5.0, or an integer, like 5), the corresponding equivalent values will also imply a missing value (in this case effectively [5.0, 5] are recognized as NaN).
To completely override the default values that are recognized as missing, specify keep_default_na=False. The default NaN recognized values are ['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A', 'N/A', 'NA', '#NA', 'NULL', 'NaN', '-NaN', 'nan', '-nan']. Although a 0-length string '' is not included in the default NaN values list, it is still treated as a missing value.
read_csv(path, na_values=[5])
  The default values, in addition to 5 and 5.0 when interpreted as numbers, are recognized as NaN.
read_csv(path, keep_default_na=False, na_values=[""])
  Only an empty field will be NaN.
read_csv(path, keep_default_na=False, na_values=["NA", "0"])
  Only NA and 0 as strings are NaN.
read_csv(path, na_values=["Nope"])
  The default values, in addition to the string "Nope", are recognized as NaN.
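A runnable sketch of the last case, with made-up data:

import pandas as pd
from io import StringIO

data = 'a,b,c\n0,1,2\nNA,5,Nope'

# 'NA' is a default missing marker; 'Nope' only becomes missing once it
# is added to na_values.
pd.read_csv(StringIO(data))
pd.read_csv(StringIO(data), na_values=['Nope'])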
10.2.12 Infinity
inf-like values will be parsed as np.inf (positive infinity), and -inf as -np.inf (negative infinity). These will ignore the case of the value, meaning Inf will also be parsed as np.inf.
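For example, a minimal sketch with made-up data:

import pandas as pd
from io import StringIO

data = 'a\ninf\n-inf\nInf'

# All three values are parsed as positive/negative np.inf and the column
# comes back as float64.
df = pd.read_csv(StringIO(data))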
10.2.13 Returning Series
Using the squeeze
keyword, the parser will return output with a single column
as a Series
:
In [113]: print(open('tmp.csv').read())
level
Patient1,123000
Patient2,23000
Patient3,1234018
In [114]: output = pd.read_csv('tmp.csv', squeeze=True)
In [115]: output
Out[115]:
Patient1 123000
Patient2 23000
Patient3 1234018
Name: level, dtype: int64
In [116]: type(output)
Out[116]: pandas.core.series.Series
10.2.14 Boolean values
The common values True, False, TRUE, and FALSE are all recognized as boolean. Sometimes you would want to recognize some other values as being boolean. To do this, use the true_values and false_values options:
In [117]: data= 'a,b,c\n1,Yes,2\n3,No,4'
In [118]: print(data)
a,b,c
1,Yes,2
3,No,4
In [119]: pd.read_csv(StringIO(data))
Out[119]:
a b c
0 1 Yes 2
1 3 No 4
In [120]: pd.read_csv(StringIO(data), true_values=['Yes'], false_values=['No'])
Out[120]:
a b c
0 1 True 2
1 3 False 4
10.2.15 Handling “bad” lines
Some files may have malformed lines with too few fields or too many. Lines with too few fields will have NA values filled in the trailing fields. Lines with too many will cause an error by default:
In [27]: data = 'a,b,c\n1,2,3\n4,5,6,7\n8,9,10'
In [28]: pd.read_csv(StringIO(data))
---------------------------------------------------------------------------
CParserError Traceback (most recent call last)
CParserError: Error tokenizing data. C error: Expected 3 fields in line 3, saw 4
You can elect to skip bad lines:
In [29]: pd.read_csv(StringIO(data), error_bad_lines=False)
Skipping line 3: expected 3 fields, saw 4
Out[29]:
a b c
0 1 2 3
1 8 9 10
10.2.16 Quoting and Escape Characters
Quotes (and other escape characters) in embedded fields can be handled in any
number of ways. One way is to use backslashes; to properly parse this data, you
should pass the escapechar
option:
In [121]: data = 'a,b\n"hello, \\"Bob\\", nice to see you",5'
In [122]: print(data)
a,b
"hello, \"Bob\", nice to see you",5
In [123]: pd.read_csv(StringIO(data), escapechar='\\')
Out[123]:
a b
0 hello, "Bob", nice to see you 5
10.2.17 Files with Fixed Width Columns
While read_csv reads delimited data, the read_fwf() function works with data files that have known and fixed column widths. The function parameters to read_fwf are largely the same as read_csv with two extra parameters:

- colspecs : A list of pairs (tuples) giving the extents of the fixed-width fields of each line as half-open intervals (i.e., [from, to[ ). String value 'infer' can be used to instruct the parser to try detecting the column specifications from the first 100 rows of the data. Default behaviour, if not specified, is to infer.
- widths : A list of field widths which can be used instead of 'colspecs' if the intervals are contiguous.
Consider a typical fixed-width data file:
In [124]: print(open('bar.csv').read())
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
In order to parse this file into a DataFrame, we simply need to supply the column specifications to the read_fwf function along with the file name:
# Column specifications are a list of half-open intervals
In [125]: colspecs = [(0, 6), (8, 20), (21, 33), (34, 43)]
In [126]: df = pd.read_fwf('bar.csv', colspecs=colspecs, header=None, index_col=0)
In [127]: df
Out[127]:
1 2 3
0
id8141 360.2429 149.9102 11950.7
id1594 444.9536 166.9857 11788.4
id1849 364.1368 183.6288 11806.2
id1230 413.8361 184.3757 11916.8
id1948 502.9540 173.2372 12468.3
Note how the parser automatically picks default integer column names (0, 1, ...) when the header=None argument is specified. Alternatively, you can supply just the column widths for contiguous columns:
#Widths are a list of integers
In [128]: widths = [6, 14, 13, 10]
In [129]: df = pd.read_fwf('bar.csv', widths=widths, header=None)
In [130]: df
Out[130]:
0 1 2 3
0 id8141 360.2429 149.9102 11950.7
1 id1594 444.9536 166.9857 11788.4
2 id1849 364.1368 183.6288 11806.2
3 id1230 413.8361 184.3757 11916.8
4 id1948 502.9540 173.2372 12468.3
The parser will take care of extra white spaces around the columns so it’s ok to have extra separation between the columns in the file.
New in version 0.13.0.
By default, read_fwf
will try to infer the file’s colspecs
by using the
first 100 rows of the file. It can do it only in cases when the columns are
aligned and correctly separated by the provided delimiter
(default delimiter
is whitespace).
In [131]: df = pd.read_fwf('bar.csv', header=None, index_col=0)
In [132]: df
Out[132]:
1 2 3
0
id8141 360.2429 149.9102 11950.7
id1594 444.9536 166.9857 11788.4
id1849 364.1368 183.6288 11806.2
id1230 413.8361 184.3757 11916.8
id1948 502.9540 173.2372 12468.3
10.2.18 Indexes
10.2.18.1 Files with an “implicit” index column
Consider a file with one less entry in the header than the number of data columns:
In [133]: print(open('foo.csv').read())
A,B,C
20090101,a,1,2
20090102,b,3,4
20090103,c,4,5
In this special case, read_csv
assumes that the first column is to be used
as the index of the DataFrame:
In [134]: pd.read_csv('foo.csv')
Out[134]:
A B C
20090101 a 1 2
20090102 b 3 4
20090103 c 4 5
Note that the dates weren’t automatically parsed. In that case you would need to do as before:
In [135]: df = pd.read_csv('foo.csv', parse_dates=True)
In [136]: df.index
Out[136]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype='datetime64[ns]', freq=None)
10.2.18.2 Reading an index with a MultiIndex
Suppose you have data indexed by two columns:
In [137]: print(open("./source/data/mindex_ex.csv").read())
year,indiv,zit,xit
1977,"A",1.2,.6
1977,"B",1.5,.5
1977,"C",1.7,.8
1978,"A",.2,.06
1978,"B",.7,.2
1978,"C",.8,.3
1978,"D",.9,.5
1978,"E",1.4,.9
1979,"C",.2,.15
1979,"D",.14,.05
1979,"E",.5,.15
1979,"F",1.2,.5
1979,"G",3.4,1.9
1979,"H",5.4,2.7
1979,"I",6.4,1.2
The index_col
argument to read_csv
and read_table
can take a list of
column numbers to turn multiple columns into a MultiIndex
for the index of the
returned object:
In [138]: df = pd.read_csv("./source/data/mindex_ex.csv", index_col=[0,1])
In [139]: df
Out[139]:
zit xit
year indiv
1977 A 1.20 0.60
B 1.50 0.50
C 1.70 0.80
1978 A 0.20 0.06
B 0.70 0.20
C 0.80 0.30
D 0.90 0.50
E 1.40 0.90
1979 C 0.20 0.15
D 0.14 0.05
E 0.50 0.15
F 1.20 0.50
G 3.40 1.90
H 5.40 2.70
I 6.40 1.20
In [140]: df.ix[1978]
Out[140]:
zit xit
indiv
A 0.2 0.06
B 0.7 0.20
C 0.8 0.30
D 0.9 0.50
E 1.4 0.90
10.2.18.3 Reading columns with a MultiIndex
By specifying a list of row locations for the header argument, you can read in a MultiIndex for the columns. Specifying non-consecutive rows will skip the intervening rows. In order to have the pre-0.13 behavior of tupleizing columns, specify tupleize_cols=True.
In [141]: from pandas.util.testing import makeCustomDataframe as mkdf
In [142]: df = mkdf(5,3,r_idx_nlevels=2,c_idx_nlevels=4)
In [143]: df.to_csv('mi.csv')
In [144]: print(open('mi.csv').read())
C0,,C_l0_g0,C_l0_g1,C_l0_g2
C1,,C_l1_g0,C_l1_g1,C_l1_g2
C2,,C_l2_g0,C_l2_g1,C_l2_g2
C3,,C_l3_g0,C_l3_g1,C_l3_g2
R0,R1,,,
R_l0_g0,R_l1_g0,R0C0,R0C1,R0C2
R_l0_g1,R_l1_g1,R1C0,R1C1,R1C2
R_l0_g2,R_l1_g2,R2C0,R2C1,R2C2
R_l0_g3,R_l1_g3,R3C0,R3C1,R3C2
R_l0_g4,R_l1_g4,R4C0,R4C1,R4C2
In [145]: pd.read_csv('mi.csv',header=[0,1,2,3],index_col=[0,1])
Out[145]:
C0 C_l0_g0 C_l0_g1 C_l0_g2
C1 C_l1_g0 C_l1_g1 C_l1_g2
C2 C_l2_g0 C_l2_g1 C_l2_g2
C3 C_l3_g0 C_l3_g1 C_l3_g2
R0 R1
R_l0_g0 R_l1_g0 R0C0 R0C1 R0C2
R_l0_g1 R_l1_g1 R1C0 R1C1 R1C2
R_l0_g2 R_l1_g2 R2C0 R2C1 R2C2
R_l0_g3 R_l1_g3 R3C0 R3C1 R3C2
R_l0_g4 R_l1_g4 R4C0 R4C1 R4C2
Starting in 0.13.0, read_csv can interpret a more common format of multi-column indices.
In [146]: print(open('mi2.csv').read())
,a,a,a,b,c,c
,q,r,s,t,u,v
one,1,2,3,4,5,6
two,7,8,9,10,11,12
In [147]: pd.read_csv('mi2.csv',header=[0,1],index_col=0)
Out[147]:
a b c
q r s t u v
one 1 2 3 4 5 6
two 7 8 9 10 11 12
Note: If an index_col is not specified (e.g. you don't have an index, or wrote it with df.to_csv(..., index=False)), then any names on the columns index will be lost.
10.2.19 Automatically “sniffing” the delimiter
read_csv is capable of inferring delimited (not necessarily comma-separated) files, as pandas uses the csv.Sniffer class of the csv module. For this, you have to specify sep=None.
In [148]: print(open('tmp2.sv').read())
:0:1:2:3
0:0.469112299907:-0.282863344329:-1.50905850317:-1.13563237102
1:1.21211202502:-0.173214649053:0.119208711297:-1.04423596628
2:-0.861848963348:-2.10456921889:-0.494929274069:1.07180380704
3:0.721555162244:-0.70677113363:-1.03957498511:0.271859885543
4:-0.424972329789:0.567020349794:0.276232019278:-1.08740069129
5:-0.673689708088:0.113648409689:-1.47842655244:0.524987667115
6:0.40470521868:0.57704598592:-1.71500201611:-1.03926848351
7:-0.370646858236:-1.15789225064:-1.34431181273:0.844885141425
8:1.07576978372:-0.10904997528:1.64356307036:-1.46938795954
9:0.357020564133:-0.67460010373:-1.77690371697:-0.968913812447
In [149]: pd.read_csv('tmp2.sv', sep=None, engine='python')
Out[149]:
Unnamed: 0 0 1 2 3
0 0 0.4691 -0.2829 -1.5091 -1.1356
1 1 1.2121 -0.1732 0.1192 -1.0442
2 2 -0.8618 -2.1046 -0.4949 1.0718
3 3 0.7216 -0.7068 -1.0396 0.2719
4 4 -0.4250 0.5670 0.2762 -1.0874
5 5 -0.6737 0.1136 -1.4784 0.5250
6 6 0.4047 0.5770 -1.7150 -1.0393
7 7 -0.3706 -1.1579 -1.3443 0.8449
8 8 1.0758 -0.1090 1.6436 -1.4694
9 9 0.3570 -0.6746 -1.7769 -0.9689
10.2.20 Iterating through files chunk by chunk
Suppose you wish to iterate through a (potentially very large) file lazily rather than reading the entire file into memory, such as the following:
In [150]: print(open('tmp.sv').read())
|0|1|2|3
0|0.469112299907|-0.282863344329|-1.50905850317|-1.13563237102
1|1.21211202502|-0.173214649053|0.119208711297|-1.04423596628
2|-0.861848963348|-2.10456921889|-0.494929274069|1.07180380704
3|0.721555162244|-0.70677113363|-1.03957498511|0.271859885543
4|-0.424972329789|0.567020349794|0.276232019278|-1.08740069129
5|-0.673689708088|0.113648409689|-1.47842655244|0.524987667115
6|0.40470521868|0.57704598592|-1.71500201611|-1.03926848351
7|-0.370646858236|-1.15789225064|-1.34431181273|0.844885141425
8|1.07576978372|-0.10904997528|1.64356307036|-1.46938795954
9|0.357020564133|-0.67460010373|-1.77690371697|-0.968913812447
In [151]: table = pd.read_table('tmp.sv', sep='|')
In [152]: table
Out[152]:
Unnamed: 0 0 1 2 3
0 0 0.4691 -0.2829 -1.5091 -1.1356
1 1 1.2121 -0.1732 0.1192 -1.0442
2 2 -0.8618 -2.1046 -0.4949 1.0718
3 3 0.7216 -0.7068 -1.0396 0.2719
4 4 -0.4250 0.5670 0.2762 -1.0874
5 5 -0.6737 0.1136 -1.4784 0.5250
6 6 0.4047 0.5770 -1.7150 -1.0393
7 7 -0.3706 -1.1579 -1.3443 0.8449
8 8 1.0758 -0.1090 1.6436 -1.4694
9 9 0.3570 -0.6746 -1.7769 -0.9689
By specifying a chunksize
to read_csv
or read_table
, the return
value will be an iterable object of type TextFileReader
:
In [153]: reader = pd.read_table('tmp.sv', sep='|', chunksize=4)
In [154]: reader
Out[154]: <pandas.io.parsers.TextFileReader at 0x2b6c20d1cd10>
In [155]: for chunk in reader:
.....: print(chunk)
.....:
Unnamed: 0 0 1 2 3
0 0 0.4691 -0.2829 -1.5091 -1.1356
1 1 1.2121 -0.1732 0.1192 -1.0442
2 2 -0.8618 -2.1046 -0.4949 1.0718
3 3 0.7216 -0.7068 -1.0396 0.2719
Unnamed: 0 0 1 2 3
4 4 -0.4250 0.5670 0.2762 -1.0874
5 5 -0.6737 0.1136 -1.4784 0.5250
6 6 0.4047 0.5770 -1.7150 -1.0393
7 7 -0.3706 -1.1579 -1.3443 0.8449
Unnamed: 0 0 1 2 3
8 8 1.0758 -0.1090 1.6436 -1.4694
9 9 0.3570 -0.6746 -1.7769 -0.9689
Specifying iterator=True
will also return the TextFileReader
object:
In [156]: reader = pd.read_table('tmp.sv', sep='|', iterator=True)
In [157]: reader.get_chunk(5)
Out[157]:
Unnamed: 0 0 1 2 3
0 0 0.4691 -0.2829 -1.5091 -1.1356
1 1 1.2121 -0.1732 0.1192 -1.0442
2 2 -0.8618 -2.1046 -0.4949 1.0718
3 3 0.7216 -0.7068 -1.0396 0.2719
4 4 -0.4250 0.5670 0.2762 -1.0874
10.2.21 Specifying the parser engine
Under the hood pandas uses a fast and efficient parser implemented in C as well as a Python implementation which is currently more feature-complete. Where possible pandas uses the C parser (specified as engine='c'), but it may fall back to Python if C-unsupported options are specified. Currently, C-unsupported options include:

- sep other than a single character (e.g. regex separators)
- skipfooter
- sep=None with delim_whitespace=False

Specifying any of the above options will produce a ParserWarning unless the Python engine is selected explicitly using engine='python'.
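For example, skipfooter is not supported by the C engine; selecting the Python engine explicitly avoids the warning. A minimal sketch with made-up data:

import pandas as pd
from io import StringIO

data = 'a,b,c\n1,2,3\n4,5,6\ntotals line, not part of the table'

# skipfooter needs the Python engine; saying so avoids a ParserWarning.
df = pd.read_csv(StringIO(data), skipfooter=1, engine='python')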
10.2.22 Writing out Data
10.2.22.1 Writing to CSV format
The Series and DataFrame objects have an instance method to_csv which allows storing the contents of the object as a comma-separated-values file. The function takes a number of arguments. Only the first is required.

- path_or_buf : A string path to the file to write or a StringIO
- sep : Field delimiter for the output file (default ",")
- na_rep : A string representation of a missing value (default '')
- float_format : Format string for floating point numbers
- cols : Columns to write (default None)
- header : Whether to write out the column names (default True)
- index : Whether to write row (index) names (default True)
- index_label : Column label(s) for index column(s) if desired. If None (default), and header and index are True, then the index names are used. (A sequence should be given if the DataFrame uses MultiIndex.)
- mode : Python write mode, default 'w'
- encoding : A string representing the encoding to use if the contents are non-ASCII, for Python versions prior to 3
- line_terminator : Character sequence denoting line end (default '\n')
- quoting : Set quoting rules as in csv module (default csv.QUOTE_MINIMAL)
- quotechar : Character used to quote fields (default '"')
- doublequote : Control quoting of quotechar in fields (default True)
- escapechar : Character used to escape sep and quotechar when appropriate (default None)
- chunksize : Number of rows to write at a time
- tupleize_cols : If False (default), write as a list of tuples, otherwise write in an expanded line format suitable for read_csv
- date_format : Format string for datetime objects
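A minimal sketch using a few of these arguments (out.csv is a hypothetical output path):

import pandas as pd

df = pd.DataFrame({'A': [1, 2], 'B': ['x', None]})

# Write without the index and with an explicit missing-value marker.
df.to_csv('out.csv', index=False, na_rep='NA')

# Passing no path returns the CSV content as a string instead.
csv_text = df.to_csv(index=False)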
10.2.22.2 Writing a formatted string
The DataFrame object has an instance method to_string which allows control over the string representation of the object. All arguments are optional:

- buf : default None, for example a StringIO object
- columns : default None, which columns to write
- col_space : default None, minimum width of each column
- na_rep : default NaN, representation of NA value
- formatters : default None, a dictionary (by column) of functions each of which takes a single argument and returns a formatted string
- float_format : default None, a function which takes a single (float) argument and returns a formatted string; to be applied to floats in the DataFrame
- sparsify : default True, set to False for a DataFrame with a hierarchical index to print every multiindex key at each row
- index_names : default True, will print the names of the indices
- index : default True, will print the index (i.e., row labels)
- header : default True, will print the column labels
- justify : default left, will print column headers left- or right-justified
The Series object also has a to_string method, but with only the buf, na_rep, and float_format arguments. There is also a length argument which, if set to True, will additionally output the length of the Series.