2.9 Data In/Out
Performance comparison of SQL vs HDF5
2.9.1 CSV
The CSV docs
how to read in multiple files, appending to create a single dataframe (see the first sketch following the timing example below)
Reading only certain rows of a csv chunk-by-chunk (see the chunked-reading sketch below)
Reading the first few lines of a frame
Reading a file that is compressed but not by gzip/bz2 (the native compressed formats which read_csv understands). This example shows a WinZipped file, but is a general application of opening the file within a context manager and using that handle to read. See here, and the zip-archive sketch below.
Reading CSV with Unix timestamps and converting to local timezone (see the timestamp sketch below)
Write a multi-row index CSV without writing duplicates
Parsing date components spread across multiple columns is faster with a format
In [30]: i = pd.date_range('20000101',periods=10000)
In [31]: df = pd.DataFrame(dict(year = i.year, month = i.month, day = i.day))
In [32]: df.head()
Out[32]:
day month year
0 1 1 2000
1 2 1 2000
2 3 1 2000
3 4 1 2000
4 5 1 2000
In [33]: %timeit pd.to_datetime(df.year*10000+df.month*100+df.day,format='%Y%m%d')
100 loops, best of 3: 7.08 ms per loop
# simulate combining into a string, then parsing
In [34]: ds = df.apply(lambda x: "%04d%02d%02d" % (x['year'],x['month'],x['day']),axis=1)
In [35]: ds.head()
Out[35]:
0 20000101
1 20000102
2 20000103
3 20000104
4 20000105
dtype: object
In [36]: %timeit pd.to_datetime(ds)
1 loops, best of 3: 488 ms per loop
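The recipe for reading multiple files into a single frame comes down to concatenating the output of read_csv over the file list. A minimal sketch, assuming a set of similarly structured files matched by the hypothetical pattern data_*.csv:
import glob
import pandas as pd

# hypothetical file pattern; every file is assumed to have the same columns
files = sorted(glob.glob('data_*.csv'))

# read each file and stack the pieces into one frame, renumbering the index
df = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)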
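Reading only certain rows chunk-by-chunk, and reading just the first few lines, both rely on read_csv keywords (chunksize and nrows). A sketch, assuming a hypothetical data.csv with a numeric column named value:
import pandas as pd

# first few lines only
head = pd.read_csv('data.csv', nrows=5)

# stream the file in chunks, keeping only the rows of interest from each chunk
pieces = []
for chunk in pd.read_csv('data.csv', chunksize=10000):
    pieces.append(chunk[chunk['value'] > 0])    # 'value' is a hypothetical column
df = pd.concat(pieces, ignore_index=True)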
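The compressed-file recipe is simply a matter of opening the archive yourself and handing the resulting handle to read_csv. A sketch for a plain zip archive (the linked example uses a WinZipped file), assuming the hypothetical archive file1.zip contains a member file1.csv:
import zipfile
import pandas as pd

# open the archive within a context manager, then read from the member's handle
with zipfile.ZipFile('file1.zip') as zf:
    with zf.open('file1.csv') as fh:
        df = pd.read_csv(fh)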
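The Unix-timestamp recipe amounts to parsing seconds-since-epoch as UTC with to_datetime and then converting the tz-aware result to the local zone. A sketch, assuming a hypothetical column ts of integer epoch seconds and US/Eastern as the target timezone:
import pandas as pd

df = pd.DataFrame({'ts': [946684800, 946688400]})   # hypothetical epoch seconds

# parse as UTC, then convert to the desired local timezone
df['local'] = pd.to_datetime(df['ts'], unit='s', utc=True).dt.tz_convert('US/Eastern')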
2.9.1.1 Skip row between header and data
In [1]: from io import StringIO
In [2]: import pandas as pd
In [3]: data = """;;;;
...: ;;;;
...: ;;;;
...: ;;;;
...: ;;;;
...: ;;;;
...: ;;;;
...: ;;;;
...: ;;;;
...: ;;;;
...: date;Param1;Param2;Param4;Param5
...: ;m;C;m;m
...: ;;;;
...: 01.01.1990 00:00;1;1;2;3
...: 01.01.1990 01:00;5;3;4;5
...: 01.01.1990 02:00;9;5;6;7
...: 01.01.1990 03:00;13;7;8;9
...: 01.01.1990 04:00;17;9;10;11
...: 01.01.1990 05:00;21;11;12;13
...: """
...:
2.9.1.1.1 Option 1: pass rows explicitly to skiprows
In [4]: pd.read_csv(StringIO(data), sep=';', skiprows=[11,12],
...: index_col=0, parse_dates=True, header=10)
...:
Out[4]:
Param1 Param2 Param4 Param5
date
1990-01-01 00:00:00 1 1 2 3
1990-01-01 01:00:00 5 3 4 5
1990-01-01 02:00:00 9 5 6 7
1990-01-01 03:00:00 13 7 8 9
1990-01-01 04:00:00 17 9 10 11
1990-01-01 05:00:00 21 11 12 13
2.9.1.1.2 Option 2: read column names and then data
In [5]: pd.read_csv(StringIO(data), sep=';',
...: header=10, parse_dates=True, nrows=10).columns
...:
Out[5]: Index(['date', 'Param1', 'Param2', 'Param4', 'Param5'], dtype='object')
In [6]: columns = pd.read_csv(StringIO(data), sep=';',
...: header=10, parse_dates=True, nrows=10).columns
...:
In [7]: pd.read_csv(StringIO(data), sep=';',
...: header=12, parse_dates=True, names=columns)
...:
Out[7]:
date Param1 Param2 Param4 Param5
0 01.01.1990 00:00 1 1 2 3
1 01.01.1990 01:00 5 3 4 5
2 01.01.1990 02:00 9 5 6 7
3 01.01.1990 03:00 13 7 8 9
4 01.01.1990 04:00 17 9 10 11
5 01.01.1990 05:00 21 11 12 13
2.9.5 HDFStore
The HDFStore docs
Simple Queries with a Timestamp Index (see the select/where sketch below)
Managing heterogeneous data using a linked multiple table hierarchy
Merging on-disk tables with millions of rows
Avoiding inconsistencies when writing to a store from multiple processes/threads
De-duplicating a large store by chunks, essentially a recursive reduction operation. Shows a function for taking in data from a CSV file and creating a store by chunks, with date parsing as well. See here
Creating a store chunk-by-chunk from a csv file (see the chunked-append sketch below)
Appending to a store, while creating a unique index
Reading in a sequence of files, then providing a global unique index to a store while appending
Groupby on a HDFStore with low group density
Groupby on a HDFStore with high group density
Hierarchical queries on a HDFStore
Troubleshoot HDFStore exceptions
Setting min_itemsize with strings
Using ptrepack to create a completely-sorted-index on a store
Storing Attributes to a group node
# assumes numpy has been imported as np
In [8]: df = pd.DataFrame(np.random.randn(8,3))
In [9]: store = pd.HDFStore('test.h5')
In [10]: store.put('df',df)
# you can store an arbitrary python object via pickle
In [11]: store.get_storer('df').attrs.my_attribute = dict(A = 10)
In [12]: store.get_storer('df').attrs.my_attribute
Out[12]: {'A': 10}
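The simple-queries recipe above relies on storing the frame in table format and then using HDFStore.select with a where expression against the timestamp index. A minimal sketch, assuming PyTables is installed and using a hypothetical file query.h5:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(100, 2),
                  index=pd.date_range('2000-01-01', periods=100),
                  columns=['A', 'B'])

with pd.HDFStore('query.h5') as store:
    # format='table' makes the frame queryable on disk
    store.put('df', df, format='table')
    subset = store.select('df', where="index >= '2000-02-01' & index < '2000-03-01'")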
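Several of the recipes above (creating a store chunk-by-chunk from a csv file, appending to a store, setting min_itemsize) follow the same pattern: read the CSV with chunksize and append each piece to a table. A sketch, assuming a hypothetical big.csv whose string column name never exceeds 30 characters:
import pandas as pd

with pd.HDFStore('big.h5', mode='w') as store:
    for chunk in pd.read_csv('big.csv', chunksize=100000):
        # min_itemsize reserves room for the longest expected string;
        # data_columns makes 'name' usable in where= queries later
        store.append('df', chunk, data_columns=['name'],
                     min_itemsize={'name': 30})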
2.9.6 Binary Files
pandas readily accepts NumPy record arrays, if you need to read in a binary file consisting of an array of C structs. For example, given this C program in a file called main.c, compiled with gcc main.c -std=gnu99 on a 64-bit machine,
#include <stdio.h>
#include <stdint.h>

typedef struct _Data
{
    int32_t count;
    double avg;
    float scale;
} Data;

int main(int argc, const char *argv[])
{
    size_t n = 10;
    Data d[n];

    for (int i = 0; i < n; ++i)
    {
        d[i].count = i;
        d[i].avg = i + 1.0;
        d[i].scale = (float) i + 2.0f;
    }

    FILE *file = fopen("binary.dat", "wb");
    fwrite(&d, sizeof(Data), n, file);
    fclose(file);

    return 0;
}
the following Python code will read the binary file 'binary.dat' into a pandas DataFrame, where each element of the struct corresponds to a column in the frame:
import numpy as np
import pandas as pd

names = 'count', 'avg', 'scale'

# note that the offsets are larger than the size of the type because of
# struct padding
offsets = 0, 8, 16
formats = 'i4', 'f8', 'f4'

dt = np.dtype({'names': names, 'offsets': offsets, 'formats': formats},
              align=True)

df = pd.DataFrame(np.fromfile('binary.dat', dt))
Note
The offsets of the structure elements may be different depending on the architecture of the machine on which the file was created. Using a raw binary file format like this for general data storage is not recommended, as it is not cross-platform. We recommend either HDF5 or msgpack, both of which are supported by pandas' IO facilities.