The examples on this page assume the following setup:
>>> import numpy as np
>>> np.random.seed(123456)
>>> import pandas as pd
>>> np.set_printoptions(precision=4, suppress=True)
>>> pd.options.display.max_rows = 8
1 Sparse data structures
Note
The SparsePanel class was removed in 0.19.0.
We have implemented “sparse” versions of Series and DataFrame. These are not sparse in the typical “mostly 0” sense. Rather, you can view these objects as being “compressed”, where any data matching a specific value (NaN / missing value, though any value can be chosen) is omitted. A special SparseIndex object tracks where data has been “sparsified”. This will make much more sense with an example. All of the standard pandas data structures have a to_sparse method:
In [1]: from numpy.random import randn
In [2]: ts = pd.Series(randn(10))
In [3]: ts[2:-2] = np.nan
In [4]: sts = ts.to_sparse()
In [5]: sts
Out[5]:
0 0.469112
1 -0.282863
2 NaN
3 NaN
...
6 NaN
7 NaN
8 -0.861849
9 -2.104569
dtype: float64
BlockIndex
Block locations: array([0, 8], dtype=int32)
Block lengths: array([2, 2], dtype=int32)
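The same compression can be sketched against later pandas versions (1.0 and up), where Series.to_sparse was removed in favor of casting to a SparseDtype; the spelling below is an assumption relative to the 0.19 API shown above:

```python
import numpy as np
import pandas as pd

# Hedged modern-pandas sketch of the example above: the sparse dtype
# replaces Series.to_sparse (removed in pandas 1.0).
ts = pd.Series(np.random.randn(10))
ts[2:-2] = np.nan
sts = ts.astype(pd.SparseDtype("float64", np.nan))

print(sts.sparse.density)     # fraction of values actually stored
print(sts.sparse.sp_values)   # only the four non-NaN values
```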
The to_sparse method takes a kind argument (for the sparse index, see below) and a fill_value. So if we had a mostly zero Series, we could convert it to sparse with fill_value=0:
In [6]: ts.fillna(0).to_sparse(fill_value=0)
Out[6]:
0 0.469112
1 -0.282863
2 0.000000
3 0.000000
...
6 0.000000
7 0.000000
8 -0.861849
9 -2.104569
dtype: float64
BlockIndex
Block locations: array([0, 8], dtype=int32)
Block lengths: array([2, 2], dtype=int32)
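A minimal sketch of what fill_value=0 compression stores, written with the pd.arrays.SparseArray spelling from later pandas versions (in 0.19 the same object is pd.SparseArray):

```python
import pandas as pd

# Only values distinct from fill_value are kept in sp_values.
sp = pd.arrays.SparseArray([1.0, 0.0, 0.0, 0.0, 2.0], fill_value=0.0)

print(sp.sp_values)   # only the two non-zero entries are stored
print(sp.density)     # 2 stored values out of 5
```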
The sparse objects exist for memory efficiency reasons. Suppose you had a large, mostly NA DataFrame:
In [7]: df = pd.DataFrame(randn(10000, 4))
In [8]: df.loc[:9998] = np.nan
In [9]: sdf = df.to_sparse()
In [10]: sdf
Out[10]:
0 1 2 3
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 NaN NaN NaN NaN
... ... ... ... ...
9996 NaN NaN NaN NaN
9997 NaN NaN NaN NaN
9998 NaN NaN NaN NaN
9999 0.280249 -1.648493 1.490865 -0.890819
[10000 rows x 4 columns]
In [11]: sdf.density
Out[11]: 0.0001
As you can see, the density (% of values that have not been “compressed”) is extremely low. This sparse object takes up much less memory on disk (pickled) and in the Python interpreter. Functionally, sparse objects behave almost identically to their dense counterparts.
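The memory saving can be illustrated directly; the SparseDtype cast below is the later-pandas spelling (an assumption relative to this page's to_sparse API):

```python
import numpy as np
import pandas as pd

# A mostly-NaN column: dense storage pays for every float64 slot,
# sparse storage pays only for the single stored value plus its index.
dense = pd.Series(np.full(10_000, np.nan))
dense.iloc[-1] = 1.0
sparse_s = dense.astype(pd.SparseDtype("float64", np.nan))

print(dense.memory_usage(index=False))     # 10,000 * 8 bytes
print(sparse_s.memory_usage(index=False))  # a handful of bytes
```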
Any sparse object can be converted back to the standard dense form by calling to_dense:
In [12]: sts.to_dense()
Out[12]:
0 0.469112
1 -0.282863
2 NaN
3 NaN
...
6 NaN
7 NaN
8 -0.861849
9 -2.104569
dtype: float64
1.1 SparseArray
SparseArray is the base layer for all of the sparse indexed data structures. It is a 1-dimensional ndarray-like object storing only values distinct from the fill_value:
In [13]: arr = np.random.randn(10)
In [14]: arr[2:5] = np.nan; arr[7:8] = np.nan
In [15]: sparr = pd.SparseArray(arr)
In [16]: sparr
Out[16]:
[-1.95566352972, -1.6588664276, nan, nan, nan, 1.15893288864, 0.145297113733, nan, 0.606027190513, 1.33421134013]
Fill: nan
IntIndex
Indices: array([0, 1, 5, 6, 8, 9], dtype=int32)
Like the indexed objects (SparseSeries, SparseDataFrame), a SparseArray can be converted back to a regular ndarray by calling to_dense:
In [17]: sparr.to_dense()
Out[17]:
array([-1.9557, -1.6589, nan, nan, nan, 1.1589, 0.1453,
nan, 0.606 , 1.3342])
1.2 SparseList
Note
The SparseList class has been deprecated and will be removed in a future version.
SparseList is a list-like data structure for managing a dynamic collection of SparseArrays. To create one, simply call the SparseList constructor with a fill_value (defaulting to NaN):
In [18]: spl = pd.SparseList()
In [19]: spl
Out[19]: <pandas.sparse.list.SparseList object at 0x2b6c228bdad0>
The two important methods are append and to_array. append can accept scalar values or any 1-dimensional sequence:
In [20]: spl.append(np.array([1., np.nan, np.nan, 2., 3.]))
In [21]: spl.append(5)
In [22]: spl.append(sparr)
In [23]: spl
Out[23]:
<pandas.sparse.list.SparseList object at 0x2b6c228bdad0>
[1.0, nan, nan, 2.0, 3.0]
Fill: nan
IntIndex
Indices: array([0, 3, 4], dtype=int32)
[5.0]
Fill: nan
IntIndex
Indices: array([0], dtype=int32)
[-1.95566352972, -1.6588664276, nan, nan, nan, 1.15893288864, 0.145297113733, nan, 0.606027190513, 1.33421134013]
Fill: nan
IntIndex
Indices: array([0, 1, 5, 6, 8, 9], dtype=int32)
As you can see, all of the contents are stored internally as a list of memory-efficient SparseArray objects. Once you’ve accumulated all of the data, you can call to_array to get a single SparseArray with all the data:
In [24]: spl.to_array()
Out[24]:
[1.0, nan, nan, 2.0, 3.0, 5.0, -1.95566352972, -1.6588664276, nan, nan, nan, 1.15893288864, 0.145297113733, nan, 0.606027190513, 1.33421134013]
Fill: nan
IntIndex
Indices: array([ 0, 3, 4, 5, 6, 7, 11, 12, 14, 15], dtype=int32)
1.3 SparseIndex objects
Two kinds of SparseIndex are implemented, block and integer. We recommend using block as it’s more memory efficient. The integer format keeps an array of all of the locations where the data are not equal to the fill value. The block format tracks only the locations and sizes of blocks of data.
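The difference between the two kinds can be sketched as follows; the pd.arrays.SparseArray spelling is from later pandas versions (an assumption relative to this page's API):

```python
import numpy as np
import pandas as pd

# The index kind is chosen with the `kind` argument.
data = np.array([1.0, np.nan, np.nan, 2.0, 3.0])

block = pd.arrays.SparseArray(data, kind="block")
integer = pd.arrays.SparseArray(data, kind="integer")

print(block.sp_index)    # BlockIndex: run starts plus run lengths
print(integer.sp_index)  # IntIndex: one position per stored value
```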
1.4 Sparse Calculation
You can apply NumPy ufuncs to a SparseArray and get a SparseArray as a result.
In [25]: arr = pd.SparseArray([1., np.nan, np.nan, -2., np.nan])
In [26]: np.abs(arr)
Out[26]:
[1.0, nan, nan, 2.0, nan]
Fill: nan
IntIndex
Indices: array([0, 3], dtype=int32)
The ufunc is also applied to fill_value. This is needed to get the correct dense result.
In [27]: arr = pd.SparseArray([1., -1, -1, -2., -1], fill_value=-1)
In [28]: np.abs(arr)
Out[28]:
[1.0, 1.0, 1.0, 2.0, 1.0]
Fill: 1
IntIndex
Indices: array([0, 3], dtype=int32)
In [29]: np.abs(arr).to_dense()
Out[29]: array([ 1.,  1.,  1.,  2.,  1.])
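The ufunc-over-fill_value behavior can be sketched against the later pd.arrays.SparseArray spelling (an assumption, not the 0.19 API):

```python
import numpy as np
import pandas as pd

# np.abs is applied to the stored values AND to fill_value,
# so the dense result matches np.abs of the dense data.
arr = pd.arrays.SparseArray([1.0, -1.0, -1.0, -2.0, -1.0], fill_value=-1.0)
result = np.abs(arr)

print(result.fill_value)   # the fill value is transformed as well
print(np.asarray(result))  # dense view of the result
```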
1.5 Interaction with scipy.sparse
Experimental API to transform between sparse pandas and scipy.sparse structures.
A SparseSeries.to_coo() method is implemented for transforming a SparseSeries indexed by a MultiIndex to a scipy.sparse.coo_matrix. The method requires a MultiIndex with two or more levels.
In [30]: s = pd.Series([3.0, np.nan, 1.0, 3.0, np.nan, np.nan])
In [31]: s.index = pd.MultiIndex.from_tuples([(1, 2, 'a', 0),
....: (1, 2, 'a', 1),
....: (1, 1, 'b', 0),
....: (1, 1, 'b', 1),
....: (2, 1, 'b', 0),
....: (2, 1, 'b', 1)],
....: names=['A', 'B', 'C', 'D'])
....:
In [32]: s
Out[32]:
A B C D
1 2 a 0 3.0
1 NaN
1 b 0 1.0
1 3.0
2 1 b 0 NaN
1 NaN
dtype: float64
# SparseSeries
In [33]: ss = s.to_sparse()
In [34]: ss
Out[34]:
A B C D
1 2 a 0 3.0
1 NaN
1 b 0 1.0
1 3.0
2 1 b 0 NaN
1 NaN
dtype: float64
BlockIndex
Block locations: array([0, 2], dtype=int32)
Block lengths: array([1, 2], dtype=int32)
In the example below, we transform the SparseSeries to a sparse representation of a 2-d array by specifying that the first and second MultiIndex levels define labels for the rows and the third and fourth levels define labels for the columns. We also specify that the column and row labels should be sorted in the final sparse representation.
In [35]: A, rows, columns = ss.to_coo(row_levels=['A', 'B'],
....: column_levels=['C', 'D'],
....: sort_labels=True)
....:
In [36]: A
Out[36]:
<3x4 sparse matrix of type '<type 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
In [37]: A.todense()
Out[37]:
matrix([[ 0., 0., 1., 3.],
[ 3., 0., 0., 0.],
[ 0., 0., 0., 0.]])
In [38]: rows
Out[38]: [(1, 1), (1, 2), (2, 1)]
In [39]: columns
Out[39]: [('a', 0), ('a', 1), ('b', 0), ('b', 1)]
Specifying different row and column labels (and not sorting them) yields a different sparse matrix:
In [40]: A, rows, columns = ss.to_coo(row_levels=['A', 'B', 'C'],
....: column_levels=['D'],
....: sort_labels=False)
....:
In [41]: A
Out[41]:
<3x2 sparse matrix of type '<type 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
In [42]: A.todense()
Out[42]:
matrix([[ 3., 0.],
[ 1., 3.],
[ 0., 0.]])
In [43]: rows
Out[43]: [(1, 2, 'a'), (1, 1, 'b'), (2, 1, 'b')]
In [44]: columns
Out[44]: [0, 1]
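Later pandas versions expose the same transform through the Series.sparse accessor instead of SparseSeries; the sketch below restates the first to_coo example above under that assumption:

```python
import numpy as np
import pandas as pd

# Same data and MultiIndex as above, but using a SparseDtype Series
# and the Series.sparse.to_coo accessor (later-pandas spelling).
s = pd.Series([3.0, np.nan, 1.0, 3.0, np.nan, np.nan])
s.index = pd.MultiIndex.from_tuples(
    [(1, 2, 'a', 0), (1, 2, 'a', 1), (1, 1, 'b', 0),
     (1, 1, 'b', 1), (2, 1, 'b', 0), (2, 1, 'b', 1)],
    names=['A', 'B', 'C', 'D'])
ss = s.astype(pd.SparseDtype("float64", np.nan))

A, rows, columns = ss.sparse.to_coo(row_levels=['A', 'B'],
                                    column_levels=['C', 'D'],
                                    sort_labels=True)
print(A.shape)  # (3, 4), as in the first example above
print(rows)
```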
A convenience method SparseSeries.from_coo() is implemented for creating a SparseSeries from a scipy.sparse.coo_matrix.
In [45]: from scipy import sparse
In [46]: A = sparse.coo_matrix(([3.0, 1.0, 2.0], ([1, 0, 0], [0, 2, 3])),
....: shape=(3, 4))
....:
In [47]: A
Out[47]:
<3x4 sparse matrix of type '<type 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
In [48]: A.todense()
Out[48]:
matrix([[ 0., 0., 1., 2.],
[ 3., 0., 0., 0.],
[ 0., 0., 0., 0.]])
The default behaviour (with dense_index=False) simply returns a SparseSeries containing only the non-null entries.
In [49]: ss = pd.SparseSeries.from_coo(A)
In [50]: ss
Out[50]:
0 2 1.0
3 2.0
1 0 3.0
dtype: float64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([3], dtype=int32)
Specifying dense_index=True will result in an index that is the Cartesian product of the row and column coordinates of the matrix. Note that this will consume a significant amount of memory (relative to dense_index=False) if the sparse matrix is large (and sparse) enough.
In [51]: ss_dense = pd.SparseSeries.from_coo(A, dense_index=True)
In [52]: ss_dense
Out[52]:
0 0 NaN
1 NaN
2 1.0
3 2.0
...
2 0 NaN
1 NaN
2 NaN
3 NaN
dtype: float64
BlockIndex
Block locations: array([2], dtype=int32)
Block lengths: array([3], dtype=int32)