.. currentmodule:: pandas

.. ipython:: python
   :suppress:

   import numpy as np
   np.random.seed(123456)
   np.set_printoptions(precision=4, suppress=True)
   import pandas as pd
   pd.options.display.max_rows = 8
   import os
   import csv

.. _enhancingperf.cython:

Cython (Writing C extensions for pandas)
----------------------------------------

For many use cases writing pandas in pure python and numpy is sufficient. In some
computationally heavy applications however, it can be possible to achieve sizeable
speed-ups by offloading work to `cython <https://cython.org/>`__.

This tutorial assumes you have refactored as much as possible in python first, for
example by removing for loops and making use of numpy vectorization. It's always
worth optimising in python before reaching for cython.

This tutorial walks through a "typical" process of cythonizing a slow computation.
We use an `example from the cython documentation <https://docs.cython.org/>`__
but in the context of pandas. Our final cythonized solution is over 100 times
faster than the pure python one.

.. _enhancingperf.pure:

Pure python
~~~~~~~~~~~

We have a DataFrame to which we want to apply a function row-wise.

.. ipython:: python

   df = pd.DataFrame({'a': np.random.randn(1000),
                      'b': np.random.randn(1000),
                      'N': np.random.randint(100, 1000, (1000)),
                      'x': 'x'})
   df

Here's the function in pure python:

.. ipython:: python

   def f(x):
       return x * (x - 1)

   def integrate_f(a, b, N):
       s = 0
       dx = (b - a) / N
       for i in range(N):
           s += f(a + i * dx)
       return s * dx
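Since ``f(x) = x * (x - 1)`` has the antiderivative ``x**3 / 3 - x**2 / 2``, we can
sanity-check the approximation against the exact integral. This check is an aside
added here for illustration, not part of the benchmark:

.. code-block:: python

   # an aside, not part of the benchmark: compare integrate_f against the
   # exact integral of x * (x - 1), whose antiderivative is x**3/3 - x**2/2
   def integrate_f_exact(a, b):
       def F(x):
           return x ** 3 / 3 - x ** 2 / 2
       return F(b) - F(a)

   # the error is small and shrinks as N grows (exact value here is -1/6)
   abs(integrate_f(0, 1, 1000) - integrate_f_exact(0, 1))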
We achieve our result by using ``apply`` (row-wise):

.. code-block:: ipython

   In [7]: %timeit df.apply(lambda x: integrate_f(x['a'], x['b'], x['N']), axis=1)
   10 loops, best of 3: 174 ms per loop

But clearly this isn't fast enough for us. Let's take a look and see where the
time is spent during this operation (limited to the four most time-consuming
calls) using the ``%prun`` ipython magic function:

.. ipython:: python

   %prun -l 4 df.apply(lambda x: integrate_f(x['a'], x['b'], x['N']), axis=1)

By far the majority of time is spent inside either ``integrate_f`` or ``f``,
hence we'll concentrate our efforts on cythonizing these two functions.

.. note::

   In python 2 replacing the ``range`` with its generator counterpart
   (``xrange``) would mean the ``range`` line would vanish from the profile.
   In python 3 ``range`` is already lazy.

.. _enhancingperf.plain:

Plain cython
~~~~~~~~~~~~

First we're going to need to import the cython magic function to ipython (for
cython versions < 0.21 you can use ``%load_ext cythonmagic``):

.. ipython:: python
   :okwarning:

   %load_ext Cython

Now, let's simply copy our functions over to cython as is (the ``_plain`` suffix
is here to distinguish between function versions):

.. ipython::

   In [2]: %%cython
      ...: def f_plain(x):
      ...:     return x * (x - 1)
      ...: def integrate_f_plain(a, b, N):
      ...:     s = 0
      ...:     dx = (b - a) / N
      ...:     for i in range(N):
      ...:         s += f_plain(a + i * dx)
      ...:     return s * dx
      ...:

.. note::

   If you're having trouble pasting the above into your ipython, you may need
   to be using a bleeding edge version of ipython for paste to play well with
   cell magics.

.. code-block:: ipython

   In [4]: %timeit df.apply(lambda x: integrate_f_plain(x['a'], x['b'], x['N']), axis=1)
   10 loops, best of 3: 85.5 ms per loop

Already this has roughly halved the runtime, not too bad for a simple copy and
paste.

.. _enhancingperf.type:

Adding type
~~~~~~~~~~~

We get another huge improvement simply by providing type information:

.. ipython::

   In [3]: %%cython
      ...: cdef double f_typed(double x) except? -2:
      ...:     return x * (x - 1)
      ...: cpdef double integrate_f_typed(double a, double b, int N):
      ...:     cdef int i
      ...:     cdef double s, dx
      ...:     s = 0
      ...:     dx = (b - a) / N
      ...:     for i in range(N):
      ...:         s += f_typed(a + i * dx)
      ...:     return s * dx
      ...:

.. code-block:: ipython

   In [4]: %timeit df.apply(lambda x: integrate_f_typed(x['a'], x['b'], x['N']), axis=1)
   10 loops, best of 3: 20.3 ms per loop

Now we're talking! It's now almost ten times faster than the original python
implementation, and we haven't *really* modified the code. Let's have another
look at what's eating up time:

.. ipython:: python

   %prun -l 4 df.apply(lambda x: integrate_f_typed(x['a'], x['b'], x['N']), axis=1)

.. _enhancingperf.ndarray:

Using ndarray
~~~~~~~~~~~~~

It's calling ``Series``... a lot! It's creating a ``Series`` from each row, and
calling ``get`` from both the index and the series (three times for each row).
Function calls are expensive in python, so maybe we can minimise these by
cythonizing the apply part.

.. note::

   We are now passing ndarrays into the cython function; fortunately cython
   plays very nicely with numpy.

.. ipython::

   In [4]: %%cython
      ...: cimport numpy as np
      ...: import numpy as np
      ...: cdef double f_typed(double x) except? -2:
      ...:     return x * (x - 1)
      ...: cpdef double integrate_f_typed(double a, double b, int N):
      ...:     cdef int i
      ...:     cdef double s, dx
      ...:     s = 0
      ...:     dx = (b - a) / N
      ...:     for i in range(N):
      ...:         s += f_typed(a + i * dx)
      ...:     return s * dx
      ...: cpdef np.ndarray[double] apply_integrate_f(np.ndarray col_a, np.ndarray col_b, np.ndarray col_N):
      ...:     assert (col_a.dtype == np.float64 and col_b.dtype == np.float64 and col_N.dtype == np.int64)
      ...:     cdef Py_ssize_t i, n = len(col_N)
      ...:     assert (len(col_a) == len(col_b) == n)
      ...:     cdef np.ndarray[double] res = np.empty(n)
      ...:     for i in range(n):
      ...:         res[i] = integrate_f_typed(col_a[i], col_b[i], col_N[i])
      ...:     return res
      ...:

The implementation is simple: it allocates an empty result array and loops over
the rows, applying our ``integrate_f_typed`` and putting the result in the
array.

.. warning::

   As of 0.13.0, since ``Series`` has internally been refactored to no longer
   subclass ``ndarray`` but instead subclass ``NDFrame``, you can **not pass**
   a ``Series`` directly as an ``ndarray``-typed parameter to a cython
   function. Instead pass the actual ``ndarray`` using the ``.values``
   attribute of the ``Series``.

   Prior to 0.13.0:

   .. code-block:: python

      apply_integrate_f(df['a'], df['b'], df['N'])

   Use ``.values`` to get the underlying ``ndarray``:

   .. code-block:: python

      apply_integrate_f(df['a'].values, df['b'].values, df['N'].values)

.. note::

   Loops like this would be *extremely* slow in python, but in Cython looping
   over numpy arrays is *fast*.

.. code-block:: ipython

   In [4]: %timeit apply_integrate_f(df['a'].values, df['b'].values, df['N'].values)
   1000 loops, best of 3: 1.25 ms per loop

We've gotten another big improvement. Let's check again where the time is spent:

.. ipython:: python

   %prun -l 4 apply_integrate_f(df['a'].values, df['b'].values, df['N'].values)

As one might expect, the majority of the time is now spent in
``apply_integrate_f``, so if we wanted to make any more efficiency gains we
must continue to concentrate our efforts here.
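Before leaning on the cythonized version, it's worth confirming that it agrees
with the row-wise ``apply`` from the pure python section. A quick check, added
here as an aside and assuming ``df``, ``integrate_f`` and ``apply_integrate_f``
from above are in scope:

.. code-block:: python

   # sanity check (an aside, not part of the original benchmark): the
   # vectorized cython version should match the row-wise apply result
   result = apply_integrate_f(df['a'].values, df['b'].values, df['N'].values)
   baseline = df.apply(lambda x: integrate_f(x['a'], x['b'], x['N']), axis=1)
   np.allclose(result, baseline)  # expect True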
.. _enhancingperf.boundswrap:

More advanced techniques
~~~~~~~~~~~~~~~~~~~~~~~~

There is still hope for improvement. Here's an example of using some more
advanced cython techniques:

.. ipython::

   In [5]: %%cython
      ...: cimport cython
      ...: cimport numpy as np
      ...: import numpy as np
      ...: cdef double f_typed(double x) except? -2:
      ...:     return x * (x - 1)
      ...: cpdef double integrate_f_typed(double a, double b, int N):
      ...:     cdef int i
      ...:     cdef double s, dx
      ...:     s = 0
      ...:     dx = (b - a) / N
      ...:     for i in range(N):
      ...:         s += f_typed(a + i * dx)
      ...:     return s * dx
      ...: @cython.boundscheck(False)
      ...: @cython.wraparound(False)
      ...: cpdef np.ndarray[double] apply_integrate_f_wrap(np.ndarray[double] col_a, np.ndarray[double] col_b, np.ndarray[np.int64_t] col_N):
      ...:     cdef np.int64_t i, n = len(col_N)
      ...:     assert len(col_a) == len(col_b) == n
      ...:     cdef np.ndarray[double] res = np.empty(n)
      ...:     for i in range(n):
      ...:         res[i] = integrate_f_typed(col_a[i], col_b[i], col_N[i])
      ...:     return res
      ...:

.. code-block:: ipython

   In [4]: %timeit apply_integrate_f_wrap(df['a'].values, df['b'].values, df['N'].values)
   1000 loops, best of 3: 987 us per loop

Even faster, with the caveat that a bug in our cython code (an off-by-one
error, for example) might cause a segfault because memory access isn't checked.
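If you want to dig for further speed-ups, cython can report exactly which lines
still interact with the python interpreter. As a sketch of the workflow (not
part of the timings above), passing the ``--annotate`` flag (or ``-a``) to the
cell magic produces an annotated view of the generated code, rendered inline in
a jupyter notebook, in which highlighted lines indicate python overhead:

.. code-block:: ipython

   In [6]: %%cython --annotate
      ...: # f_annot is a hypothetical name, used here only to illustrate the
      ...: # annotation view; heavily highlighted lines in the report are good
      ...: # candidates for adding type information
      ...: cdef double f_annot(double x):
      ...:     return x * (x - 1)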