gensim.corpora.ShardedCorpus

class gensim.corpora.ShardedCorpus(output_prefix, corpus, dim=None, shardsize=4096, overwrite=False, sparse_serialization=False, sparse_retrieval=False, gensim=False)[source]

This corpus is designed for situations where you need to train a model on matrices, with a large number of iterations. (It should be faster than gensim’s other IndexedCorpus implementations for this use case; check the benchmark_datasets.py script. It should also serialize faster.)

The corpus stores its data in separate files called “shards”. This is a compromise between speed (keeping the whole dataset in memory) and memory footprint (keeping the data on disk and reading from it on demand). Persistence is done using the standard gensim load/save methods.

Note

The dataset is read-only: unlike gensim’s Similarity class, which otherwise works similarly, there is (for now) no way of adding documents to the dataset.

You can use ShardedCorpus to serialize your data just like any other gensim corpus that implements serialization. However, because the data is saved as numpy 2-dimensional ndarrays (or scipy sparse matrices), you need to supply the dimension of your data to the corpus. (The dimension of word frequency vectors will typically be the size of the vocabulary, etc.)

>>> import gensim
>>> from gensim.corpora import ShardedCorpus
>>>
>>> corpus = gensim.utils.mock_data()
>>> output_prefix = 'mydata.shdat'
>>> ShardedCorpus.serialize(output_prefix, corpus, dim=1000)

The output_prefix tells the ShardedCorpus where to put the data. Shards are saved as output_prefix.0, output_prefix.1, etc. All shards must be of the same size. The shards can be re-sized (which is essentially a re-serialization into new-size shards), but note that this operation will temporarily take twice as much disk space, because the old shards are not deleted until the new shards are safely in place.
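
For example, to re-serialize an existing dataset into shards of 8192 rows (a sketch using the resize_shards() method listed below; keep the temporary disk space requirement in mind):

>>> sh_corpus = ShardedCorpus.load(output_prefix)
>>> sh_corpus.resize_shards(8192)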

After serializing the data, the corpus will then save itself to the file output_prefix.
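
With the default naming scheme, the directory will then contain something like the following (an illustrative listing; the number of shards depends on the dataset size and shardsize):

mydata.shdat      # the ShardedCorpus object itself, saved via gensim save
mydata.shdat.0    # first shard
mydata.shdat.1    # second shard
...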

On further initialization with the same output_prefix, the corpus will load the already built dataset unless the overwrite option is given. (A new object is “cloned” from the one saved to output_prefix previously.)
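
For example (a sketch; the overwrite flag comes from the constructor signature above):

>>> # Clones the dataset already built under output_prefix
>>> clone = ShardedCorpus(output_prefix, corpus, dim=1000)
>>> # Discards the existing shards and re-serializes from scratch
>>> rebuilt = ShardedCorpus(output_prefix, corpus, dim=1000, overwrite=True)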

To retrieve data, you can load the corpus and use it like a list:

>>> sh_corpus = ShardedCorpus.load(output_prefix)
>>> batch = sh_corpus[100:150]
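
As a quick sanity check (the shape follows from the 50-row slice and the 1000-dimensional mock data above):

>>> batch.shape
(50, 1000)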

The slice comes back as a numpy 2-dimensional array of 50 rows and 1000 columns (1000 being the dimension of the data we supplied to the corpus). To retrieve gensim-style sparse vectors, set the gensim property:

>>> sh_corpus.gensim = True
>>> batch = sh_corpus[100:150]

The batch will now be a generator of gensim sparse vectors.
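
For example, to consume the generator (each yielded item is a gensim-style sparse vector, i.e. a list of (feature_id, value) 2-tuples):

>>> gensim_vectors = list(batch)
>>> len(gensim_vectors)
50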

Since the corpus needs the data serialized in order to operate, it will serialize the data right away on initialization. Instead of calling ShardedCorpus.serialize(), you can just initialize the corpus and use it right away:

>>> sh_corpus = ShardedCorpus(output_prefix, corpus, dim=1000)
>>> batch = sh_corpus[100:150]

ShardedCorpus also supports working with scipy sparse matrices, both during retrieval and during serialization. If you want to serialize your data as sparse matrices, set the sparse_serialization flag. To retrieve your data as sparse matrices, use the sparse_retrieval flag. (For completeness, you can also retrieve densely serialized data as sparse matrices, and vice versa.) By default, the corpus will retrieve numpy ndarrays even if it was serialized into sparse matrices.

>>> sparse_prefix = 'mydata.sparse.shdat'
>>> ShardedCorpus.serialize(sparse_prefix, corpus, dim=1000, sparse_serialization=True)
>>> sparse_corpus = ShardedCorpus.load(sparse_prefix)
>>> batch = sparse_corpus[100:150]
>>> type(batch)
<class 'numpy.ndarray'>
>>> sparse_corpus.sparse_retrieval = True
>>> batch = sparse_corpus[100:150]
>>> type(batch)
<class 'scipy.sparse.csr.csr_matrix'>

While you can change the sparse_retrieval attribute during the life of a ShardedCorpus object, you should definitely not touch sparse_serialization! Changing that attribute will not miraculously re-serialize the data in the requested format.
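
If you do need the other format, serialize the data again under a new prefix instead (a sketch reusing the serialize() classmethod from above; the dense prefix name is made up for illustration):

>>> dense_prefix = 'mydata.dense.shdat'
>>> ShardedCorpus.serialize(dense_prefix, corpus, dim=1000, sparse_serialization=False)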

The CSR format is used for sparse data throughout.
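
If you need a dense view of a sparsely retrieved batch, the standard scipy conversion applies (a sketch; toarray() is scipy’s own API, not a ShardedCorpus feature):

>>> dense_batch = batch.toarray()  # CSR matrix -> numpy ndarray
>>> dense_batch.shape
(50, 1000)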

Internally, the dataset keeps track of which shard is currently open; on a __getitem__ request, it either returns an item from the current shard or opens the shard that holds the requested offset. The shard size is constant, except for the last shard.
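
The shard lookup itself is plain integer division over the constant shard size (illustrative arithmetic; the shard_by_offset() method listed below performs this mapping):

>>> shardsize = 4096
>>> 10000 // shardsize  # the document at offset 10000 falls into shard 2
2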

Methods

__init__(output_prefix, corpus[, dim, ...])
    Initializes the dataset.
get_by_offset(offset)
    As opposed to __getitem__, this one only accepts ints as offsets.
in_current(offset)
    Determine whether the given offset falls within the current shard.
in_next(offset)
    Determine whether the given offset falls within the next shard.
init_by_clone()
    Initialize by copying over attributes of another ShardedCorpus instance saved to the output_prefix given at __init__().
init_shards(output_prefix, corpus[, ...])
    Initialize shards from the corpus.
load(fname[, mmap])
    Load itself in clean state.
load_shard(n)
    Load (unpickle) the n-th shard as the “live” part of the dataset into the Dataset object.
reset()
    Reset to no shard at all.
resize_shards(shardsize)
    Re-process the dataset to a new shard size.
save(*args, **kwargs)
    Save itself (the wrapper) in clean state (after calling reset()) to the output_prefix file.
save_corpus(fname, corpus[, id2word, ...])
    Implement a serialization interface.
save_shard(shard[, n, filename])
    Pickle the given shard.
serialize(serializer, fname, corpus[, ...])
    Iterate through the document stream corpus, saving the documents as a ShardedCorpus to fname.
shard_by_offset(offset)
    Determine which shard the given offset belongs to.