  • Chunks are too small.

    Specifying a very small chunk size for a dataset can make the dataset excessively large on disk and can degrade performance when accessing it. The smaller the chunk size, the more chunks HDF5 has to keep track of, and the more time it takes to locate a chunk.

  • Chunks are too large.

    An entire chunk has to be read and uncompressed before an operation can be performed on it. There can be a performance penalty for reading a small subset if the chunk size is substantially larger than the subset. Also, a dataset may be larger than expected if some chunks contain only a small amount of data.
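To make the first tradeoff concrete, here is a small illustrative calculation (plain Python, no HDF5 required; the dataset and chunk shapes are made-up examples) showing how the number of chunks HDF5 must track grows as the chunk size shrinks:

```python
# Illustrative only: counts how many chunks HDF5 would have to track
# for a hypothetical 2-D dataset under different chunk shapes.
from math import ceil

def chunk_count(dataset_shape, chunk_shape):
    """Number of chunks needed to tile a dataset (partial edge chunks included)."""
    count = 1
    for dim, cdim in zip(dataset_shape, chunk_shape):
        count *= ceil(dim / cdim)
    return count

dataset = (10_000, 10_000)                   # hypothetical 10k x 10k dataset

tiny = chunk_count(dataset, (10, 10))        # very small chunks
moderate = chunk_count(dataset, (1000, 1000))  # moderate chunks

print(tiny)      # 1000000 chunks to index -> large metadata, slow lookups
print(moderate)  # 100 chunks -> compact index
```

A million chunks means a million entries in the chunk index, which is exactly the bookkeeping overhead the first bullet warns about; the second bullet is the opposite extreme, where each of those few large chunks must be read and decompressed in full even for a tiny subset.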


  • A chunk does not fit in the chunk cache.

    Every chunked dataset has a chunk cache associated with it, with a default size of 1 MB. The purpose of the chunk cache is to improve performance by keeping frequently accessed chunks in memory so that they do not have to be read from disk. If a chunk is too large to fit in the chunk cache, performance can degrade significantly. However, the size of the chunk cache can be increased by calling H5Pset_chunk_cache.