HDF 4.0 and later releases support a low-level compression interface that allows any data object to be compressed using a variety of algorithms.

Currently, only three compression algorithms are supported: Run-Length Encoding (RLE), adaptive Huffman, and an LZ-77 dictionary coder (the gzip "deflation" algorithm). Planned future algorithms include a Lempel/Ziv-78 dictionary coder, an arithmetic coder, and a faster Huffman algorithm.
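As a minimal sketch of the interface, an SDS can be written with gzip "deflation" compression through the SD interface by calling SDsetcompress before any data is written. The file name, dataset name, and dimensions below are hypothetical, and error checking is omitted for brevity.

    #include "mfhdf.h"

    int main(void)
    {
        int32     dims[2] = {10, 20};          /* hypothetical dataset size */
        int32     start[2] = {0, 0};
        int32     data[10][20];
        int32     sd_id, sds_id;
        comp_info c_info;
        int       i, j;

        for (i = 0; i < 10; i++)
            for (j = 0; j < 20; j++)
                data[i][j] = i + j;

        sd_id  = SDstart("compressed.hdf", DFACC_CREATE);
        sds_id = SDcreate(sd_id, "CompressedData", DFNT_INT32, 2, dims);

        /* Request gzip "deflate" compression (level 6) before writing. */
        c_info.deflate.level = 6;
        SDsetcompress(sds_id, COMP_CODE_DEFLATE, &c_info);

        /* Data written after SDsetcompress is stored compressed. */
        SDwritedata(sds_id, start, NULL, dims, (VOIDP)data);

        SDendaccess(sds_id);
        SDend(sd_id);
        return 0;
    }

RLE and adaptive Huffman are requested the same way with COMP_CODE_RLE or COMP_CODE_SKPHUFF in place of COMP_CODE_DEFLATE.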

New in HDF 4.1 is support for "chunking" and "chunking with compression". Data chunking allows an n-dimensional SDS or GR image to be stored as a series of n-dimensional chunks. See the HDF User's Guide for more information.
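A minimal sketch of chunking with compression through the SD interface, using SDsetchunk with the HDF_CHUNK | HDF_COMP flags; the file name, dataset name, and chunk sizes are hypothetical:

    #include "mfhdf.h"

    int main(void)
    {
        int32         dims[2] = {100, 200};    /* hypothetical dataset size */
        int32         sd_id, sds_id;
        HDF_CHUNK_DEF c_def;

        sd_id  = SDstart("chunked.hdf", DFACC_CREATE);
        sds_id = SDcreate(sd_id, "ChunkedData", DFNT_INT32, 2, dims);

        /* Divide the 100 x 200 SDS into 50 x 50 chunks and
           gzip-compress each chunk individually. */
        c_def.comp.chunk_lengths[0]    = 50;
        c_def.comp.chunk_lengths[1]    = 50;
        c_def.comp.comp_type           = COMP_CODE_DEFLATE;
        c_def.comp.cinfo.deflate.level = 6;
        SDsetchunk(sds_id, c_def, HDF_CHUNK | HDF_COMP);

        /* Data written afterwards (e.g. with SDwritedata) is stored
           chunk by chunk, each chunk compressed on its own. */

        SDendaccess(sds_id);
        SDend(sd_id);
        return 0;
    }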

As of HDF 4.2r0, HDF supports SZIP compression. For further information, see SZIP Compression in HDF Products.

NOTE:   Compression and chunking are limited to fixed-size datasets. You cannot compress or chunk a dataset that has unlimited dimensions.