Configuring filters that process data during I/O operations (H5Z)
- H5Zfilter_avail — Determines whether a filter is available.
- H5Zget_filter_info — Retrieves information about a filter.
- H5Zregister — Registers a new filter with the HDF5 library.
- H5Zunregister — Unregisters a filter.
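For illustration, the minimal C sketch below uses two of these calls, H5Zfilter_avail and H5Zget_filter_info, to check whether the deflate (gzip) filter is present in the current build and whether it supports both encoding and decoding; error handling is kept to a minimum.

```c
#include <stdio.h>
#include "hdf5.h"

int main(void)
{
    /* Check whether the gzip (deflate) filter is available in this build. */
    htri_t avail = H5Zfilter_avail(H5Z_FILTER_DEFLATE);
    if (avail <= 0) {
        fprintf(stderr, "gzip filter is not available\n");
        return 1;
    }

    /* Query whether the filter can both encode (write) and decode (read). */
    unsigned int filter_info = 0;
    if (H5Zget_filter_info(H5Z_FILTER_DEFLATE, &filter_info) < 0)
        return 1;
    if (!(filter_info & H5Z_FILTER_CONFIG_ENCODE_ENABLED) ||
        !(filter_info & H5Z_FILTER_CONFIG_DECODE_ENABLED)) {
        fprintf(stderr, "gzip filter cannot both encode and decode\n");
        return 1;
    }

    printf("gzip filter is available for reading and writing\n");
    return 0;
}
```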
HDF5 supports a filter pipeline that provides the capability for standard and customized raw data processing during I/O operations. HDF5 is distributed with a small set of standard filters such as compression (gzip, SZIP, and a shuffling algorithm) and error checking (Fletcher32 checksum). For further flexibility, the library allows a user application to extend the pipeline through the creation and registration of customized filters.
The flexibility of the filter pipeline implementation enables the definition of additional filters by a user application. A filter
- is associated with a dataset when the dataset is created,
- can be used only with chunked data (i.e., datasets stored in the H5D_CHUNKED storage layout), and
- is applied independently to each chunk of the dataset.
The HDF5 library does not support filters for contiguous datasets because of the difficulty of implementing random access for partial I/O. Filters are not supported for compact datasets because they would not produce significant results.
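As a sketch of how standard filters are applied, the following example creates a chunked dataset whose creation property list chains the shuffle, deflate, and Fletcher32 filters; the file name, dataset name, dimensions, chunk size, and compression level are illustrative choices, not requirements.

```c
#include "hdf5.h"

int main(void)
{
    hsize_t dims[2]  = {1024, 1024};   /* dataset dimensions */
    hsize_t chunk[2] = {64, 64};       /* chunk dimensions   */

    hid_t file  = H5Fcreate("filtered.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(2, dims, NULL);

    /* Filters live in the dataset creation property list and require chunking. */
    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 2, chunk);
    H5Pset_shuffle(dcpl);              /* byte shuffle to improve compression */
    H5Pset_deflate(dcpl, 6);           /* gzip at compression level 6         */
    H5Pset_fletcher32(dcpl);           /* checksum for error detection        */

    hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_INT, space,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);

    /* Each 64x64 chunk written to "data" now passes through the pipeline. */
    H5Dclose(dset);
    H5Pclose(dcpl);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}
```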
Filter identifiers for the filters distributed with the HDF5 library are as follows:

| Filter identifier | Description |
|---|---|
| H5Z_FILTER_DEFLATE | The gzip compression, or deflation, filter |
| H5Z_FILTER_SZIP | The SZIP compression filter |
| H5Z_FILTER_NBIT | The N-bit compression filter |
| H5Z_FILTER_SCALEOFFSET | The scale-offset compression filter |
| H5Z_FILTER_SHUFFLE | The shuffle algorithm filter |
| H5Z_FILTER_FLETCHER32 | The Fletcher32 checksum, or error checking, filter |
Custom filters that have been registered with the library will have additional unique identifiers.
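The sketch below outlines custom filter registration with H5Zregister; the filter identifier (256, in the range HDF5 reserves for local testing), the filter name, and the pass-through callback are illustrative placeholders, and a production filter would perform real encoding and decoding in the callback.

```c
#include "hdf5.h"

/* Hypothetical user-defined filter ID; values 256-511 are reserved for
 * local testing, and permanent IDs are assigned by The HDF Group.     */
#define MY_FILTER_ID 256

/* A do-nothing pass-through filter used only to show the callback shape. */
static size_t
my_filter(unsigned int flags, size_t cd_nelmts, const unsigned int cd_values[],
          size_t nbytes, size_t *buf_size, void **buf)
{
    (void)flags; (void)cd_nelmts; (void)cd_values; (void)buf_size; (void)buf;
    /* A real filter would compress on output (flags without H5Z_FLAG_REVERSE)
     * and decompress on input (flags with H5Z_FLAG_REVERSE), returning the
     * new data size, or 0 on failure.                                       */
    return nbytes;
}

int main(void)
{
    const H5Z_class2_t my_filter_class = {
        H5Z_CLASS_T_VERS,              /* H5Z_class_t version           */
        (H5Z_filter_t)MY_FILTER_ID,    /* filter identifier             */
        1, 1,                          /* encoder/decoder present flags */
        "pass-through example",        /* filter name for debugging     */
        NULL,                          /* can_apply callback, optional  */
        NULL,                          /* set_local callback, optional  */
        my_filter                      /* the filter function itself    */
    };

    if (H5Zregister(&my_filter_class) < 0)
        return 1;
    /* The filter can now be added to a dataset creation property list
     * with H5Pset_filter(dcpl, MY_FILTER_ID, ...).                    */
    return 0;
}
```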
See HDF5 Dynamically Loaded Filters for more information on how an HDF5 application can apply a filter that is not registered with the HDF5 library.
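Assuming a dynamically loaded filter plugin is installed on the plugin search path (for example, as set by the HDF5_PLUGIN_PATH environment variable), an application can request it by numeric identifier alone with H5Pset_filter; the identifier and client-data values below are hypothetical and would come from the plugin's own documentation.

```c
#include "hdf5.h"

/* Hypothetical third-party filter ID; the real value comes from the
 * plugin's documentation or The HDF Group's registered-filter list. */
#define PLUGIN_FILTER_ID ((H5Z_filter_t)32000)

int main(void)
{
    hsize_t dims[2]  = {1024, 1024};
    hsize_t chunk[2] = {64, 64};

    hid_t file  = H5Fcreate("plugin.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(2, dims, NULL);
    hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 2, chunk);

    /* Request the filter by identifier only; the library looks for a matching
     * filter plugin on its plugin search path when the filter is needed.
     * H5Z_FLAG_MANDATORY makes the operation fail if the filter cannot be applied. */
    unsigned int cd_values[1] = {4};   /* filter-specific parameters (assumed) */
    H5Pset_filter(dcpl, PLUGIN_FILTER_ID, H5Z_FLAG_MANDATORY, 1, cd_values);

    hid_t dset = H5Dcreate2(file, "plugin_data", H5T_NATIVE_FLOAT, space,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);

    H5Dclose(dset);
    H5Pclose(dcpl);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}
```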