Reads a raw data chunk directly from a dataset in a file into a buffer

H5D_READ_CHUNK replaces the now-deprecated H5DO_READ_CHUNK function, which was located in the high-level optimizations library. The parameters and behavior are identical to the original.


H5D_READ_CHUNK (dset_id, dxpl_id, offset, filters, buf)


herr_t H5Dread_chunk( hid_t dset_id, hid_t dxpl_id, const hsize_t *offset, uint32_t *filters, void *buf )

hid_t dset_id    IN: Identifier for the dataset to be read
hid_t dxpl_id    IN: Transfer property list identifier for this I/O operation
const hsize_t *offset    IN: Logical position of the chunk’s first element in the dataspace
uint32_t *filters    OUT: Mask identifying the filters used when the chunk was written
void *buf    OUT: Buffer containing the chunk read from the dataset


H5D_READ_CHUNK reads a raw data chunk, as specified by its logical offset offset, from the chunked dataset dset_id in the file into the application memory buffer buf. The data in buf is read directly from the file, bypassing the library’s internal data transfer pipeline, including filters.

dxpl_id is a data transfer property list identifier.

offset is an array specifying the logical position of the first element of the chunk in the dataset’s dataspace. The length of the offset array must equal the number of dimensions, or rank, of the dataspace. The values in offset must not exceed the dimension limits and must specify a point that falls on a dataset chunk boundary.

The mask filters indicates which filters were used when the chunk was written. A zero value indicates that all enabled filters were applied to the chunk. A filter was skipped if the bit corresponding to the filter’s position in the pipeline (0 ≤ position < 32) is turned on.

buf is the memory buffer containing the chunk read from the dataset in the file.

Exercise caution when using H5D_READ_CHUNK and H5D_WRITE_CHUNK, as they read and write data chunks directly in a file, bypassing hyperslab selection, the conversion of data from one datatype to another, and the filter pipeline. Developers should have experience with these processes before using these functions. Please see Using the Direct Chunk Write Function for more information.

Also note that H5D_READ_CHUNK and H5D_WRITE_CHUNK are not supported in parallel HDF5 and do not support variable-length datatypes.



Returns a non-negative value if successful; otherwise returns a negative value.



Release    Change
1.10.3    Moved from the HDF5 High Level Optimizations library to the core library
1.10.2, 1.8.19    C function introduced as H5DOread_chunk in the HDF5 High Level Optimizations library

--- Last Modified: June 03, 2019 | 03:31 PM