








Writes a raw data chunk from a buffer directly to a dataset in a file

H5D_WRITE_CHUNK replaces the now-deprecated H5DO_WRITE_CHUNK function, which was located in the HDF5 High Level Optimizations library. The parameters and behavior are identical to the original.


H5D_WRITE_CHUNK (dset_id, dxpl_id, filters, offset, data_size, buf)


herr_t H5Dwrite_chunk( hid_t dset_id, hid_t dxpl_id, uint32_t filters, const hsize_t *offset, size_t data_size, const void *buf )

hid_t dset_id            IN: Identifier for the dataset to write to
hid_t dxpl_id            IN: Transfer property list identifier for this I/O operation
uint32_t filters         IN: Mask for identifying the filters in use
const hsize_t *offset    IN: Logical position of the chunk’s first element in the dataspace
size_t data_size         IN: Size of the actual data to be written in bytes
const void *buf          IN: Buffer containing data to be written to the chunk


H5D_WRITE_CHUNK writes a raw data chunk as specified by its logical offset in a chunked dataset dset_id from the application memory buffer buf to the dataset in the file. Typically, the data in buf is preprocessed in memory by a custom transformation, such as compression. The chunk will bypass the library’s internal data transfer pipeline, including filters, and will be written directly to the file. Only one chunk can be written with this function.

dxpl_id is a data transfer property list identifier.

filters is a mask providing a record of which filters are used with the chunk. The default value of the mask is zero (0), indicating that all enabled filters are applied. A filter is skipped if the bit corresponding to the filter’s position in the pipeline (0 ≤ position < 32) is turned on. This mask is saved with the chunk in the file.
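The bit semantics of the mask can be sketched with a small helper; the function name below is illustrative only and is not part of the HDF5 API:

```c
#include <stdint.h>

/* Build a filter mask for H5Dwrite_chunk that marks the filters at the
 * given pipeline positions (0 <= position < 32) as skipped. A mask of 0
 * means all enabled filters were applied to the chunk data. */
static uint32_t skip_filters(const unsigned *positions, int n)
{
    uint32_t mask = 0;
    for (int i = 0; i < n; i++)
        mask |= (uint32_t)1 << positions[i];   /* set bit = filter skipped */
    return mask;
}
```

For example, if the application has already applied a compression filter sitting at pipeline position 0 itself, it would pass a mask of 0, whereas a mask with bit 0 set records that the first filter was skipped for this chunk.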

offset is an array specifying the logical position of the first element of the chunk in the dataset’s dataspace. The length of the offset array must equal the number of dimensions, or rank, of the dataspace. The values in offset must not exceed the dimension limits and must specify a point that falls on a dataset chunk boundary.
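The chunk-boundary requirement on offset can be checked before calling the function; this helper is a sketch (not an HDF5 API call), using uint64_t as a stand-in for hsize_t:

```c
#include <stdint.h>

/* Returns 1 if every coordinate of offset is a multiple of the
 * corresponding chunk dimension, i.e. the point lies on a chunk
 * boundary as H5Dwrite_chunk requires; returns 0 otherwise. */
static int offset_on_chunk_boundary(const uint64_t *offset,
                                    const uint64_t *chunk_dims,
                                    int rank)
{
    for (int i = 0; i < rank; i++)
        if (offset[i] % chunk_dims[i] != 0)
            return 0;
    return 1;
}
```

For a rank-2 dataset with chunk dimensions {10, 20}, an offset of {0, 40} is valid, while {5, 20} is not.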

data_size is the size in bytes of the chunk, representing the number of bytes to be read from the buffer buf. If the data chunk has been precompressed, data_size should be the size of the compressed data.

buf is the memory buffer containing data to be written to the chunk in the file.
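A minimal usage sketch, assuming "file.h5" already contains a chunked dataset named "dset" and that the application has pre-compressed one chunk of data itself; the file and dataset names are placeholders, and error checking is reduced for brevity:

```c
#include <stdint.h>
#include <stdlib.h>
#include "hdf5.h"

int main(void)
{
    hid_t file_id = H5Fopen("file.h5", H5F_ACC_RDWR, H5P_DEFAULT);
    hid_t dset_id = H5Dopen2(file_id, "dset", H5P_DEFAULT);

    /* Buffer holding a chunk already compressed by the application;
     * data_size is the compressed size, not the uncompressed size. */
    size_t data_size = 1024;               /* placeholder size */
    void  *buf       = malloc(data_size);

    hsize_t  offset[2] = {0, 0};  /* must lie on a chunk boundary */
    uint32_t filters   = 0;       /* 0: all enabled filters were applied */

    /* Write the chunk directly, bypassing the filter pipeline. */
    if (H5Dwrite_chunk(dset_id, H5P_DEFAULT, filters,
                       offset, data_size, buf) < 0) {
        /* handle error */
    }

    free(buf);
    H5Dclose(dset_id);
    H5Fclose(file_id);
    return 0;
}
```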

Exercise caution when using H5D_READ_CHUNK and H5D_WRITE_CHUNK, as they read and write data chunks directly in a file. H5D_WRITE_CHUNK bypasses hyperslab selection, the conversion of data from one datatype to another, and the filter pipeline to write the chunk. Developers should have experience with these processes before using this function. Please see Using the Direct Chunk Write Function for more information.

Also note that H5D_READ_CHUNK and H5D_WRITE_CHUNK are not supported in parallel HDF5 and do not support variable-length datatypes.



Returns a non-negative value if successful; otherwise returns a negative value.



Release    Change
1.10.3     Function moved from the HDF5 High Level Optimizations library to the core library

--- Last Modified: June 16, 2020 | 01:41 PM