H5P_GET_MPIO_NO_COLLECTIVE_CAUSE
Retrieves local and global causes that broke collective I/O on the last parallel I/O call
Procedure:
H5P_GET_MPIO_NO_COLLECTIVE_CAUSE( dxpl_id, local_no_collective_cause, global_no_collective_cause )
Signature:
herr_t H5Pget_mpio_no_collective_cause(
hid_t dxpl_id,
uint32_t *local_no_collective_cause,
uint32_t *global_no_collective_cause)
Parameters:
hid_t dxpl_id | IN: Dataset transfer property list identifier |
uint32_t *local_no_collective_cause | OUT: An enumerated set value indicating the causes that prevented collective I/O in the local process |
uint32_t *global_no_collective_cause | OUT: An enumerated set value indicating the causes across all processes that prevented collective I/O |
Motivation:
A user can request collective I/O via a data transfer property list (DXPL) that has been suitably modified with H5P_SET_DXPL_MPIO. However, there are conditions that can cause HDF5 to forgo collective I/O and perform independent I/O instead. Such causes can differ across the processes of a parallel application. This function allows the user to determine what caused the HDF5 library to skip collective I/O locally, that is, in the local process, and globally, across all processes.
Description:
H5P_GET_MPIO_NO_COLLECTIVE_CAUSE serves two purposes. It can be used to determine whether collective I/O was used for the preceding parallel I/O call. If collective I/O was not used, the function retrieves the local and global causes that broke collective I/O on that call. The properties retrieved by this function are set before I/O takes place and are retained even when I/O fails.
Valid values returned in local_no_collective_cause and global_no_collective_cause are listed below. If there are multiple causes, the returned value is the bitwise OR of the relevant causes; the numbers in the center column are the bitmask values, shown in binary:
H5D_MPIO_COLLECTIVE | 00000000 | Collective I/O was performed successfully (Default) |
H5D_MPIO_SET_INDEPENDENT | 00000001 | Collective I/O was not performed because independent I/O was requested |
H5D_MPIO_DATATYPE_CONVERSION | 00000010 | Collective I/O was not performed because datatype conversions were required |
H5D_MPIO_DATA_TRANSFORMS | 00000100 | Collective I/O was not performed because data transforms needed to be applied |
H5D_MPIO_SET_MPIPOSIX | 00001000 | Collective I/O was not performed because the selected file driver was MPI-POSIX |
H5D_MPIO_NOT_SIMPLE_OR_SCALAR_DATASPACES | 00010000 | Collective I/O was not performed because one of the dataspaces was neither simple nor scalar |
H5D_MPIO_POINT_SELECTIONS | 00100000 | Collective I/O was not performed because there were point selections in one of the dataspaces |
H5D_MPIO_NOT_CONTIGUOUS_OR_CHUNKED_DATASET | 01000000 | Collective I/O was not performed because the dataset was neither contiguous nor chunked |
H5D_MPIO_FILTERS | 10000000 | Collective I/O was not performed because filters needed to be applied |
The above name/value pairs are members of HDF5’s H5D_mpio_no_collective_cause_t enumeration.
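Because the causes are bitmask values, a returned cause should be tested bit by bit rather than compared for equality, except against H5D_MPIO_COLLECTIVE, which is zero. A minimal decoding sketch in C, using the enumeration members from the table above (whether every member is present may depend on the HDF5 release):

#include <stdio.h>
#include <stdint.h>
#include <hdf5.h>

/* Report every cause encoded in a bitmask returned by
 * H5Pget_mpio_no_collective_cause. Multiple bits may be set. */
static void print_no_collective_causes(uint32_t cause)
{
    if (cause == H5D_MPIO_COLLECTIVE) {   /* value 0: nothing broke */
        printf("collective I/O was performed\n");
        return;
    }
    if (cause & H5D_MPIO_SET_INDEPENDENT)
        printf("independent I/O was requested\n");
    if (cause & H5D_MPIO_DATATYPE_CONVERSION)
        printf("a datatype conversion was required\n");
    if (cause & H5D_MPIO_DATA_TRANSFORMS)
        printf("a data transform had to be applied\n");
    if (cause & H5D_MPIO_SET_MPIPOSIX)    /* may be absent in newer releases */
        printf("the MPI-POSIX file driver was selected\n");
    if (cause & H5D_MPIO_NOT_SIMPLE_OR_SCALAR_DATASPACES)
        printf("a dataspace was neither simple nor scalar\n");
    if (cause & H5D_MPIO_POINT_SELECTIONS)
        printf("a point selection was used\n");
    if (cause & H5D_MPIO_NOT_CONTIGUOUS_OR_CHUNKED_DATASET)
        printf("the dataset was neither contiguous nor chunked\n");
    if (cause & H5D_MPIO_FILTERS)
        printf("filters had to be applied\n");
}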
Returns:
Returns a non-negative value if successful; otherwise returns a negative value.
Example:
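A minimal sketch of the intended usage pattern, assuming a parallel HDF5 program in which dset_id, memspace, and filespace are valid identifiers created elsewhere; those names, and the helper write_and_check itself, are illustrative:

#include <stdio.h>
#include <stdint.h>
#include <hdf5.h>

/* Request collective I/O, write, then ask whether collective I/O
 * actually took place and, if not, what broke it. */
herr_t write_and_check(hid_t dset_id, hid_t memspace, hid_t filespace,
                       const int *buf)
{
    uint32_t local_cause = 0, global_cause = 0;

    /* Request collective I/O on a dataset transfer property list. */
    hid_t dxpl_id = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl_id, H5FD_MPIO_COLLECTIVE);

    /* The cause properties are set on the DXPL during this call
     * and are retained even if the write fails. */
    herr_t status = H5Dwrite(dset_id, H5T_NATIVE_INT, memspace, filespace,
                             dxpl_id, buf);

    if (H5Pget_mpio_no_collective_cause(dxpl_id, &local_cause,
                                        &global_cause) >= 0) {
        if (global_cause == H5D_MPIO_COLLECTIVE)
            printf("collective I/O was performed on all processes\n");
        else
            printf("collective I/O broken: local cause 0x%x, global cause 0x%x\n",
                   (unsigned)local_cause, (unsigned)global_cause);
    }

    H5Pclose(dxpl_id);
    return status;
}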
History:
Release | Change |
---|---|
1.8.10 | C function introduced in this release. |