Sets the MPI atomicity mode


H5F_SET_MPI_ATOMICITY (file_id, flag)


herr_t H5Fset_mpi_atomicity( hid_t file_id, hbool_t flag )

SUBROUTINE h5fset_mpi_atomicity_f(file_id, flag, hdferr) 
  INTEGER(HID_T), INTENT(IN) :: file_id    ! File identifier
  LOGICAL, INTENT(IN) :: flag              ! Atomicity flag
  INTEGER, INTENT(OUT) :: hdferr           ! Error code
                                           ! 0 on success; -1 on failure
END SUBROUTINE h5fset_mpi_atomicity_f

hid_t file_id      IN: HDF5 file identifier
hbool_t flag       IN: Logical flag for atomicity setting
                   Valid values are:
                       1   Sets MPI file access to atomic mode.
                       0   Sets MPI file access to nonatomic mode.



H5F_SET_MPI_ATOMICITY is applicable only in parallel environments using MPI I/O. The function is one of the tools used to ensure sequential consistency. This means that a set of operations will behave as though they were performed in a serial order consistent with the program order.

H5F_SET_MPI_ATOMICITY sets MPI consistency semantics for data access to the file, file_id.

If flag is set to 1, all file access operations will appear atomic, guaranteeing sequential consistency. If flag is set to 0, enforcement of atomic file access will be turned off.

H5F_SET_MPI_ATOMICITY is a collective function and all participating processes must pass the same values for file_id and flag.

This function is available only when the HDF5 library is configured with parallel support (--enable-parallel). It is useful only when used with the H5FD_MPIO driver (see H5P_SET_FAPL_MPIO).
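As a sketch of the setup this implies, the following opens a file with the H5FD_MPIO driver and then enables atomicity collectively. It assumes a parallel HDF5 build; the file name "data.h5" is illustrative:

```c
/* Sketch: enable MPI atomic file access on a parallel HDF5 file.
 * Assumes HDF5 was configured with --enable-parallel.
 * Compile with h5pcc and launch under mpiexec. */
#include <hdf5.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* File access property list selecting the MPI-IO driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

    hid_t file = H5Fcreate("data.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* Collective call: every rank must pass the same file_id and flag. */
    if (H5Fset_mpi_atomicity(file, 1) < 0) {
        /* The platform's MPI may not support MPI_File_set_atomicity. */
    }

    H5Fclose(file);
    H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}
```

Note that checking the return value matters here: on platforms whose MPI implementation does not support MPI_File_set_atomicity, the call fails rather than silently doing nothing.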

H5F_SET_MPI_ATOMICITY calls MPI_File_set_atomicity underneath and is not supported if the execution platform does not support MPI_File_set_atomicity. When it is supported and used, the performance of data access operations may drop significantly.

In certain scenarios, even when MPI_File_set_atomicity is supported, setting atomicity with H5F_SET_MPI_ATOMICITY and flag set to 1 does not always yield strictly atomic updates. For example, some H5D_WRITE calls translate to multiple MPI_File_write_at calls. This happens in all cases where the high-level file access routine translates to multiple lower level file access routines. The following scenarios will raise this issue:

  • Non-contiguous file access using independent I/O
  • Partial collective I/O using chunked access
  • Collective I/O using filters or when data conversion is required

This issue arises because MPI atomicity applies to MPI file access operations rather than to HDF5 access operations, while the user is normally seeking atomicity at the HDF5 level. To accomplish this, in addition to calling H5F_SET_MPI_ATOMICITY, the application must place a barrier after a write, H5D_WRITE, and before the next read, H5D_READ. The barrier guarantees that all underlying write operations complete before any read operation starts. This provides the additional ordering semantics that will normally produce the desired behavior.
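The write-barrier-read pattern described above can be sketched as the fragment below. It is not self-contained: the identifiers (file_id, dset, memtype, memspace, filespace, and the buffers) are assumed to have been set up elsewhere in a parallel HDF5 program.

```c
/* Fragment: ordering HDF5 writes before reads across ranks.
 * file_id, dset, memtype, memspace, filespace, write_buf, read_buf
 * are assumed to exist; this only illustrates the call sequence. */
H5Fset_mpi_atomicity(file_id, 1);       /* collective; same args on all ranks */

H5Dwrite(dset, memtype, memspace, filespace,
         H5P_DEFAULT, write_buf);       /* may translate to several
                                           MPI_File_write_at calls */

MPI_Barrier(MPI_COMM_WORLD);            /* all underlying writes finish
                                           before any rank proceeds */

H5Dread(dset, memtype, memspace, filespace,
        H5P_DEFAULT, read_buf);
```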

Returns a non-negative value if successful; otherwise returns a negative value.


Release    Change
1.8.9      C function and Fortran subroutine introduced in this release.

--- Last Modified: July 22, 2020 | 03:13 PM