MPI_File_set_atomicity underneath and is not supported if the execution platform does not support MPI_File_set_atomicity. When it is supported and used, the performance of data access operations may drop significantly.
In certain scenarios, even when MPI_File_set_atomicity is supported, calling H5F_SET_MPI_ATOMICITY with flag set to 1 does not always yield strictly atomic updates. For example, a single H5D_WRITE call may translate to multiple MPI_File_write_at calls; this happens whenever the high-level file access routine translates to multiple lower-level file access routines. The following scenarios raise this issue:
- Non-contiguous file access using independent I/O
- Partial collective I/O using chunked access
- Collective I/O using filters or when data conversion is required
This issue arises because MPI atomicity is a property of MPI file access operations rather than of HDF5 access operations, while the user is normally seeking atomicity at the HDF5 level. To accomplish this, the application must place a barrier after a write (H5D_WRITE) and before the next read (H5D_READ), in addition to calling H5F_SET_MPI_ATOMICITY. The barrier guarantees that all underlying write operations have executed atomically before the read operation starts. This provides the additional ordering semantics that normally produce the desired behavior.
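
The pattern can be illustrated with a minimal C sketch (not taken from the HDF5 documentation): the file name data.h5 and the 4-element integer dataset /dset are placeholder assumptions. Every rank opens the file with the MPI-IO driver, enables MPI atomicity via H5Fset_mpi_atomicity (the C routine behind H5F_SET_MPI_ATOMICITY), one rank writes, and the barrier sits between the write and the read:

```c
#include <mpi.h>
#include <hdf5.h>

int main(int argc, char **argv)
{
    int rank;
    int buf[4] = {0, 1, 2, 3};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Open the (pre-existing) file with the MPI-IO file driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fopen("data.h5", H5F_ACC_RDWR, fapl);

    /* Request MPI atomicity; the call fails on platforms where
     * MPI_File_set_atomicity is not supported. */
    if (H5Fset_mpi_atomicity(file, 1) < 0)
        MPI_Abort(MPI_COMM_WORLD, 1);

    /* "/dset" is a placeholder: assumed to be a 4-element int dataset. */
    hid_t dset = H5Dopen2(file, "/dset", H5P_DEFAULT);

    /* One rank updates the dataset (independent I/O) ... */
    if (rank == 0)
        H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

    /* ... and the barrier guarantees that every underlying MPI write
     * has completed before any rank starts its read. */
    MPI_Barrier(MPI_COMM_WORLD);

    H5Dread(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

    H5Dclose(dset);
    H5Fclose(file);
    H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}
```

Compile with a parallel HDF5 wrapper (e.g. h5pcc) and run under mpiexec. Without the MPI_Barrier, a rank could issue its H5D_READ while another rank's multi-operation write is still in flight, which is exactly the non-atomic behavior described above.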