
H5P_SET_DXPL_MPIO_CHUNK_OPT

Sets a flag specifying linked-chunk I/O or multi-chunk I/O

Procedure:

H5P_SET_DXPL_MPIO_CHUNK_OPT ( dxpl_id, opt_mode )

Signature:

herr_t H5Pset_dxpl_mpio_chunk_opt 
      (hid_t dxpl_id, 
      H5FD_mpio_chunk_opt_t opt_mode)

Parameters:

hid_t dxpl_id    IN: Data transfer property list identifier
H5FD_mpio_chunk_opt_t opt_mode    IN: Optimization flag specifying linked-chunk I/O or multi-chunk I/O

Description:

H5P_SET_DXPL_MPIO_CHUNK_OPT specifies whether I/O is to be performed as linked-chunk I/O or as multi-chunk I/O. This function overrides the HDF5 library's internal algorithm for determining which mechanism to use.

When an application uses collective I/O with chunked storage, the HDF5 library normally uses an internal algorithm to determine whether that I/O activity should be conducted as one linked-chunk I/O or as multi-chunk I/O. H5P_SET_DXPL_MPIO_CHUNK_OPT is provided so that an application can override the library's algorithm in circumstances where the library might lack the information needed to make an optimal decision.

H5P_SET_DXPL_MPIO_CHUNK_OPT works by setting one of the following flags in the parameter opt_mode:

H5FD_MPIO_CHUNK_ONE_IO      Do one linked-chunk I/O
H5FD_MPIO_CHUNK_MULTI_IO    Do multi-chunk I/O

This function works by setting a corresponding property in the dataset transfer property list dxpl_id.

The library performs I/O in the specified manner unless it determines that the low-level MPI-IO package does not support the requested behavior; in such cases, the HDF5 library falls back internally to independent I/O.

Use of this function is optional.
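The call sequence can be sketched as follows. This is a minimal illustration, assuming an MPI-enabled (parallel) HDF5 build; the handles dset_id, memtype, memspace, and filespace, the buffer buf, and the helper name write_linked_chunk are hypothetical stand-ins for objects the application would have created elsewhere.

```c
#include "hdf5.h"

/* Sketch: request collective I/O on a dataset transfer property list,
 * then override the library's internal heuristic to force one
 * linked-chunk I/O operation for the write. */
herr_t write_linked_chunk(hid_t dset_id, hid_t memtype,
                          hid_t memspace, hid_t filespace,
                          const void *buf)
{
    herr_t status;

    /* Create a dataset transfer property list */
    hid_t dxpl_id = H5Pcreate(H5P_DATASET_XFER);
    if (dxpl_id < 0)
        return -1;

    /* Request collective MPI-IO for this transfer */
    status = H5Pset_dxpl_mpio(dxpl_id, H5FD_MPIO_COLLECTIVE);

    /* Override the internal algorithm: do one linked-chunk I/O
     * (use H5FD_MPIO_CHUNK_MULTI_IO to force multi-chunk I/O instead) */
    if (status >= 0)
        status = H5Pset_dxpl_mpio_chunk_opt(dxpl_id, H5FD_MPIO_CHUNK_ONE_IO);

    /* Perform the write with the configured transfer property list */
    if (status >= 0)
        status = H5Dwrite(dset_id, memtype, memspace, filespace, dxpl_id, buf);

    H5Pclose(dxpl_id);
    return status;
}
```

Because this property lives on the transfer property list rather than the dataset, the override applies only to I/O calls that pass this dxpl_id.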

Returns:

Returns a non-negative value if successful. Otherwise returns a negative value.

Example:

From testpar/t_coll_chunk.c, lines 739-787 (HDF5 1.10/master, HDFFV/hdf5):
  /* set up the collective transfer property list */
  xfer_plist = H5Pcreate(H5P_DATASET_XFER);
  VRFY((xfer_plist >= 0), "");

  status = H5Pset_dxpl_mpio(xfer_plist, H5FD_MPIO_COLLECTIVE);
  VRFY((status >= 0), "MPIO collective transfer property succeeded");
  if(dxfer_coll_type == DXFER_INDEPENDENT_IO) {
     status = H5Pset_dxpl_mpio_collective_opt(xfer_plist, H5FD_MPIO_INDIVIDUAL_IO);
     VRFY((status >= 0), "set independent IO collectively succeeded");
  }

  switch(api_option) {
     case API_LINK_HARD:
        status = H5Pset_dxpl_mpio_chunk_opt(xfer_plist, H5FD_MPIO_CHUNK_ONE_IO);
        VRFY((status >= 0), "collective chunk optimization succeeded");
        break;

     case API_MULTI_HARD:
        status = H5Pset_dxpl_mpio_chunk_opt(xfer_plist, H5FD_MPIO_CHUNK_MULTI_IO);
        VRFY((status >= 0), "collective chunk optimization succeeded");
        break;

     case API_LINK_TRUE:
        status = H5Pset_dxpl_mpio_chunk_opt_num(xfer_plist, 2);
        VRFY((status >= 0), "collective chunk optimization set chunk number succeeded");
        break;

     case API_LINK_FALSE:
        status = H5Pset_dxpl_mpio_chunk_opt_num(xfer_plist, 6);
        VRFY((status >= 0), "collective chunk optimization set chunk number succeeded");
        break;

     case API_MULTI_COLL:
        status = H5Pset_dxpl_mpio_chunk_opt_num(xfer_plist, 8); /* make sure it is using multi-chunk IO */
        VRFY((status >= 0), "collective chunk optimization set chunk number succeeded");
        status = H5Pset_dxpl_mpio_chunk_opt_ratio(xfer_plist, 50);
        VRFY((status >= 0), "collective chunk optimization set chunk ratio succeeded");
        break;

     case API_MULTI_IND:
        status = H5Pset_dxpl_mpio_chunk_opt_num(xfer_plist, 8); /* make sure it is using multi-chunk IO */
        VRFY((status >= 0), "collective chunk optimization set chunk number succeeded");
        status = H5Pset_dxpl_mpio_chunk_opt_ratio(xfer_plist, 100);
        VRFY((status >= 0), "collective chunk optimization set chunk ratio succeeded");
        break;

     default:
        ;
  }

History:


--- Last Modified: August 09, 2019 | 02:02 PM