Page 1 of 9. Showing 83 results.

  1. Why are my file sizes different if I open an HDF5 file more than once rather than writing the data out in one call?

    The size discrepancies can be related to the way small metadata and raw data get allocated in the file. Currently, all metadata below a certain threshold size … to be wasted in the file because the library doesn't currently remember the free space in the file from one file open to the next. The threshold block size
    HDF Knowledge Base, Jun 29, 2017
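
    These thresholds are set on a file access property list. Below is a minimal sketch, assuming the documented 2048-byte library defaults; the file name is illustrative.

        #include <hdf5.h>

        int main(void)
        {
            hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

            /* Raw data smaller than this threshold is packed into shared
               "small data" blocks instead of getting its own allocation. */
            H5Pset_small_data_block_size(fapl, 2048);

            /* Metadata is aggregated into blocks of at least this size. */
            H5Pset_meta_block_size(fapl, 2048);

            hid_t file = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
            H5Fclose(file);
            H5Pclose(fapl);
            return 0;
        }
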
  2. howison_hdf5_lustre_iasds2010.pdf

    contiguous access, but that transfer sizes varied by several orders of magnitude (from several kilobytes to hundreds of megabytes) [11]. On parallel file systems like Lustre that use server-side file extent locks, varying transfer sizes often lead to accesses that are poorly distributed and misaligned relative to the lock
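
    The misalignment described here is commonly mitigated by aligning HDF5's file allocations to the stripe (lock) boundaries. A minimal sketch using H5Pset_alignment, assuming a hypothetical 1 MiB Lustre stripe size:

        #include <hdf5.h>

        int main(void)
        {
            hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

            /* Align any allocation of at least 1 MiB on a 1 MiB boundary
               so large transfers match the stripe (lock) boundaries. */
            hsize_t stripe = 1048576;
            H5Pset_alignment(fapl, stripe, stripe);

            hid_t file = H5Fcreate("aligned.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
            H5Fclose(file);
            H5Pclose(fapl);
            return 0;
        }
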
  3. Direct Chunk Read and Write Questions

    , the HDF5 file size bloats (around 2048 bytes more). Why not limit this size to the size of the chunk (8 bytes = 2 x int32)? Asking this because the dataset … that their chunk is that size. Filters can increase the size of the stored data (consider a checksum filter), so there's no obvious size limit that we could enforce
    HDF Knowledge Base, Jun 17, 2020
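
    For context, a sketch of the direct chunk write call under discussion, using a chunk of two int32 values (the 8-byte case from the question); file and dataset names are illustrative.

        #include <hdf5.h>
        #include <stdint.h>

        int main(void)
        {
            hsize_t dims[1]  = {2};
            hsize_t chunk[1] = {2};

            hid_t space = H5Screate_simple(1, dims, NULL);
            hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
            H5Pset_chunk(dcpl, 1, chunk);

            hid_t file = H5Fcreate("chunked.h5", H5F_ACC_TRUNC, H5P_DEFAULT,
                                   H5P_DEFAULT);
            hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_INT32, space,
                                    H5P_DEFAULT, dcpl, H5P_DEFAULT);

            /* Write one 8-byte chunk (2 x int32) directly, bypassing the
               filter pipeline and any datatype conversion. */
            int32_t buf[2]    = {1, 2};
            hsize_t offset[1] = {0};
            H5Dwrite_chunk(dset, H5P_DEFAULT, 0 /* filter mask */, offset,
                           sizeof(buf), buf);

            H5Dclose(dset);
            H5Pclose(dcpl);
            H5Sclose(space);
            H5Fclose(file);
            return 0;
        }
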
  4. Information on the metadata cache

    The metadata cache exists to cache metadata for an entire HDF5 file, and exists as long as the file is open. As the working set size for HDF5 files varies … maximum size. To increase the likelihood that this will not happen, the cache allows the user to specify a minimum clean size, which is a minimum total size of all
    HDF Knowledge Base, Aug 14, 2017
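
    A sketch of adjusting the cache through a file access property list; the 8 MiB size and 0.3 fraction are illustrative, and the "minimum clean size" appears in the configuration struct as the min_clean_fraction field.

        #include <hdf5.h>

        int main(void)
        {
            hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

            /* Fetch the current metadata cache configuration; the version
               field must be set before the call. */
            H5AC_cache_config_t config;
            config.version = H5AC__CURR_CACHE_CONFIG_VERSION;
            H5Pget_mdc_config(fapl, &config);

            /* Start the cache at 8 MiB and keep at least 30% of it clean
               so evictions rarely force small metadata writes. */
            config.set_initial_size   = 1;
            config.initial_size       = 8 * 1024 * 1024;
            config.min_clean_fraction = 0.3;
            H5Pset_mdc_config(fapl, &config);

            hid_t file = H5Fcreate("mdc.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
            H5Fclose(file);
            H5Pclose(fapl);
            return 0;
        }
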
  5. How do you work with a file created with the file family feature?

    that it was created with the file family feature and what the file member size is. Also, you must have written enough data to the file when you created it to fill up the first file member. Otherwise, HDF5 will reset the file member size to the size of the data that was written. Here is an example of how you would read a file
    HDF Knowledge Base, Dec 13, 2017
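
    A sketch of reopening such a file, assuming it was created with a 1 GiB member size and member names like family_0.h5, family_1.h5, and so on; both are assumptions for illustration.

        #include <hdf5.h>

        int main(void)
        {
            hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

            /* The member size must match the one used at creation time;
               1 GiB here is an assumption for illustration. */
            hsize_t memb_size = 1073741824;
            H5Pset_fapl_family(fapl, memb_size, H5P_DEFAULT);

            /* The %d in the name is replaced with the member index. */
            hid_t file = H5Fopen("family_%d.h5", H5F_ACC_RDONLY, fapl);

            H5Fclose(file);
            H5Pclose(fapl);
            return 0;
        }
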
  6. How to improve performance with Parallel HDF5

    to be able to set the chunk dimensions. Metadata cache: it is usually a good idea to increase the metadata cache size if possible to avoid small writes to the file … for collective buffering file access. Target nodes access data in chunks of this size. The chunks are distributed among target nodes in a round-robin (CYCLIC
    HDF Knowledge Base, Mar 26, 2020
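
    A sketch of passing collective-buffering settings down to the MPI-IO layer; the hint names are standard ROMIO hints, and the values (16 MiB buffer, four aggregator nodes) are illustrative.

        #include <hdf5.h>
        #include <mpi.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);

            /* Collective buffering: buffer size per aggregator and the
               number of aggregator ("target") nodes. */
            MPI_Info info;
            MPI_Info_create(&info);
            MPI_Info_set(info, "cb_buffer_size", "16777216");
            MPI_Info_set(info, "cb_nodes", "4");

            hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
            H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, info);

            hid_t file = H5Fcreate("parallel.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

            H5Fclose(file);
            H5Pclose(fapl);
            MPI_Info_free(&info);
            MPI_Finalize();
            return 0;
        }
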
  7. How to reclaim unused space in an HDF5 file

    Question: We have a workflow where occasionally a dataset must be removed from an HDF5 file. When this happens, the file does not decrease in size. Is there a call to reclaim the unused space or remove it from within an application? HDF5 1.10: There are file space management strategies to manage the unused space
    HDF Knowledge Base, Oct 15, 2019
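
    A sketch of the HDF5 1.10 route: set a free-space strategy on the file creation property list so freed space is tracked and persisted across opens. For an existing file, the h5repack tool rewrites it without the unused space.

        #include <hdf5.h>

        int main(void)
        {
            /* Track free space with the free-space managers and persist
               that information across file opens (HDF5 1.10+). */
            hid_t fcpl = H5Pcreate(H5P_FILE_CREATE);
            H5Pset_file_space_strategy(fcpl, H5F_FSPACE_STRATEGY_FSM_AGGR,
                                       1 /* persist */, 1 /* threshold, bytes */);

            hid_t file = H5Fcreate("managed.h5", H5F_ACC_TRUNC, fcpl, H5P_DEFAULT);

            H5Fclose(file);
            H5Pclose(fcpl);
            return 0;
        }
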
  8. Closing my HDF5 file, I get a segfault with an error "MPI_FILE_SET_SIZE(76): Inconsistent arguments to collective routine"

    ) that are not the same on all processes. Mistakes like that result in a different file size on each process, and hence MPI_File_set_size fails with different arguments … This indicates that you have created datasets or groups or attributes in the file "uncollectively", meaning either not all processes called the create
    HDF Knowledge Base, Jul 13, 2017
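
    A sketch of the correct pattern: every rank makes the create call with identical arguments. File and dataset names are illustrative.

        #include <hdf5.h>
        #include <mpi.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);

            hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
            H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
            hid_t file = H5Fcreate("coll.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

            /* Object creation is collective: every rank must call this with
               identical arguments, or the ranks disagree on the file's size
               and MPI_File_set_size fails at close. */
            hsize_t dims[1] = {100};
            hid_t space = H5Screate_simple(1, dims, NULL);
            hid_t dset  = H5Dcreate2(file, "data", H5T_NATIVE_DOUBLE, space,
                                     H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

            H5Dclose(dset);
            H5Sclose(space);
            H5Fclose(file);
            H5Pclose(fapl);
            MPI_Finalize();
            return 0;
        }
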
  9. Should new products be developed in HDF (HDF4)?

    for Fortran). HDF5 does not have a file size limitation. In theory, HDF4 has a 2 GB limit on file sizes. In reality, the size of the files you can store in HDF4
    HDF Knowledge Base, Jul 11, 2017