Why must attributes be written collectively?
Attributes are meant to be very small. Both the attribute information and the data it holds are considered metadata on an object, and for that reason they are kept in the metadata cache. The HDF5 library requires that all metadata updates be performed collectively, so that every process sees the same stream of metadata updates; this is how HDF5 was designed. Relaxing the collective requirement for metadata updates has been discussed before, and we know it would be worthwhile in certain scenarios, but it is simply not possible at the moment without a great deal of research and funding.
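In practice this means that every rank must make the same attribute calls with the same values. The following is a minimal sketch (not a complete program), assuming a parallel (MPI) build of HDF5; the function name, file name, and the "timestep" attribute are only illustrative:

#include <hdf5.h>
#include <mpi.h>

/* Sketch: every rank opens the file with the MPI-IO driver and then
 * participates in the attribute creation and write with identical
 * arguments, because attribute data is metadata. */
void write_timestep_attr(MPI_Comm comm, const char *filename)
{
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, comm, MPI_INFO_NULL);
    hid_t file = H5Fopen(filename, H5F_ACC_RDWR, fapl);

    /* Collective: all ranks call H5Acreate2/H5Awrite with the same
     * buffer contents. */
    int   timestep = 1;                      /* same value on every rank */
    hid_t space    = H5Screate(H5S_SCALAR);
    hid_t attr     = H5Acreate2(file, "timestep", H5T_NATIVE_INT,
                                space, H5P_DEFAULT, H5P_DEFAULT);
    H5Awrite(attr, H5T_NATIVE_INT, &timestep);

    H5Aclose(attr);
    H5Sclose(space);
    H5Fclose(file);
    H5Pclose(fapl);
}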
Attribute data is treated as metadata because it is perceived as something that is present on all processes, rather than something generated by one process and sent to the others. An example would be a label indicating that a given dataset was stored at timestep 1 or at a given setting.
If you want to avoid sending the attribute's data to all processes, consider using a dataset instead. Like an attribute, a dataset can be created with any dimensions, and it can even be created with a scalar dataspace holding a single element (see H5Screate with the H5S_SCALAR class), as in the sketch below.
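Here is a minimal sketch of that alternative. Creating the dataset is still a collective metadata operation, but the raw data can be written with an independent transfer, so only one rank needs to supply the value. The dataset name "timestep" and the function name are only illustrative:

#include <hdf5.h>
#include <mpi.h>

/* Sketch: all ranks create a scalar dataset collectively, then only
 * rank 0 writes the raw data using an independent transfer. */
void write_scalar_dataset(hid_t file, MPI_Comm comm)
{
    int rank;
    MPI_Comm_rank(comm, &rank);

    /* Collective: every rank participates in dataset creation. */
    hid_t space = H5Screate(H5S_SCALAR);
    hid_t dset  = H5Dcreate2(file, "timestep", H5T_NATIVE_INT, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    /* Independent transfer (the default mode): only rank 0 writes. */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_INDEPENDENT);
    if (rank == 0) {
        int timestep = 1;
        H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, &timestep);
    }

    H5Pclose(dxpl);
    H5Dclose(dset);
    H5Sclose(space);
}

The trade-off is that the value now lives in the file's raw data rather than in the object's metadata, so it is not visible through the attribute interface, but it avoids requiring every process to hold and write the same data.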