Add 0-sized arrays for particles that will be created
Implements #809 (closed).
Example: if we run with star formation (SF), we want 0-sized star arrays in the snapshots written before the first star-forming event. This helps analysis tools, as they no longer have to handle the special case of an HDF5 group not existing. The arrays have size (0, 1) for most properties and (0, 3) where relevant (positions, velocities). The units are correctly written out.
The cell meta-data also correctly handles the fact that there are genuinely no particles.
Note that the header is untouched and will keep saying there are 0 stars.
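To illustrate the analysis-side benefit described above, here is a toy sketch (not swiftsimio's actual API; a plain dict stands in for the HDF5 file and Python lists for datasets, with hypothetical group/dataset names):

```python
# Toy sketch of the analysis-side benefit of always-present, 0-sized arrays.
# A dict stands in for the HDF5 snapshot; names are illustrative only.

# Without this MR: a snapshot taken before the first star forms has no
# PartType4 group, so every tool needs a special case:
snapshot_old = {"PartType0": {"Masses": [1.0, 1.0]}}
if "PartType4" in snapshot_old:
    star_masses = snapshot_old["PartType4"]["Masses"]
else:
    star_masses = []  # special case for the missing group

# With this MR: the group is always present, holding 0-sized arrays,
# so the same code path works for every snapshot:
snapshot_new = {
    "PartType0": {"Masses": [1.0, 1.0]},
    "PartType4": {"Masses": []},  # 0-sized: no stars formed yet
}
star_masses = snapshot_new["PartType4"]["Masses"]
total_star_mass = sum(star_masses)  # loops/reductions work on empty arrays
```

The design choice being debated in the thread below is exactly this trade-off: always-present empty datasets simplify readers, at the cost of some clutter for particle types that can never exist in a given run.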
Activity
added enhancement feature request i/o labels
requested review from @jch
assigned to @bvandenbroucke
- Resolved by Matthieu Schaller
@jborrow @bvandenbroucke any thoughts on whether this could break swiftsimio?
- Resolved by Bert Vandenbroucke
I think it would be best to omit datasets for particle types which will never exist. It avoids clutter in the snapshot files and conveys some information about how the simulation was run.
- Resolved by Matthieu Schaller
Also, does it really write (0,1) datasets for empty 1D arrays? Shouldn't they just be size (0,)?
added 1 commit
- ed5e4671 - Pass the 'to_write' array to prepare_file() in the parallel-io code
- Resolved by Matthieu Schaller
Is there any significance to the values of CanHaveTypes, apart from if they're zero or non-zero? In a small EAGLE box with stars and black holes enabled I get CanHaveTypes=(128, 1, 0, 0, 32768, 524288, 0).
- Resolved by Matthieu Schaller
It only seems to need a couple of small changes to work in collective mode too: https://gitlab.cosma.dur.ac.uk/jch/swiftsim/-/commit/7aaedd5a61922065df220a71efa33937be26005a
Sorry, my swift branch was set to private for some reason. It should be ok now. In case it's not, here's the diff:
```diff
diff --git a/src/parallel_io.c b/src/parallel_io.c
index b1a2bbba3870d576a90df31c0d3e50d00611db17..df2c8863dc5182a794840b376b66c6f82a2a6e10 100644
--- a/src/parallel_io.c
+++ b/src/parallel_io.c
@@ -1294,9 +1294,9 @@ void prepare_file(struct engine* e, const char* fileName,
   /* Loop over all particle types */
   for (int ptype = 0; ptype < swift_type_count; ptype++) {

-    /* Don't do anything if there are (a) no particles of this kind, or (b)
-     * if we have disabled every field of this particle type. */
-    if (N_total[ptype] == 0 || numFields[ptype] == 0) continue;
+    /* Don't do anything if there are (a) no particles of this kind in this
+     * run, or (b) if we have disabled every field of this particle type. */
+    if (to_write[ptype] == 0 || numFields[ptype] == 0) continue;

     /* Add the global information for that particle type to
      * the XMF meta-file */
@@ -2020,7 +2020,7 @@ void write_output_parallel(struct engine* e,
             (enum part_type)ptype, compression_level_current_default,
             e->verbose);

-        if (compression_level != compression_do_not_write) {
+        if (compression_level != compression_do_not_write && N_total[ptype] > 0) {
           write_array_parallel(e, h_grp, fileName, partTypeGroupName, list[i],
                                Nparticles, N_total[ptype], mpi_rank,
                                offset[ptype], internal_units, snapshot_units);
```
Edited by John Helly
- Resolved by Matthieu Schaller
It seems to be necessary to skip the write if no rank has any data because the collective H5Dwrite fails in that case.
added 1 commit
- 06a334b1 - Make the 'to_write' array an array of 1s and 0s rather than a mixture of policy values and zeroes
added 1 commit
- 1db49f2a - In the parallel writes, create the 0-sized arrays but do not attempt to fill them
- Resolved by Bert Vandenbroucke
Which analysis tools fail if some groups don't exist? This seems very weird; checking whether those groups exist is not hard (or at least should not be hard).