diff --git a/doc/RTD/source/ParameterFiles/output_selection.rst b/doc/RTD/source/ParameterFiles/output_selection.rst index e61e0ab9544bff60a0e683a6c1838b84bff9132c..78fa55de86c1e2a34ffc732d09f38ade0289fd73 100644 --- a/doc/RTD/source/ParameterFiles/output_selection.rst +++ b/doc/RTD/source/ParameterFiles/output_selection.rst @@ -43,6 +43,12 @@ straight after having read the ICs. Similarly, SWIFT will also *not* write a snapshot at the end of a simulation unless a snapshot at the final time is specified in the list. +Note that if a simulation is restarted using check-point files, the +list of outputs will be re-read. This means that it must be found on +the disk at the same place as it was when the simulation was first +started. It also implies that the content of the file can be altered +if the need for additional snapshots suddenly arises. + .. _Output_selection_label: Output Selection @@ -54,7 +60,7 @@ available for a given configuration of SWIFT by running output.yml``. The file generated contains the list of fields that a simulation running with this config would output in each snapshot. It also lists the description string of each field and the unit -conversion string to go from internal comoving units to physical +conversion string to go from internal co-moving units to physical CGS. Entries in the file look like: .. code:: YAML @@ -130,6 +136,26 @@ field for each particle type: Standard_BH: on # Not strictly necessary, on is already the default +Additionally, users can use the different sections to specify an alternative +base name and sub-directory for the snapshots corresponding to a given +selection: + +.. code:: YAML + + BlackHolesOnly: + basename: bh + subdir: snip + +This will put the outputs corresponding to the ``BlackHolesOnly`` selection into +a sub-directory called ``snip`` and have the files themselves called +``bh_0000.hdf5`` where the number corresponds to the global number of +snapshots. 
The counter is global and not reset for each type of selection. +If the basename or sub-directory keywords are omitted then the code will use the +default values specified in the ``Snapshots`` section of the main parameter file. +The sub-directories are created when writing the first snapshot of a given +category; the onus is hence on the user to ensure correct writing permissions +ahead of that time. + Combining Output Lists and Output Selection ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -179,3 +205,38 @@ This will enable your simulation to perform partial dumps only at the outputs labelled as ``Snipshot``. The name of the output selection that corresponds to your choice in the output list will be written to the snapshot header as ``Header/SelectOutput``. + +Note that if the name used in the ``Select Output`` column does not +exist as a section in the output selection YAML file, SWIFT will write +all the available fields. + +Using non-regular snapshot numbers +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In some cases it may be helpful to have snapshot numbers that do not +simply increase by one each time. This could be used to encode the +simulation time in the filename for instance. To achieve this, a third +column can be added to the output list giving the snapshot labels to +use for each output:: + + # Redshift, Select Output, Label + 100.0, Snapshot, 100 + 90.0, Snapshot, 90 + 1.0, Snapshot, 1 + ... + +The label has to be an integer. This will lead to the following +snapshots being produced: + +.. code:: bash + + snap_100.hdf5 + snap_90.hdf5 + snap_1.hdf5 + +This assumes the snapshot basename (either global or set for the +``Snapshot`` output selection) was set to ``snap``. + +Note that to specify labels, the ``Select Output`` column needs to be +specified (but can simply default to dumping everything). 
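The labelling rule documented above can be sketched in a few lines. This is a purely illustrative snippet; the helper ``snapshot_filename`` is invented here for clarity and is not part of SWIFT. It mimics the described behaviour: an explicit integer label from the output list's third column is used verbatim, while the default scheme zero-pads the global snapshot counter to 4 digits.

```python
# Illustrative sketch (not SWIFT source): mimic the snapshot naming rules
# described above. An explicit integer label is used verbatim; otherwise
# the global snapshot counter is zero-padded to 4 digits.
def snapshot_filename(basename, snap_count, label=None):
    if label is not None:
        return f"{basename}_{label}.hdf5"  # labels: no zero-padding
    return f"{basename}_{snap_count:04d}.hdf5"  # default: e.g. snap_0007.hdf5

# The labelled output list from the example above:
labelled = [snapshot_filename("snap", i, label=lab)
            for i, lab in enumerate([100, 90, 1])]
print(labelled)                       # ['snap_100.hdf5', 'snap_90.hdf5', 'snap_1.hdf5']
print(snapshot_filename("snap", 7))   # snap_0007.hdf5
```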
+ diff --git a/doc/RTD/source/ParameterFiles/parameter_description.rst b/doc/RTD/source/ParameterFiles/parameter_description.rst index b41872aabfef702beaf18308cb4be917ab6553b6..91a365717bdcae1185f0c0c6471121c8b7790ccb 100644 --- a/doc/RTD/source/ParameterFiles/parameter_description.rst +++ b/doc/RTD/source/ParameterFiles/parameter_description.rst @@ -719,10 +719,8 @@ parameter is the base name that will be used for all the outputs in the run: This name will then be appended by an under-score and 4 digits followed by ``.hdf5`` (e.g. ``base_name_1234.hdf5``). The 4 digits are used to label the different outputs, starting at ``0000``. In the default setup the digits simply -increase by one for each snapshot. However, if the optional parameter -``int_time_label_on`` is switched on, then we use 6 digits and these will the -physical time of the simulation rounded to the nearest integer -(e.g. ``base_name_001234.hdf5``) [#f3]_. +increase by one for each snapshot. (See :ref:`Output_list_label` to change that +behaviour.) The time of the first snapshot is controlled by the two following options: @@ -747,11 +745,13 @@ The location and naming of the snapshots is altered by the following options: * Directory in which to write snapshots: ``subdir``. (default: empty string). -If this is set then the full path to the snapshot files will be generated by -taking this value and appending a slash and then the snapshot file name +If this is set then the full path to the snapshot files will be generated +by taking this value and appending a slash and then the snapshot file name described above - e.g. ``subdir/base_name_1234.hdf5``. The directory is -created if necessary. Any VELOCIraptor output produced by the run is also written -to this directory. +created if necessary. Note however, that the sub-directories are created +when writing the first snapshot of a given category; the onus is hence on +the user to ensure correct writing permissions ahead of that time. 
Any +VELOCIraptor output produced by the run is also written to this directory. When running the code with structure finding activated, it is often useful to have a structure catalog written at the same simulation time @@ -855,7 +855,6 @@ would have: time_first: 0.01 delta_time: 0.005 invoke_stf: 0 - int_time_label_on: 0 compression: 3 distributed: 1 UnitLength_in_cgs: 1. # Use cm in outputs @@ -867,7 +866,8 @@ would have: Some additional specific options for the snapshot outputs are described in the following pages: -* :ref:`Output_list_label` (to have snapshots not evenly spaced in time), +* :ref:`Output_list_label` (to have snapshots not evenly spaced in time or with + non-regular labels), * :ref:`Output_selection_label` (to select what particle fields to write). .. _Parameters_line_of_sight: @@ -1486,10 +1486,3 @@ gparts are active after that snapshot output timestep. .. [#f2] which would translate into a constant :math:`G_N=1.5517771\times10^{-9}~cm^{3}\,g^{-1}\,s^{-2}` if expressed in the CGS system. - -.. [#f3] This feature only makes sense for non-cosmological runs for which the - internal time unit is such that when rounded to the nearest integer a - sensible number is obtained. A use-case for this feature would be to - compare runs over the same physical time but with different numbers of - snapshots. Snapshots at a given time would always have the same set of - digits irrespective of the number of snapshots produced before. diff --git a/doc/RTD/source/Snapshots/index.rst b/doc/RTD/source/Snapshots/index.rst index abd8144a749e002ed0b4eb8b3e53d017b30b7f7a..b48d587f11bd5aa3487ffb8d6aea3a3a89b49f97 100644 --- a/doc/RTD/source/Snapshots/index.rst +++ b/doc/RTD/source/Snapshots/index.rst @@ -26,12 +26,16 @@ format described below. The most important quantity of the header is the array ``NumPart_Total`` which contains the number of particles of each type in this snapshot. This is an array -of 6 numbers; one for each of the supported types. 
The field -``NumPart_ThisFile`` contains the number of particles in this sub-snapshot file -when the user asked for distributed snapshots (see :ref:`Parameters_snapshots`); -otherwise it contains the same information as ``NumPart_Total``. The field -``NumFilesPerSnapshot`` specifies the number of sub-snapshot files (always 1 -unless a distributed snapshot was asked). +of 6 numbers; one for each of the supported types. Following the Gadget-2 +convention, if that number is larger than 2^31, SWIFT will use the +``NumPart_HighWord`` field to store the high-word bits of the total number of +particles. The field ``NumPart_ThisFile`` contains the number of particles in +this sub-snapshot file when the user asked for distributed snapshots (see +:ref:`Parameters_snapshots`); otherwise it contains the same information as +``NumPart_Total``. Note, however, that there is no high word for this field. We +store it as a 64-bit integer [#f1]_. The field ``NumFilesPerSnapshot`` specifies the +number of sub-snapshot files (always 1 unless a distributed snapshot was asked +for). The field ``InitialMassTable`` contains the *mean* initial mass of each of the particle types present in the initial conditions. This can be used as estimator @@ -159,10 +163,10 @@ Structure of the particle arrays There are several groups that contain 'auxiliary' information, such as ``Header``. Particle data is placed in separate groups depending of the type of -the particles. The type use the naming convention of Gadget-2 (with -the OWLS and EAGLE extensions). A more intuitive naming convention is -given in the form of aliases within the file. The aliases are shown in -the third column of the table. +the particles. There are currently 6 particle types available. The types use the +naming convention of Gadget-2 (with the OWLS and EAGLE extensions). A more +intuitive naming convention is given in the form of aliases within the file. The +aliases are shown in the third column of the table. 
+---------------------+------------------------+-----------------------------+----------------------------------------+ | HDF5 Group Name | Physical Particle Type | HDF5 alias | In code ``enum part_type`` | @@ -173,6 +177,8 @@ the third column of the table. +---------------------+------------------------+-----------------------------+----------------------------------------+ | ``/PartType2/`` | Background Dark Matter | ``/DMBackgroundParticles/`` | ``swift_type_dark_matter_background`` | +---------------------+------------------------+-----------------------------+----------------------------------------+ +| ``/PartType3/`` | Sinks | ``/SinkParticles/`` | ``swift_type_sink`` | ++---------------------+------------------------+-----------------------------+----------------------------------------+ | ``/PartType4/`` | Stars | ``/StarsParticles/`` | ``swift_type_star`` | +---------------------+------------------------+-----------------------------+----------------------------------------+ | ``/PartType5/`` | Black Holes | ``/BHParticles/`` | ``swift_type_black_hole`` | @@ -181,6 +187,10 @@ the third column of the table. The last column in the table gives the ``enum`` value from ``part_type.h`` corresponding to a given entry in the files. +For completeness, the list of particle type names is stored in the snapshot +header in the array ``/Header/PartTypeNames``. The number of types (aka. the +length of this array) is stored as the attribute ``/Header/NumPartTypes``. + Each group contains a series of arrays corresponding to each field of the particles stored in the snapshots. The exact list of fields depends on what compile time options were used and what module was activated. A full list can be @@ -204,7 +214,7 @@ Each particle field contains meta-data about the units and how to convert it to CGS in physical or co-moving frames. The meta-data is in part designed for users to directly read and in part for machine reading of the information. 
Each field contains the exponent of the -scale-factor, reduced Hubble constant [#f1]_ and each of the 5 base units +scale-factor, reduced Hubble constant [#f2]_ and each of the 5 base units that is required to convert the field values to physical CGS units. These fields are: @@ -402,9 +412,16 @@ from the disk. Note that this is all automated in the ``swiftsimio`` python library and we highly encourage its use. -.. [#f1] Note that all quantities in SWIFT are always "h-free" in the - sense that they are expressed in units withouy any h - terms. This implies that the ``h-scale exponent`` field value - is always 0. SWIFT nevertheless includes this field to be - comprehensive and to prevent confusion with other software - packages that express their quantities with h-full - units. +.. [#f1] In the rare case where an output + selection (see :ref:`Output_selection_label`) disabling a given particle type in + its entirety was used, the corresponding entry in ``NumPart_ThisFile`` will be 0 + whilst the ``NumPart_Total`` field will still contain the number of + particles present in the run. + + +.. [#f2] Note that all quantities in SWIFT are always "h-free" in the sense that + they are expressed in units without any h terms. This implies that the + ``h-scale exponent`` field value is always 0. SWIFT nevertheless + includes this field to be comprehensive and to prevent confusion with + other software packages that express their quantities with h-full + units. 
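The high-word convention documented in the header section above amounts to splitting a 64-bit particle count across two 32-bit header fields. A minimal sketch of the reconstruction (plain Python, no HDF5 reading; the helper name is invented, only the field semantics come from the text):

```python
# Sketch of the Gadget-2 high-word convention described above: each 64-bit
# particle count is recovered as total = NumPart_Total + NumPart_HighWord * 2**32.
def combine_counts(numpart_total, numpart_highword):
    return [lo + (hi << 32) for lo, hi in zip(numpart_total, numpart_highword)]

# Example: 5e9 particles of type 0 (does not fit in 32 bits).
n = 5_000_000_000
low, high = n & 0xFFFFFFFF, n >> 32   # split into low and high words
counts = combine_counts([low, 0, 0, 0, 0, 0], [high, 0, 0, 0, 0, 0])
print(counts[0])  # 5000000000
```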
diff --git a/examples/Planetary/EarthImpact/earth_impact.yml b/examples/Planetary/EarthImpact/earth_impact.yml index 9f147b0135ffab70da2c3f17f1bda1e111a803aa..60f8fbcafc7c018ddd969db243f07682f6504e99 100644 --- a/examples/Planetary/EarthImpact/earth_impact.yml +++ b/examples/Planetary/EarthImpact/earth_impact.yml @@ -23,7 +23,6 @@ Snapshots: basename: earth_impact # Common part of the name of output files time_first: 0 # Time of the first output (in internal units) delta_time: 1000 # Time difference between consecutive outputs (in internal units) - int_time_label_on: 1 # Enable to label the snapshots using the time rounded to an integer (in internal units) # Parameters governing the conserved quantities statistics Statistics: diff --git a/examples/main.c b/examples/main.c index c1f8b99c857b7344e4c290c2e21e4f32d099dd2d..805f66df28b4cc948bef02aa36f1191629b9377d 100644 --- a/examples/main.c +++ b/examples/main.c @@ -886,6 +886,31 @@ int main(int argc, char *argv[]) { /* Now read it. */ restart_read(&e, restart_file); +#ifdef WITH_MPI + integertime_t min_ti_current = e.ti_current; + integertime_t max_ti_current = e.ti_current; + + /* Verify that everyone agrees on the current time */ + MPI_Allreduce(&e.ti_current, &min_ti_current, 1, MPI_LONG_LONG_INT, MPI_MIN, + MPI_COMM_WORLD); + MPI_Allreduce(&e.ti_current, &max_ti_current, 1, MPI_LONG_LONG_INT, MPI_MAX, + MPI_COMM_WORLD); + + if (min_ti_current != max_ti_current) { + if (myrank == 0) + message("The restart files don't all contain the same ti_current!"); + + /* Print the mismatching values one rank at a time */ + for (int i = 0; i < nr_nodes; ++i) { + if (myrank == i) + message("MPI rank %d reading file '%s' found an integer time= %lld", + myrank, restart_file, e.ti_current); + MPI_Barrier(MPI_COMM_WORLD); + } + + if (myrank == 0) error("Aborting"); + } +#endif + /* And initialize the engine with the space and policies. 
*/ if (myrank == 0) clocks_gettime(&tic); engine_config(/*restart=*/1, /*fof=*/0, &e, params, nr_nodes, myrank, diff --git a/examples/parameter_example.yml b/examples/parameter_example.yml index 4d19572c088716fe4f2a7a5bdba728a078c2c26b..67b72dce9454832fcc56f4f6a4d4bd3f1aac7495 100644 --- a/examples/parameter_example.yml +++ b/examples/parameter_example.yml @@ -155,7 +155,6 @@ Snapshots: invoke_fof: 0 # (Optional) Call FOF every time a snapshot is written compression: 0 # (Optional) Set the level of GZIP compression of the HDF5 datasets [0-9]. 0 does no compression. The lossless compression is applied to *all* the fields. distributed: 0 # (Optional) When running over MPI, should each rank write a partial snapshot or do we want a single file? 1 implies one file per MPI rank. - int_time_label_on: 0 # (Optional) Enable to label the snapshots using the time rounded to an integer (in internal units) UnitMass_in_cgs: 1 # (Optional) Unit system for the outputs (Grams) UnitLength_in_cgs: 1 # (Optional) Unit system for the outputs (Centimeters) UnitVelocity_in_cgs: 1 # (Optional) Unit system for the outputs (Centimeters per second) diff --git a/src/common_io.c b/src/common_io.c index dc6c238e7a1cea5a62991021d34c425420813f30..e928a234a065e6e2263437419b7500da6a19ee41 100644 --- a/src/common_io.c +++ b/src/common_io.c @@ -35,12 +35,20 @@ #include "version.h" /* I/O functions of each sub-module */ +#include "black_holes_io.h" #include "chemistry_io.h" #include "cooling_io.h" #include "feedback.h" +#include "fof_io.h" +#include "gravity_io.h" #include "hydro_io.h" +#include "particle_splitting.h" +#include "rt_io.h" +#include "sink_io.h" +#include "star_formation_io.h" #include "stars_io.h" #include "tracers_io.h" +#include "velociraptor_io.h" /* Some standard headers. 
*/ #include <math.h> @@ -754,6 +762,28 @@ void io_write_engine_policy(hid_t h_file, const struct engine* e) { H5Gclose(h_grp); } +void io_write_part_type_names(hid_t h_grp) { + + io_write_attribute_i(h_grp, "NumPartTypes", swift_type_count); + + /* Create an array of particle type names */ + const int name_length = 128; + char names[swift_type_count][name_length]; + for (int i = 0; i < swift_type_count; ++i) + strcpy(names[i], part_type_names[i]); + + hsize_t dims[1] = {swift_type_count}; + hid_t type = H5Tcopy(H5T_C_S1); + H5Tset_size(type, name_length); + hid_t space = H5Screate_simple(1, dims, NULL); + hid_t dset = H5Dcreate(h_grp, "PartTypeNames", type, space, H5P_DEFAULT, + H5P_DEFAULT, H5P_DEFAULT); + H5Dwrite(dset, type, H5S_ALL, H5S_ALL, H5P_DEFAULT, names[0]); + H5Dclose(dset); + H5Tclose(type); + H5Sclose(space); +} + #endif /* HAVE_HDF5 */ /** @@ -1372,6 +1402,8 @@ void io_make_snapshot_subdir(const char* dirname) { * @brief Construct the file names for a single-file hdf5 snapshots and * corresponding XMF descriptor file. * + * The XMF file always uses the default basename. + * * @param filename (return) The file name of the hdf5 snapshot. * @param xmf_filename (return) The file name of the associated XMF file. - * @param use_time_label Are we using time labels for the snapshot indices? + * @param output_list The output list (to access the alternative snapshot labels). * @@ -1379,37 +1411,39 @@ void io_make_snapshot_subdir(const char* dirname) { - * @param time The current simulation time. * @param stf_count The counter of STF outputs. * @param snap_count The counter of snapshot outputs. + * @param default_subdir The common part of the default sub-directory names. * @param subdir The sub-directory in which the snapshots are written. + * @param default_basename The common part of the default snapshot names. * @param basename The common part of the snapshot names. 
*/ void io_get_snapshot_filename(char filename[1024], char xmf_filename[1024], - const int use_time_label, - const int snapshots_invoke_stf, const double time, + const struct output_list* output_list, + const int snapshots_invoke_stf, const int stf_count, const int snap_count, - const char* subdir, const char* basename) { + const char* default_subdir, const char* subdir, + const char* default_basename, + const char* basename) { int snap_number = -1; - if (use_time_label) - snap_number = (int)round(time); - else if (snapshots_invoke_stf) + int number_digits = -1; + if (output_list && output_list->alternative_labels_on) { + snap_number = output_list->snapshot_labels[snap_count]; + number_digits = 0; + } else if (snapshots_invoke_stf) { snap_number = stf_count; - else + number_digits = 4; + } else { snap_number = snap_count; - - int number_digits = -1; - if (use_time_label) - number_digits = 6; - else number_digits = 4; + } /* Are we using a sub-dir? */ if (strlen(subdir) > 0) { sprintf(filename, "%s/%s_%0*d.hdf5", subdir, basename, number_digits, snap_number); - sprintf(xmf_filename, "%s/%s.xmf", subdir, basename); + sprintf(xmf_filename, "%s/%s.xmf", default_subdir, default_basename); } else { sprintf(filename, "%s_%0*d.hdf5", basename, number_digits, snap_number); - sprintf(xmf_filename, "%s.xmf", basename); + sprintf(xmf_filename, "%s.xmf", default_basename); } } /** @@ -1426,3 +1461,158 @@ void io_get_snapshot_filename(char filename[1024], char xmf_filename[1024], void io_set_ids_to_one(struct gpart* gparts, const size_t Ngparts) { for (size_t i = 0; i < Ngparts; i++) gparts[i].id_or_neg_offset = 1; } + +/** + * @brief Select the fields to write to snapshots for the gas particles. + * + * @param parts The #part's + * @param xparts The #xpart's + * @param with_cosmology Are we running with cosmology switched on? + * @param with_cooling Are we running with cooling switched on? + * @param with_temperature Are we running with temperature switched on? 
+ * @param with_fof Are we running FoF? + * @param with_stf Are we running with structure finding? + * @param with_rt Are we running with radiative transfer? + * @param e The #engine (to access scheme properties). + * @param num_fields (return) The number of fields to write. + * @param list (return) The list of fields to write. + */ +void io_select_hydro_fields(const struct part* const parts, + const struct xpart* const xparts, + const int with_cosmology, const int with_cooling, + const int with_temperature, const int with_fof, + const int with_stf, const int with_rt, + const struct engine* const e, int* const num_fields, + struct io_props* const list) { + + hydro_write_particles(parts, xparts, list, num_fields); + + *num_fields += particle_splitting_write_particles( + parts, xparts, list + *num_fields, with_cosmology); + *num_fields += chemistry_write_particles(parts, xparts, list + *num_fields, + with_cosmology); + if (with_cooling || with_temperature) { + *num_fields += cooling_write_particles(parts, xparts, list + *num_fields, + e->cooling_func); + } + if (with_fof) { + *num_fields += fof_write_parts(parts, xparts, list + *num_fields); + } + if (with_stf) { + *num_fields += velociraptor_write_parts(parts, xparts, list + *num_fields); + } + *num_fields += tracers_write_particles(parts, xparts, list + *num_fields, + with_cosmology); + *num_fields += + star_formation_write_particles(parts, xparts, list + *num_fields); + if (with_rt) { + *num_fields += rt_write_particles(parts, list + *num_fields); + } +} + +/** + * @brief Select the fields to write to snapshots for the DM particles. + * + * @param gparts The #gpart's + * @param with_fof Are we running FoF? + * @param with_stf Are we running with structure finding? + * @param e The #engine (to access scheme properties). + * @param num_fields (return) The number of fields to write. + * @param list (return) The list of fields to write. 
+ */ +void io_select_dm_fields(const struct gpart* const gparts, const int with_fof, + const int with_stf, const struct engine* const e, + int* const num_fields, struct io_props* const list) { + + darkmatter_write_particles(gparts, list, num_fields); + if (with_fof) { + *num_fields += fof_write_gparts(gparts, list + *num_fields); + } + if (with_stf) { + *num_fields += + velociraptor_write_gparts(e->s->gpart_group_data, list + *num_fields); + } +} + +/** + * @brief Select the fields to write to snapshots for the sink particles. + * + * @param sinks The #sink's + * @param with_cosmology Are we running with cosmology switched on? + * @param with_fof Are we running FoF? + * @param with_stf Are we running with structure finding? + * @param e The #engine (to access scheme properties). + * @param num_fields (return) The number of fields to write. + * @param list (return) The list of fields to write. + */ +void io_select_sink_fields(const struct sink* const sinks, + const int with_cosmology, const int with_fof, + const int with_stf, const struct engine* const e, + int* const num_fields, struct io_props* const list) { + + sink_write_particles(sinks, list, num_fields, with_cosmology); +} + +/** + * @brief Select the fields to write to snapshots for the star particles. + * + * @param sparts The #spart's + * @param with_cosmology Are we running with cosmology switched on? + * @param with_fof Are we running FoF? + * @param with_stf Are we running with structure finding? + * @param with_rt Are we running with radiative transfer? + * @param e The #engine (to access scheme properties). + * @param num_fields (return) The number of fields to write. + * @param list (return) The list of fields to write. 
+ */ +void io_select_star_fields(const struct spart* const sparts, + const int with_cosmology, const int with_fof, + const int with_stf, const int with_rt, + const struct engine* const e, int* const num_fields, + struct io_props* const list) { + + stars_write_particles(sparts, list, num_fields, with_cosmology); + *num_fields += + particle_splitting_write_sparticles(sparts, list + *num_fields); + *num_fields += chemistry_write_sparticles(sparts, list + *num_fields); + *num_fields += + tracers_write_sparticles(sparts, list + *num_fields, with_cosmology); + *num_fields += star_formation_write_sparticles(sparts, list + *num_fields); + if (with_fof) { + *num_fields += fof_write_sparts(sparts, list + *num_fields); + } + if (with_stf) { + *num_fields += velociraptor_write_sparts(sparts, list + *num_fields); + } + if (with_rt) { + *num_fields += rt_write_stars(sparts, list + *num_fields); + } +} + +/** + * @brief Select the fields to write to snapshots for the BH particles. + * + * @param bparts The #bpart's + * @param with_cosmology Are we running with cosmology switched on? + * @param with_fof Are we running FoF? + * @param with_stf Are we running with structure finding? + * @param e The #engine (to access scheme properties). + * @param num_fields (return) The number of fields to write. + * @param list (return) The list of fields to write. 
+ */ +void io_select_bh_fields(const struct bpart* const bparts, + const int with_cosmology, const int with_fof, + const int with_stf, const struct engine* const e, + int* const num_fields, struct io_props* const list) { + + black_holes_write_particles(bparts, list, num_fields, with_cosmology); + *num_fields += + particle_splitting_write_bparticles(bparts, list + *num_fields); + *num_fields += chemistry_write_bparticles(bparts, list + *num_fields); + if (with_fof) { + *num_fields += fof_write_bparts(bparts, list + *num_fields); + } + if (with_stf) { + *num_fields += velociraptor_write_bparts(bparts, list + *num_fields); + } +} diff --git a/src/common_io.h b/src/common_io.h index 481a6c4ad73c4b6ae626f7076b05d7babe719fdf..1d639d96ea7a299517a8fed84ae48fdda7c8e365 100644 --- a/src/common_io.h +++ b/src/common_io.h @@ -44,6 +44,7 @@ struct sink; struct io_props; struct engine; struct threadpool; +struct output_list; struct output_options; struct unit_system; @@ -102,6 +103,7 @@ void io_write_meta_data(hid_t h_file, const struct engine* e, void io_write_code_description(hid_t h_file); void io_write_engine_policy(hid_t h_file, const struct engine* e); +void io_write_part_type_names(hid_t h_grp); void io_write_cell_offsets(hid_t h_grp, const int cdim[3], const double dim[3], const struct cell* cells_top, const int nr_cells, @@ -189,11 +191,41 @@ void io_write_output_field_parameter(const char* filename, int with_cosmology); void io_make_snapshot_subdir(const char* dirname); void io_get_snapshot_filename(char filename[1024], char xmf_filename[1024], - const int use_time_label, - const int snapshots_invoke_stf, const double time, + const struct output_list* output_list, + const int snapshots_invoke_stf, const int stf_count, const int snap_count, - const char* subdir, const char* basename); + const char* default_subdir, const char* subdir, + const char* default_basename, + const char* basename); void io_set_ids_to_one(struct gpart* gparts, const size_t Ngparts); +void 
io_select_hydro_fields(const struct part* const parts, + const struct xpart* const xparts, + const int with_cosmology, const int with_cooling, + const int with_temperature, const int with_fof, + const int with_stf, const int with_rt, + const struct engine* const e, int* const num_fields, + struct io_props* const list); + +void io_select_dm_fields(const struct gpart* const gparts, const int with_fof, + const int with_stf, const struct engine* const e, + int* const num_fields, struct io_props* const list); + +void io_select_sink_fields(const struct sink* const sinks, + const int with_cosmology, const int with_fof, + const int with_stf, const struct engine* const e, + int* const num_fields, struct io_props* const list); + +void io_select_star_fields(const struct spart* const sparts, + const int with_cosmology, const int with_fof, + const int with_stf, const int with_rt, + const struct engine* const e, int* const num_fields, + struct io_props* const list); + +void io_select_bh_fields(const struct bpart* const bparts, + const int with_cosmology, const int with_fof, + const int with_stf, const struct engine* const e, + int* const num_fields, struct io_props* const list); + #endif /* SWIFT_COMMON_IO_H */ diff --git a/src/common_io_fields.c b/src/common_io_fields.c index 804dbb39557b5f3fd70de2a58f1adc712e2f4c3a..dceae510da4ffb334ee8c2b7dc3abc6af3355bf6 100644 --- a/src/common_io_fields.c +++ b/src/common_io_fields.c @@ -25,23 +25,10 @@ /* Local includes. */ #include "error.h" +#include "io_properties.h" +#include "output_options.h" #include "units.h" -/* I/O functions of each sub-module */ -#include "black_holes_io.h" -#include "chemistry_io.h" -#include "cooling_io.h" -#include "fof_io.h" -#include "gravity_io.h" -#include "hydro_io.h" -#include "particle_splitting.h" -#include "rt_io.h" -#include "sink_io.h" -#include "star_formation_io.h" -#include "stars_io.h" -#include "tracers_io.h" -#include "velociraptor_io.h" - /* Some standard headers. 
*/ #include <string.h> @@ -91,64 +78,35 @@ int io_get_ptype_fields(const int ptype, struct io_props* list, switch (ptype) { case swift_type_gas: - hydro_write_particles(NULL, NULL, list, &num_fields); - num_fields += particle_splitting_write_particles( - NULL, NULL, list + num_fields, with_cosmology); - num_fields += chemistry_write_particles(NULL, NULL, list + num_fields, - with_cosmology); - num_fields += - cooling_write_particles(NULL, NULL, list + num_fields, NULL); - num_fields += tracers_write_particles(NULL, NULL, list + num_fields, - with_cosmology); - num_fields += - star_formation_write_particles(NULL, NULL, list + num_fields); - if (with_fof) - num_fields += fof_write_parts(NULL, NULL, list + num_fields); - if (with_stf) - num_fields += velociraptor_write_parts(NULL, NULL, list + num_fields); - num_fields += rt_write_particles(NULL, list + num_fields); + io_select_hydro_fields(NULL, NULL, with_cosmology, /*with_cooling=*/1, + /*with_temperature=*/1, with_fof, with_stf, + /*with_rt=*/1, /*e=*/NULL, &num_fields, list); break; case swift_type_dark_matter: - darkmatter_write_particles(NULL, list, &num_fields); - if (with_fof) num_fields += fof_write_gparts(NULL, list + num_fields); - if (with_stf) - num_fields += velociraptor_write_gparts(NULL, list + num_fields); + io_select_dm_fields(NULL, with_fof, with_stf, /*e=*/NULL, &num_fields, + list); break; case swift_type_dark_matter_background: - darkmatter_write_particles(NULL, list, &num_fields); - if (with_fof) num_fields += fof_write_gparts(NULL, list + num_fields); - if (with_stf) - num_fields += velociraptor_write_gparts(NULL, list + num_fields); + io_select_dm_fields(NULL, with_fof, with_stf, /*e=*/NULL, &num_fields, + list); break; case swift_type_stars: - stars_write_particles(NULL, list, &num_fields, with_cosmology); - num_fields += - particle_splitting_write_sparticles(NULL, list + num_fields); - num_fields += chemistry_write_sparticles(NULL, list + num_fields); - num_fields += - 
tracers_write_sparticles(NULL, list + num_fields, with_cosmology); - num_fields += star_formation_write_sparticles(NULL, list + num_fields); - if (with_fof) num_fields += fof_write_sparts(NULL, list + num_fields); - if (with_stf) - num_fields += velociraptor_write_sparts(NULL, list + num_fields); - num_fields += rt_write_stars(NULL, list + num_fields); + io_select_star_fields(NULL, with_cosmology, with_fof, with_stf, + /*with_rt=*/1, + /*e=*/NULL, &num_fields, list); break; case swift_type_sink: - sink_write_particles(NULL, list, &num_fields, with_cosmology); + io_select_sink_fields(NULL, with_cosmology, with_fof, with_stf, + /*e=*/NULL, &num_fields, list); break; case swift_type_black_hole: - black_holes_write_particles(NULL, list, &num_fields, with_cosmology); - num_fields += - particle_splitting_write_bparticles(NULL, list + num_fields); - num_fields += chemistry_write_bparticles(NULL, list + num_fields); - if (with_fof) num_fields += fof_write_bparts(NULL, list + num_fields); - if (with_stf) - num_fields += velociraptor_write_bparts(NULL, list + num_fields); + io_select_bh_fields(NULL, with_cosmology, with_fof, with_stf, /*e=*/NULL, + &num_fields, list); break; default: @@ -245,6 +203,8 @@ void io_prepare_output_fields(struct output_options* output_options, * 'Standard' parameter */ if (strstr(param_name, section_name) == NULL) continue; if (strstr(param_name, ":Standard_") != NULL) continue; + if (strstr(param_name, ":basename") != NULL) continue; + if (strstr(param_name, ":subdir") != NULL) continue; /* Get the particle type for current parameter * (raises an error if it could not determine it) */ diff --git a/src/distributed_io.c b/src/distributed_io.c index 21417cc5c23b42f26562d0fd6b378838231a2d6e..49ce285caf030ac27a944da4714432b6256d43b5 100644 --- a/src/distributed_io.c +++ b/src/distributed_io.c @@ -39,32 +39,30 @@ #include "black_holes_io.h" #include "chemistry_io.h" #include "common_io.h" -#include "cooling_io.h" #include "dimension.h" #include 
"engine.h" #include "error.h" -#include "fof_io.h" #include "gravity_io.h" #include "gravity_properties.h" #include "hydro_io.h" #include "hydro_properties.h" +#include "io_compression.h" #include "io_properties.h" #include "memuse.h" #include "output_list.h" #include "output_options.h" #include "part.h" #include "part_type.h" -#include "particle_splitting.h" -#include "rt_io.h" #include "sink_io.h" #include "star_formation_io.h" #include "stars_io.h" #include "tools.h" -#include "tracers_io.h" #include "units.h" -#include "velociraptor_io.h" #include "xmf.h" +/* Are we timing the i/o? */ +//#define IO_SPEED_MEASUREMENT + /** * @brief Writes a data array in given HDF5 group. * @@ -89,6 +87,10 @@ void write_distributed_array( const struct unit_system* internal_units, const struct unit_system* snapshot_units) { +#ifdef IO_SPEED_MEASUREMENT + const ticks tic_total = getticks(); +#endif + const size_t typeSize = io_sizeof_type(props.type); const size_t num_elements = N * props.dimension; @@ -100,9 +102,19 @@ void write_distributed_array( num_elements * typeSize) != 0) error("Unable to allocate temporary i/o buffer"); +#ifdef IO_SPEED_MEASUREMENT + ticks tic = getticks(); +#endif + /* Copy the particle data to the temporary buffer */ io_copy_temp_buffer(temp, e, props, N, internal_units, snapshot_units); +#ifdef IO_SPEED_MEASUREMENT + if (engine_rank == IO_SPEED_MEASUREMENT || IO_SPEED_MEASUREMENT == -1) + message("Copying for '%s' took %.3f %s.", props.name, + clocks_from_ticks(getticks() - tic), clocks_getunit()); +#endif + /* Create data space */ hid_t h_space; if (N > 0) @@ -186,11 +198,26 @@ void write_distributed_array( h_prop, H5P_DEFAULT); if (h_data < 0) error("Error while creating dataspace '%s'.", props.name); +#ifdef IO_SPEED_MEASUREMENT + tic = getticks(); +#endif + /* Write temporary buffer to HDF5 dataspace */ h_err = H5Dwrite(h_data, io_hdf5_type(props.type), h_space, H5S_ALL, H5P_DEFAULT, temp); if (h_err < 0) error("Error while writing data array 
'%s'.", props.name); +#ifdef IO_SPEED_MEASUREMENT + ticks toc = getticks(); + float ms = clocks_from_ticks(toc - tic); + int megaBytes = N * props.dimension * typeSize / (1024 * 1024); + if (engine_rank == IO_SPEED_MEASUREMENT || IO_SPEED_MEASUREMENT == -1) + message( + "H5Dwrite for '%s' (%d MB) on rank %d took %.3f %s (speed = %f MB/s).", + props.name, megaBytes, engine_rank, ms, clocks_getunit(), + megaBytes / (ms / 1000.)); +#endif + /* Write unit conversion factors for this data set */ char buffer[FIELD_BUFFER_SIZE] = {0}; units_cgs_conversion_string(buffer, snapshot_units, props.units, @@ -232,6 +259,12 @@ void write_distributed_array( H5Pclose(h_prop); H5Dclose(h_data); H5Sclose(h_space); + +#ifdef IO_SPEED_MEASUREMENT + if (engine_rank == IO_SPEED_MEASUREMENT || IO_SPEED_MEASUREMENT == -1) + message("'%s' took %.3f %s.", props.name, + clocks_from_ticks(getticks() - tic), clocks_getunit()); +#endif } /** @@ -308,42 +341,58 @@ void write_output_distributed(struct engine* e, const size_t Ndm_written = Ntot_written > 0 ? Ntot_written - Nbaryons_written - Ndm_background : 0; + /* Determine if we are writing a reduced snapshot, and if so which + * output selection type to use */ + char current_selection_name[FIELD_BUFFER_SIZE] = + select_output_header_default_name; + if (output_list) { + /* Users could have specified a different Select Output scheme for each + * snapshot. 
*/
+    output_list_get_current_select_output(output_list, current_selection_name);
+  }
+
   int snap_count = -1;
-  if (e->snapshot_int_time_label_on)
-    snap_count = (int)round(e->time);
-  else if (e->snapshot_invoke_stf)
+  int number_digits = -1;
+  if (output_list && output_list->alternative_labels_on) {
+    snap_count = output_list->snapshot_labels[e->snapshot_output_count];
+    number_digits = 0;
+  } else if (e->snapshot_invoke_stf) {
     snap_count = e->stf_output_count;
-  else
+    number_digits = 4;
+  } else {
     snap_count = e->snapshot_output_count;
-
-  int number_digits = -1;
-  if (e->snapshot_int_time_label_on)
-    number_digits = 6;
-  else
     number_digits = 4;
+  }
 
   /* Directory and file name */
   char dirName[1024];
   char fileName[1024];
 
+  char snapshot_subdir_name[FILENAME_BUFFER_SIZE];
+  char snapshot_base_name[FILENAME_BUFFER_SIZE];
+
+  output_options_get_basename(output_options, current_selection_name,
+                              e->snapshot_subdir, e->snapshot_base_name,
+                              snapshot_subdir_name, snapshot_base_name);
+
   /* Are we using a sub-dir?
*/ if (strnlen(e->snapshot_subdir, PARSER_MAX_LINE_SIZE) > 0) { - sprintf(dirName, "%s/%s_%0*d", e->snapshot_subdir, e->snapshot_base_name, + sprintf(dirName, "%s/%s_%0*d", snapshot_subdir_name, snapshot_base_name, number_digits, snap_count); - sprintf(fileName, "%s/%s_%0*d/%s_%0*d.%d.hdf5", e->snapshot_subdir, - e->snapshot_base_name, number_digits, snap_count, - e->snapshot_base_name, number_digits, snap_count, mpi_rank); + sprintf(fileName, "%s/%s_%0*d/%s_%0*d.%d.hdf5", snapshot_subdir_name, + snapshot_base_name, number_digits, snap_count, snapshot_base_name, + number_digits, snap_count, mpi_rank); } else { - sprintf(dirName, "%s_%0*d", e->snapshot_base_name, number_digits, - snap_count); + sprintf(dirName, "%s_%0*d", snapshot_base_name, number_digits, snap_count); - sprintf(fileName, "%s_%0*d/%s_%0*d.%d.hdf5", e->snapshot_base_name, - number_digits, snap_count, e->snapshot_base_name, number_digits, + sprintf(fileName, "%s_%0*d/%s_%0*d.%d.hdf5", snapshot_base_name, + number_digits, snap_count, snapshot_base_name, number_digits, snap_count, mpi_rank); } /* Create the directory */ + if (mpi_rank == 0) safe_checkdir(snapshot_subdir_name, /*create=*/1); if (mpi_rank == 0) safe_checkdir(dirName, /*create=*/1); MPI_Barrier(comm); @@ -376,16 +425,6 @@ void write_output_distributed(struct engine* e, e->s->dim[1] * factor_length, e->s->dim[2] * factor_length}; - /* Determine if we are writing a reduced snapshot, and if so which - * output selection type to use */ - char current_selection_name[FIELD_BUFFER_SIZE] = - select_output_header_default_name; - if (output_list) { - /* Users could have specified a different Select Output scheme for each - * snapshot. 
*/ - output_list_get_current_select_output(output_list, current_selection_name); - } - /* Print the relevant information and print status */ io_write_attribute(h_grp, "BoxSize", DOUBLE, dim, 3); io_write_attribute(h_grp, "Time", DOUBLE, &dblTime, 1); @@ -396,6 +435,9 @@ void write_output_distributed(struct engine* e, io_write_attribute_s(h_grp, "Code", "SWIFT"); io_write_attribute_s(h_grp, "RunName", e->run_name); + /* Write out the particle types */ + io_write_part_type_names(h_grp); + /* Write out the time-base */ if (with_cosmology) { io_write_attribute_d(h_grp, "TimeBase_dloga", e->time_base); @@ -414,6 +456,7 @@ void write_output_distributed(struct engine* e, io_write_attribute_s(h_grp, "Snapshot date", snapshot_date); /* GADGET-2 legacy values: Number of particles of each type */ + long long numParticlesThisFile[swift_type_count] = {0}; unsigned int numParticles[swift_type_count] = {0}; unsigned int numParticlesHighWord[swift_type_count] = {0}; @@ -426,9 +469,16 @@ void write_output_distributed(struct engine* e, numFields[ptype] = output_options_get_num_fields_to_write( output_options, current_selection_name, ptype); + + if (numFields[ptype] == 0) { + numParticlesThisFile[ptype] = 0; + } else { + numParticlesThisFile[ptype] = N[ptype]; + } } - io_write_attribute(h_grp, "NumPart_ThisFile", LONGLONG, N, swift_type_count); + io_write_attribute(h_grp, "NumPart_ThisFile", LONGLONG, numParticlesThisFile, + swift_type_count); io_write_attribute(h_grp, "NumPart_Total", UINT, numParticles, swift_type_count); io_write_attribute(h_grp, "NumPart_Total_HighWord", UINT, @@ -513,29 +563,11 @@ void write_output_distributed(struct engine* e, /* No inhibted particles: easy case */ Nparticles = Ngas; - hydro_write_particles(parts, xparts, list, &num_fields); - num_fields += particle_splitting_write_particles( - parts, xparts, list + num_fields, with_cosmology); - num_fields += chemistry_write_particles( - parts, xparts, list + num_fields, with_cosmology); - if (with_cooling || 
with_temperature) { - num_fields += cooling_write_particles( - parts, xparts, list + num_fields, e->cooling_func); - } - if (with_fof) { - num_fields += fof_write_parts(parts, xparts, list + num_fields); - } - if (with_stf) { - num_fields += - velociraptor_write_parts(parts, xparts, list + num_fields); - } - num_fields += tracers_write_particles( - parts, xparts, list + num_fields, with_cosmology); - num_fields += - star_formation_write_particles(parts, xparts, list + num_fields); - if (with_rt) { - num_fields += rt_write_particles(parts, list + num_fields); - } + + /* Select the fields to write */ + io_select_hydro_fields(parts, xparts, with_cosmology, with_cooling, + with_temperature, with_fof, with_stf, with_rt, + e, &num_fields, list); } else { @@ -557,32 +589,9 @@ void write_output_distributed(struct engine* e, xparts_written, Ngas, Ngas_written); /* Select the fields to write */ - hydro_write_particles(parts_written, xparts_written, list, - &num_fields); - num_fields += particle_splitting_write_particles( - parts_written, xparts_written, list + num_fields, with_cosmology); - num_fields += chemistry_write_particles( - parts_written, xparts_written, list + num_fields, with_cosmology); - if (with_cooling || with_temperature) { - num_fields += - cooling_write_particles(parts_written, xparts_written, - list + num_fields, e->cooling_func); - } - if (with_fof) { - num_fields += fof_write_parts(parts_written, xparts_written, - list + num_fields); - } - if (with_stf) { - num_fields += velociraptor_write_parts( - parts_written, xparts_written, list + num_fields); - } - num_fields += tracers_write_particles( - parts_written, xparts_written, list + num_fields, with_cosmology); - num_fields += star_formation_write_particles( - parts_written, xparts_written, list + num_fields); - if (with_rt) { - num_fields += rt_write_particles(parts_written, list + num_fields); - } + io_select_hydro_fields(parts_written, xparts_written, with_cosmology, + with_cooling, with_temperature, 
with_fof, + with_stf, with_rt, e, &num_fields, list); } } break; @@ -591,14 +600,10 @@ void write_output_distributed(struct engine* e, /* This is a DM-only run without background or inhibited particles */ Nparticles = Ntot; - darkmatter_write_particles(gparts, list, &num_fields); - if (with_fof) { - num_fields += fof_write_gparts(gparts, list + num_fields); - } - if (with_stf) { - num_fields += velociraptor_write_gparts(e->s->gpart_group_data, - list + num_fields); - } + + /* Select the fields to write */ + io_select_dm_fields(gparts, with_fof, with_stf, e, &num_fields, list); + } else { /* Ok, we need to fish out the particles we want */ @@ -626,14 +631,8 @@ void write_output_distributed(struct engine* e, Ntot, Ndm_written, with_stf); /* Select the fields to write */ - darkmatter_write_particles(gparts_written, list, &num_fields); - if (with_fof) { - num_fields += fof_write_gparts(gparts_written, list + num_fields); - } - if (with_stf) { - num_fields += velociraptor_write_gparts(gpart_group_data_written, - list + num_fields); - } + io_select_dm_fields(gparts_written, with_fof, with_stf, e, + &num_fields, list); } } break; @@ -664,14 +663,8 @@ void write_output_distributed(struct engine* e, gpart_group_data_written, Ntot, Ndm_background, with_stf); /* Select the fields to write */ - darkmatter_write_particles(gparts_written, list, &num_fields); - if (with_fof) { - num_fields += fof_write_gparts(gparts_written, list + num_fields); - } - if (with_stf) { - num_fields += velociraptor_write_gparts(gpart_group_data_written, - list + num_fields); - } + io_select_dm_fields(gparts_written, with_fof, with_stf, e, &num_fields, + list); } break; case swift_type_sink: { @@ -679,7 +672,10 @@ void write_output_distributed(struct engine* e, /* No inhibted particles: easy case */ Nparticles = Nsinks; - sink_write_particles(sinks, list, &num_fields, with_cosmology); + + /* Select the fields to write */ + io_select_sink_fields(sinks, with_cosmology, with_fof, with_stf, e, + 
&num_fields, list); } else { @@ -697,8 +693,8 @@ void write_output_distributed(struct engine* e, Nsinks_written); /* Select the fields to write */ - sink_write_particles(sinks_written, list, &num_fields, - with_cosmology); + io_select_sink_fields(sinks_written, with_cosmology, with_fof, + with_stf, e, &num_fields, list); } } break; @@ -707,21 +703,11 @@ void write_output_distributed(struct engine* e, /* No inhibted particles: easy case */ Nparticles = Nstars; - stars_write_particles(sparts, list, &num_fields, with_cosmology); - num_fields += - particle_splitting_write_sparticles(sparts, list + num_fields); - num_fields += chemistry_write_sparticles(sparts, list + num_fields); - num_fields += tracers_write_sparticles(sparts, list + num_fields, - with_cosmology); - if (with_fof) { - num_fields += fof_write_sparts(sparts, list + num_fields); - } - if (with_stf) { - num_fields += velociraptor_write_sparts(sparts, list + num_fields); - } - if (with_rt) { - num_fields += rt_write_stars(sparts, list + num_fields); - } + + /* Select the fields to write */ + io_select_star_fields(sparts, with_cosmology, with_fof, with_stf, + with_rt, e, &num_fields, list); + } else { /* Ok, we need to fish out the particles we want */ @@ -738,24 +724,8 @@ void write_output_distributed(struct engine* e, Nstars_written); /* Select the fields to write */ - stars_write_particles(sparts_written, list, &num_fields, - with_cosmology); - num_fields += particle_splitting_write_sparticles(sparts_written, - list + num_fields); - num_fields += - chemistry_write_sparticles(sparts_written, list + num_fields); - num_fields += tracers_write_sparticles( - sparts_written, list + num_fields, with_cosmology); - if (with_fof) { - num_fields += fof_write_sparts(sparts_written, list + num_fields); - } - if (with_stf) { - num_fields += - velociraptor_write_sparts(sparts_written, list + num_fields); - } - if (with_rt) { - num_fields += rt_write_stars(sparts_written, list + num_fields); - } + 
io_select_star_fields(sparts_written, with_cosmology, with_fof, + with_stf, with_rt, e, &num_fields, list); } } break; @@ -764,17 +734,11 @@ void write_output_distributed(struct engine* e, /* No inhibted particles: easy case */ Nparticles = Nblackholes; - black_holes_write_particles(bparts, list, &num_fields, - with_cosmology); - num_fields += - particle_splitting_write_bparticles(bparts, list + num_fields); - num_fields += chemistry_write_bparticles(bparts, list + num_fields); - if (with_fof) { - num_fields += fof_write_bparts(bparts, list + num_fields); - } - if (with_stf) { - num_fields += velociraptor_write_bparts(bparts, list + num_fields); - } + + /* Select the fields to write */ + io_select_bh_fields(bparts, with_cosmology, with_fof, with_stf, e, + &num_fields, list); + } else { /* Ok, we need to fish out the particles we want */ @@ -791,19 +755,8 @@ void write_output_distributed(struct engine* e, Nblackholes_written); /* Select the fields to write */ - black_holes_write_particles(bparts_written, list, &num_fields, - with_cosmology); - num_fields += particle_splitting_write_bparticles(bparts_written, - list + num_fields); - num_fields += - chemistry_write_bparticles(bparts_written, list + num_fields); - if (with_fof) { - num_fields += fof_write_bparts(bparts_written, list + num_fields); - } - if (with_stf) { - num_fields += - velociraptor_write_bparts(bparts_written, list + num_fields); - } + io_select_bh_fields(bparts_written, with_cosmology, with_fof, + with_stf, e, &num_fields, list); } } break; @@ -859,6 +812,9 @@ void write_output_distributed(struct engine* e, /* Close file */ H5Fclose(h_file); + /* Make sure nobody is allowed to progress until everyone is done. 
*/ + MPI_Barrier(comm); + e->snapshot_output_count++; if (e->snapshot_invoke_stf) e->stf_output_count++; } diff --git a/src/engine.c b/src/engine.c index 337ee530a3c47f2235b2bb15492bb88ffe1eef68..433b92801f5e44d3b43435829973f494481afe59 100644 --- a/src/engine.c +++ b/src/engine.c @@ -2759,8 +2759,6 @@ void engine_init(struct engine *e, struct space *s, struct swift_params *params, parser_get_opt_param_int(params, "Snapshots:compression", 0); e->snapshot_distributed = parser_get_opt_param_int(params, "Snapshots:distributed", 0); - e->snapshot_int_time_label_on = - parser_get_opt_param_int(params, "Snapshots:int_time_label_on", 0); e->snapshot_invoke_stf = parser_get_opt_param_int(params, "Snapshots:invoke_stf", 0); e->snapshot_invoke_fof = @@ -2909,8 +2907,6 @@ void engine_init(struct engine *e, struct space *s, struct swift_params *params, if (e->policy & engine_policy_star_formation) { star_formation_logger_accumulator_init(&e->sfh); } - - engine_init_output_lists(e, params); } /** @@ -3270,12 +3266,10 @@ void engine_struct_dump(struct engine *e, FILE *stream) { los_struct_dump(e->los_properties, stream); parser_struct_dump(e->parameter_file, stream); output_options_struct_dump(e->output_options, stream); - if (e->output_list_snapshots) - output_list_struct_dump(e->output_list_snapshots, stream); - if (e->output_list_stats) - output_list_struct_dump(e->output_list_stats, stream); - if (e->output_list_stf) output_list_struct_dump(e->output_list_stf, stream); - if (e->output_list_los) output_list_struct_dump(e->output_list_los, stream); + if (e->output_list_snapshots) output_list_clean(&e->output_list_snapshots); + if (e->output_list_stats) output_list_clean(&e->output_list_stats); + if (e->output_list_stf) output_list_clean(&e->output_list_stf); + if (e->output_list_los) output_list_clean(&e->output_list_los); #ifdef WITH_LOGGER if (e->policy & engine_policy_logger) { @@ -3421,34 +3415,6 @@ void engine_struct_restore(struct engine *e, FILE *stream) { 
output_options_struct_restore(output_options, stream); e->output_options = output_options; - if (e->output_list_snapshots) { - struct output_list *output_list_snapshots = - (struct output_list *)malloc(sizeof(struct output_list)); - output_list_struct_restore(output_list_snapshots, stream); - e->output_list_snapshots = output_list_snapshots; - } - - if (e->output_list_stats) { - struct output_list *output_list_stats = - (struct output_list *)malloc(sizeof(struct output_list)); - output_list_struct_restore(output_list_stats, stream); - e->output_list_stats = output_list_stats; - } - - if (e->output_list_stf) { - struct output_list *output_list_stf = - (struct output_list *)malloc(sizeof(struct output_list)); - output_list_struct_restore(output_list_stf, stream); - e->output_list_stf = output_list_stf; - } - - if (e->output_list_los) { - struct output_list *output_list_los = - (struct output_list *)malloc(sizeof(struct output_list)); - output_list_struct_restore(output_list_los, stream); - e->output_list_los = output_list_los; - } - #ifdef WITH_LOGGER if (e->policy & engine_policy_logger) { struct logger_writer *log = diff --git a/src/engine.h b/src/engine.h index 06bdc3be5dde1054d1590617c2154b3ceb920c9f..744382e828e2dd4d3557d455fdcbc4f8300f8a91 100644 --- a/src/engine.h +++ b/src/engine.h @@ -315,7 +315,6 @@ struct engine { int snapshot_run_on_dump; int snapshot_distributed; int snapshot_compression; - int snapshot_int_time_label_on; int snapshot_invoke_stf; int snapshot_invoke_fof; struct unit_system *snapshot_units; diff --git a/src/engine_config.c b/src/engine_config.c index 99e7b32a978f7920713f8f67484190c70bed5e55..f8c8705d31b5e4e2d1eb8a84fe5ac97ad2a4335e 100644 --- a/src/engine_config.c +++ b/src/engine_config.c @@ -427,6 +427,7 @@ void engine_config(int restart, int fof, struct engine *e, if (e->policy & engine_policy_self_gravity) if (e->nodeID == 0) gravity_props_print(e->gravity_properties); + /* Print information about the stellar scheme */ if (e->policy & 
engine_policy_stars)
     if (e->nodeID == 0) stars_props_print(e->stars_properties);
@@ -437,10 +438,6 @@ void engine_config(int restart, int fof, struct engine *e,
         "time (t_beg = %e)",
         e->time_end, e->time_begin);
 
-  /* Check we don't have inappropriate time labels */
-  if ((e->snapshot_int_time_label_on == 1) && (e->time_end <= 1.f))
-    error("Snapshot integer time labels enabled but end time <= 1");
-
   /* Check we have sensible time-step values */
   if (e->dt_min > e->dt_max)
     error(
@@ -474,7 +471,10 @@
     error("Maximal time-step size larger than the simulation run time t=%e",
           e->time_end - e->time_begin);
 
-  /* Deal with outputs */
+  /* Read (or re-read) the list of outputs */
+  engine_init_output_lists(e, params);
+
+  /* Check whether output quantities make sense */
   if (e->policy & engine_policy_cosmology) {
 
     if (e->delta_time_snapshot <= 1.)
@@ -560,9 +560,6 @@
     }
   }
 
-  /* Try to ensure the snapshot directory exists */
-  if (e->nodeID == 0) io_make_snapshot_subdir(e->snapshot_subdir);
-
   /* Get the total mass */
   e->total_mass = 0.;
   for (size_t i = 0; i < e->s->nr_gparts; ++i)
@@ -603,12 +600,6 @@
     engine_compute_next_fof_time(e);
   }
 
-  /* Check that the snapshot naming policy is valid */
-  if (e->snapshot_invoke_stf && e->snapshot_int_time_label_on)
-    error(
-        "Cannot use snapshot time labels and VELOCIraptor invocations "
-        "together!");
-
   /* Check that we are invoking VELOCIraptor only if we have it */
   if (e->snapshot_invoke_stf &&
       !(e->policy & engine_policy_structure_finding)) {
diff --git a/src/engine_io.c b/src/engine_io.c
index e5ed79a04352badefe7890757ec65f1abb542b1b..9f544499a7ebe69fc188d405d0332fc619ca0378 100644
--- a/src/engine_io.c
+++ b/src/engine_io.c
@@ -883,55 +883,65 @@ void engine_compute_next_fof_time(struct engine *e) {
  * @param params The #swift_params.
*/ void engine_init_output_lists(struct engine *e, struct swift_params *params) { + /* Deal with snapshots */ - double snaps_time_first; e->output_list_snapshots = NULL; output_list_init(&e->output_list_snapshots, e, "Snapshots", - &e->delta_time_snapshot, &snaps_time_first); + &e->delta_time_snapshot); if (e->output_list_snapshots) { + engine_compute_next_snapshot_time(e); + if (e->policy & engine_policy_cosmology) - e->a_first_snapshot = snaps_time_first; + e->a_first_snapshot = + exp(e->ti_next_snapshot * e->time_base) * e->cosmology->a_begin; else - e->time_first_snapshot = snaps_time_first; + e->time_first_snapshot = + e->ti_next_snapshot * e->time_base + e->time_begin; } /* Deal with stats */ - double stats_time_first; e->output_list_stats = NULL; output_list_init(&e->output_list_stats, e, "Statistics", - &e->delta_time_statistics, &stats_time_first); + &e->delta_time_statistics); if (e->output_list_stats) { + engine_compute_next_statistics_time(e); + if (e->policy & engine_policy_cosmology) - e->a_first_statistics = stats_time_first; + e->a_first_statistics = + exp(e->ti_next_stats * e->time_base) * e->cosmology->a_begin; else - e->time_first_statistics = stats_time_first; + e->time_first_statistics = + e->ti_next_stats * e->time_base + e->time_begin; } /* Deal with stf */ - double stf_time_first; e->output_list_stf = NULL; output_list_init(&e->output_list_stf, e, "StructureFinding", - &e->delta_time_stf, &stf_time_first); + &e->delta_time_stf); if (e->output_list_stf) { + engine_compute_next_stf_time(e); + if (e->policy & engine_policy_cosmology) - e->a_first_stf_output = stf_time_first; + e->a_first_stf_output = + exp(e->ti_next_stf * e->time_base) * e->cosmology->a_begin; else - e->time_first_stf_output = stf_time_first; + e->time_first_stf_output = e->ti_next_stf * e->time_base + e->time_begin; } /* Deal with line of sight */ - double los_time_first; e->output_list_los = NULL; - output_list_init(&e->output_list_los, e, "LineOfSight", &e->delta_time_los, 
-                   &los_time_first);
+  output_list_init(&e->output_list_los, e, "LineOfSight", &e->delta_time_los);
 
   if (e->output_list_los) {
+    engine_compute_next_los_time(e);
+
     if (e->policy & engine_policy_cosmology)
-      e->a_first_los = los_time_first;
+      e->a_first_los =
+          exp(e->ti_next_los * e->time_base) * e->cosmology->a_begin;
     else
-      e->time_first_los = los_time_first;
+      e->time_first_los = e->ti_next_los * e->time_base + e->time_begin;
   }
 }
diff --git a/src/feedback/EAGLE/feedback.c b/src/feedback/EAGLE/feedback.c
index 8046de77262a49d3b250359d3861dc1e8f09e58f..54510bb4ffd68ae0b4642d8a1c0e21aafd2a2685 100644
--- a/src/feedback/EAGLE/feedback.c
+++ b/src/feedback/EAGLE/feedback.c
@@ -1480,5 +1480,6 @@ void feedback_struct_restore(struct feedback_props* feedback, FILE* stream) {
   restart_read_blocks((void*)feedback, sizeof(struct feedback_props), 1,
                       stream, NULL, "feedback function");
 
-  feedback_restore_tables(feedback);
+  if (strlen(feedback->yield_table_path) != 0)
+    feedback_restore_tables(feedback);
 }
diff --git a/src/fof.c b/src/fof.c
index 4bb791771af57f59638107a73c1d9d7b5ef02d3a..33c6d8219276de087e0e278b7dd2c3a577af95fe 100644
--- a/src/fof.c
+++ b/src/fof.c
@@ -1933,17 +1933,30 @@ void fof_find_foreign_links_mapper(void *map_data, int num_elements,
 #endif
 }
 
+/**
+ * @brief Seed black holes from gas particles in the haloes on the local MPI
+ * rank that passed the criteria.
+ *
+ * @param props The properties of the FOF scheme.
+ * @param bh_props The properties of the black hole scheme.
+ * @param constants The physical constants.
+ * @param cosmo The cosmological model.
+ * @param s The #space we act on.
+ * @param num_groups_local The number of groups on the current MPI rank.
+ * @param group_sizes List of groups sorted in size order.
+ */ void fof_seed_black_holes(const struct fof_props *props, const struct black_holes_props *bh_props, const struct phys_const *constants, const struct cosmology *cosmo, struct space *s, - int num_groups, struct group_length *group_sizes) { + const int num_groups_local, + struct group_length *group_sizes) { const long long *max_part_density_index = props->max_part_density_index; /* Count the number of black holes to seed */ int num_seed_black_holes = 0; - for (int i = 0; i < num_groups + props->extra_bh_seed_count; i++) { + for (int i = 0; i < num_groups_local + props->extra_bh_seed_count; i++) { if (max_part_density_index[i] >= 0) ++num_seed_black_holes; } @@ -1981,7 +1994,7 @@ void fof_seed_black_holes(const struct fof_props *props, int k = s->nr_bparts; /* Loop over the local groups */ - for (int i = 0; i < num_groups + props->extra_bh_seed_count; i++) { + for (int i = 0; i < num_groups_local + props->extra_bh_seed_count; i++) { const long long part_index = max_part_density_index[i]; @@ -2608,9 +2621,9 @@ void fof_search_tree(struct fof_props *props, #endif struct gpart *gparts = s->gparts; size_t *group_index, *group_size; - int num_groups = 0, num_parts_in_groups = 0, max_group_size = 0; - int verbose = s->e->verbose; - ticks tic_total = getticks(); + long long num_groups = 0, num_parts_in_groups = 0, max_group_size = 0; + const int verbose = s->e->verbose; + const ticks tic_total = getticks(); char output_file_name[PARSER_MAX_LINE_SIZE]; snprintf(output_file_name, PARSER_MAX_LINE_SIZE, "%s", props->base_name); @@ -2631,7 +2644,7 @@ void fof_search_tree(struct fof_props *props, long long nr_gparts_cumulative; long long nr_gparts_local = s->nr_gparts; - ticks comms_tic = getticks(); + const ticks comms_tic = getticks(); MPI_Scan(&nr_gparts_local, &nr_gparts_cumulative, 1, MPI_LONG_LONG, MPI_SUM, MPI_COMM_WORLD); @@ -2653,7 +2666,7 @@ void fof_search_tree(struct fof_props *props, group_index = props->group_index; group_size = props->group_size; - ticks 
tic_calc_group_size = getticks(); + const ticks tic_calc_group_size = getticks(); threadpool_map(&s->e->threadpool, fof_calc_group_size_mapper, gparts, nr_gparts, sizeof(struct gpart), threadpool_auto_chunk_size, @@ -2666,7 +2679,7 @@ void fof_search_tree(struct fof_props *props, #ifdef WITH_MPI if (nr_nodes > 1) { - ticks tic_mpi = getticks(); + const ticks tic_mpi = getticks(); /* Search for group links across MPI domains. */ fof_search_foreign_cells(props, s); @@ -2677,8 +2690,7 @@ void fof_search_tree(struct fof_props *props, message( "fof_search_foreign_cells() + calc_group_size took (FOF SCALING): " - "%.3f " - "%s.", + "%.3f %s.", clocks_from_ticks(getticks() - tic_total), clocks_getunit()); } } @@ -2690,7 +2702,7 @@ void fof_search_tree(struct fof_props *props, size_t max_group_size_local = 0; #endif - ticks tic_num_groups_calc = getticks(); + const ticks tic_num_groups_calc = getticks(); for (size_t i = 0; i < nr_gparts; i++) { @@ -2752,7 +2764,7 @@ void fof_search_tree(struct fof_props *props, /* Find global properties. 
*/ #ifdef WITH_MPI - MPI_Allreduce(&num_groups_local, &num_groups, 1, MPI_INT, MPI_SUM, + MPI_Allreduce(&num_groups_local, &num_groups, 1, MPI_LONG_LONG_INT, MPI_SUM, MPI_COMM_WORLD); if (verbose) @@ -2761,10 +2773,10 @@ void fof_search_tree(struct fof_props *props, clocks_getunit()); #ifndef WITHOUT_GROUP_PROPS - MPI_Reduce(&num_parts_in_groups_local, &num_parts_in_groups, 1, MPI_INT, - MPI_SUM, 0, MPI_COMM_WORLD); - MPI_Reduce(&max_group_size_local, &max_group_size, 1, MPI_INT, MPI_MAX, 0, - MPI_COMM_WORLD); + MPI_Reduce(&num_parts_in_groups_local, &num_parts_in_groups, 1, + MPI_LONG_LONG_INT, MPI_SUM, 0, MPI_COMM_WORLD); + MPI_Reduce(&max_group_size_local, &max_group_size, 1, MPI_LONG_LONG_INT, + MPI_MAX, 0, MPI_COMM_WORLD); #endif /* #ifndef WITHOUT_GROUP_PROPS */ #else num_groups = num_groups_local; @@ -2948,7 +2960,7 @@ void fof_search_tree(struct fof_props *props, bzero(props->group_mass, num_groups_local * sizeof(double)); - ticks tic_seeding = getticks(); + const ticks tic_seeding = getticks(); double *group_mass = props->group_mass; #ifdef WITH_MPI @@ -2961,7 +2973,7 @@ void fof_search_tree(struct fof_props *props, #endif if (verbose) - message("Black hole seeding took: %.3f %s.", + message("Computing group properties took: %.3f %s.", clocks_from_ticks(getticks() - tic_seeding), clocks_getunit()); /* Dump group data. */ @@ -2992,12 +3004,12 @@ void fof_search_tree(struct fof_props *props, if (engine_rank == 0) { message( - "No. of groups: %d. No. of particles in groups: %d. No. of particles " - "not in groups: %lld.", + "No. of groups: %lld. No. of particles in groups: %lld. No. 
of " + "particles not in groups: %lld.", num_groups, num_parts_in_groups, s->e->total_nr_gparts - num_parts_in_groups); - message("Largest group by size: %d", max_group_size); + message("Largest group by size: %lld", max_group_size); } if (verbose) message("took %.3f %s.", clocks_from_ticks(getticks() - tic_total), diff --git a/src/fof.h b/src/fof.h index dd24a6964e5354fa17a6574e9c830b0eefefe79a..54c873af1d3d88743360688bcf604fcb3f059993 100644 --- a/src/fof.h +++ b/src/fof.h @@ -87,7 +87,7 @@ struct fof_props { /* ------------ Group properties ----------------- */ /*! Number of groups */ - int num_groups; + long long num_groups; /*! Number of local black holes that belong to groups whose roots are on a * different node. */ diff --git a/src/io_properties.h b/src/io_properties.h index aa385be540f4ff2659f14f193b75276d8a9b12b5..95387e5cec3a06d5076ec7ae57a277e4714bcb59 100644 --- a/src/io_properties.h +++ b/src/io_properties.h @@ -111,6 +111,9 @@ struct io_props { /* Units of the quantity */ enum unit_conversion_factor units; + /* Default value to apply for optional fields when not found in the ICs */ + float default_value; + /* Scale-factor exponent to apply for unit conversion to physical */ float scale_factor_exponent; @@ -177,10 +180,37 @@ struct io_props { /** * @brief Constructs an #io_props from its parameters + * + * @param name The name of the field in the ICs. + * @param type The data type. + * @param dim The dimensionality of the field. + * @param importance Is this field compulsory or optional? + * @param units The units used for this field. + * @param part Pointer to the particle array where to write. + * @param field Name of the field in the particle structure to write to. */ #define io_make_input_field(name, type, dim, importance, units, part, field) \ io_make_input_field_(name, type, dim, importance, units, \ - (char*)(&(part[0]).field), sizeof(part[0])) + (char*)(&(part[0]).field), sizeof(part[0]), 0.) 
+ +/** + * @brief Constructs an #io_props from its parameters with a user-defined + * default value to use for optional fields. + * + * @param name The name of the field in the ICs. + * @param type The data type. + * @param dim The dimensionality of the field. + * @param importance Is this field compulsory or optional? + * @param units The units used for this field. + * @param part Pointer to the particle array where to write. + * @param field Name of the field in the particle structure to write to. + * @param def The value to use as a default if the field is optional and not + * found in the ICs. + */ +#define io_make_input_field_default(name, type, dim, importance, units, part, \ + field, def) \ + io_make_input_field_(name, type, dim, importance, units, \ + (char*)(&(part[0]).field), sizeof(part[0]), def) /** * @brief Construct an #io_props from its parameters @@ -198,8 +228,10 @@ struct io_props { INLINE static struct io_props io_make_input_field_( const char name[FIELD_BUFFER_SIZE], enum IO_DATA_TYPE type, int dimension, enum DATA_IMPORTANCE importance, enum unit_conversion_factor units, - char* field, size_t partSize) { + char* field, size_t partSize, const float default_value) { struct io_props r; + bzero(&r, sizeof(struct io_props)); + strcpy(r.name, name); r.type = type; r.dimension = dimension; @@ -207,27 +239,13 @@ INLINE static struct io_props io_make_input_field_( r.units = units; r.field = field; r.partSize = partSize; - r.parts = NULL; - r.xparts = NULL; - r.gparts = NULL; - r.sparts = NULL; - r.bparts = NULL; - r.conversion = 0; - r.convert_part_f = NULL; - r.convert_part_d = NULL; - r.convert_part_l = NULL; - r.convert_gpart_f = NULL; - r.convert_gpart_d = NULL; - r.convert_gpart_l = NULL; - r.convert_spart_f = NULL; - r.convert_spart_d = NULL; - r.convert_spart_l = NULL; - r.convert_bpart_f = NULL; - r.convert_bpart_d = NULL; - r.convert_bpart_l = NULL; - r.convert_sink_f = NULL; - r.convert_sink_d = NULL; - r.convert_sink_l = NULL; + 
r.default_value = default_value; + + if (default_value != 0.f && importance != OPTIONAL) + error("Cannot set a non-zero default value for a compulsory field!"); + if (default_value != 0.f && type != FLOAT) + error( + "Can only set non-zero default value for a field using a FLOAT type!"); return r; } diff --git a/src/line_of_sight.c b/src/line_of_sight.c index 0d6c2745e9926e4f10fd19fddca546ae73115e6e..6af74ade6255a44a1f2cfc3e68aeabf733a8e289 100644 --- a/src/line_of_sight.c +++ b/src/line_of_sight.c @@ -27,20 +27,12 @@ #endif #include "atomic.h" -#include "chemistry_io.h" -#include "cooling_io.h" #include "engine.h" -#include "fof_io.h" #include "hydro_io.h" #include "io_properties.h" #include "kernel_hydro.h" #include "line_of_sight.h" -#include "particle_splitting.h" #include "periodic.h" -#include "rt_io.h" -#include "star_formation_io.h" -#include "tracers_io.h" -#include "velociraptor_io.h" #include <stdio.h> #include <stdlib.h> @@ -440,28 +432,9 @@ void write_los_hdf5_datasets(hid_t grp, const int j, const size_t N, struct io_props list[100]; /* Find all the gas output fields */ - hydro_write_particles(parts, xparts, list, &num_fields); - num_fields += particle_splitting_write_particles( - parts, xparts, list + num_fields, with_cosmology); - num_fields += chemistry_write_particles(parts, xparts, list + num_fields, - with_cosmology); - if (with_cooling || with_temperature) { - num_fields += cooling_write_particles(parts, xparts, list + num_fields, - e->cooling_func); - } - if (with_fof) { - num_fields += fof_write_parts(parts, xparts, list + num_fields); - } - if (with_stf) { - num_fields += velociraptor_write_parts(parts, xparts, list + num_fields); - } - num_fields += - tracers_write_particles(parts, xparts, list + num_fields, with_cosmology); - num_fields += - star_formation_write_particles(parts, xparts, list + num_fields); - if (with_rt) { - num_fields += rt_write_particles(parts, list + num_fields); - } + io_select_hydro_fields(parts, xparts, 
with_cosmology, with_cooling, + with_temperature, with_fof, with_stf, with_rt, e, + &num_fields, list); /* Loop over each output field */ for (int i = 0; i < num_fields; i++) { @@ -511,6 +484,9 @@ void write_hdf5_header(hid_t h_file, const struct engine *e, io_write_attribute_s(h_grp, "Code", "SWIFT"); io_write_attribute_s(h_grp, "RunName", e->run_name); + /* Write out the particle types */ + io_write_part_type_names(h_grp); + /* Store the time at which the snapshot was written */ time_t tm = time(NULL); struct tm *timeinfo = localtime(&tm); diff --git a/src/output_list.c b/src/output_list.c index bdc173504f0ac2a4736f0446bf76c3ff9da0fdef..6519b59e8226e417a2eaf2f43db9985eec10fe8d 100644 --- a/src/output_list.c +++ b/src/output_list.c @@ -58,6 +58,7 @@ void output_list_read_file(struct output_list *output_list, /* Return to start of file and initialize time array */ fseek(file, 0, SEEK_SET); output_list->times = (double *)malloc(sizeof(double) * output_list->size); + output_list->snapshot_labels = (int *)malloc(sizeof(int) * output_list->size); output_list->select_output_indices = (int *)malloc(sizeof(int) * output_list->size); @@ -79,6 +80,7 @@ void output_list_read_file(struct output_list *output_list, int type = -1; output_list->select_output_on = 0; output_list->select_output_number_of_names = 0; + output_list->alternative_labels_on = 0; trim_trailing(line); @@ -97,6 +99,18 @@ void output_list_read_file(struct output_list *output_list, } else if (strcasecmp(line, "# Scale Factor, Select Output") == 0) { type = OUTPUT_LIST_SCALE_FACTOR; output_list->select_output_on = 1; + } else if (strcasecmp(line, "# Redshift, Select Output, Label") == 0) { + type = OUTPUT_LIST_REDSHIFT; + output_list->select_output_on = 1; + output_list->alternative_labels_on = 1; + } else if (strcasecmp(line, "# Time, Select Output, Label") == 0) { + type = OUTPUT_LIST_AGE; + output_list->select_output_on = 1; + output_list->alternative_labels_on = 1; + } else if (strcasecmp(line, "# Scale 
Factor, Select Output, Label") == 0) { + type = OUTPUT_LIST_SCALE_FACTOR; + output_list->select_output_on = 1; + output_list->alternative_labels_on = 1; } else { error("Unable to interpret the header (%s) in file '%s'", line, filename); } @@ -108,6 +122,11 @@ void output_list_read_file(struct output_list *output_list, "Please change the header in '%s'", filename); + if (!output_list->select_output_on && output_list->alternative_labels_on) + error( + "Found an output list with alternative labels but not individual " + "output selections"); + /* Read file */ size_t ind = 0; int read_successfully = 0; @@ -115,9 +134,15 @@ void output_list_read_file(struct output_list *output_list, char select_output_buffer[FIELD_BUFFER_SIZE] = select_output_header_default_name; while (getline(&line, &len, file) != -1) { + double *time = &output_list->times[ind]; + int *label = &output_list->snapshot_labels[ind]; + /* Write data to output_list */ - if (output_list->select_output_on) { + if (output_list->select_output_on && output_list->alternative_labels_on) { + read_successfully = sscanf(line, "%lf, %[^,], %d", time, + select_output_buffer, label) == 3; + } else if (output_list->select_output_on) { read_successfully = sscanf(line, "%lf, %s", time, select_output_buffer) == 2; } else { @@ -141,16 +166,15 @@ void output_list_read_file(struct output_list *output_list, * in the select_output_names array that corresponds to this select output * name. */ found_select_output = 0; - for (int select_output_index = 0; - select_output_index < output_list->select_output_number_of_names; - select_output_index++) { - if (!strcmp(select_output_buffer, - output_list->select_output_names[select_output_index])) { + for (int i = 0; i < output_list->select_output_number_of_names; i++) { + + if (!strcmp(select_output_buffer, output_list->select_output_names[i])) { /* We already have this select output list string in the buffer! 
*/ - output_list->select_output_indices[ind] = select_output_index; + output_list->select_output_indices[ind] = i; found_select_output = 1; } } + /* If we did not assign it above, we haven't encountered this name before * and we need to create this name in the array */ if (!found_select_output) { @@ -305,26 +329,26 @@ void output_list_get_current_select_output(struct output_list *t, /** * @brief initialize an output list * - * @param list The output list to initialize - * @param e The #engine - * @param name The name of the section in params - * @param delta_time updated to the initial delta time - * @param time_first updated to the time of first output (scale factor or - * cosmic time) + * @param list The output list to initialize. + * @param e The #engine. + * @param name The name of the section in the param file. + * @param delta_time (return) The delta between the first two outputs */ void output_list_init(struct output_list **list, const struct engine *e, - const char *name, double *delta_time, - double *time_first) { + const char *name, double *const delta_time) { + struct swift_params *params = e->parameter_file; - /* get cosmo */ + if (*list != NULL) error("Output list already allocated!"); + + /* Get cosmo */ struct cosmology *cosmo = NULL; if (e->policy & engine_policy_cosmology) cosmo = e->cosmology; /* Read output on/off */ char param_name[PARSER_MAX_LINE_SIZE]; sprintf(param_name, "%s:output_list_on", name); - int output_list_on = parser_get_opt_param_int(params, param_name, 0); + const int output_list_on = parser_get_opt_param_int(params, param_name, 0); /* Check if read output_list */ if (!output_list_on) return; @@ -348,10 +372,8 @@ void output_list_init(struct output_list **list, const struct engine *e, /* Set data for later checks */ if (cosmo) { *delta_time = (*list)->times[1] / (*list)->times[0]; - *time_first = (*list)->times[0]; } else { *delta_time = (*list)->times[1] - (*list)->times[0]; - *time_first = (*list)->times[0]; } } @@ -388,6 
+410,7 @@ void output_list_print(const struct output_list *output_list) { void output_list_clean(struct output_list **output_list) { if (*output_list) { free((*output_list)->times); + free((*output_list)->snapshot_labels); free((*output_list)->select_output_indices); free(*output_list); *output_list = NULL; diff --git a/src/output_list.h b/src/output_list.h index e536d7f12a1faef3132fd7eaa091bf8e5d895a4c..f68d931cd2ba3c93e79228816ee9c51618df6042 100644 --- a/src/output_list.h +++ b/src/output_list.h @@ -62,6 +62,9 @@ struct output_list { * pointers because of restarts. */ int *select_output_indices; + /* List of snapshot labels if not using the defaults */ + int *snapshot_labels; + /* Total number of currently used select output names */ int select_output_number_of_names; @@ -74,6 +77,9 @@ struct output_list { /* Was the Select Output option used? */ int select_output_on; + /* Are we using individual labels for the runs? */ + int alternative_labels_on; + /* Is this output list activated? 
*/ int output_list_on; @@ -88,7 +94,7 @@ void output_list_read_next_time(struct output_list *t, const struct engine *e, void output_list_get_current_select_output(struct output_list *t, char *select_output_name); void output_list_init(struct output_list **list, const struct engine *e, - const char *name, double *delta_time, double *time_first); + const char *name, double *const delta_time); void output_list_print(const struct output_list *output_list); void output_list_clean(struct output_list **output_list); void output_list_struct_dump(struct output_list *list, FILE *stream); diff --git a/src/output_options.c b/src/output_options.c index a03b1cd5e3dda12d3521faf7ca0ddb6ede1d1e4d..d9f02357e239bfe83eca57d95f6e85fb445c113a 100644 --- a/src/output_options.c +++ b/src/output_options.c @@ -281,3 +281,34 @@ int output_options_get_num_fields_to_write( return output_options->num_fields_to_write[selection_id][ptype]; } + +/** + * @brief Return the sub-directory and snapshot basename for the current output + * selection. + * + * @param output_options The #output_options structure + * @param selection_name The current output selection name. + * @param default_subdirname The default general sub-directory name. + * @param default_basename The default general snapshot base name. + * @param subdir_name (return) The sub-directory name to use for this dump. 
+ * @param basename (return) The snapshot base name to use for this dump, + */ +void output_options_get_basename(const struct output_options* output_options, + const char* selection_name, + const char* default_subdirname, + const char* default_basename, + char subdir_name[FILENAME_BUFFER_SIZE], + char basename[FILENAME_BUFFER_SIZE]) { + + /* Full name for the default path */ + char field[PARSER_MAX_LINE_SIZE]; + sprintf(field, "%.*s:basename", FIELD_BUFFER_SIZE, selection_name); + + parser_get_opt_param_string(output_options->select_output, field, basename, + default_basename); + + sprintf(field, "%.*s:subdir", FIELD_BUFFER_SIZE, selection_name); + + parser_get_opt_param_string(output_options->select_output, field, subdir_name, + default_subdirname); +} diff --git a/src/output_options.h b/src/output_options.h index 21c454a29f0e0dfdc4c48eb99f20668aab3da72a..9fac755a7fee85db7ccdfa67825ff2e4b6f5f8bf 100644 --- a/src/output_options.h +++ b/src/output_options.h @@ -72,4 +72,11 @@ int output_options_get_num_fields_to_write( const struct output_options* output_options, const char* selection_name, const int ptype); +void output_options_get_basename(const struct output_options* output_options, + const char* selection_name, + const char* default_subdirname, + const char* default_basename, + char subdir_name[FILENAME_BUFFER_SIZE], + char snap_basename[FILENAME_BUFFER_SIZE]); + #endif diff --git a/src/parallel_io.c b/src/parallel_io.c index c38c02f6300f93ef334a72f753fa4d92165ef98c..d7388c9720c6be21b64ac231e60ead44a7875973 100644 --- a/src/parallel_io.c +++ b/src/parallel_io.c @@ -40,11 +40,9 @@ #include "black_holes_io.h" #include "chemistry_io.h" #include "common_io.h" -#include "cooling_io.h" #include "dimension.h" #include "engine.h" #include "error.h" -#include "fof_io.h" #include "gravity_io.h" #include "gravity_properties.h" #include "hydro_io.h" @@ -56,13 +54,11 @@ #include "part.h" #include "part_type.h" #include "particle_splitting.h" -#include "rt_io.h" #include 
"sink_io.h" #include "star_formation_io.h" #include "stars_io.h" -#include "tracers_io.h" +#include "tools.h" #include "units.h" -#include "velociraptor_io.h" #include "xmf.h" /* The current limit of ROMIO (the underlying MPI-IO layer) is 2GB */ @@ -261,8 +257,16 @@ void read_array_parallel(hid_t grp, struct io_props props, size_t N, if (props.importance == COMPULSORY) { error("Compulsory data set '%s' not present in the file.", props.name); } else { + + /* Create a single instance of the default value */ + float* temp = (float*)malloc(copySize); + for (int i = 0; i < props.dimension; ++i) temp[i] = props.default_value; + + /* Copy it everywhere in the particle array */ for (size_t i = 0; i < N; ++i) - memset(props.field + i * props.partSize, 0, copySize); + memcpy(props.field + i * props.partSize, temp, copySize); + + free(temp); return; } } @@ -1113,13 +1117,6 @@ void prepare_file(struct engine* e, const char* fileName, const struct unit_system* internal_units, const struct unit_system* snapshot_units) { - const struct part* parts = e->s->parts; - const struct xpart* xparts = e->s->xparts; - const struct gpart* gparts = e->s->gparts; - const struct spart* sparts = e->s->sparts; - const struct bpart* bparts = e->s->bparts; - const struct sink* sinks = e->s->sinks; - struct output_options* output_options = e->output_options; const int with_cosmology = e->policy & engine_policy_cosmology; const int with_cooling = e->policy & engine_policy_cooling; @@ -1176,6 +1173,9 @@ void prepare_file(struct engine* e, const char* fileName, io_write_attribute_s(h_grp, "Code", "SWIFT"); io_write_attribute_s(h_grp, "RunName", e->run_name); + /* Write out the particle types */ + io_write_part_type_names(h_grp); + /* Write out the time-base */ if (with_cosmology) { io_write_attribute_d(h_grp, "TimeBase_dloga", e->time_base); @@ -1195,13 +1195,22 @@ void prepare_file(struct engine* e, const char* fileName, /* GADGET-2 legacy values */ /* Number of particles of each type */ + long long 
numParticlesThisFile[swift_type_count] = {0}; unsigned int numParticles[swift_type_count] = {0}; unsigned int numParticlesHighWord[swift_type_count] = {0}; + for (int ptype = 0; ptype < swift_type_count; ++ptype) { numParticles[ptype] = (unsigned int)N_total[ptype]; numParticlesHighWord[ptype] = (unsigned int)(N_total[ptype] >> 32); + + if (numFields[ptype] == 0) { + numParticlesThisFile[ptype] = 0; + } else { + numParticlesThisFile[ptype] = N_total[ptype]; + } } - io_write_attribute(h_grp, "NumPart_ThisFile", LONGLONG, N_total, + + io_write_attribute(h_grp, "NumPart_ThisFile", LONGLONG, numParticlesThisFile, swift_type_count); io_write_attribute(h_grp, "NumPart_Total", UINT, numParticles, swift_type_count); @@ -1265,88 +1274,32 @@ void prepare_file(struct engine* e, const char* fileName, switch (ptype) { case swift_type_gas: - hydro_write_particles(parts, xparts, list, &num_fields); - num_fields += particle_splitting_write_particles( - parts, xparts, list + num_fields, with_cosmology); - num_fields += chemistry_write_particles( - parts, xparts, list + num_fields, with_cosmology); - if (with_cooling || with_temperature) { - num_fields += cooling_write_particles( - parts, xparts, list + num_fields, e->cooling_func); - } - num_fields += tracers_write_particles(parts, xparts, list + num_fields, - with_cosmology); - num_fields += - star_formation_write_particles(parts, xparts, list + num_fields); - if (with_fof) { - num_fields += fof_write_parts(parts, xparts, list + num_fields); - } - if (with_stf) { - num_fields += - velociraptor_write_parts(parts, xparts, list + num_fields); - } - if (with_rt) { - num_fields += rt_write_particles(parts, list + num_fields); - } + io_select_hydro_fields(NULL, NULL, with_cosmology, with_cooling, + with_temperature, with_fof, with_stf, with_rt, e, + &num_fields, list); break; case swift_type_dark_matter: - darkmatter_write_particles(gparts, list, &num_fields); - if (with_fof) { - num_fields += fof_write_gparts(gparts, list + 
num_fields); - } - if (with_stf) { - num_fields += velociraptor_write_gparts(e->s->gpart_group_data, - list + num_fields); - } + io_select_dm_fields(NULL, with_fof, with_stf, e, &num_fields, list); break; case swift_type_dark_matter_background: - darkmatter_write_particles(gparts, list, &num_fields); - if (with_fof) { - num_fields += fof_write_gparts(gparts, list + num_fields); - } - if (with_stf) { - num_fields += velociraptor_write_gparts(e->s->gpart_group_data, - list + num_fields); - } + io_select_dm_fields(NULL, with_fof, with_stf, e, &num_fields, list); break; case swift_type_sink: - sink_write_particles(sinks, list, &num_fields, with_cosmology); + io_select_sink_fields(NULL, with_cosmology, with_fof, with_stf, e, + &num_fields, list); break; case swift_type_stars: - stars_write_particles(sparts, list, &num_fields, with_cosmology); - num_fields += - particle_splitting_write_sparticles(sparts, list + num_fields); - num_fields += chemistry_write_sparticles(sparts, list + num_fields); - num_fields += - tracers_write_sparticles(sparts, list + num_fields, with_cosmology); - num_fields += - star_formation_write_sparticles(sparts, list + num_fields); - if (with_fof) { - num_fields += fof_write_sparts(sparts, list + num_fields); - } - if (with_stf) { - num_fields += velociraptor_write_sparts(sparts, list + num_fields); - } - if (with_rt) { - num_fields += rt_write_stars(sparts, list + num_fields); - } + io_select_star_fields(NULL, with_cosmology, with_fof, with_stf, with_rt, + e, &num_fields, list); break; case swift_type_black_hole: - black_holes_write_particles(bparts, list, &num_fields, with_cosmology); - num_fields += - particle_splitting_write_bparticles(bparts, list + num_fields); - num_fields += chemistry_write_bparticles(bparts, list + num_fields); - if (with_fof) { - num_fields += fof_write_bparts(bparts, list + num_fields); - } - if (with_stf) { - num_fields += velociraptor_write_bparts(bparts, list + num_fields); - } + io_select_bh_fields(NULL, 
with_cosmology, with_fof, with_stf, e, + &num_fields, list); break; default: @@ -1494,14 +1447,8 @@ void write_output_parallel(struct engine* e, ticks tic = getticks(); #endif - /* File names */ - char fileName[FILENAME_BUFFER_SIZE]; - char xmfFileName[FILENAME_BUFFER_SIZE]; - io_get_snapshot_filename(fileName, xmfFileName, e->snapshot_int_time_label_on, - e->snapshot_invoke_stf, e->time, e->stf_output_count, - e->snapshot_output_count, e->snapshot_subdir, - e->snapshot_base_name); - + /* Determine if we are writing a reduced snapshot, and if so which + * output selection type to use */ char current_selection_name[FIELD_BUFFER_SIZE] = select_output_header_default_name; if (output_list) { @@ -1510,6 +1457,24 @@ void write_output_parallel(struct engine* e, output_list_get_current_select_output(output_list, current_selection_name); } + /* File names */ + char fileName[FILENAME_BUFFER_SIZE]; + char xmfFileName[FILENAME_BUFFER_SIZE]; + char snapshot_subdir_name[FILENAME_BUFFER_SIZE]; + char snapshot_base_name[FILENAME_BUFFER_SIZE]; + + output_options_get_basename(output_options, current_selection_name, + e->snapshot_subdir, e->snapshot_base_name, + snapshot_subdir_name, snapshot_base_name); + + io_get_snapshot_filename( + fileName, xmfFileName, output_list, e->snapshot_invoke_stf, + e->stf_output_count, e->snapshot_output_count, e->snapshot_subdir, + snapshot_subdir_name, e->snapshot_base_name, snapshot_base_name); + + /* Create the directory */ + if (mpi_rank == 0) safe_checkdir(snapshot_subdir_name, /*create=*/1); + /* Total number of fields to write per ptype */ int numFields[swift_type_count] = {0}; for (int ptype = 0; ptype < swift_type_count; ++ptype) { @@ -1652,29 +1617,11 @@ void write_output_parallel(struct engine* e, /* No inhibted particles: easy case */ Nparticles = Ngas; - hydro_write_particles(parts, xparts, list, &num_fields); - num_fields += particle_splitting_write_particles( - parts, xparts, list + num_fields, with_cosmology); - num_fields += 
chemistry_write_particles( - parts, xparts, list + num_fields, with_cosmology); - if (with_cooling || with_temperature) { - num_fields += cooling_write_particles( - parts, xparts, list + num_fields, e->cooling_func); - } - if (with_fof) { - num_fields += fof_write_parts(parts, xparts, list + num_fields); - } - if (with_stf) { - num_fields += - velociraptor_write_parts(parts, xparts, list + num_fields); - } - num_fields += tracers_write_particles( - parts, xparts, list + num_fields, with_cosmology); - num_fields += - star_formation_write_particles(parts, xparts, list + num_fields); - if (with_rt) { - num_fields += rt_write_particles(parts, list + num_fields); - } + + /* Select the fields to write */ + io_select_hydro_fields(parts, xparts, with_cosmology, with_cooling, + with_temperature, with_fof, with_stf, with_rt, + e, &num_fields, list); } else { @@ -1696,32 +1643,9 @@ void write_output_parallel(struct engine* e, xparts_written, Ngas, Ngas_written); /* Select the fields to write */ - hydro_write_particles(parts_written, xparts_written, list, - &num_fields); - num_fields += particle_splitting_write_particles( - parts_written, xparts_written, list + num_fields, with_cosmology); - num_fields += chemistry_write_particles( - parts_written, xparts_written, list + num_fields, with_cosmology); - if (with_cooling || with_temperature) { - num_fields += - cooling_write_particles(parts_written, xparts_written, - list + num_fields, e->cooling_func); - } - if (with_fof) { - num_fields += fof_write_parts(parts_written, xparts_written, - list + num_fields); - } - if (with_stf) { - num_fields += velociraptor_write_parts( - parts_written, xparts_written, list + num_fields); - } - num_fields += tracers_write_particles( - parts_written, xparts_written, list + num_fields, with_cosmology); - num_fields += star_formation_write_particles( - parts_written, xparts_written, list + num_fields); - if (with_rt) { - num_fields += rt_write_particles(parts_written, list + num_fields); - } + 
io_select_hydro_fields(parts_written, xparts_written, with_cosmology, + with_cooling, with_temperature, with_fof, + with_stf, with_rt, e, &num_fields, list); } } break; @@ -1730,14 +1654,10 @@ void write_output_parallel(struct engine* e, /* This is a DM-only run without inhibited particles */ Nparticles = Ntot; - darkmatter_write_particles(gparts, list, &num_fields); - if (with_fof) { - num_fields += fof_write_gparts(gparts, list + num_fields); - } - if (with_stf) { - num_fields += velociraptor_write_gparts(e->s->gpart_group_data, - list + num_fields); - } + + /* Select the fields to write */ + io_select_dm_fields(gparts, with_fof, with_stf, e, &num_fields, list); + } else { /* Ok, we need to fish out the particles we want */ @@ -1765,14 +1685,8 @@ void write_output_parallel(struct engine* e, Ntot, Ndm_written, with_stf); /* Select the fields to write */ - darkmatter_write_particles(gparts_written, list, &num_fields); - if (with_fof) { - num_fields += fof_write_gparts(gparts_written, list + num_fields); - } - if (with_stf) { - num_fields += velociraptor_write_gparts(gpart_group_data_written, - list + num_fields); - } + io_select_dm_fields(gparts_written, with_fof, with_stf, e, + &num_fields, list); } } break; @@ -1803,14 +1717,8 @@ void write_output_parallel(struct engine* e, gpart_group_data_written, Ntot, Ndm_background, with_stf); /* Select the fields to write */ - darkmatter_write_particles(gparts_written, list, &num_fields); - if (with_stf) { -#ifdef HAVE_VELOCIRAPTOR - num_fields += velociraptor_write_gparts(gpart_group_data_written, - list + num_fields); -#endif - } - + io_select_dm_fields(gparts_written, with_fof, with_stf, e, &num_fields, + list); } break; case swift_type_sink: { @@ -1818,7 +1726,11 @@ void write_output_parallel(struct engine* e, /* No inhibted particles: easy case */ Nparticles = Nsinks; - sink_write_particles(sinks, list, &num_fields, with_cosmology); + + /* Select the fields to write */ + io_select_sink_fields(sinks, with_cosmology, 
with_fof, with_stf, e, + &num_fields, list); + } else { /* Ok, we need to fish out the particles we want */ @@ -1835,8 +1747,8 @@ void write_output_parallel(struct engine* e, Nsinks_written); /* Select the fields to write */ - sink_write_particles(sinks_written, list, &num_fields, - with_cosmology); + io_select_sink_fields(sinks_written, with_cosmology, with_fof, + with_stf, e, &num_fields, list); } } break; @@ -1845,23 +1757,10 @@ void write_output_parallel(struct engine* e, /* No inhibted particles: easy case */ Nparticles = Nstars; - stars_write_particles(sparts, list, &num_fields, with_cosmology); - num_fields += - particle_splitting_write_sparticles(sparts, list + num_fields); - num_fields += chemistry_write_sparticles(sparts, list + num_fields); - num_fields += tracers_write_sparticles(sparts, list + num_fields, - with_cosmology); - num_fields += - star_formation_write_sparticles(sparts, list + num_fields); - if (with_fof) { - num_fields += fof_write_sparts(sparts, list + num_fields); - } - if (with_stf) { - num_fields += velociraptor_write_sparts(sparts, list + num_fields); - } - if (with_rt) { - num_fields += rt_write_stars(sparts, list + num_fields); - } + + /* Select the fields to write */ + io_select_star_fields(sparts, with_cosmology, with_fof, with_stf, + with_rt, e, &num_fields, list); } else { @@ -1879,26 +1778,8 @@ void write_output_parallel(struct engine* e, Nstars_written); /* Select the fields to write */ - stars_write_particles(sparts_written, list, &num_fields, - with_cosmology); - num_fields += particle_splitting_write_sparticles(sparts_written, - list + num_fields); - num_fields += - chemistry_write_sparticles(sparts_written, list + num_fields); - num_fields += tracers_write_sparticles( - sparts_written, list + num_fields, with_cosmology); - num_fields += star_formation_write_sparticles(sparts_written, - list + num_fields); - if (with_fof) { - num_fields += fof_write_sparts(sparts_written, list + num_fields); - } - if (with_stf) { - 
num_fields += - velociraptor_write_sparts(sparts_written, list + num_fields); - } - if (with_rt) { - num_fields += rt_write_stars(sparts_written, list + num_fields); - } + io_select_star_fields(sparts_written, with_cosmology, with_fof, + with_stf, with_rt, e, &num_fields, list); } } break; @@ -1907,17 +1788,11 @@ void write_output_parallel(struct engine* e, /* No inhibted particles: easy case */ Nparticles = Nblackholes; - black_holes_write_particles(bparts, list, &num_fields, - with_cosmology); - num_fields += - particle_splitting_write_bparticles(bparts, list + num_fields); - num_fields += chemistry_write_bparticles(bparts, list + num_fields); - if (with_fof) { - num_fields += fof_write_bparts(bparts, list + num_fields); - } - if (with_stf) { - num_fields += velociraptor_write_bparts(bparts, list + num_fields); - } + + /* Select the fields to write */ + io_select_bh_fields(bparts, with_cosmology, with_fof, with_stf, e, + &num_fields, list); + } else { /* Ok, we need to fish out the particles we want */ @@ -1934,19 +1809,8 @@ void write_output_parallel(struct engine* e, Nblackholes_written); /* Select the fields to write */ - black_holes_write_particles(bparts_written, list, &num_fields, - with_cosmology); - num_fields += particle_splitting_write_bparticles(bparts_written, - list + num_fields); - num_fields += - chemistry_write_bparticles(bparts_written, list + num_fields); - if (with_fof) { - num_fields += fof_write_bparts(bparts_written, list + num_fields); - } - if (with_stf) { - num_fields += - velociraptor_write_bparts(bparts_written, list + num_fields); - } + io_select_bh_fields(bparts_written, with_cosmology, with_fof, + with_stf, e, &num_fields, list); } } break; diff --git a/src/serial_io.c b/src/serial_io.c index f133ae55183f2ba07f3f1aa49ade50ec974f7fa0..d0ff71bdb315d56cfc924f794ae55a2c4a682a04 100644 --- a/src/serial_io.c +++ b/src/serial_io.c @@ -40,11 +40,9 @@ #include "black_holes_io.h" #include "chemistry_io.h" #include "common_io.h" -#include 
"cooling_io.h" #include "dimension.h" #include "engine.h" #include "error.h" -#include "fof_io.h" #include "gravity_io.h" #include "gravity_properties.h" #include "hydro_io.h" @@ -55,14 +53,11 @@ #include "output_options.h" #include "part.h" #include "part_type.h" -#include "particle_splitting.h" -#include "rt_io.h" #include "sink_io.h" #include "star_formation_io.h" #include "stars_io.h" -#include "tracers_io.h" +#include "tools.h" #include "units.h" -#include "velociraptor_io.h" #include "xmf.h" /** @@ -102,8 +97,16 @@ void read_array_serial(hid_t grp, const struct io_props props, size_t N, if (props.importance == COMPULSORY) { error("Compulsory data set '%s' not present in the file.", props.name); } else { + + /* Create a single instance of the default value */ + float* temp = (float*)malloc(copySize); + for (int i = 0; i < props.dimension; ++i) temp[i] = props.default_value; + + /* Copy it everywhere in the particle array */ for (size_t i = 0; i < N; ++i) - memset(props.field + i * props.partSize, 0, copySize); + memcpy(props.field + i * props.partSize, temp, copySize); + + free(temp); return; } } @@ -977,14 +980,6 @@ void write_output_serial(struct engine* e, const size_t Ndm_written = Ntot_written > 0 ? Ntot_written - Nbaryons_written - Ndm_background : 0; - /* File name */ - char fileName[FILENAME_BUFFER_SIZE]; - char xmfFileName[FILENAME_BUFFER_SIZE]; - io_get_snapshot_filename(fileName, xmfFileName, e->snapshot_int_time_label_on, - e->snapshot_invoke_stf, e->time, e->stf_output_count, - e->snapshot_output_count, e->snapshot_subdir, - e->snapshot_base_name); - /* Determine if we are writing a reduced snapshot, and if so which * output selection type to use. Can just create a copy of this on * each rank. 
 */
@@ -996,6 +991,24 @@ void write_output_serial(struct engine* e,
     output_list_get_current_select_output(output_list, current_selection_name);
   }
 
+  /* File name */
+  char fileName[FILENAME_BUFFER_SIZE];
+  char xmfFileName[FILENAME_BUFFER_SIZE];
+  char snapshot_subdir_name[FILENAME_BUFFER_SIZE];
+  char snapshot_base_name[FILENAME_BUFFER_SIZE];
+
+  output_options_get_basename(output_options, current_selection_name,
+                              e->snapshot_subdir, e->snapshot_base_name,
+                              snapshot_subdir_name, snapshot_base_name);
+
+  io_get_snapshot_filename(
+      fileName, xmfFileName, output_list, e->snapshot_invoke_stf,
+      e->stf_output_count, e->snapshot_output_count, e->snapshot_subdir,
+      snapshot_subdir_name, e->snapshot_base_name, snapshot_base_name);
+
+  /* Create the directory */
+  if (mpi_rank == 0) safe_checkdir(snapshot_subdir_name, /*create=*/1);
+
   /* Total number of fields to write per ptype */
   int numFields[swift_type_count] = {0};
   for (int ptype = 0; ptype < swift_type_count; ++ptype) {
@@ -1061,6 +1074,9 @@ void write_output_serial(struct engine* e,
   io_write_attribute_s(h_grp, "Code", "SWIFT");
   io_write_attribute_s(h_grp, "RunName", e->run_name);
 
+  /* Write out the particle types */
+  io_write_part_type_names(h_grp);
+
   /* Write out the time-base */
   if (with_cosmology) {
     io_write_attribute_d(h_grp, "TimeBase_dloga", e->time_base);
@@ -1080,15 +1096,23 @@ void write_output_serial(struct engine* e,
   io_write_attribute_s(h_grp, "Snapshot date", snapshot_date);
 
   /* GADGET-2 legacy values: Number of particles of each type */
+  long long numParticlesThisFile[swift_type_count] = {0};
   unsigned int numParticles[swift_type_count] = {0};
   unsigned int numParticlesHighWord[swift_type_count] = {0};
+
   for (int ptype = 0; ptype < swift_type_count; ++ptype) {
     numParticles[ptype] = (unsigned int)N_total[ptype];
     numParticlesHighWord[ptype] = (unsigned int)(N_total[ptype] >> 32);
+
+    if (numFields[ptype] == 0) {
+      numParticlesThisFile[ptype] = 0;
+    } else {
+      numParticlesThisFile[ptype] = N_total[ptype];
+    }
   }
-  io_write_attribute(h_grp, "NumPart_ThisFile", LONGLONG, N_total,
-                     swift_type_count);
+  io_write_attribute(h_grp, "NumPart_ThisFile", LONGLONG,
+                     numParticlesThisFile, swift_type_count);
   io_write_attribute(h_grp, "NumPart_Total", UINT, numParticles,
                      swift_type_count);
   io_write_attribute(h_grp, "NumPart_Total_HighWord", UINT,
@@ -1224,29 +1248,11 @@ void write_output_serial(struct engine* e,
 
           /* No inhibted particles: easy case */
           Nparticles = Ngas;
-          hydro_write_particles(parts, xparts, list, &num_fields);
-          num_fields += particle_splitting_write_particles(
-              parts, xparts, list + num_fields, with_cosmology);
-          num_fields += chemistry_write_particles(
-              parts, xparts, list + num_fields, with_cosmology);
-          if (with_cooling || with_temperature) {
-            num_fields += cooling_write_particles(
-                parts, xparts, list + num_fields, e->cooling_func);
-          }
-          if (with_fof) {
-            num_fields += fof_write_parts(parts, xparts, list + num_fields);
-          }
-          if (with_stf) {
-            num_fields +=
-                velociraptor_write_parts(parts, xparts, list + num_fields);
-          }
-          num_fields += tracers_write_particles(
-              parts, xparts, list + num_fields, with_cosmology);
-          num_fields += star_formation_write_particles(parts, xparts,
-                                                       list + num_fields);
-          if (with_rt) {
-            num_fields += rt_write_particles(parts, list + num_fields);
-          }
+
+          /* Select the fields to write */
+          io_select_hydro_fields(parts, xparts, with_cosmology,
+                                 with_cooling, with_temperature, with_fof,
+                                 with_stf, with_rt, e, &num_fields, list);
 
         } else {
 
@@ -1268,36 +1274,10 @@ void write_output_serial(struct engine* e,
                                    xparts_written, Ngas, Ngas_written);
 
           /* Select the fields to write */
-          hydro_write_particles(parts_written, xparts_written, list,
-                                &num_fields);
-          num_fields += particle_splitting_write_particles(
-              parts_written, xparts_written, list + num_fields,
-              with_cosmology);
-          num_fields +=
-              chemistry_write_particles(parts_written, xparts_written,
-                                        list + num_fields, with_cosmology);
-          if (with_cooling || with_temperature) {
-            num_fields +=
-                cooling_write_particles(parts_written, xparts_written,
-                                        list + num_fields, e->cooling_func);
-          }
-          if (with_fof) {
-            num_fields += fof_write_parts(parts_written, xparts_written,
-                                          list + num_fields);
-          }
-          if (with_stf) {
-            num_fields += velociraptor_write_parts(
-                parts_written, xparts_written, list + num_fields);
-          }
-          num_fields +=
-              tracers_write_particles(parts_written, xparts_written,
-                                      list + num_fields, with_cosmology);
-          num_fields += star_formation_write_particles(
-              parts_written, xparts_written, list + num_fields);
-          if (with_rt) {
-            num_fields +=
-                rt_write_particles(parts_written, list + num_fields);
-          }
+          io_select_hydro_fields(parts_written, xparts_written,
+                                 with_cosmology, with_cooling,
+                                 with_temperature, with_fof, with_stf,
+                                 with_rt, e, &num_fields, list);
         }
       } break;
 
@@ -1307,15 +1287,11 @@ void write_output_serial(struct engine* e,
 
           /* This is a DM-only run without background or inhibited particles */
           Nparticles = Ntot;
-          darkmatter_write_particles(gparts, list, &num_fields);
-          if (with_fof) {
-            num_fields +=
-                fof_write_gparts(gparts_written, list + num_fields);
-          }
-          if (with_stf) {
-            num_fields += velociraptor_write_gparts(e->s->gpart_group_data,
-                                                    list + num_fields);
-          }
+
+          /* Select the fields to write */
+          io_select_dm_fields(gparts, with_fof, with_stf, e, &num_fields,
+                              list);
+
         } else {
 
           /* Ok, we need to fish out the particles we want */
@@ -1344,15 +1320,8 @@ void write_output_serial(struct engine* e,
               gpart_group_data_written, Ntot, Ndm_written, with_stf);
 
           /* Select the fields to write */
-          darkmatter_write_particles(gparts_written, list, &num_fields);
-          if (with_fof) {
-            num_fields +=
-                fof_write_gparts(gparts_written, list + num_fields);
-          }
-          if (with_stf) {
-            num_fields += velociraptor_write_gparts(
-                gpart_group_data_written, list + num_fields);
-          }
+          io_select_dm_fields(gparts_written, with_fof, with_stf, e,
+                              &num_fields, list);
         }
       } break;
 
@@ -1384,14 +1353,8 @@ void write_output_serial(struct engine* e,
             gpart_group_data_written, Ntot, Ndm_background, with_stf);
 
         /* Select the fields to write */
-        darkmatter_write_particles(gparts_written, list, &num_fields);
-        if (with_fof) {
-          num_fields += fof_write_gparts(gparts_written, list + num_fields);
-        }
-        if (with_stf) {
-          num_fields += velociraptor_write_gparts(gpart_group_data_written,
-                                                  list + num_fields);
-        }
+        io_select_dm_fields(gparts_written, with_fof, with_stf, e,
+                            &num_fields, list);
 
       } break;
 
@@ -1400,7 +1363,10 @@ void write_output_serial(struct engine* e,
 
           /* No inhibted particles: easy case */
           Nparticles = Nsinks;
-          sink_write_particles(sinks, list, &num_fields, with_cosmology);
+
+          /* Select the fields to write */
+          io_select_sink_fields(sinks, with_cosmology, with_fof, with_stf,
+                                e, &num_fields, list);
 
         } else {
 
          /* Ok, we need to fish out the particles we want */
@@ -1417,8 +1383,8 @@ void write_output_serial(struct engine* e,
                                   Nsinks_written);
 
           /* Select the fields to write */
-          sink_write_particles(sinks_written, list, &num_fields,
-                               with_cosmology);
+          io_select_sink_fields(sinks_written, with_cosmology, with_fof,
+                                with_stf, e, &num_fields, list);
         }
       } break;
 
@@ -1427,25 +1393,11 @@ void write_output_serial(struct engine* e,
 
           /* No inhibted particles: easy case */
           Nparticles = Nstars;
-          stars_write_particles(sparts, list, &num_fields, with_cosmology);
-          num_fields += particle_splitting_write_sparticles(
-              sparts, list + num_fields);
-          num_fields +=
-              chemistry_write_sparticles(sparts, list + num_fields);
-          num_fields += tracers_write_sparticles(sparts, list + num_fields,
-                                                 with_cosmology);
-          num_fields +=
-              star_formation_write_sparticles(sparts, list + num_fields);
-          if (with_fof) {
-            num_fields += fof_write_sparts(sparts, list + num_fields);
-          }
-          if (with_stf) {
-            num_fields +=
-                velociraptor_write_sparts(sparts, list + num_fields);
-          }
-          if (with_rt) {
-            num_fields += rt_write_stars(sparts, list + num_fields);
-          }
+
+          /* Select the fields to write */
+          io_select_star_fields(sparts, with_cosmology, with_fof, with_stf,
+                                with_rt, e, &num_fields, list);
+
        } else {
 
          /* Ok, we need to fish out the particles we want */
@@ -1462,27 +1414,8 @@ void write_output_serial(struct engine* e,
                                   Nstars_written);
 
           /* Select the fields to write */
-          stars_write_particles(sparts_written, list, &num_fields,
-                                with_cosmology);
-          num_fields += particle_splitting_write_sparticles(
-              sparts_written, list + num_fields);
-          num_fields +=
-              chemistry_write_sparticles(sparts_written, list + num_fields);
-          num_fields += tracers_write_sparticles(
-              sparts_written, list + num_fields, with_cosmology);
-          num_fields += star_formation_write_sparticles(sparts_written,
-                                                        list + num_fields);
-          if (with_fof) {
-            num_fields +=
-                fof_write_sparts(sparts_written, list + num_fields);
-          }
-          if (with_stf) {
-            num_fields += velociraptor_write_sparts(sparts_written,
-                                                    list + num_fields);
-          }
-          if (with_rt) {
-            num_fields += rt_write_stars(sparts_written, list + num_fields);
-          }
+          io_select_star_fields(sparts_written, with_cosmology, with_fof,
+                                with_stf, with_rt, e, &num_fields, list);
         }
       } break;
 
@@ -1491,19 +1424,11 @@ void write_output_serial(struct engine* e,
 
           /* No inhibted particles: easy case */
           Nparticles = Nblackholes;
-          black_holes_write_particles(bparts, list, &num_fields,
-                                      with_cosmology);
-          num_fields += particle_splitting_write_bparticles(
-              bparts, list + num_fields);
-          num_fields +=
-              chemistry_write_bparticles(bparts, list + num_fields);
-          if (with_fof) {
-            num_fields += fof_write_bparts(bparts, list + num_fields);
-          }
-          if (with_stf) {
-            num_fields +=
-                velociraptor_write_bparts(bparts, list + num_fields);
-          }
+
+          /* Select the fields to write */
+          io_select_bh_fields(bparts, with_cosmology, with_fof, with_stf, e,
+                              &num_fields, list);
+
        } else {
 
          /* Ok, we need to fish out the particles we want */
@@ -1520,20 +1445,8 @@ void write_output_serial(struct engine* e,
                                         Nblackholes_written);
 
           /* Select the fields to write */
-          black_holes_write_particles(bparts_written, list, &num_fields,
-                                      with_cosmology);
-          num_fields += particle_splitting_write_bparticles(
-              bparts_written, list + num_fields);
-          num_fields +=
-              chemistry_write_bparticles(bparts_written, list + num_fields);
-          if (with_fof) {
-            num_fields +=
-                fof_write_bparts(bparts_written, list + num_fields);
-          }
-          if (with_stf) {
-            num_fields += velociraptor_write_bparts(bparts_written,
-                                                    list + num_fields);
-          }
+          io_select_bh_fields(bparts_written, with_cosmology, with_fof,
+                              with_stf, e, &num_fields, list);
         }
       } break;
diff --git a/src/single_io.c b/src/single_io.c
index 55a5f52346a8588c4efbc833a7c47770e636dc17..6a0035ede7ae74a95a139d742c815c6cef2daf28 100644
--- a/src/single_io.c
+++ b/src/single_io.c
@@ -39,11 +39,9 @@
 #include "black_holes_io.h"
 #include "chemistry_io.h"
 #include "common_io.h"
-#include "cooling_io.h"
 #include "dimension.h"
 #include "engine.h"
 #include "error.h"
-#include "fof_io.h"
 #include "gravity_io.h"
 #include "gravity_properties.h"
 #include "hydro_io.h"
@@ -55,14 +53,11 @@
 #include "output_options.h"
 #include "part.h"
 #include "part_type.h"
-#include "particle_splitting.h"
-#include "rt_io.h"
 #include "sink_io.h"
 #include "star_formation_io.h"
 #include "stars_io.h"
-#include "tracers_io.h"
+#include "tools.h"
 #include "units.h"
-#include "velociraptor_io.h"
 #include "xmf.h"
 
 /**
@@ -99,12 +94,16 @@ void read_array_single(hid_t h_grp, const struct io_props props, size_t N,
   if (props.importance == COMPULSORY) {
     error("Compulsory data set '%s' not present in the file.", props.name);
   } else {
-    /* message("Optional data set '%s' not present. Zeroing this particle
-     * props...", name); */
 
+    /* Create a single instance of the default value */
+    float* temp = (float*)malloc(copySize);
+    for (int i = 0; i < props.dimension; ++i) temp[i] = props.default_value;
+
+    /* Copy it everywhere in the particle array */
     for (size_t i = 0; i < N; ++i)
-      memset(props.field + i * props.partSize, 0, copySize);
+      memcpy(props.field + i * props.partSize, temp, copySize);
 
+    free(temp);
     return;
   }
 }
@@ -836,13 +835,33 @@ void write_output_single(struct engine* e,
       (long long)Ndm_background,  (long long)Nsinks_written,
       (long long)Nstars_written,  (long long)Nblackholes_written};
 
+  /* Determine if we are writing a reduced snapshot, and if so which
+   * output selection type to use */
+  char current_selection_name[FIELD_BUFFER_SIZE] =
+      select_output_header_default_name;
+  if (output_list) {
+    /* Users could have specified a different Select Output scheme for each
+     * snapshot. */
+    output_list_get_current_select_output(output_list, current_selection_name);
+  }
+
   /* File name */
   char fileName[FILENAME_BUFFER_SIZE];
   char xmfFileName[FILENAME_BUFFER_SIZE];
-  io_get_snapshot_filename(fileName, xmfFileName, e->snapshot_int_time_label_on,
-                           e->snapshot_invoke_stf, e->time, e->stf_output_count,
-                           e->snapshot_output_count, e->snapshot_subdir,
-                           e->snapshot_base_name);
+  char snapshot_subdir_name[FILENAME_BUFFER_SIZE];
+  char snapshot_base_name[FILENAME_BUFFER_SIZE];
+
+  output_options_get_basename(output_options, current_selection_name,
+                              e->snapshot_subdir, e->snapshot_base_name,
+                              snapshot_subdir_name, snapshot_base_name);
+
+  io_get_snapshot_filename(
+      fileName, xmfFileName, output_list, e->snapshot_invoke_stf,
+      e->stf_output_count, e->snapshot_output_count, e->snapshot_subdir,
+      snapshot_subdir_name, e->snapshot_base_name, snapshot_base_name);
+
+  /* Create the directory */
+  safe_checkdir(snapshot_subdir_name, /*create=*/1);
 
   /* First time, we need to create the XMF file */
   if (e->snapshot_output_count == 0) xmf_create_file(xmfFileName);
@@ -874,16 +893,6 @@ void write_output_single(struct engine* e,
                          e->s->dim[1] * factor_length,
                          e->s->dim[2] * factor_length};
 
-  /* Determine if we are writing a reduced snapshot, and if so which
-   * output selection type to use */
-  char current_selection_name[FIELD_BUFFER_SIZE] =
-      select_output_header_default_name;
-  if (output_list) {
-    /* Users could have specified a different Select Output scheme for each
-     * snapshot. */
-    output_list_get_current_select_output(output_list, current_selection_name);
-  }
-
   /* Print the relevant information and print status */
   io_write_attribute(h_grp, "BoxSize", DOUBLE, dim, 3);
   io_write_attribute(h_grp, "Time", DOUBLE, &dblTime, 1);
@@ -894,6 +903,9 @@ void write_output_single(struct engine* e,
   io_write_attribute_s(h_grp, "Code", "SWIFT");
   io_write_attribute_s(h_grp, "RunName", e->run_name);
 
+  /* Write out the particle types */
+  io_write_part_type_names(h_grp);
+
   /* Write out the time-base */
   if (with_cosmology) {
     io_write_attribute_d(h_grp, "TimeBase_dloga", e->time_base);
@@ -912,6 +924,7 @@ void write_output_single(struct engine* e,
   io_write_attribute_s(h_grp, "Snapshot date", snapshot_date);
 
   /* GADGET-2 legacy values: number of particles of each type */
+  long long numParticlesThisFile[swift_type_count] = {0};
   unsigned int numParticles[swift_type_count] = {0};
   unsigned int numParticlesHighWord[swift_type_count] = {0};
 
@@ -924,9 +937,15 @@ void write_output_single(struct engine* e,
 
     numFields[ptype] = output_options_get_num_fields_to_write(
         output_options, current_selection_name, ptype);
+
+    if (numFields[ptype] == 0) {
+      numParticlesThisFile[ptype] = 0;
+    } else {
+      numParticlesThisFile[ptype] = N_total[ptype];
+    }
   }
 
-  io_write_attribute(h_grp, "NumPart_ThisFile", LONGLONG, N_total,
+  io_write_attribute(h_grp, "NumPart_ThisFile", LONGLONG, numParticlesThisFile,
                      swift_type_count);
   io_write_attribute(h_grp, "NumPart_Total", UINT, numParticles,
                      swift_type_count);
@@ -1013,29 +1032,11 @@ void write_output_single(struct engine* e,
 
           /* No inhibted particles: easy case */
           N = Ngas;
-          hydro_write_particles(parts, xparts, list, &num_fields);
-          num_fields += particle_splitting_write_particles(
-              parts, xparts, list + num_fields, with_cosmology);
-          num_fields += chemistry_write_particles(
-              parts, xparts, list + num_fields, with_cosmology);
-          if (with_cooling || with_temperature) {
-            num_fields += cooling_write_particles(
-                parts, xparts, list + num_fields, e->cooling_func);
-          }
-          if (with_fof) {
-            num_fields += fof_write_parts(parts, xparts, list + num_fields);
-          }
-          if (with_stf) {
-            num_fields +=
-                velociraptor_write_parts(parts, xparts, list + num_fields);
-          }
-          num_fields += tracers_write_particles(
-              parts, xparts, list + num_fields, with_cosmology);
-          num_fields +=
-              star_formation_write_particles(parts, xparts, list + num_fields);
-          if (with_rt) {
-            num_fields += rt_write_particles(parts, list + num_fields);
-          }
+
+          /* Select the fields to write */
+          io_select_hydro_fields(parts, xparts, with_cosmology, with_cooling,
+                                 with_temperature, with_fof, with_stf, with_rt,
+                                 e, &num_fields, list);
 
         } else {
 
@@ -1057,32 +1058,9 @@ void write_output_single(struct engine* e,
                                    xparts_written, Ngas, Ngas_written);
 
           /* Select the fields to write */
-          hydro_write_particles(parts_written, xparts_written, list,
-                                &num_fields);
-          num_fields += particle_splitting_write_particles(
-              parts_written, xparts_written, list + num_fields, with_cosmology);
-          num_fields += chemistry_write_particles(
-              parts_written, xparts_written, list + num_fields, with_cosmology);
-          if (with_cooling || with_temperature) {
-            num_fields +=
-                cooling_write_particles(parts_written, xparts_written,
-                                        list + num_fields, e->cooling_func);
-          }
-          if (with_fof) {
-            num_fields += fof_write_parts(parts_written, xparts_written,
-                                          list + num_fields);
-          }
-          if (with_stf) {
-            num_fields += velociraptor_write_parts(
-                parts_written, xparts_written, list + num_fields);
-          }
-          num_fields += tracers_write_particles(
-              parts_written, xparts_written, list + num_fields, with_cosmology);
-          num_fields += star_formation_write_particles(
-              parts_written, xparts_written, list + num_fields);
-          if (with_rt) {
-            num_fields += rt_write_particles(parts_written, list + num_fields);
-          }
+          io_select_hydro_fields(parts_written, xparts_written, with_cosmology,
+                                 with_cooling, with_temperature, with_fof,
+                                 with_stf, with_rt, e, &num_fields, list);
         }
       } break;
 
@@ -1091,14 +1069,10 @@ void write_output_single(struct engine* e,
 
           /* This is a DM-only run without background or inhibited particles */
           N = Ntot;
-          darkmatter_write_particles(gparts, list, &num_fields);
-          if (with_fof) {
-            num_fields += fof_write_gparts(gparts, list + num_fields);
-          }
-          if (with_stf) {
-            num_fields += velociraptor_write_gparts(e->s->gpart_group_data,
-                                                    list + num_fields);
-          }
+
+          /* Select the fields to write */
+          io_select_dm_fields(gparts, with_fof, with_stf, e, &num_fields, list);
+
        } else {
 
          /* Ok, we need to fish out the particles we want */
@@ -1126,14 +1100,8 @@ void write_output_single(struct engine* e,
               Ntot, Ndm_written, with_stf);
 
           /* Select the fields to write */
-          darkmatter_write_particles(gparts_written, list, &num_fields);
-          if (with_fof) {
-            num_fields += fof_write_gparts(gparts_written, list + num_fields);
-          }
-          if (with_stf) {
-            num_fields += velociraptor_write_gparts(gpart_group_data_written,
-                                                    list + num_fields);
-          }
+          io_select_dm_fields(gparts_written, with_fof, with_stf, e,
+                              &num_fields, list);
         }
       } break;
 
@@ -1164,14 +1132,9 @@ void write_output_single(struct engine* e,
             gpart_group_data_written, Ntot, Ndm_background, with_stf);
 
         /* Select the fields to write */
-        darkmatter_write_particles(gparts_written, list, &num_fields);
-        if (with_fof) {
-          num_fields += fof_write_gparts(gparts_written, list + num_fields);
-        }
-        if (with_stf) {
-          num_fields += velociraptor_write_gparts(gpart_group_data_written,
-                                                  list + num_fields);
-        }
+        io_select_dm_fields(gparts_written, with_fof, with_stf, e, &num_fields,
+                            list);
+
       } break;
 
       case swift_type_sink: {
@@ -1179,7 +1142,10 @@ void write_output_single(struct engine* e,
 
           /* No inhibted particles: easy case */
           N = Nsinks;
-          sink_write_particles(sinks, list, &num_fields, with_cosmology);
+
+          /* Select the fields to write */
+          io_select_sink_fields(sinks, with_cosmology, with_fof, with_stf, e,
+                                &num_fields, list);
 
         } else {
 
          /* Ok, we need to fish out the particles we want */
@@ -1196,8 +1162,8 @@ void write_output_single(struct engine* e,
                                   Nsinks_written);
 
           /* Select the fields to write */
-          sink_write_particles(sinks_written, list, &num_fields,
-                               with_cosmology);
+          io_select_sink_fields(sinks_written, with_cosmology, with_fof,
+                                with_stf, e, &num_fields, list);
         }
       } break;
 
@@ -1206,23 +1172,11 @@ void write_output_single(struct engine* e,
 
           /* No inhibited particles: easy case */
           N = Nstars;
-          stars_write_particles(sparts, list, &num_fields, with_cosmology);
-          num_fields +=
-              particle_splitting_write_sparticles(sparts, list + num_fields);
-          num_fields += chemistry_write_sparticles(sparts, list + num_fields);
-          num_fields += tracers_write_sparticles(sparts, list + num_fields,
-                                                 with_cosmology);
-          num_fields +=
-              star_formation_write_sparticles(sparts, list + num_fields);
-          if (with_fof) {
-            num_fields += fof_write_sparts(sparts, list + num_fields);
-          }
-          if (with_stf) {
-            num_fields += velociraptor_write_sparts(sparts, list + num_fields);
-          }
-          if (with_rt) {
-            num_fields += rt_write_stars(sparts, list + num_fields);
-          }
+
+          /* Select the fields to write */
+          io_select_star_fields(sparts, with_cosmology, with_fof, with_stf,
+                                with_rt, e, &num_fields, list);
+
        } else {
 
          /* Ok, we need to fish out the particles we want */
@@ -1239,26 +1193,8 @@ void write_output_single(struct engine* e,
                                   Nstars_written);
 
           /* Select the fields to write */
-          stars_write_particles(sparts_written, list, &num_fields,
-                                with_cosmology);
-          num_fields += particle_splitting_write_sparticles(sparts_written,
-                                                            list + num_fields);
-          num_fields +=
-              chemistry_write_sparticles(sparts_written, list + num_fields);
-          num_fields += tracers_write_sparticles(
-              sparts_written, list + num_fields, with_cosmology);
-          num_fields += star_formation_write_sparticles(sparts_written,
-                                                        list + num_fields);
-          if (with_fof) {
-            num_fields += fof_write_sparts(sparts_written, list + num_fields);
-          }
-          if (with_stf) {
-            num_fields +=
-                velociraptor_write_sparts(sparts_written, list + num_fields);
-          }
-          if (with_rt) {
-            num_fields += rt_write_stars(sparts_written, list + num_fields);
-          }
+          io_select_star_fields(sparts_written, with_cosmology, with_fof,
+                                with_stf, with_rt, e, &num_fields, list);
         }
       } break;
 
@@ -1267,17 +1203,11 @@ void write_output_single(struct engine* e,
 
           /* No inhibited particles: easy case */
          N = Nblackholes;
-          black_holes_write_particles(bparts, list, &num_fields,
-                                      with_cosmology);
-          num_fields +=
-              particle_splitting_write_bparticles(bparts, list + num_fields);
-          num_fields += chemistry_write_bparticles(bparts, list + num_fields);
-          if (with_fof) {
-            num_fields += fof_write_bparts(bparts, list + num_fields);
-          }
-          if (with_stf) {
-            num_fields += velociraptor_write_bparts(bparts, list + num_fields);
-          }
+
+          /* Select the fields to write */
+          io_select_bh_fields(bparts, with_cosmology, with_fof, with_stf, e,
+                              &num_fields, list);
+
        } else {
 
          /* Ok, we need to fish out the particles we want */
@@ -1294,19 +1224,8 @@ void write_output_single(struct engine* e,
                                         Nblackholes_written);
 
           /* Select the fields to write */
-          black_holes_write_particles(bparts_written, list, &num_fields,
-                                      with_cosmology);
-          num_fields += particle_splitting_write_bparticles(bparts_written,
-                                                            list + num_fields);
-          num_fields +=
-              chemistry_write_bparticles(bparts_written, list + num_fields);
-          if (with_fof) {
-            num_fields += fof_write_bparts(bparts_written, list + num_fields);
-          }
-          if (with_stf) {
-            num_fields +=
-                velociraptor_write_bparts(bparts_written, list + num_fields);
-          }
+          io_select_bh_fields(bparts_written, with_cosmology, with_fof,
+                              with_stf, e, &num_fields, list);
         }
       } break;
diff --git a/src/stars/EAGLE/stars_io.h b/src/stars/EAGLE/stars_io.h
index dc0d347a548d231ba09e94b1903dd864539c369c..49a592cb91c136b96f690c7f94f5f3438e2cf76a 100644
--- a/src/stars/EAGLE/stars_io.h
+++ b/src/stars/EAGLE/stars_io.h
@@ -50,8 +50,9 @@ INLINE static void stars_read_particles(struct spart *sparts,
                                  UNIT_CONV_LENGTH, sparts, h);
   list[5] = io_make_input_field("Masses", FLOAT, 1, COMPULSORY, UNIT_CONV_MASS,
                                 sparts, mass_init);
-  list[6] = io_make_input_field("StellarFormationTime", FLOAT, 1, OPTIONAL,
-                                UNIT_CONV_NO_UNITS, sparts, birth_time);
+  list[6] =
+      io_make_input_field_default("StellarFormationTime", FLOAT, 1, OPTIONAL,
+                                  UNIT_CONV_NO_UNITS, sparts, birth_time, -1.);
   list[7] = io_make_input_field("BirthDensities", FLOAT, 1, OPTIONAL,
                                 UNIT_CONV_DENSITY, sparts, birth_density);
   list[8] =