.. SPHENIX SPH
Josh Borrow 8th January 2020
SPHENIX
=======
This is the new scheme that will be used in EAGLE-XL. This scheme includes:
+ Durier & Dalla Vecchia (2012) time-step limiter
+ Density-Energy SPH
+ Thermal diffusion following Price (2012)
+ A simplified version of the 'Inviscid SPH' artificial viscosity
  (Cullen & Dehnen 2010), with a Balsara switch
+ A diffusion limiter, used to prevent energy leakage out of EAGLE
  supernovae (Borrow+ 2020).
The simplified version of the 'Inviscid SPH' artificial viscosity calculates
the time differential of the velocity divergence explicitly, using the value
from the previous step. We also use the Balsara switch instead of the improved
neighbour-based limiter from Cullen & Dehnen 2010, to avoid matrix
calculations.
To configure with this scheme, use

.. code-block:: bash

   ./configure --with-hydro=sphenix --with-kernel=quintic-spline --disable-hand-vec
The diffusion limiter is implemented to ensure that the diffusion is turned
off in very viscous flows and works as follows:

.. code-block:: C

   /* Limit the diffusion based on the largest viscosity alpha found
      amongst the particle's neighbours. */
   float new_diffusion_alpha = old_diffusion_alpha;
   const float viscous_diffusion_limit =
       diffusion_alpha_max *
       (1.f - maximum_alpha_visc_over_ngb / viscosity_alpha_max);
   new_diffusion_alpha = min(new_diffusion_alpha, viscous_diffusion_limit);
The parameters available for this scheme, and their defaults, are:

.. code-block:: yaml

   SPH:
     viscosity_alpha: 0.1      # Initial value for the alpha viscosity
     viscosity_length: 0.25    # Viscosity decay length (in terms of sound-crossing time)
     # These are enforced each time-step
     viscosity_alpha_max: 2.0  # Maximal allowed value for the viscosity alpha
     viscosity_alpha_min: 0.0  # Minimal allowed value for the viscosity alpha
     diffusion_alpha: 0.0      # Initial value for the diffusion alpha
     diffusion_beta: 0.25      # Timescale to raise the diffusion coefficient over
                               # (decay is on the sound-crossing time)
     # These are enforced each time-step
     diffusion_alpha_max: 1.0
     diffusion_alpha_min: 0.0
There is also a compile-time parameter, ``viscosity_beta``, that we set to
3.0. During feedback events, the viscosity is set to the compile-time
``hydro_props_default_viscosity_alpha_feedback_reset = 2.0`` and the
diffusion is set to ``hydro_props_default_diffusion_alpha_feedback_reset =
0.0``. These can be changed in ``src/hydro/SPHENIX/hydro_parameters.h``.
Pressure Floor
~~~~~~~~~~~~~~
The pressure floor is implemented for this scheme. The pressure floor
enters the sound-speed, and hence the time-step will be shorter than if
the floor were not taken into account. The only other impact of the
pressure floor is in the force loop, where the sub-grid floor pressure
is used instead of the pressure calculated from the internal energy and
density.
.. Implementation details
Loic Hausammann, 2020
Matthieu Schaller, 2020
.. _implementation_details:
Implementation details
======================
This section contains technical information about the internals of the code.
Random number generator
~~~~~~~~~~~~~~~~~~~~~~~
Subgrid models often require random numbers; for this purpose, SWIFT has
its own random number generator, based on a combination of the standard
`rand_r` and `erand48` generators. Since some applications in cosmology
need to sample events with probabilities lower than :math:`10^{-8}`, we
could not simply use the 32-bit `rand_r`, whose cut-off and spacing of
around :math:`2^{-32} \approx 2 \times 10^{-10}` are too coarse. For the
48-bit `erand48` algorithm, the spacing and cut-off are significantly
smaller, around :math:`2^{-48} \approx 3.5 \times 10^{-15}`, which makes
it very suitable for our applications.
Reproducible random numbers
---------------------------
In our simulations, we want to be able to reproduce the exact same random
numbers if we run exactly the same simulation twice. We therefore seed
the random number generator in a reproducible way: with the particle ID
of the current particle and the current integer simulation time.

To produce several different types of random numbers, we use an
additional argument, the random number type (essentially the index of
the random number for the specified seed), which is added to the
particle ID, thus providing a distinct state per random number type.
If the user wishes to run a simulation with a different set of random
numbers, a configuration option (``--with-random-seed=INT``) is
available. This option simply flips some bits in the initial number
composed of the ID and the current simulation time, using the binary
XOR operator.
Implementation
--------------
Our random number generator packs the particle ID (plus the random number type) and
the current simulation time as two 64-bit values, plus a constant 16-bit value,
into a 144-bit buffer. This buffer is treated as an array of 9 `uint16` values.
In a first pass we initialize the seed to 0 and run through the 9 `uint16` values,
xor-ing them with the seed and calling `rand_r` on the seed to advance it. Using
`rand_r` with the thus-generated seed, we generate a sequence of 9 16-bit values
and xor them with the original 144-bit buffer.
The nine 16-bit values of this modified buffer are then used for three
passes of `erand48`, xor-ing the state in the same way as before.
`erand48` is then called one final time with the last state, producing a
random double-precision value with a 48-bit mantissa.
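The following C sketch illustrates the scheme; the function name, the
constant salt value, and the exact mixing details are illustrative
assumptions rather than the actual code in ``src/random.h``:

.. code-block:: C

   #include <stdint.h>
   #include <stdlib.h> /* POSIX rand_r() and erand48() */
   #include <string.h>

   /* Minimal sketch of the generator described above. */
   double random_unit_interval_sketch(int64_t id_plus_type, int64_t ti_current) {

     /* Pack two 64-bit values plus a constant 16-bit salt into 144 bits,
        treated as an array of 9 16-bit values. */
     unsigned short buffer[9];
     memcpy(&buffer[0], &id_plus_type, 8);
     memcpy(&buffer[4], &ti_current, 8);
     buffer[8] = 0x5ca1; /* hypothetical salt, not SWIFT's actual constant */

     /* First pass: fold the buffer into a rand_r() seed. */
     unsigned int seed = 0;
     for (int i = 0; i < 9; ++i) {
       seed ^= buffer[i];
       rand_r(&seed);
     }

     /* Second pass: xor a rand_r() sequence back into the buffer. */
     for (int i = 0; i < 9; ++i)
       buffer[i] ^= (unsigned short)(rand_r(&seed) & 0xffff);

     /* Three passes of erand48(), each consuming 48 bits (3 shorts) of
        state; the final call returns a double in [0, 1) with a 48-bit
        mantissa. */
     double result = 0.;
     for (int i = 0; i < 3; ++i) result = erand48(&buffer[3 * i]);
     return result;
   }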
What to do if we break the random number generator?
---------------------------------------------------
The most likely case is that the RNG is not strong enough for our
application. In this case, we could simply perform multiple passes of
both shuffling the state and generating the final value from the state;
this increases the computational cost but makes the random number
generator stronger.

Another case is that we need probabilities lower than
:math:`1 \times 10^{-17}`. In this case, we simply cannot use a single
draw from our random number generator and need, for example, to generate
two random numbers in order to probe these low probabilities.
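As a sketch of that last idea (assuming two independent uniform draws
obtained, for example, with two different random number types), an event
of probability :math:`p` can be tested by requiring both draws to pass a
threshold of :math:`\sqrt{p}`:

.. code-block:: C

   #include <math.h>

   /* Sample an event whose probability p may be below the resolution of a
      single draw by splitting it over two independent uniforms r1, r2. */
   int event_occurs(double p, double r1, double r2) {
     const double threshold = sqrt(p); /* p = threshold * threshold */
     return (r1 < threshold) && (r2 < threshold);
   }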
------------------------------------------------------
Generating new unique IDs
~~~~~~~~~~~~~~~~~~~~~~~~~
When spawning new particles (not converting them) for star formation or other
similar processes, the code needs to create new unique particle IDs. This is
implemented in the file ``space_unique_id.c`` and can be switched on/off in the
star formation file ``star_formation_struct.h`` by setting the variable
``star_formation_need_unique_id`` to 1 or 0.
The generation of new IDs is done by computing the maximal ID present in the
initial conditions (across all particle types) and then attributing two batches
of new, unused IDs to each MPI rank. The size of each batch is computed in the
same way as the count of extra particles in order to ensure that we will have
enough available IDs between two tree rebuilds (where the extra particles are
regenerated).
When a new ID is requested, the next available ID in the first batch is
returned. If the last available ID in the first batch is requested, we switch to
the next batch of IDs and flag it for regeneration at the next rebuild time. If
the second batch is also fully used, the code will exit with an error message
[#f1]_. At each tree-rebuild step, the ranks will request a new batch if
required and make sure the batches are unique across all MPI ranks.
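A minimal sketch of this batch logic is shown below; the struct layout
and function names are illustrative assumptions, not the exact API of
``space_unique_id.c``:

.. code-block:: C

   /* Illustrative per-rank pool holding the two batches of unique IDs. */
   struct unique_id_pool {
     long long current, end;         /* active batch: IDs in [current, end) */
     long long next_start, next_end; /* standby batch, renewed at rebuild */
     int need_new_batch;             /* flag checked at the next rebuild */
   };

   long long unique_id_get_next(struct unique_id_pool *pool) {

     /* Active batch exhausted? Switch to the standby batch and flag it
        for regeneration at the next tree rebuild. */
     if (pool->current == pool->end) {
       if (pool->next_start < 0) return -1; /* both batches used: fatal */
       pool->current = pool->next_start;
       pool->end = pool->next_end;
       pool->next_start = pool->next_end = -1;
       pool->need_new_batch = 1;
     }
     return pool->current++;
   }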
As we use the maximal ID from the ICs, nothing can be done if the user
provides the largest possible integer as an ID (which can, for instance,
be the case in some of the EAGLE ICs, where the IDs encode the particles'
Lagrangian positions on a Peano-Hilbert curve).
.. [#f1] Thanks to the size of the fresh ID batches, the code should run
   out of extra particles before reaching this point and trigger a new
   rebuild, if this is allowed by the star formation scheme.
.. Initial Conditions
Josh Borrow, 5th April 2018
.. _Initial_Conditions_label:
Initial Conditions
==================
There are several groups that contain 'auxiliary' information, such as
the particles. Some types are currently ignored by SWIFT but are kept in the
file format for compatibility reasons.
+---------------------+------------------------+----------------------------------------+
| HDF5 Group Name | Physical Particle Type | In code ``enum part_type`` |
+=====================+========================+========================================+
| ``/PartType0/`` | Gas | ``swift_type_gas`` |
+---------------------+------------------------+----------------------------------------+
| ``/PartType1/`` | Dark Matter | ``swift_type_dark_matter`` |
+---------------------+------------------------+----------------------------------------+
| ``/PartType2/`` | Background Dark Matter | ``swift_type_dark_matter_background`` |
+---------------------+------------------------+----------------------------------------+
| ``/PartType3/`` | Sinks | ``swift_type_sink`` |
+---------------------+------------------------+----------------------------------------+
| ``/PartType4/`` | Stars | ``swift_type_star`` |
+---------------------+------------------------+----------------------------------------+
| ``/PartType5/`` | Black Holes | ``swift_type_black_hole`` |
+---------------------+------------------------+----------------------------------------+
| ``/PartType6/`` | Neutrino Dark Matter | ``swift_type_neutrino`` |
+---------------------+------------------------+----------------------------------------+
The last column in the table gives the ``enum`` value from ``part_type.h``
corresponding to a given entry in the files.
Note that the only particles that have hydrodynamical forces calculated
between them are those in ``PartType0``. The background dark matter
particles are used for zoom-in simulations and can have different masses
(and, as a consequence, different softening lengths) within the
``/PartType2`` arrays.
Necessary Components
GADGET-2 based analysis programs:
this to 1. If this field is present in a SWIFT IC file and has a
value different from 1, the code will return an error message.
+ ``Time``, time of the start of the simulation in internal units or expressed
as a scale-factor for cosmological runs. **SWIFT ignores this and reads it
from the parameter file**, behaviour that matches the GADGET-2 code. Note
that SWIFT writes the current time since the Big Bang, not the scale-factor,
to this variable in snapshots.
+ ``Redshift``, the redshift at the start of the simulation. SWIFT checks this
(if present) against ``a_begin`` in the parameter file at the start of
cosmological runs. Note that we explicitly do **not** compare the ``Time``
variable due to its ambiguous meaning.
+ ``Redshift``, the redshift at the start of the simulation. SWIFT checks this
(if present) against ``a_begin`` in the parameter file at the start of
cosmological runs. Note that we explicitly do **not** compare the ``Time``
variable due to its ambiguous meaning.
Particle Data
......@@ -137,13 +149,15 @@ individual particle type (e.g. ``/PartType0/``) that have the following *dataset
within [0, L)^3 where L is the side-length of the simulation volume. In the
case of cosmological simulations, these are the co-moving positions.
+ ``Velocities``, an array of shape (N, 3) that is the cartesian velocities of
the particles. When running cosmological simulations, these are the **peculiar
velocities**. Note that this is different from GADGET which uses peculiar
velocities divided by ``sqrt(a)`` (see below for a fix).
+ ``ParticleIDs``, an array of length N that are unique identifying numbers for
each particle. Note that these have to be unique to a particle, and cannot be
the same even between particle types. The **IDs must be >= 0**. Negative
IDs will be rejected by the code.
Note, however, that if the parameter to remap the IDs upon startup is
switched on (see :ref:`Parameters_ICs`), the IDs can be omitted entirely
from the ICs.
+ ``Masses``, an array of length N that gives the masses of the particles.
For ``PartType0`` (i.e. particles that interact through hydro-dynamics), you will
.. Light Cones
John Helly 29th April 2021
.. _lightcone_adding_outputs_label:
Adding New Types of Output
~~~~~~~~~~~~~~~~~~~~~~~~~~~
New particle properties can be added to the particle light cones as follows:
* Add a field to the ``lightcone_<type>_data`` struct in ``lightcone_particle_io.h`` to store the new quantity
* Modify the ``lightcone_store_<type>`` function in ``lightcone_particle_io.c`` to set the new struct field from the particle data
* In ``lightcone_io_make_output_fields()``, add a call to ``lightcone_io_make_output_field()`` to define the new output
Here, ``<type>`` is the particle type: gas, dark_matter, stars, black_hole or neutrino.
To add a new type of HEALPix map:
* Add a function to compute the quantity in ``lightcone_map_types.c``. See ``lightcone_map_total_mass()`` for an example.
* Add a new entry to the ``lightcone_map_types`` array in ``lightcone_map_types.h``. This should specify the name of the new map type, a pointer to the function to compute the quantity, and the units of the quantity. The last entry in the array is not used and must have a NULL function pointer to act as an end marker.
.. Light Cones
John Helly 29th April 2021
.. _lightcone_algorithm_description_label:
Light Cone Output Algorithm
~~~~~~~~~~~~~~~~~~~~~~~~~~~
In cosmological simulations it is possible to specify the location of
an observer in the simulation box and have SWIFT output information
about particles in the simulation as they cross the observer's past
light cone.
Whenever a particle is drifted the code checks if any periodic copy of
the particle crosses the lightcone during the drift, and if so that
copy of the particle is buffered for output. As an optimization, at the
start of each time step the code computes which periodic copies of the
simulation box could contribute to the light cone and only those copies
are searched. When drifting the particles in a particular cell the list of
replications is further narrowed down using the spatial extent of the
cell.
Particles can be output directly to HDF5 files or accumulated to healpix
maps corresponding to spherical shells centred on the observer.
.. Light Cones
John Helly 29th April 2021
.. _Light_Cones_label:
Light Cone Outputs
==================
This section describes the light cone outputs and the related parameters.
.. toctree::
   :maxdepth: 2
   :caption: Contents:

   algorithm_description
   lightcone_particle_output
   lightcone_healpix_maps
   running_with_lightcones
   adding_outputs
.. Light Cones
John Helly 29th April 2021
.. _lightcone_healpix_maps_label:
Light Cone HEALPix Maps
~~~~~~~~~~~~~~~~~~~~~~~
SWIFT can accumulate particle properties to HEALPix maps as they
cross the observer's past light cone. Each map corresponds to a
spherical shell centred on the observer. When a particle crosses
the lightcone its distance from the observer is calculated and the
particle's contribution is added to a buffer so that at the end of
the time step it can be added to the corresponding HEALPix map.
Maps can be generated for multiple concentric shells and multiple
quantities can be accumulated for each shell. The HEALPix map for a
shell is allocated and zeroed out when the simulation first reaches
a redshift where particles could contribute to that map. The map is
written out and deallocated when the simulation advances to a point
where there can be no further contributions. In MPI runs the pixel
data for the maps are distributed across all MPI ranks.
Updates to the maps are buffered in order to avoid the need for
communication during the time step. At the end of the step, if any MPI
rank has a large number of updates buffered, then all pending updates
are applied to the pixel data.
For gas particles, the HEALPix maps are smoothed using a projected
version of the same kernel used for the hydro calculations. Other
particle types are not smoothed.
The code writes one output file for each spherical shell. In MPI mode
all ranks write to the same file using parallel HDF5. If maps of
multiple quantities are being made they will be written to a single
file as separate 1D datasets with one element per pixel.
.. Light Cones
John Helly 29th June 2021
.. _lightcone_particle_output_label:
Light Cone Particle Output
~~~~~~~~~~~~~~~~~~~~~~~~~~
SWIFT can output particles to HDF5 output files (similar to the
snapshots) as they cross the observer's light cone. During each time
step, any particles which cross the light cone are added to a buffer.
If this buffer is large at the end of the step then its contents
are written to an output file. In MPI runs each MPI rank writes its
own output file and decides independently when to flush its particle
buffer.
A new output file is started whenever restart files are written. This
allows the code to automatically continue from the point of the restart
dump if the run is interrupted. Any files written after the restart
dump will be overwritten when the simulation is resumed, preventing
duplication of particles in the light cone output.
The output files have names of the form ``basename_XXXX.Y.hdf5``, where
XXXX numbers the files written by a single MPI rank and Y is the index
of the MPI rank.
The output files contain one HDF5 group for each particle type. Within
each group there are datasets corresponding to particle properties in
a similar format to the snapshots.
.. Light Cones
John Helly 29th April 2021
.. _lightcone_running_label:
Running SWIFT with Light Cone Output
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To produce light cone particle output, SWIFT must be configured with
``--enable-lightcone``. Additionally, making HEALPix maps requires the
HEALPix C library. If using MPI, then parallel HDF5 is also required.
One lightcone is produced for each ``LightconeX`` section in the
parameter file, where X=0-7. This allows generation of up to 8
light cones. See :ref:`Parameters_light_cone` for details.
SWIFT must be run with the ``--lightcone`` flag to activate light
cone outputs, otherwise the Lightcone sections in the parameter file
are ignored.
.. Snapshots
Matthieu Schaller, 23rd May 2020
.. _line_of_sight:
Line-of-sight outputs
======================
The line-of-sight outputs are designed to be processed by the
``SpecWizard`` tool (`Theuns et al. 1998 <https://ui.adsabs.harvard.edu/abs/1998MNRAS.301..478T/>`_,
`Tepper-Garcia et al. 2011
<https://ui.adsabs.harvard.edu/abs/2011MNRAS.413..190T/>`_).
TO BE DONE.
.. Neutrinos
Willem Elbers, 7 April 2021
.. _neutrinos:
Neutrino implementation
=======================
SWIFT can also accurately model the effects of massive neutrinos in
cosmological simulations. At the background level, massive neutrinos
and other relativistic species can be included by specifying their
number and masses in the cosmology section of the parameter file
(see :ref:`Parameters_cosmology`).
At the perturbation level, neutrinos can be included as a separate particle
species (``PartType6``). To facilitate this, SWIFT implements the
:math:`\delta f` method for shot noise suppression (`Elbers et al. 2020
<https://ui.adsabs.harvard.edu/abs/2020arXiv201007321E/>`_). The method
works by statistically weighting the particles during the simulation,
with weights computed from the Liouville equation using current and
initial momenta. The method can be activated by specifying
``Neutrino:use_delta_f`` in the parameter file.
The implementation of the :math:`\delta f` method in SWIFT assumes a
specific method for generating the initial neutrino momenta (see below).
This makes it possible to reproduce the initial momentum when it is
needed without increasing the memory footprint of the neutrino particles.
If perturbed initial conditions are not needed, the initial momenta can
be generated internally by specifying ``Neutrino:generate_ics`` in the
parameter file. This will assign ``PartType6`` particles to each
neutrino mass specified in the cosmology and generate new velocities
based on the homogeneous (unperturbed) Fermi-Dirac distribution. In
this case, placeholder neutrino particles should be provided in the
initial conditions with arbitrary masses and velocities, distributed
uniformly in the box. Placeholders can be spawned with the python
script ``tools/spawn_neutrinos.py``.
Relativistic Drift
------------------
At high redshift, neutrino particles move faster than the speed of light
if the usual Newtonian expressions are used. To rectify this, SWIFT
implements a relativistic drift correction. In this convention, the
internal velocity variable (see theory/Cosmology) is
:math:`v^i=a^2u^i=a^2\dot{x}^i\gamma^{-1}`, where :math:`u^i` is the
spatial part of the 4-velocity, :math:`a` the scale factor, and
:math:`x^i` a comoving position vector. The conversion factor to the
coordinate 3-velocity is :math:`\gamma=ac/\sqrt{a^2c^2+v^2}`. This
factor is applied to the neutrino particles throughout the simulation.
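As a small sketch (variable names are illustrative, not SWIFT's), the
conversion factor can be computed as:

.. code-block:: C

   #include <math.h>

   /* gamma = a * c / sqrt(a^2 c^2 + v^2): the factor such that the
      comoving coordinate 3-velocity is dx/dt = gamma * v / a^2, where v
      is the internal velocity variable. */
   double relativistic_drift_factor(double v2, double a, double c) {
     const double ac = a * c;
     return ac / sqrt(ac * ac + v2);
   }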
Generating Fermi-Dirac momenta
------------------------------
The implementation of the :math:`\delta f` method in SWIFT assumes that
neutrinos were initially assigned a Fermi-Dirac momentum using the following
method. Each particle has a fixed 64-bit unsigned integer :math:`\ell` given
by the particle ID [#f1]_ (plus an optional seed: ``Neutrino:neutrino_seed``).
This number is transformed into a floating point number :math:`u\in(0,1)`
using the following construction based on splitmix64 (shown here as C):

.. code-block:: C

   /* l is the particle's fixed 64-bit unsigned integer. */
   uint64_t m = l + 0x9E3779B97F4A7C15ULL;
   m = (m ^ (m >> 30)) * 0xBF58476D1CE4E5B9ULL;
   m = (m ^ (m >> 27)) * 0x94D049BB133111EBULL;
   m = m ^ (m >> 31);
   const double u = ((double)m + 0.5) / ((double)UINT64_MAX + 1.0);
This is subsequently transformed into a Fermi-Dirac momentum
:math:`q = F^{-1}(u)` by evaluating the quantile function. To generate
neutrino particle initial conditions with perturbations, one first generates
momenta from the unperturbed Fermi-Dirac distribution using the above method
and then applies perturbations in any suitable manner.
When using the :math:`\delta f` method, SWIFT also assumes that ``PartType6``
particles are assigned to all :math:`N_\nu` massive species present in the
cosmology, such that the particle with fixed integer :math:`\ell` corresponds
to species :math:`i = \ell\; \% \;N_\nu\in[0,N_\nu-1]`.
The sampled Fermi-Dirac speeds and neutrino masses are written into the
snapshot files as ``SampledSpeeds`` and ``MicroscopicMasses``.
Mesh Neutrinos
--------------
There are two additional implementations of neutrino physics. The first
is an option to only apply the delta-f weighting scheme on the mesh. In
this case, particle neutrinos participate like dark matter in the remaining
gravity calculations. This mode can be activated with
``Neutrino:use_delta_f_mesh_only``.
The second option is an implementation of the linear response method,
once again on the mesh only, which requires a separate data file with
transfer functions. Example settings in the parameter file for this mode
are:
.. code:: YAML

   Neutrino:
     use_linear_response: 1                    # Option to use the linear response method
     transfer_functions_filename: perturb.hdf5 # Path to an HDF5 file with transfer functions, redshifts, and wavenumbers
     dataset_redshifts: Redshifts              # Name of the dataset with the redshifts (a vector of length N_z)
     dataset_wavenumbers: Wavenumbers          # Name of the dataset with the wavenumbers (a vector of length N_k)
     dataset_delta_cdm: Functions/d_cdm        # Name of the dataset with the cdm density transfer function (N_z x N_k)
     dataset_delta_baryon: Functions/d_b       # Name of the dataset with the baryon density transfer function (N_z x N_k)
     dataset_delta_nu: Functions/d_ncdm[0]     # Name of the dataset with the neutrino density transfer function (N_z x N_k)
     fixed_bg_density: 1                       # Whether to use a fixed present-day background density
In this example, the code reads an HDF5 file "perturb.hdf5" with transfer
functions. The file must contain a vector with redshifts of length :math:`N_z`,
a vector with wavenumbers :math:`N_k`, and three arrays with dimensions
:math:`N_z \times N_k` of density transfer functions for cdm, baryons, and
neutrinos respectively. It is recommended to store the units of the wavenumbers
as an attribute at "Units/Unit length in cgs (U_L)". The ``fixed_bg_density``
flag determines whether the linear response scales as :math:`\Omega_\nu(a)`
or the present-day value :math:`\Omega_{\nu,0}`, either of which may be
appropriate depending on the particle initial conditions. An HDF5 file
can be generated using classy with the script ``tools/create_perturb_file.py``.
The linear response mode currently only supports degenerate mass models
with a single neutrino transfer function.
Background Neutrinos Only
-------------------------
It is also possible to run without neutrino perturbations, even when
specifying neutrinos in the background cosmology. This mode can be
activated with ``Neutrino:use_model_none``.
.. [#f1] Currently, it is not guaranteed that a particle ID is unique.
In order to add a new scheme, you will need to:
``nobase_noinst_HEADERS``, add your new header files.
6. Update the documentation. Add your equations/documentation to ``doc/RTD``.
.. toctree::
   :caption: Table of Contents

   sink_adding_new_scheme
.. Adding new schemes
   Darwin Roduit, 16 October 2024
.. _new_option_sink:
How to add your sink scheme
-------------------------------
Here, we provide comprehensive information to guide you in adding your sink scheme to SWIFT. To better understand how to add new schemes to SWIFT, read the general information provided on the :ref:`new_option` page.
The default sink scheme is empty and gives you an idea of the minimum required fields and functions for the code to compile. The GEAR sink module has the base functions plus some extra ones for its operations. It can provide you with a working example. However, it can only work with the GEAR feedback module since it relies on IMF properties that are only located there.
As discussed in the GEAR sink :ref:`sink_GEAR_model_summary`, the physics relies on the following tasks: sink formation, gas and sink particle flagging, gas swallowing, sink swallowing and star formation. You do not need to care about the tasks, only the core functions within the sink module. However, you may need to locate where the code calls these functions. The file ``src/runner_others.c`` contains the ``runner_do_star_formation_sink()`` and ``runner_do_sink_formation()``. These functions are responsible for generating stars out of sinks and sinks from gas particles. The other general task-related functions are in ``src/runner_sinks.c``.
The following presents the most essential functions you need to implement. This will give you an idea of the workload.
Sink formation
~~~~~~~~~~~~~~
Before forming a sink, the code loops over all gas and sink particles to gather data about the candidate particle's neighbourhood. This is performed in the ``sink_prepare_part_sink_formation_gas_criteria()`` and ``sink_prepare_part_sink_formation_sink_criteria()`` functions. For instance, in GEAR, we compute the total potential energy, thermal energy, etc.
Then, to decide if we can turn a gas particle into a sink particle, the function ``sink_is_forming()`` is called. Before forming a sink particle, there is a call to ``sink_should_convert_to_sink()``. This function determines whether the gas particle must transform into a sink. Both functions return either 0 or 1.
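As a purely illustrative sketch (the real functions in ``src/sink/`` take
many more arguments: the sink properties, the cosmology, the gathered
neighbourhood data, etc.), a minimal ``sink_is_forming()``-style criterion
could look like:

.. code-block:: C

   /* Hypothetical, simplified formation criterion: return 1 if the gas
      particle is allowed to turn into a sink, 0 otherwise. */
   int sink_is_forming_sketch(float gas_density, float density_threshold) {
     return (gas_density > density_threshold) ? 1 : 0;
   }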
Gas-sink density interactions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The first interaction task to be run for the sinks is the density task. This task updates the smoothing length for the sink particle, unless a fixed cutoff radius is being used (coming soon). It can also calculate the contributions made by neighbouring gas particles to the density, sound speed, velocity, etc. at the location of the sink. Code for these interactions should be added to ``sink_iact.h/runner_iact_nonsym_sinks_gas_density()``.
Once the contributions of all neighbouring gas particles have been calculated, the density calculation is completed by the sink density ghost task. You can set what this task does with the functions ``sink_end_density()`` and ``sink_prepare_swallow()`` in ``sink.h``.
The ``sink_end_density()`` function completes the calculation of the smoothing length (coming soon), and this is where you can finish density-based calculations by e.g. dividing mass-weighted contributions to the velocity field by the total density in the kernel. For examples of this, see the equivalent task for the black hole particles.
The ``sink_prepare_swallow()`` task is where you can calculate density-based quantities that you might need to use in swallowing interactions later. For example, a Bondi-Hoyle accretion prescription should calculate an accretion rate and target mass to be accreted here.
Gas and sink flagging: finding whom to eat
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before accreting the gas/sink particles, the sink needs to look for eligible particles. The gas swallow interactions are performed within ``runner_iact_nonsym_sinks_gas_swallow()`` and the sink swallow in ``runner_iact_nonsym_sinks_sink_swallow()``.
Gas and sink swallowing: updating the sink properties
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When the sink swallows gas particles, it updates its internal properties based on the gas particles' properties. The ``sink_swallow_part()`` function takes care of this.
Similarly, when the sink swallows sink particles, it updates its properties from the to-be-swallowed sink particles. The ``sink_swallow_sink()`` performs the update.
There is no more to it than that: the code properly removes the swallowed particles.
Star formation
~~~~~~~~~~~~~~
The most important function is ``sink_spawn_star()``. It controls whether the code should continue creating stars and must return 0 or 1.
The ``sink_copy_properties_to_star()`` function does what its name says. It is also responsible for adequately initialising the stars' properties. In GEAR, we give new positions and velocities within this function.
The following three functions allow you to update your sink particles (e.g. their masses) before, during and after the star formation loop: ``sink_update_sink_properties_before_star_formation()``, ``sink_update_sink_properties_during_star_formation()`` and ``sink_update_sink_properties_after_star_formation()``.
These functions are located in ``sink/Default/sink.h``.
Parameter Files
===============
This section describes the options that are available in the parameter files.
.. toctree::
   :maxdepth: 2

   parameter_description
   output_selection
   lossy_filters
.. Lossy compression filters
.. _Compression_filters:
Compression Filters
~~~~~~~~~~~~~~~~~~~
Filters to compress the data in snapshots can be applied to reduce the
disk footprint of the datasets. The filters provided by SWIFT are
filters natively provided by HDF5, implying that the library will
automatically and transparently apply the reverse filter when reading
the data stored on disk. They can be applied in combination with, or
instead of, the lossless gzip compression filter.
**These compression filters are lossy, meaning that they modify the data
written to disk.**
.. warning::
   The filters will reduce the accuracy of the data stored. No check is
   made inside SWIFT to verify that the applied filters make sense. Poor
   choices can lead to all the values of a given array being reduced to 0,
   Inf, or to having lost too much accuracy to be useful. The onus is
   entirely on the user to choose wisely how they want to compress their
   data.
The filters are not applied when using parallel HDF5.
The name of any filter applied is carried by each individual field in
the snapshot using the meta-data attribute ``Lossy compression
filter``.
.. warning::
   Starting with HDF5 version 1.14.4, filters which compress the data
   by more than 2x are flagged as problematic (see their
   `doc <https://docs.hdfgroup.org/hdf5/v1_14/group___f_a_p_l.html#gafa8e677af3200e155e9208522f8e05c0>`_).
   SWIFT can nevertheless write files with them by setting the
   appropriate file-level flags. However, some tools (such as
   ``h5py``) may *not* be able to read these fields.
The available filters are listed below.
N-bit filters for long long integers
------------------------------------
The N-bit filter takes a `long long` and saves only the most
significant N bits.
This can be used in cases similar to the particle IDs. For instance,
if they cover the range :math:`[1, 10^{10}]` then 64-bits is too many
and a lot of disk space is wasted storing the 0s. In this case
:math:`\left\lceil{\log_2(10^{10})}\right\rceil + 1 = 35` bits are
sufficient (The extra "+1" is for the sign bit).
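The bit count in this example can be checked with a one-liner (a sketch,
not part of SWIFT):

.. code-block:: C

   #include <math.h>

   /* Number of bits needed to store IDs up to max_id, plus the sign bit. */
   int nbits_needed(double max_id) { return (int)ceil(log2(max_id)) + 1; }

   /* nbits_needed(1e10) == 35, matching the example above. */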
SWIFT implements 6 variants of this filter:
* ``Nbit32`` stores the 32 most significant bits (Numbers up to
:math:`2\times10^{9}`, comp. ratio: 2)
* ``Nbit36`` stores the 36 most significant bits (Numbers up to
:math:`3.4\times10^{10}`, comp. ratio: 1.78)
* ``Nbit40`` stores the 40 most significant bits (Numbers up to
:math:`5.4\times10^{11}`, comp. ratio: 1.6)
* ``Nbit44`` stores the 44 most significant bits (Numbers up to
:math:`8.7\times10^{12}`, comp. ratio: 1.45)
* ``Nbit48`` stores the 48 most significant bits (Numbers up to
:math:`1.4\times10^{14}`, comp. ratio: 1.33)
* ``Nbit56`` stores the 56 most significant bits (Numbers up to
:math:`3.6\times10^{16}`, comp. ratio: 1.14)
Note that if the data written to disk requires more than N bits, then
part of the information written to the snapshot will be lost. SWIFT
**does not apply any verification** before applying the filter.
Scaling filters for floating-point numbers
------------------------------------------
The D-scale filters can be used to round floating-point values to a fixed
*absolute* accuracy.
They start by computing the minimum of the array, which is then subtracted
from all the values. The array is then multiplied by :math:`10^n` and
rounded to the nearest integer. These integers are stored with the minimal
number of bits required to store the values. That process is reversed when
reading the data.
For an array of values
+--------+--------+-------+
| 1.2345| -0.1267| 0.0897|
+--------+--------+-------+
and :math:`n=2`, we get stored on disk (but hidden to the user):
+--------+--------+-------+
| 136 | 0 | 22|
+--------+--------+-------+
This can be stored with 8 bits instead of the 32 bits needed to store the
original values in floating-point precision, realising a gain of 4x.
When reading the values (for example via ``h5py`` or ``swiftsimio``), that
process is transparently reversed and we get:
+--------+--------+-------+
| 1.2333| -0.1267| 0.0933|
+--------+--------+-------+
Using a scaling of :math:`n=2` hence rounds the values to two digits after
the decimal point.
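The arithmetic of the transform can be sketched as follows (this is only
the rounding logic; the actual on-disk packing is handled by HDF5):

.. code-block:: C

   #include <math.h>

   /* Forward transform: value -> integer stored on disk. */
   long dscale_encode(double x, double array_min, int n) {
     return lround((x - array_min) * pow(10., n));
   }

   /* Inverse transform, applied transparently when reading back. */
   double dscale_decode(long stored, double array_min, int n) {
     return array_min + stored / pow(10., n);
   }

   /* With array_min = -0.1267 and n = 2, as in the example above:
      dscale_encode(1.2345, -0.1267, 2) == 136
      dscale_decode(136, -0.1267, 2)    == 1.2333 */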
SWIFT implements 6 variants of this filter:
* ``DScale1`` scales by :math:`10^1`
* ``DScale2`` scales by :math:`10^2`
* ``DScale3`` scales by :math:`10^3`
* ``DScale4`` scales by :math:`10^4`
* ``DScale5`` scales by :math:`10^5`
* ``DScale6`` scales by :math:`10^6`
An example application is to store the positions with ``pc`` accuracy in
simulations that use ``Mpc`` as their base unit by using the ``DScale6``
filter.
The compression rate of these filters depends on the data. On an
EAGLE-like simulation (100 Mpc box), compressing the positions from ``Mpc``
to ``pc`` (via ``DScale6``) leads to a rate of around 2.2x.
Modified floating-point representation filters
----------------------------------------------
These filters modify the bit-representation of floating point numbers
to get a different *relative* accuracy.
In brief, floating point (FP) numbers are represented in memory as
:math:`(\pm 1)\times a \times 2^b` with a certain number of bits used to store each
of :math:`a` (the mantissa) and :math:`b` (the exponent) as well as
one bit for the overall sign [#f1]_. For example, a standard 4-bytes
`float` uses 23 bits for :math:`a` and 8 bits for :math:`b`. The
number of bits in the exponent mainly drives the range of values that
can be represented whilst the number of bits in the mantissa drives
the relative accuracy of the numbers.
Converting to the more familiar decimal notation, we get that the
number of decimal digits that are correctly represented is
:math:`\log_{10}(2^{n(a)+1})`, with :math:`n(x)` the number of bits in
:math:`x`. The range of positive numbers that can be represented is
given by :math:`[2^{-2^{n(b)-1}+2}, 2^{2^{n(b)-1}}]`. For a standard
`float`, this gives a relative accuracy of :math:`7.2` decimal digits
and a representable range of :math:`[1.17\times 10^{-38}, 3.40\times
10^{38}]`. Numbers above the upper limit are labeled as `Inf` and
below this range they default to zero.
The filters in this category change the number of bits in the mantissa and
exponent. When reading the values (for example via ``h5py`` or
``swiftsimio``) the numbers are transparently restored to regular ``float``
but with 0s in the bits of the mantissa that were not stored on disk, hence
changing the result from what was stored originally before compression.
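The read-back effect can be mimicked with a few lines of bit manipulation
(a sketch assuming simple truncation; the actual HDF5 filter may round
rather than truncate):

.. code-block:: C

   #include <stdint.h>
   #include <string.h>

   /* Keep only the top `keep` bits of a float's 23-bit mantissa, zeroing
      the remaining low bits (mimics e.g. FMantissa9 on read-back). */
   float truncate_mantissa(float x, int keep) {
     uint32_t bits;
     memcpy(&bits, &x, sizeof bits);      /* reinterpret the float's bits */
     bits &= ~((1u << (23 - keep)) - 1u); /* clear the low mantissa bits */
     memcpy(&x, &bits, sizeof x);
     return x;
   }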
These filters offer a fixed compression ratio and a fixed relative
accuracy. The available options in SWIFT for a ``float`` (32 bits) output are:
+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+
| Filter name | :math:`n(a)` | :math:`n(b)` | Accuracy | Range | Compression ratio |
+=================+==============+==============+=============+===================================================+===================+
| No filter | 23 | 8 | 7.22 digits | :math:`[1.17\times 10^{-38}, 3.40\times 10^{38}]` | --- |
+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+
| ``FMantissa13`` | 13 | 8 | 4.21 digits | :math:`[1.17\times 10^{-38}, 3.40\times 10^{38}]` | 1.45x |
+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+
| ``FMantissa9`` | 9 | 8 | 3.01 digits | :math:`[1.17\times 10^{-38}, 3.40\times 10^{38}]` | 1.78x |
+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+
| ``BFloat16`` | 7 | 8 | 2.41 digits | :math:`[1.17\times 10^{-38}, 3.40\times 10^{38}]` | 2x |
+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+
| ``HalfFloat`` | 10 | 5 | 3.31 digits | :math:`[6.1\times 10^{-5}, 6.5\times 10^{4}]` | 2x |
+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+
Same for a ``double`` (64 bits) output:
+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+
| Filter name | :math:`n(a)` | :math:`n(b)` | Accuracy | Range | Compression ratio |
+=================+==============+==============+=============+===================================================+===================+
| No filter | 52 | 11 | 15.9 digits | :math:`[2.2\times 10^{-308}, 1.8\times 10^{308}]` | --- |
+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+
| ``DMantissa21`` | 21 | 11 | 6.62 digits | :math:`[2.2\times 10^{-308}, 1.8\times 10^{308}]` | 1.93x |
+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+
| ``DMantissa13`` | 13 | 11 | 4.21 digits | :math:`[2.2\times 10^{-308}, 1.8\times 10^{308}]` | 2.56x |
+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+
| ``DMantissa9`` | 9 | 11 | 3.01 digits | :math:`[2.2\times 10^{-308}, 1.8\times 10^{308}]` | 3.05x |
+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+
The accuracy given in the table corresponds to the number of decimal digits
that can be correctly stored. The "no filter" row is displayed for
comparison purposes.
In the first table, the first two filters are useful to keep the same range as a
standard `float` but with a reduced accuracy of 3 or 4 decimal digits. The last
two are the two standard reduced precision options fitting within 16 bits: one
with a much reduced relative accuracy and one with a much reduced representable
range.
The compression filters for the `double` quantities are useful if the
values one wants to store fall outside the exponent range of `float`
numbers but only a lower relative precision is necessary.
An example application is to store the densities with the ``FMantissa9``
filter as we rarely need more than 3 decimal digits of accuracy for this
quantity.
------------------------
.. [#f1] Note that the representation in memory of FP numbers is more
   complicated than this simple picture. See for instance this
   `Wikipedia
   <https://en.wikipedia.org/wiki/Single-precision_floating-point_format>`_
   article.
.. Parameter File
Loic Hausammann, 1 June 2018
.. Output selection
.. _Output_list_label:
Output List
~~~~~~~~~~~
In the sections ``Snapshots``, ``Statistics``, ``StructureFinding``,
``LineOfSight``, ``PowerSpectrum``, and ``FOF``, you can specify the
options ``output_list_on`` and ``output_list``, which receive an int and
a filename. The ``output_list_on`` parameter enables or disables the
output list, and ``output_list`` gives the name of the file containing
the output times. With the file header, you can choose between writing
redshifts, scale factors, or times.
Example of a file containing times (in internal units)::
Example of a file with redshifts::
If an output list is specified, the basic values for the first
snapshot (``time_first``, ``scale_factor_first``) and difference
(``delta_time``) are ignored.
When an output list is used SWIFT will not write a "0th" snapshot
straight after having read the ICs. Similarly, SWIFT will also *not*
write a snapshot at the end of a simulation unless a snapshot at the
final time is specified in the list.
Note that if a simulation is restarted using check-point files, the
list of outputs will be re-read. This means that it must be found on
the disk at the same place as it was when the simulation was first
started. It also implies that the content of the file can be altered
if the need for additional snapshots suddenly arises.
.. _Output_selection_label:
Output Selection
~~~~~~~~~~~~~~~~
Users can generate a ``yaml`` file containing all the possible fields
available for a given configuration of SWIFT by running
``./swift --output-params output.yml`` or equivalently ``./swift -o
output.yml``. The file generated contains the list of fields that a
simulation running with this config would output in each snapshot. It
also lists the description string of each field and the unit
conversion string to go from internal co-moving units to physical
CGS. Entries in the file look like:
.. code:: YAML

   Default:
     # Particle Type Gas
     Coordinates_Gas: off  # Co-moving positions of the particles : a U_L [ cm ]
     Velocities_Gas: on    # Peculiar velocities of the particles. This is (a * dx/dt) where x is the co-moving position of the particles : U_L U_t^-1 [ cm s^-1 ]
     Masses_Gas: on        # Masses of the particles : U_M [ g ]
     SmoothingLengths_Gas: on  # Co-moving smoothing lengths (FWHM of the kernel) of the particles : a U_L [ cm ]
     ...
This can also be used to set the outputs produced by the
:ref:`fof_stand_alone_label`.
For cosmological simulations, users can optionally add the ``--cosmology`` flag
to generate the field names appropriate for such a run.
Users can select the particle fields to output in snapshots using a
(separate) YAML parameter file. A section ``Default`` can be defined at
the top level of this file (in the exact same way as the file dumped by
using the ``-o`` option in SWIFT). By default, all fields are written,
but by using the "off" string, you can force the code to skip over
unwanted outputs. Users must then select the following options in the
regular SWIFT parameter file:
.. code:: YAML

   Snapshots:
     select_output_on: 1
     select_output: your_select_output_yaml.yml
This field is mostly used to remove unnecessary output by listing them as
"off". A classic use-case for this feature is a DM-only simulation (pure
N-body) where all particles have the same mass. Outputting the mass field in
the snapshots results in extra i/o time and unnecessary waste of disk space.
The corresponding section of the YAML file would look like:
.. code:: YAML

   Default:
     Masses_DM: off
Entries can simply be copied from the ``output.yml`` generated by the
``-o`` runtime flag.
Alternatively, instead of "on" or "off", users can also provide the name of
one of the :ref:`Compression_filters` to write the field with reduced
accuracy and reduced disk space. The corresponding YAML file would, for
example, look like:
.. code:: YAML

   Default:
     Coordinates_Gas: DScale6
     Masses_Gas: off
     Velocities_Gas: DScale1
     Densities_Gas: FMantissa9
For convenience, there is also the option to set a default output status for
all fields of a particular particle type. This can be used, for example, to
skip an entire particle type in certain snapshots (see below for how to define
per-snapshot output policies). This is achieved with the special ``Standard``
field for each particle type:
.. code:: YAML

   Default:
     Standard_Gas: off
     Standard_DM: off
     Standard_DMBackground: off
     Standard_Stars: off
     Standard_BH: on  # Not strictly necessary, on is already the default
Additionally, users can use the different sections to specify an alternative
base name and sub-directory for the snapshots corresponding to a given
selection:
.. code:: YAML

   Default:
     basename: bh
     subdir: snip
This will put the outputs corresponding to this selection into a
sub-directory called ``snip`` and have the files themselves called
``bh_0000.hdf5``, where the number corresponds to the global number of
snapshots. The counter is global and is not reset for each type of
selection.
If the basename or sub-directory keywords are omitted then the code will use the
default values specified in the ``Snapshots`` section of the main parameter file.
The sub-directories are created when writing the first snapshot of a given
category; the onus is hence on the user to ensure correct writing permissions
ahead of that time.
Finally, it is possible to specify individual sub-sampling ratios for each
output selection:
.. code:: YAML

   Default:
     subsample: [0, 1, 0, 0, 0, 0, 1]  # Sub-sample the DM and neutrinos
     subsample_fractions: [0, 0.01, 0, 0, 0, 0, 0.1]
If these keywords are omitted then the code will use the default values
specified in the ``Snapshots`` section of the main parameter file.
Combining Output Lists and Output Selection
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is possible to combine the behaviour of the output list and the select
output file. To do so, you will need to enable both the ``select_output`` and
``output_list`` options in your main ``parameter_file.yml`` as follows:
.. code:: YAML

   Snapshots:
     output_list_on: 1
     output_list: "output_list.txt"
     select_output_on: 1
     select_output: "select_output.yml"
A typical use case for such a scenario is the dumping of 'snapshots' and
so-called 'snipshots', containing less information than their full snapshot
cousins. To do this, we will define two top-level sections in our
``select_output.yml`` file as follows:
.. code:: YAML

   # Only turn off DM masses in snapshots, everything else is turned on
   Snapshot:
     Masses_DM: off

   # Turn off and compress lots of stuff in snipshots!
   Snipshot:
     Metal_Mass_Fractions_Gas: off
     Element_Mass_Fractions_Gas: off
     Densities_Gas: FMantissa9
     basename: snip
     subsample: [0, 1, 0, 0, 0, 0, 1]  # Sub-sample the DM and neutrinos
     subsample_fractions: [0, 0.01, 0, 0, 0, 0, 0.1]
     ...
To then select which outputs are 'snapshots' and which are 'snipshots', you
will need to add the ``Select Output`` column to the ``output_list.txt`` as
follows::
   # Redshift, Select Output
   100.0, Snapshot
   90.0, Snipshot
   80.0, Snipshot
   70.0, Snipshot
   60.0, Snapshot
   ...
This will enable your simulation to perform partial dumps only at the outputs
labelled as ``Snipshot``. The name of the output selection that corresponds
to your choice in the output list will be written to the snapshot header as
``Header/SelectOutput``.
Note that if the name used in the ``Select Output`` column does not
exist as a section in the output selection YAML file, SWIFT will exit
with an error message.
Using non-regular snapshot numbers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In some cases it may be helpful to have snapshot numbers that do not
simply increase by one each time. This could be used to encode the
simulation time in the filename for instance. To achieve this, a third
column can be added to the output list giving the snapshot labels to
use for each output::
# Redshift, Select Output, Label
100.0, Snapshot, 100
90.0, Snapshot, 90
1.0, Snapshot, 1
...
The label has to be an integer. This will lead to the following
snapshots being produced:
.. code:: bash
snap_100.hdf5
snap_90.hdf5
snap_1.hdf5
Assuming the snapshot basename (either global or set for the
``Snapshot`` output selection) was set to ``snap``.
Note that to specify labels, the ``Select Output`` column needs to be
specified.
.. code:: YAML
Cosmology: # Planck13
Omega_cdm: 0.2587481
Omega_lambda: 0.693
Omega_b: 0.0482519
h: 0.6777
a_begin: 0.0078125 # z = 127
If a compulsory parameter is missing an error will be raised at
start-up.
Finally, SWIFT outputs two YAML files at the start of a run. The first one
``used_parameters.yml`` contains all the parameters that were used for this
run, **including all the optional parameters left unspecified with their
default values**. This file can be used to start an exact copy of the run. The
second file, ``unused_parameters.yml`` contains all the values that were not
read from the parameter file. This can be used to simplify the parameter file
or check that nothing important was ignored (for instance because the code is
not configured to use some options). Note that on restart a new file
``used_parameters.yml.stepno`` is created and any changed parameters will be
written to it.
The rest of this page describes all the SWIFT parameters, split by
section. A list of all the possible parameters is kept in the file
``examples/parameter_examples.yml``.
.. _Parameters_meta_data:
Meta Data
---------
The ``MetaData`` section contains basic information about the simulation. It
currently only contains one parameter: ``run_name``. This is a string of
characters describing the simulation. It is written to the snapshots' headers.
.. _Parameters_units:
Internal Unit System
--------------------
The ``InternalUnitSystem`` section describes the units used internally by the
code. This is the system of units in which all the equations are solved. All
physical constants are converted to this system and if the ICs use a different
system (see the snapshots' :ref:`ICs_units_label` section of the documentation)
the particle quantities will be converted when read in.
The system of units is described using the value of the 5 basic units
.. _Parameters_cosmology:

Cosmology
---------

When running a cosmological simulation, the ``Cosmology`` section defines the
cosmological model. The expanded :math:`\Lambda\rm{CDM}` parameters governing the
background evolution of the Universe need to be specified here. These are:
* The reduced Hubble constant: :math:`h`: ``h``,
* The cold dark matter density parameter :math:`\Omega_{\rm cdm}`: ``Omega_cdm``,
* The cosmological constant density parameter :math:`\Omega_\Lambda`: ``Omega_lambda``,
* The baryon density parameter :math:`\Omega_b`: ``Omega_b``,
* The radiation density parameter :math:`\Omega_r`: ``Omega_r``.
The dark-energy equation-of-state is parametrized as :math:`w(a) =
w_0 + w_a (1 - a)`. The two parameters in the YAML file are:

* The :math:`w_0` parameter of dark energy: ``w_0``,
* The :math:`w_a` parameter of dark energy: ``w_a``.
If unspecified, these parameters default to the :math:`\Lambda\rm{CDM}` values
of :math:`w_0 = -1` and :math:`w_a = 0`.
The radiation density :math:`\Omega_r` can also be specified by setting
an alternative optional parameter:
* The number of ultra-relativistic degrees of freedom :math:`N_{\rm{ur}}`:
``N_ur``.
The radiation density :math:`\Omega_r` is then automatically inferred from
:math:`N_{\rm{ur}}` and the present-day CMB temperature
:math:`T_{\rm{CMB},0}=2.7255` Kelvin. This parametrization cannot
be used together with :math:`\Omega_r`. If neither parameter is used, SWIFT
defaults to :math:`\Omega_r = 0`. Note that :math:`N_{\rm{ur}}` differs from
:math:`N_{\rm{eff}}`, the latter of which also includes massive neutrinos.
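For instance, a minimal sketch of this parametrization (the value of ``N_ur``
shown here is the standard-model one and is purely illustrative):

.. code:: YAML

Cosmology:
  N_ur: 3.046   # Omega_r inferred from N_ur and T_CMB; do not also set Omega_r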
Massive neutrinos can be included by specifying the optional parameters:
* The number of massive neutrino species :math:`N_{\nu}`: ``N_nu``,
* A comma-separated list of neutrino masses in eV: ``M_nu_eV``,
* A comma-separated list of neutrino degeneracies: ``deg_nu``,
* The present-day neutrino temperature :math:`T_{\nu,0}`: ``T_nu_0``.
When including massive neutrinos, only ``N_nu`` and ``M_nu_eV`` are necessary.
By default, SWIFT will assume non-degenerate species and
:math:`T_{\nu,0}=(4/11)^{1/3}T_{\rm{CMB},0}`. Neutrinos do not contribute to
:math:`\Omega_m = \Omega_{\rm{cdm}} + \Omega_b` in our conventions.
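As a minimal sketch for a single massive species (the mass value is an
assumption for the example only):

.. code:: YAML

Cosmology:
  N_nu:     1
  M_nu_eV:  0.06   # One massive species of 0.06 eV; deg_nu and T_nu_0 keep their defaults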
For a Planck+13 cosmological model (ignoring radiation density as is
commonly done) and running from :math:`z=127` to :math:`z=0`, one would hence
use the following parameters:
.. code:: YAML
Cosmology: # Planck13 (EAGLE flavour)
a_begin: 0.0078125 # z = 127
a_end: 1.0 # z = 0
h: 0.6777
Omega_cdm: 0.2587481
Omega_lambda: 0.693
Omega_b: 0.0482519
Omega_r: 0. # (Optional)
w_0: -1.0 # (Optional)
w_a: 0. # (Optional)
When running a non-cosmological simulation (i.e. without the ``--cosmology`` run-time
flag) this section of the YAML file is entirely ignored.
.. _Parameters_gravity:
Gravity
-------

The behaviour of the self-gravity solver can be modified by the parameters
provided in the ``Gravity`` section. The theory document puts these parameters into the
context of the equations being solved. We give a brief overview here.
* The Plummer-equivalent co-moving softening length used for all dark matter particles :math:`\epsilon_{\rm com,DM}`: ``comoving_DM_softening``,
* The Plummer-equivalent co-moving softening length used for all baryon particles (gas, stars, BHs) :math:`\epsilon_{\rm com,bar}`: ``comoving_baryon_softening``,
* The Plummer-equivalent maximal physical softening length used for all dark matter particles :math:`\epsilon_{\rm max,DM}`: ``max_physical_DM_softening``,
* The Plummer-equivalent maximal physical softening length used for all baryon particles (gas, stars, BHs) :math:`\epsilon_{\rm max,bar}`: ``max_physical_baryon_softening``,
At any redshift :math:`z`, the Plummer-equivalent softening length used by
the code will be :math:`\epsilon=\min(\epsilon_{max},
\frac{\epsilon_{com}}{z+1})`. The same calculation is performed
independently for the dark matter and baryon particles. All the softening
quantities are expressed in internal units. Calculations that only involve
DM or baryons can leave the unused quantities out of the parameter
file. For non-cosmological runs, only the physical softening lengths need
to be supplied.
In the case of zoom simulations, the softening of the additional, more
massive, background particles is specified via the parameter
``softening_ratio_background``. Since these particles typically have a range
of masses, chosen so as to degrade the resolution away from the zoom-in
region, they cannot share a single softening value. Instead, we specify the
fraction of the mean inter-particle separation to use. The code then derives
the softening length of each particle assuming the mean density of the
Universe. That is :math:`\epsilon_{\rm background} =
f\sqrt[3]{\frac{m}{\Omega_m\rho_{\rm crit}}}`, where :math:`f` is the
user-defined value (typically of order 0.05).
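A minimal sketch of such a zoom set-up, with an illustrative value for the
fraction, would read:

.. code:: YAML

Gravity:
  softening_ratio_background: 0.05   # Fraction of the background mean inter-particle separation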
The accuracy of the gravity calculation is governed by the following four parameters:
* The multipole acceptance criterion: ``MAC``
* The fixed opening angle used in the geometric MAC :math:`\theta_{\rm cr}`: ``theta_cr``,
* The accuracy criterion used in the adaptive MAC: :math:`\epsilon_{\rm fmm}`: ``epsilon_fmm``,
* The time-step size pre-factor :math:`\eta`: ``eta``,
The first three parameters govern the way the Fast-Multipole method tree-walk is
done (see the theory documents for full details). The ``MAC`` parameter can
take three values: ``adaptive``, ``geometric``, or ``gadget``. In the first
case, the tree recursion decision is based on the estimated accelerations that a
given tree node will produce, trying to recurse to levels where the fractional
contribution of the accelerations to the cell is less than :math:`\epsilon_{\rm
fmm}`. In the second case, a fixed Barnes-Hut-like opening angle
:math:`\theta_{\rm cr}` is used. The final case corresponds to the choice made
in the Gadget-4 code. It is an implementation using eq. 36 of `Springel et
al. (2021) <https://adsabs.harvard.edu/abs/2021MNRAS.506.2871S>`_.
The time-step of a given particle is given by :math:`\Delta t =
\sqrt{2\eta\epsilon_i/|\overrightarrow{a}_i|}`, where
:math:`\overrightarrow{a}_i` is the particle's acceleration and
:math:`\epsilon_i` its (spline) softening length. `Power et al. (2003)
<http://adsabs.harvard.edu/abs/2003MNRAS.338...14P>`_ recommend using
:math:`\eta=0.025`.
Two further parameters determine when the gravity tree is reconstructed:
* The tree rebuild frequency: ``rebuild_frequency``.
* The fraction of active particles to trigger a rebuild:
``rebuild_active_fraction``.
The tree rebuild frequency is an optional parameter defaulting to
:math:`0.01`. It is used to trigger the re-construction of the tree every
time a fraction of the particles have been integrated (kicked) forward in
time. The second parameter is also optional and determines a separate rebuild
criterion, based on the fraction of particles that is active at the
beginning of a step. This can be seen as a forward-looking version of the
first criterion, which can be useful for runs with very fast particles.
The second criterion is not used for values :math:`>1`, which is the default
assumption.
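A sketch activating both rebuild criteria (the active fraction shown is purely
illustrative) could read:

.. code:: YAML

Gravity:
  rebuild_frequency:       0.01   # Default value
  rebuild_active_fraction: 0.1    # Rebuild when >10% of the particles are active in a step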
The last tree-related parameters are:
* Whether or not to use the approximate gravity from the FMM tree below the
softening scale: ``use_tree_below_softening`` (default: 0)
* Whether or not the truncated force estimator in the adaptive tree-walk
considers the exponential mesh-related cut-off:
``allow_truncation_in_MAC`` (default: 0)
These parameters default to good all-around choices. See the
theory documentation about their exact effects.
Simulations using periodic boundary conditions use additional parameters for the
Particle-Mesh part of the calculation. The last five are optional:
* The number of cells along each axis of the mesh :math:`N`: ``mesh_side_length``,
* Whether or not to use a distributed mesh when running over MPI: ``distributed_mesh`` (default: ``0``),
* Whether or not to use local patches instead of direct atomic operations to
write to the mesh in the non-MPI case (this is a performance tuning
parameter): ``mesh_uses_local_patches`` (default: ``1``),
* The mesh smoothing scale in units of the mesh cell-size :math:`a_{\rm
smooth}`: ``a_smooth`` (default: ``1.25``),
* The scale above which the short-range forces are assumed to be 0 (in units of
......@@ -229,6 +326,18 @@ For most runs, the default values can be used. Only the number of cells along
each axis needs to be specified. The remaining three values are best described
in the context of the full set of equations in the theory documents.
By default, SWIFT will replicate the mesh on each MPI rank. This means that a
single MPI reduction is used to ensure all ranks have a full copy of the density
field. Each node then solves for the potential in Fourier space independently of
the others. This is a fast option for small meshes. This technique is limited to
meshes with sizes :math:`N<1291` due to the limitations of MPI. Larger meshes need
to use the distributed version of the algorithm. The code then also needs to be
compiled with ``--enable-mpi-mesh-gravity``. That algorithm is slower for small
meshes but has no limits on the size of the mesh and truly huge Fourier
transforms can be performed without any problems. The only limitation is the
amount of memory on each node. The algorithm will use ``N^3 * 8 * 2 / M`` bytes
on each of the ``M`` MPI ranks.
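As a worked example of this memory footprint, a :math:`N=2048` distributed
mesh running on :math:`M=32` ranks requires :math:`2048^3 \times 8 \times 2 /
32 \approx 4.3 \times 10^{9}` bytes, i.e. about 4.3 GB on each rank.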
As a summary, here are the values used for the EAGLE :math:`100^3~{\rm Mpc}^3`
simulation:
.. code:: YAML
# Parameters for the self-gravity scheme for the EAGLE-100 box
Gravity:
eta: 0.025
MAC: adaptive
theta_cr: 0.6
epsilon_fmm: 0.001
mesh_side_length: 2048
distributed_mesh: 0
comoving_DM_softening: 0.0026994 # 0.7 proper kpc at z=2.8.
max_physical_DM_softening: 0.0007 # 0.7 proper kpc
comoving_baryon_softening: 0.0026994 # 0.7 proper kpc at z=2.8.
max_physical_baryon_softening: 0.0007 # 0.7 proper kpc
rebuild_frequency: 0.01 # Default optional value
a_smooth: 1.25 # Default optional value
r_cut_max: 4.5 # Default optional value
r_cut_min: 0.1 # Default optional value
use_tree_below_softening: 0 # Default optional value
allow_truncation_in_MAC: 0 # Default optional value
.. _Parameters_SPH:
SPH
---
The ``SPH`` section is used to set parameters that describe the SPH
calculations. There are some scheme-specific values that are detailed in the
:ref:`hydro` section. The common parameters are detailed below.
In all cases, users have to specify two values:
* The smoothing length in terms of mean inter-particle separation:
``resolution_eta``
* The CFL condition that enters the time-step calculation: ``CFL_condition``
These quantities are dimensionless. The first, ``resolution_eta``, specifies
how smooth the simulation should be, and is used here instead of the number
of neighbours to smooth over as this also takes into account non-uniform
particle distributions. A value of 1.2348 gives approximately 48 neighbours
in 3D with the cubic spline kernel. More information on the choices behind
these parameters can be found in
`Dehnen & Aly 2012 <https://ui.adsabs.harvard.edu/abs/2012MNRAS.425.1068D/>`_.
The second quantity, the CFL condition, specifies how accurate the time
integration should be and enters as a pre-factor into the hydrodynamics
time-step calculation. This factor should be strictly bounded by 0 and 1, and
typically takes a value of 0.1 for SPH calculations.
The next set of parameters deal with the calculation of the smoothing lengths
directly and are all optional:
* Whether or not to use the mass-weighted definition of the SPH number of
neighbours: ``use_mass_weighted_num_ngb`` (Default: 0)
* The (relative) tolerance to converge smoothing lengths within:
``h_tolerance`` (Default: 1e-4)
* The maximal smoothing length in internal units: ``h_max`` (Default: FLT_MAX)
* The minimal allowed smoothing length in terms of the gravitational
softening: ``h_min_ratio`` (Default: 0.0, i.e. no minimum)
* The maximal (relative) allowed change in volume over one time-step:
``max_volume_change`` (Default: 1.4)
* The maximal number of iterations allowed to converge the smoothing
lengths: ``max_ghost_iterations`` (Default: 30)
These parameters all set the accuracy of the smoothing lengths in various
ways. The first one specifies which definition of the local number density
of particles to use. By default, we use
.. math::
n_i = \sum_j W(\|\mathbf{r}_i - \mathbf{r}_j\|, h_i)
but switching on the ``use_mass_weighted_num_ngb`` flag changes the
definition to:
.. math::
n_i = \frac{\rho_i}{m_i}
where the density has been computed in the traditional SPH way
(i.e. :math:`\rho_i = \sum_j m_j W(\|\mathbf{r}_i - \mathbf{r}_j\|,
h_i)`). Note that in the case where all the particles in the simulation
have the same mass, the two definitions lead to the same number density
value.
**We do not recommend using this alternative neighbour number definition
in production runs.** It is mainly provided for backward compatibility with
earlier simulations.
The second one, the relative tolerance for the smoothing length, specifies
the convergence criteria for the smoothing length when using the
Newton-Raphson scheme. This works with the maximal number of iterations,
``max_ghost_iterations`` (so called because the smoothing length calculation
occurs in the ghost task), to ensure that the values of the smoothing lengths
are consistent with the local number density. We solve:
.. math::
(\eta \gamma)^{n_D} = n_i H_i^{n_D}

with :math:`\gamma` the ratio of kernel support :math:`H_i` to smoothing
length :math:`h_i` (this is fixed for a given kernel shape), :math:`n_D` the
number of spatial
dimensions, :math:`\eta` the value of ``resolution_eta``, and :math:`n_i`
the local number density. We adapt the value of the smoothing length,
:math:`h`, to be consistent with the number density.
The maximal smoothing length, by default, is set to ``FLT_MAX``, and if set
prevents the smoothing length from going beyond ``h_max`` (in internal units)
during the run, irrespective of the above equation. The minimal smoothing
length is set in terms of the gravitational softening, ``h_min_ratio``, to
prevent the smoothing length from going below this value in dense
environments. This will lead to smoothing over more particles than specified
by :math:`\eta`.
The optional parameter ``particle_splitting`` (Default: 0) activates the
splitting of overly massive particles into 2. By switching this on, the code
will loop over all the particles at every tree rebuild and split the particles
with a mass above a fixed threshold into two copies that are slightly shifted
(by a randomly orientated vector of norm :math:`0.2h`). Their masses and other
relevant particle-carried quantities are then halved. The mass threshold for
splitting is set by the parameter ``particle_splitting_mass_threshold`` which is
specified using the internal unit system. The IDs of the newly created particles
can either be drawn randomly, by setting the parameter ``generate_random_ids``
(Default: 0) to :math:`1`. When this is activated, there is no check that the
newly generated IDs do not clash with any other pre-existing particle. If this
option is set to :math:`0` (the default setting) then the new IDs are created in
increasing order from the maximal pre-existing value in the simulation, hence
preventing any clash. Finally, if the option
``particle_splitting_log_extra_splits`` is set, the code will log all the splits
that go beyond the maximal allowed (typically 64) in a file so that the split tree
for these particles can still be reconstructed.
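A brief sketch of a set-up with splitting activated (the threshold is purely
illustrative) could read:

.. code:: YAML

SPH:
  particle_splitting:                 1
  particle_splitting_mass_threshold:  5e-3   # In internal mass units
  generate_random_ids:                0      # New IDs created in increasing order (no clashes)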
The final set of parameters in this section determine the initial and minimum
temperatures of the particles.
* The initial temperature of all particles: ``initial_temperature`` (Default:
InternalEnergy from the initial conditions)
* The minimal temperature of any particle: ``minimal_temperature`` (Default: 0)
* The mass fraction of hydrogen used to set the initial temperature:
``H_mass_fraction`` (Default: 0.755)
* The ionization temperature (from neutral to ionized) for primordial gas,
again used in this conversion: ``H_ionization_temperature`` (Default: 1e4)
These parameters, if not present, are set to the default values. The initial
temperature is used, along with the hydrogen mass fraction and ionization
temperature, to set the initial internal energy per unit mass (or entropy per
unit mass) of the particles.
Throughout the run, if the temperature of a particle drops below
``minimal_temperature``, the particle has energy added to it such that it
remains at that temperature. The run is not terminated prematurely. The
temperatures specified in this section are in internal units.
The full section to start a typical cosmological run would be:
.. code:: YAML
SPH:
resolution_eta: 1.2
CFL_condition: 0.1
h_tolerance: 1e-4
h_min_ratio: 0.1
h_max: 1. # U_L
initial_temperature: 273 # U_T
minimal_temperature: 100 # U_T
H_mass_fraction: 0.755
H_ionization_temperature: 1e4 # U_T
particle_splitting: 1
particle_splitting_mass_threshold: 5e-3 # U_M
.. _Parameters_Stars:
Stars
-----
The ``Stars`` section is used to set parameters that describe the Stars
calculations when doing feedback or enrichment. Note that if stars only act
gravitationally (i.e. SWIFT is run *without* ``--feedback``) no parameters
in this section are used.
The first four parameters are related to the neighbour search:
* The (relative) tolerance to converge smoothing lengths within:
``h_tolerance`` (Default: same as SPH scheme)
* The maximal smoothing length in internal units: ``h_max`` (Default: same
as SPH scheme)
* The minimal allowed smoothing length in terms of the gravitational
softening: ``h_min_ratio`` (Default: same as SPH scheme)
* The maximal (relative) allowed change in volume over one time-step:
``max_volume_change`` (Default: same as SPH scheme)
These four parameters are optional and default to their SPH equivalents if
left unspecified: that is, the value specified by the user in the ``SPH``
section, or the default SPH value if left unspecified there as well.
The next four parameters govern the time-step size choices for star
particles. By default star particles get their time-step sizes set
solely by the condition based on gravity. Additional criteria can be
applied by setting some of the following parameters (the actual
time-step size is then the minimum of this criterion and of the gravity
criterion):
* Time-step size for young stars in Mega-years:
``max_timestep_young_Myr`` (Default: FLT_MAX)
* Time-step size for old stars in Mega-years: ``max_timestep_old_Myr``
(Default: FLT_MAX)
* Age transition from young to old in Mega-years:
``timestep_age_threshold_Myr`` (Default: FLT_MAX)
* Age above which no time-step limit is applied in Mega-years:
``timestep_age_threshold_unlimited_Myr`` (Default: 0)
Star particles with ages above the unlimited threshold only use the
gravity condition. Star particles with ages below that limit use
either the young or old time-step sizes based on their ages. These
parameters effectively allow for three different age brackets with the
last age bracket imposing no additional time-step limit.
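A sketch of such a three-bracket set-up (all values purely illustrative) could
read:

.. code:: YAML

Stars:
  max_timestep_young_Myr:               0.5    # Stars younger than 100 Myr
  max_timestep_old_Myr:                 5.0    # Stars between 100 and 1000 Myr
  timestep_age_threshold_Myr:           100.
  timestep_age_threshold_unlimited_Myr: 1000.  # Only the gravity condition beyond this age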
The remaining parameters can be used to overwrite the birth time (or
scale-factor), birth density and birth temperatures (if these
quantities exist) of the stars that were read from the ICs. This can
be useful to start a simulation with stars already of a given age or
with specific (uniform and non-zero) properties at birth. The parameters
are:
* Whether or not to overwrite *all* the birth times: ``overwrite_birth_time``
(Default: 0)
* The value to use: ``birth_time``
* Whether or not to overwrite *all* the birth densities: ``overwrite_birth_density``
(Default: 0)
* The value to use: ``birth_density``
* Whether or not to overwrite *all* the birth temperatures: ``overwrite_birth_temperature``
(Default: 0)
* The value to use: ``birth_temperature``
Note that if the birth time is set to ``-1`` then the stars will never
enter any feedback or enrichment loop. When these values are not
specified, SWIFT will simply use the birth times specified in the
ICs. If no values are given in the ICs, the stars' birth times will be
zeroed, which can cause issues depending on the type of run performed.
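As an illustrative sketch, the following would force all stars read from the
ICs to be treated as pre-existing stars that never enter the feedback or
enrichment loops:

.. code:: YAML

Stars:
  overwrite_birth_time: 1
  birth_time:           -1.   # Flag the stars as never entering feedback loops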
.. _Parameters_Sinks:
Sinks
-----
Currently, there are two models for the sink particles, the Default model and the GEAR one. Their parameters are described below. To choose a model, configure the code with ``--with-sink=<model>``, where ``<model>`` can be ``none`` or ``GEAR``. To run with sink particles, add the option ``--sinks``.
Below you will find the description of the ``none`` model, which is the default. For the ``GEAR`` model, please refer to :ref:`sink_GEAR_model`.
By default, the code is configured with ``--with-sink=none``. Then, the ``DefaultSink`` section is used to set parameters that describe the sinks in this model. The unique parameter is the sink accretion radius (also called cut-off radius): ``cut_off_radius``.
Note that this model does not create sink particles or accrete gas.
The full section is:
.. code:: YAML
DefaultSink:
cut_off_radius: 1e-3 # Cut-off radius of the sink particles (in internal units). This parameter should be adapted to the resolution.
.. _Parameters_time_integration:
Time Integration
----------------
Additionally, when running a cosmological volume, advanced users can specify the
value of the dimensionless pre-factor entering the time-step condition linked
with the motion of particles with respect to the background expansion and mesh
size. See the theory document for the exact equations.
size. See the theory document for the exact equations. Note that we explicitly
ignore the ``Header/Time`` attribute in initial conditions files, and only read
the start and end times or scale factors from the parameter file.
* Dimensionless pre-factor of the maximal allowed displacement:
``max_dt_RMS_factor`` (default: ``0.25``)
* Whether or not only the gas particle masses should be considered for
the baryon component of the calculation: ``dt_RMS_use_gas_only`` (default: ``0``)
This value rarely needs altering.
These values rarely need altering. The second parameter is only
meaningful if a subgrid model produces star (or other) particles with
masses substantially smaller than the gas masses. See the theory
documents for the precise meanings.
A full time-step section for a non-cosmological run would be:
......@@ -313,9 +667,10 @@ Whilst for a cosmological run, one would need:
.. code:: YAML
TimeIntegration:
dt_max: 1e-4
dt_min: 1e-10
max_dt_RMS_factor: 0.25 # Default optional value
dt_RMS_use_gas_only: 0 # Default optional value
.. _Parameters_ICs:
Initial Conditions
------------------

Among the options of the ``InitialConditions`` section, SWIFT offers:
* A factor to re-scale all the smoothing-lengths by a fixed amount: ``smoothing_length_scaling`` (default: ``1.``),
* A shift to apply to all the particles: ``shift`` (default: ``[0.0,0.0,0.0]``),
* Whether to replicate the box along each axis: ``replicate`` (default: ``1``).
* Whether to re-map the IDs to the range ``[0, N]`` and hence discard
the original IDs from the IC file: ``remap_ids`` (default: ``0``).
The shift is expressed in internal units and will be written to the header of
the snapshots. The option to replicate the box is especially useful for
weak-scaling tests. When set to an integer >1, the box size is multiplied by
this integer along each axis and the particles are duplicated and shifted such
as to create exact copies of the simulation volume.
The remapping of IDs is especially useful in combination with the option to
generate increasing IDs when splitting gas particles as it allows for the
creation of a compact range of IDs beyond which the new IDs generated by
splitting can be safely drawn from. Note that, when ``remap_ids`` is
switched on, the ICs do not need to contain a ``ParticleIDs`` field.
Both replication and remapping explicitly overwrite any particle IDs
provided in the initial conditions. This may cause problems for runs
with neutrino particles, as some models assume that the particle
ID was used as a random seed for the Fermi-Dirac momentum. In this case,
the ``Neutrino:generate_ics`` option can be used to generate new initial
conditions based on the replicated or remapped IDs. See :ref:`Neutrinos`
for details.
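A brief sketch of these options for a weak-scaling test (values illustrative)
could read:

.. code:: YAML

InitialConditions:
  replicate: 2   # 8x the original volume (2 copies along each axis)
  remap_ids: 1   # Discard the IDs read from the ICs file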
* Name of a HDF5 group to copy from the ICs file(s): ``metadata_group_name`` (default: ``ICs_parameters``)
If the initial conditions generator writes a HDF5 group with the parameters
used to make the initial conditions, this group can be copied through to
the output snapshots by specifying its name.
The full section to start a DM+hydro run from Gadget DM-only ICs would
be:
.. code:: YAML

InitialConditions:
cleanup_velocity_factors: 1
generate_gas_in_ics: 1
cleanup_smoothing_lengths: 1
metadata_group_name: ICs_parameters
.. _Parameters_constants:
Physical Constants
------------------
For some idealised tests it can be useful to overwrite the value of some physical
constants; in particular the value of the gravitational constant and vacuum
permeability. SWIFT offers an optional parameter to overwrite the value of
:math:`G_N` and :math:`\mu_0`.
.. code:: YAML
PhysicalConstants:
G: 1
mu_0: 1
Note that this sets :math:`G` to the specified value in the internal system
of units. Setting a value of `1` when using the system of units (10^10 Msun,
Mpc, km/s) will mean that :math:`G_N=1` in these units [#f2]_ instead of the
normal value :math:`G_N=43.00927`.
normal value :math:`G_N=43.00927`. The same applies to :math:`\mu_0`.
This option is only used for specific tests and debugging. This entire
section of the YAML file can typically be left out. More constants may be
handled in a similar way in future versions of the code.

.. _Parameters_snapshots:

Snapshots
---------

The ``Snapshots`` section contains the options related to the dump of
snapshots. The main parameter is the base name that will be used for all the
outputs in the run:

* The base name of the HDF5 snapshots: ``basename``.
This name will then be appended by an under-score and 4 digits followed by
``.hdf5`` (e.g. ``base_name_1234.hdf5``). The 4 digits are used to label the
different outputs, starting at ``0000``. In the default setup the digits simply
increase by one for each snapshot. (See :ref:`Output_list_label` to change that
behaviour.)
The time of the first snapshot is controlled by the two following options:
These are ``time_first`` (non-cosmological runs) and ``scale_factor_first``
(cosmological runs); the former is expressed in the internal units of
time. Users also have to provide the difference in time between consecutive
outputs:

* Time difference between consecutive outputs: ``delta_time``.
In non-cosmological runs this is also expressed in internal units. For
cosmological runs, this value is *multiplied* to obtain the
scale-factor of the next snapshot. This implies that the outputs are
equally spaced in :math:`\log(a)` (See :ref:`Output_list_label` to have
snapshots not regularly spaced in time).
The location and naming of the snapshots is altered by the following options:
* Directory in which to write snapshots: ``subdir``.
(default: empty string).
If this is set then the full path to the snapshot files will be generated
by taking this value and appending a slash and then the snapshot file name
described above - e.g. ``subdir/base_name_1234.hdf5``. The directory is
created if necessary. Note however, that the sub-directories are created
when writing the first snapshot of a given category; the onus is hence on
the user to ensure correct writing permissions ahead of that time. Any
VELOCIraptor output produced by the run is also written to this directory.
When running the code with structure finding activated, it is often
useful to have a structure catalog written at the same simulation time
as the snapshots. To activate this, the following parameter can be
switched on: ``invoke_stf`` (default: ``0``). The structure catalogs are then written
with a base name and output number that matches the snapshot name
(e.g. ``stf_base_name_1234.hdf5``) irrespective of the name specified in the
section dedicated to VELOCIraptor. Note that the invocation of VELOCIraptor at
every dump is done additionally to the stand-alone dumps that can be specified
in the corresponding section of the YAML parameter file. When running with
*more* calls to VELOCIraptor than snapshots, gaps between snapshot numbers will
be created to accommodate the intervening VELOCIraptor-only catalogs.
It is also possible to run the FOF algorithm just before writing each snapshot.
* Run FOF every time a snapshot is dumped: ``invoke_fof``
(default: ``0``).
See the section :ref:`Parameters_fof` for details of the FOF parameters.
It is also possible to run the power spectrum calculation just before writing
each snapshot.
* Run PS every time a snapshot is dumped: ``invoke_ps``
(default: ``0``).
See the section :ref:`Parameters_ps` for details of the power spectrum parameters.
When running over MPI, users have the option to split the snapshot over more
than one file. This can be useful if the parallel-io on a given system is slow
but has the drawback of producing many files per time slice. This is activated
by setting the parameter:
* Distributed snapshots over MPI: ``distributed`` (default: ``0``).
This option has no effect when running the non-MPI version of the code. Note
also that unlike other codes, SWIFT does *not* let the users choose the number of
individual files over which a snapshot is distributed. This is set by the number
of MPI ranks used in a given run. The individual files of snapshot 1234 will
have the name ``base_name_1234.x.hdf5`` where when running on N MPI ranks, ``x``
runs from 0 to N-1. If HDF5 1.10.0 or a more recent version is available,
an additional meta-snapshot named ``base_name_1234.hdf5`` will be produced
that can be used as if it was a non-distributed snapshot. In this case, the
HDF5 library itself can figure out which file is needed when manipulating the
snapshot.
On Lustre filesystems [#f4]_ it is important to properly stripe files to
achieve a good writing and reading speed. If the parameter
``lustre_OST_checks`` is set and the lustre API is available, SWIFT will
determine the number of OSTs available and rank these by free space; it will
then set the `stripe count` of each file to `1` and choose an OST
`offset` so each rank writes to a different OST, unless there are more ranks
than OSTs in which case the assignment wraps. In this way OSTs should be
filled evenly and written to using an optimal access pattern.
If the parameter is not set then the files will be created with the default
system policy (or whatever was set for the directory where the files are
written). This parameter has no effect on non-Lustre file systems.
Other parameters are also provided to handle the cases where individual OSTs
do not have sufficient free space to write a file (``lustre_OST_free``) and
where OSTs are closed for administrative reasons and so cannot be written to
(``lustre_OST_test``). The latter is important as the `offset` assignment is
then not used by lustre, which picks the next writable OST, so in our scheme
such OSTs would be used more times than intended.
* Use the lustre API to assign a stripe and offset to the distributed snapshot
files:
``lustre_OST_checks`` (default: ``0``)
* Do not use OSTs that do not have a certain amount of free space in MiB.
Zero disables and -1 activates a guess based on the size of the process:
``lustre_OST_free`` (default: ``0``)
* Check OSTs can be written to and remove those from consideration:
``lustre_OST_test`` (default: ``0``)
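A sketch of a set-up activating all three checks (the choice of the free-space
guess is an assumption for the example) could read:

.. code:: YAML

Snapshots:
  lustre_OST_checks: 1    # Assign stripe counts and OST offsets via the lustre API
  lustre_OST_free:   -1   # Guess the required free space from the size of the process
  lustre_OST_test:   1    # Exclude OSTs that cannot be written to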
Users can optionally ask to randomly sub-sample the particles in the snapshots.
This is specified for each particle type individually:
* Whether to switch on sub-sampling: ``subsample``
* The fraction of particles of each type to keep in the outputs: ``subsample_fraction``
These are arrays of 7 elements defaulting to seven 0s if left unspecified. Each
entry corresponds to the particle type used in the initial conditions and
snapshots [#f3]_. The ``subsample`` array is made of ``0`` and ``1`` to indicate which
particle types to subsample. The other array is a float between ``0`` and ``1``
indicating the fraction of particles to keep in the outputs. Note that the
particles are selected randomly for each individual snapshot. Particles can
hence not be traced back from output to output when this option is switched
on.
Users can optionally specify the level of compression used by the HDF5 library
using the parameter:
* GZIP compression level: ``compression`` (default: ``0``).
The default level of ``0`` implies no compression and values have to be in the
range :math:`[0-9]`. This integer is passed to the i/o library and used for the
loss-less GZIP compression algorithm. The compression is applied to *all* the
fields in the snapshots. Higher values imply higher compression but also more
time spent deflating and inflating the data. When compression is switched on
the SHUFFLE filter is also applied to get higher compression rates. Note that up
until HDF5 1.10.x this option is not available when using the MPI-parallel
version of the i/o routines.
When applying lossy compression (see :ref:`Compression_filters`), particles may
end up with positions that are marginally beyond the edge of the simulation
volume. A small vector perpendicular to the edge can be added to the particles
to alleviate this issue. This can be switched on by setting the parameter
``use_delta_from_edge`` (default: ``0``) to ``1`` and the buffer size from the
edge ``delta_from_edge`` (default: ``0.``). An example would be when using
Mega-parsec as the base unit and using a filter rounding to the nearest 10
parsec (``DScale5``). Adopting a buffer of 10pc (``delta_from_edge:1e-5``) would
alleviate any possible issue of seeing particles beyond the simulation volume in
the snapshots. In all practical applications the shift would be much smaller
than the softening.
Users can run a program after a snapshot is dumped to disk using the following
parameters:
* Use the extra command after snapshot creation: ``run_on_dump`` (default: ``0``)
* Command to run after snapshot creation: ``dump_command`` (default: nothing)
These are particularly useful should you wish to submit a job for postprocessing
the snapshot after it has just been created. Your script will be invoked with
two parameters, the snapshot base-name, and the snapshot number that was just
output as a zero-padded integer. For example, if the base-name is "eagle" and
snapshot 7 was just dumped, with ``dump_command`` set to ``./postprocess.sh``,
then SWIFT will run ``./postprocess.sh eagle 0007``.
For some quantities, especially in the subgrid models, it can be advantageous to
start recording numbers at a fixed time before the dump of a snapshot. Classic
examples are an averaged star-formation rate or accretion rate onto BHs. For the
subgrid models that support it, the triggers can be specified by setting the
following parameters:
* for gas: ``recording_triggers_part`` (no default, array of size set by each subgrid model)
* for stars: ``recording_triggers_spart`` (no default, array of size set by each subgrid model)
* for BHs: ``recording_triggers_bpart`` (no default, array of size set by each subgrid model)
The time is specified in internal time units (See the :ref:`Parameters_units`
section) and a recording can be ignored by setting the parameter to ``-1``. Note
that the code will verify that the recording time is smaller than the gap in
between consecutive snapshot dumps and if the recording window is longer, it
will reduce it to the gap size between the snapshots.
Finally, it is possible to specify a different system of units for the snapshots
than the one that was used internally by SWIFT. The format is identical to the
one described above (See the :ref:`Parameters_units` section) and read:

* a unit of length ``UnitLength_in_cgs`` (default: ``InternalUnitSystem:UnitLength_in_cgs``),
* a unit of mass ``UnitMass_in_cgs`` (default: ``InternalUnitSystem:UnitMass_in_cgs``),
* a unit of velocity ``UnitVelocity_in_cgs`` (default: ``InternalUnitSystem:UnitVelocity_in_cgs``),
* a unit of electric current ``UnitCurrent_in_cgs`` (default: ``InternalUnitSystem:UnitCurrent_in_cgs``),
* a unit of temperature ``UnitTemp_in_cgs`` (default: ``InternalUnitSystem:UnitTemp_in_cgs``).
When un-specified, these all take the same value as assumed by the internal
When unspecified, these all take the same value as assumed by the internal
system of units. These are rarely used but can offer a practical alternative to
converting data in the post-processing of the simulations.
For a cosmological run with structure finding activated, the full
section would be:

.. code:: YAML

Snapshots:
delta_time: 1.02
invoke_stf: 1
Showing all the parameters for a basic non-cosmological hydro test-case, one
would have:
.. code:: YAML
Snapshots:
basename: sedov
subdir: snapshots
time_first: 0.01
delta_time: 0.005
invoke_stf: 0
int_time_label_on: 0
invoke_fof: 1
compression: 3
distributed: 1
lustre_OST_count: 48 # System has 48 Lustre OSTs to distribute the files over
UnitLength_in_cgs: 1. # Use cm in outputs
UnitMass_in_cgs: 1. # Use grams in outputs
UnitVelocity_in_cgs: 1. # Use cm/s in outputs
UnitCurrent_in_cgs: 1. # Use Ampere in outputs
UnitTemp_in_cgs: 1. # Use Kelvin in outputs
subsample: [0, 1, 0, 0, 0, 0, 1] # Sub-sample the DM and neutrinos
subsample_fraction: [0, 0.01, 0, 0, 0, 0, 0.1] # Write 1% of the DM parts and 10% of the neutrinos
run_on_dump: 1
dump_command: ./submit_analysis.sh
use_delta_from_edge: 1
delta_from_edge: 1e-6 # Move particles away from the edge by 1-e6 of the length unit.
recording_triggers_part: [1.0227e-4, 1.0227e-5] # Recording starts 100M and 10M years before a snapshot
recording_triggers_spart: [-1, -1] # No recording
recording_triggers_bpart: [1.0227e-4, 1.0227e-5] # Recording starts 100M and 10M years before a snapshot
Some additional specific options for the snapshot outputs are described in the
following pages:
* :ref:`Output_list_label` (to have snapshots not evenly spaced in time or with
non-regular labels),
* :ref:`Output_selection_label` (to select what particle fields to write).
.. _Parameters_line_of_sight:
Line-of-sight outputs
---------------------
The ``LineOfSight`` section of the parameter file contains all the options related to
the dump of simulation outputs in the form of HDF5 :ref:`line_of_sight` data to
be processed by the ``SpecWizard`` tool
(See `Theuns et al. 1998 <https://ui.adsabs.harvard.edu/abs/1998MNRAS.301..478T/>`_,
`Tepper-Garcia et al. 2011
<https://ui.adsabs.harvard.edu/abs/2011MNRAS.413..190T/>`_). The parameters are:
.. code:: YAML
LineOfSight:
basename: los
scale_factor_first: 0.02 # Only used when running in cosmological mode
delta_time: 1.02
time_first: 0.01 # Only used when running in non-cosmological mode
output_list_on: 0 # Overwrite the regular output times with a list of output times
num_along_x: 0
num_along_y: 0
num_along_z: 100
allowed_los_range_x: [0, 100.] # Range along the x-axis where LoS along Y or Z are allowed
allowed_los_range_y: [0, 100.] # Range along the y-axis where LoS along X or Z are allowed
allowed_los_range_z: [0, 100.] # Range along the z-axis where LoS along X or Y are allowed
range_when_shooting_down_x: 100. # Range along the x-axis of LoS along x
range_when_shooting_down_y: 100. # Range along the y-axis of LoS along y
range_when_shooting_down_z: 100. # Range along the z-axis of LoS along z
.. _Parameters_light_cone:
Light Cone Outputs
---------------------
One or more light cone outputs can be configured by including ``LightconeX`` sections
in the parameter file, where X is in the range 0-7. It is also possible to include a
``LightconeCommon`` section for parameters which are the same for all lightcones. The
parameters for each light cone are:
* Switch to enable or disable a lightcone: ``enabled``
This should be set to 1 to enable the corresponding lightcone or 0 to disable it.
Has no effect if specified in the LightconeCommon section.
* Directory in which to write light cone output: ``subdir``
All light cone output files will be written in the specified directory.
* Base name for particle and HEALPix map outputs: ``basename``.
Particles will be written to files ``<basename>_XXXX.Y.hdf5``, where XXXX numbers the files
written by a single MPI rank and Y is the MPI rank index. HEALPix maps are written to files
with names ``<basename>.shell_X.hdf5``, where X is the index of the shell. The basename must
be unique for each light cone so it cannot be specified in the LightconeCommon section.
See :ref:`lightcone_adding_outputs_label` for information on adding new output quantities.
* Location of the observer in the simulation box, in internal units: ``observer_position``
* Size of the in-memory chunks used to store particles and map updates: ``buffer_chunk_size``
During each time step buffered particles and HEALPix map updates are stored in a linked
list of chunks of ``buffer_chunk_size`` elements. Additional chunks are allocated as needed.
The map update process is parallelized over chunks so the chunks should be small enough that
each MPI rank typically has more chunks than threads.
* Maximum amount of map updates (in MB) to send on each iteration: ``max_map_update_send_size_mb``
Flushing the map update buffer involves sending the updates to the MPI ranks with the affected
pixel data. Sending all updates at once can consume a large amount of memory so this parameter
allows updates to be applied over multiple iterations to reduce peak memory usage.
* Redshift range to output each particle type: ``z_range_for_<type>``
A two element array with the minimum and maximum redshift at which particles of type ``<type>``
will be output as they cross the lightcone. ``<type>`` can be Gas, DM, DMBackground, Stars, BH
or Neutrino. If this parameter is not present for a particular type then that type will not
be output.
* The number of buffered particles which triggers a write to disk: ``max_particles_buffered``
If an MPI rank has at least ``max_particles_buffered`` particles which have crossed the lightcone,
it will write them to disk at the end of the current time step.
* Size of chunks in the particle output file: ``hdf5_chunk_size``
This sets the HDF5 chunk size. Particle outputs must be chunked because the number of particles
which will be written out is not known when the file is created.
* Whether to use lossy compression in the particle outputs: ``particles_lossy_compression``
If this is 1 then the HDF5 lossy compression filter named in the definition of each particle
output field will be enabled. If this is 0 lossy compression is not applied.
* Whether to use lossless compression in the particle outputs: ``particles_gzip_level``
If this is non-zero the HDF5 deflate filter will be applied to lightcone particle output with
the compression level set to the specified value.
* HEALPix map resolution: ``nside``
* Name of the file with shell radii: ``radius_file``
This specifies the name of a file with the inner and outer radii of the shells used to make
HEALPix maps. It should be a text file with a one line header and then two comma separated columns
of numbers with the inner and outer radii. The units are determined by the header. The header must
be one of the following:
``# Minimum comoving distance, Maximum comoving distance``,
``# Minimum redshift, Maximum redshift``, or
``# Maximum expansion factor, Minimum expansion factor``. Comoving distances are in internal units.
The shells must be in ascending order of radius and must not overlap.
* Number of pending HEALPix map updates before the buffers are flushed: ``max_updates_buffered``
In MPI mode applying updates to the HEALPix maps requires communication and forces synchronisation
of all MPI ranks, so it is not done every time step. If any MPI rank has at least
``max_updates_buffered`` pending updates at the end of a time step, then all ranks will apply
their updates to the HEALPix maps.
* Which types of HEALPix maps to create: ``map_names_file``
This is the name of a file which specifies what quantities should be accumulated to HEALPix maps.
The possible map types are defined in the lightcone_map_types array in ``lightcone_map_types.h``.
See :ref:`lightcone_adding_outputs_label` if you'd like to add a new map type.
The file contains two columns: the first column is the name of the map type and the second is the
name of the compression filter to apply to it. See io_compression.c for the list of compression
filter names. Set the filter name to ``on`` to disable compression.

* Whether to distribute HEALPix maps over multiple files: ``distributed_maps``

If this is 0 then the code uses HDF5 collective writes to write each map to a single file. If this
is 1 then each MPI rank writes its part of the HEALPix map to a separate file.
* Whether to use lossless compression in the HEALPix map outputs: ``maps_gzip_level``
If this is non-zero the HDF5 deflate filter will be applied to the lightcone map output with
the compression level set to the specified value.
The following shows a full set of light cone parameters for the case where we're making two
light cones which only differ in the location of the observer:
.. code:: YAML
LightconeCommon:
# Common parameters
subdir: lightcones
buffer_chunk_size: 100000
max_particles_buffered: 1000000
hdf5_chunk_size: 10000
# Redshift ranges for particle types
z_range_for_Gas: [0.0, 0.05]
z_range_for_DM: [0.0, 0.05]
z_range_for_DMBackground: [0.0, 0.05]
z_range_for_Stars: [0.0, 0.05]
z_range_for_BH: [0.0, 0.05]
z_range_for_Neutrino: [0.0, 0.05]
# Healpix map parameters
nside: 512
radius_file: ./shell_radii.txt
max_updates_buffered: 100000
map_names_file: map_names.txt
max_map_update_send_size_mb: 1.0
distributed_maps: 0
# Compression options
particles_lossy_compression: 0
particles_gzip_level: 6
maps_gzip_level: 6
Lightcone0:
enabled: 1
basename: lightcone0
observer_position: [35.5, 78.12, 12.45]
Lightcone1:
enabled: 1
basename: lightcone1
observer_position: [74.2, 10.80, 53.59]
An example of the radius file::
# Minimum comoving distance, Maximum comoving distance
0.0, 50.0
50.0, 100.0
150.0, 200.0
200.0, 400.0
400.0, 1000.0
An example of the map names file::
TotalMass on
SmoothedGasMass on
UnsmoothedGasMass on
DarkMatterMass on
.. _Parameters_eos:
Equation of State (EoS)
-----------------------
The ``EoS`` section contains options for the equations of state.
Multiple EoS can be used for :ref:`planetary`,
see :ref:`planetary_eos` for more information.
To enable one or multiple EoS, the corresponding ``planetary_use_*:``
flag(s) must be set to ``1`` in the parameter file for a simulation,
along with the path to any table files, which are set by the
``planetary_*_table_file:`` parameters.
For the (non-planetary) isothermal EoS, the ``isothermal_internal_energy:``
parameter sets the thermal energy per unit mass.
.. code:: YAML
EoS:
isothermal_internal_energy: 20.26784 # Thermal energy per unit mass for the case of isothermal equation of state (in internal units).
barotropic_vacuum_sound_speed: 2e4 # Vacuum sound speed in the case of the barotropic equation of state (in internal units).
barotropic_core_density: 1e-13 # Core density in the case of the barotropic equation of state (in internal units).
# Select which planetary EoS material(s) to enable for use.
planetary_use_idg_def: 0 # Default ideal gas, material ID 0
planetary_use_Til_iron: 1 # Tillotson iron, material ID 100
planetary_use_Til_granite: 1 # Tillotson granite, material ID 101
planetary_use_Til_water: 0 # Tillotson water, material ID 102
planetary_use_Til_basalt: 0 # Tillotson basalt, material ID 103
planetary_use_HM80_HHe: 0 # Hubbard & MacFarlane (1980) hydrogen-helium atmosphere, material ID 200
planetary_use_HM80_ice: 0 # Hubbard & MacFarlane (1980) H20-CH4-NH3 ice mix, material ID 201
planetary_use_HM80_rock: 0 # Hubbard & MacFarlane (1980) SiO2-MgO-FeS-FeO rock mix, material ID 202
planetary_use_SESAME_iron: 0 # SESAME iron 2140, material ID 300
planetary_use_SESAME_basalt: 0 # SESAME basalt 7530, material ID 301
planetary_use_SESAME_water: 0 # SESAME water 7154, material ID 302
planetary_use_SS08_water: 0 # Senft & Stewart (2008) SESAME-like water, material ID 303
planetary_use_ANEOS_forsterite: 0 # ANEOS forsterite (Stewart et al. 2019), material ID 400
planetary_use_ANEOS_iron: 0 # ANEOS iron (Stewart 2020), material ID 401
planetary_use_ANEOS_Fe85Si15: 0 # ANEOS Fe85Si15 (Stewart 2020), material ID 402
# Tabulated EoS file paths.
planetary_HM80_HHe_table_file: ./EoSTables/HM80_HHe.txt
planetary_HM80_ice_table_file: ./EoSTables/HM80_ice.txt
planetary_HM80_rock_table_file: ./EoSTables/HM80_rock.txt
planetary_SESAME_iron_table_file: ./EoSTables/SESAME_iron_2140.txt
planetary_SESAME_basalt_table_file: ./EoSTables/SESAME_basalt_7530.txt
planetary_SESAME_water_table_file: ./EoSTables/SESAME_water_7154.txt
planetary_SS08_water_table_file: ./EoSTables/SS08_water.txt
planetary_ANEOS_forsterite_table_file: ./EoSTables/ANEOS_forsterite_S19.txt
planetary_ANEOS_iron_table_file: ./EoSTables/ANEOS_iron_S20.txt
planetary_ANEOS_Fe85Si15_table_file: ./EoSTables/ANEOS_Fe85Si15_S20.txt
.. _Parameters_ps:
Power Spectra Calculation
-------------------------
SWIFT can compute a variety of auto- and cross- power spectra at user-specified
intervals. The behaviour of this output type is governed by the ``PowerSpectrum``
section of the parameter file. The calculation is performed on a regular grid
(typically of size 256^3) and foldings are used to extend the range probed to
smaller scales.
The options are:
* The size of the base grid to perform the PS calculation:
``grid_side_length``.
* The number of grid foldings to use: ``num_folds``.
* The factor by which to fold at each iteration: ``fold_factor`` (default: 4)
* The order of the window function: ``window_order`` (default: 3)
* Whether or not to correct the placement of the centre of the k-bins for small k values: ``shift_centre_small_k_bins`` (default: 1)
The window order sets the way the particle properties get assigned to the mesh.
Order 1 corresponds to the nearest-grid-point (NGP), order 2 to cloud-in-cell
(CIC), and order 3 to triangular-shaped-cloud (TSC). Higher-order schemes are not
implemented.
Finally, the quantities for which a PS should be computed are specified as a
list of pairs of values for the parameter ``requested_spectra``. Auto-spectra
are specified by using the same type for both pair members. The available values
are listed in the following table:
+---------------------+---------------------------------------------------+
| Name | Description |
+=====================+===================================================+
| ``matter`` | Mass density of all matter |
+---------------------+---------------------------------------------------+
| ``cdm`` | Mass density of all dark matter |
+---------------------+---------------------------------------------------+
| ``gas`` | Mass density of all gas |
+---------------------+---------------------------------------------------+
| ``starBH`` | Mass density of all stars and BHs |
+---------------------+---------------------------------------------------+
| ``neutrino`` | Mass density of all neutrinos |
+---------------------+---------------------------------------------------+
| ``neutrino1`` | Mass density of a random half of the neutrinos |
+---------------------+---------------------------------------------------+
| ``neutrino2``       | Mass density of the other half of the neutrinos   |
+---------------------+---------------------------------------------------+
| ``pressure`` | Electron pressure |
+---------------------+---------------------------------------------------+
A dark matter mass density auto-spectrum is specified as ``cdm-cdm``, and a
cross-spectrum of gas density and electron pressure as ``gas-pressure``.
The ``neutrino1`` and ``neutrino2`` selections are based on the particle IDs and
are mutually exclusive. The particles selected in each half are different in
each output. Note that neutrino PS can only be computed when neutrinos are
simulated using particles.
SWIFT uses bins of integer :math:`k`, i.e. the bins are :math:`[0.5,1.5]`,
:math:`[1.5,2.5]`, etc. The representative :math:`k` values used to be assigned
to the bin centres (so :math:`k=1, 2,` etc.), which are then converted to
physical wavenumbers by a factor :math:`2\pi/L`. For the first few bins, only a
few modes contribute to each bin. It is then advantageous to move the "centre"
of the bin to the actual location corresponding to the mean of the contributing
modes; the :math:`k` label of the bin is thus shifted by a small amount. These
shifts are calculated by considering a 3D cube of :math:`(k_x, k_y, k_z)` cells,
checking which cells fall inside a spherical shell with boundaries
:math:`(i+0.5, i+1.5)`, and computing the average
:math:`k=\sqrt{k_x^2+k_y^2+k_z^2}`. For :math:`i=0`, there are 6 cells with
:math:`k=1` and 12 cells with :math:`k=\sqrt{2}`, so the weighted centre becomes
:math:`(6 \times 1 + 12 \times \sqrt{2}) / 18 = 1.2761424`. Note that only the
first 7 (22) bins require a correction larger than 1 (0.1) percent. We apply the
correction to the first 128 bins. This correction is activated when
``shift_centre_small_k_bins`` is switched on (the default behaviour).
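The quoted values are easy to verify with a brute-force sum over integer modes
(a small self-contained check, not SWIFT's implementation):

.. code-block:: python

    import numpy as np

    n = 16   # large enough to cover the first few shells
    axis = np.arange(-n, n + 1)
    kx, ky, kz = np.meshgrid(axis, axis, axis, indexing="ij")
    k = np.sqrt(kx**2 + ky**2 + kz**2)

    for i in range(4):
        in_shell = (k > i + 0.5) & (k <= i + 1.5)
        print(f"bin {i + 1}: naive centre {i + 1}, "
              f"shifted centre {k[in_shell].mean():.7f}")
        # bin 1 prints 1.2761424, matching the worked example above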
An example of a valid power-spectrum section of the parameter file looks like:
.. code:: YAML
  PowerSpectrum:
    grid_side_length:  256
    num_folds:         3
    requested_spectra: ["matter-matter", "cdm-cdm", "cdm-matter"]   # Total-matter and CDM auto-spectra + CDM-total cross-spectrum
Some additional specific options for the power-spectra outputs are described in the
following pages:
* :ref:`Output_list_label` (to have PS not evenly spaced in time)
.. _Parameters_fof:
Friends-Of-Friends (FOF)
------------------------
Other options require the ``enable`` parameter to be set to ``1``.
* Whether or not to dump a set of restart files on regular exit: ``onexit``
(default: ``0``),
* The wall-clock time in hours between two sets of restart files:
  ``delta_hours`` (default: ``5.0``).
Note that there is no buffer time added to the ``delta_hours`` value. If the
system's batch queue run time limit is set to 5 hours, the user must specify a
smaller value to allow for enough time to safely dump the check-point files.
* The sub-directory in which to store the restart files: ``subdir`` (default:
  ``restart``).
SWIFT writes one restart file per MPI rank, named using the ``basename`` and
the MPI-rank. If the ``save`` option has been activated, the previous set of
restart files will be named ``basename_000000.rst.prev``.
On Lustre filesystems [#f4]_ it is important to properly stripe files to
achieve good writing and reading speeds. If the parameter
``lustre_OST_checks`` is set and the Lustre API is available, SWIFT will
determine the number of OSTs available and rank these by free space. It will
then set the `stripe count` of each restart file to `1` and choose an OST
`offset` so that each rank writes to a different OST, unless there are more
ranks than OSTs, in which case the assignment wraps. In this way OSTs should
be filled evenly and written to using an optimal access pattern.
If the parameter is not set then the files will be created with the default
system policy (or whatever was set for the directory where the files are
written). This parameter has no effect on non-Lustre file systems.
Other parameters are provided to handle the cases when individual OSTs do not
have sufficient free space to write a restart file (``lustre_OST_free``) and
when OSTs are closed for administrative reasons (``lustre_OST_test``), in which
case they cannot be written to. The latter is important as the `offset`
assignment is then not respected by Lustre, which picks the next writable OST,
so in our scheme such OSTs would be used more often than intended.
* Use the lustre API to assign a stripe and offset to restart files:
``lustre_OST_checks`` (default: ``0``)
* Do not use OSTs that do not have a certain amount of free space in MiB.
Zero disables and -1 activates a guess based on the size of the process:
``lustre_OST_free`` (default: ``0``)
* Check that OSTs can be written to and remove any that cannot from
  consideration: ``lustre_OST_test`` (default: ``0``)
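Conceptually, the assignment scheme described above amounts to a wrapped
round-robin over the OSTs ranked by free space, with a per-output random
shift (see [#f5]_). The sketch below is purely illustrative and is not the
actual SWIFT implementation:

.. code-block:: python

    import random

    def ost_offsets(n_ranks, osts_by_free_space):
        """Assign each MPI rank an OST offset, wrapping when ranks > OSTs."""
        n_ost = len(osts_by_free_space)
        shift = random.randrange(n_ost)   # per-output random shift
        return [osts_by_free_space[(rank + shift) % n_ost]
                for rank in range(n_ranks)]

    # e.g. 8 ranks over 6 OSTs, already ranked by free space
    print(ost_offsets(8, [3, 0, 5, 1, 4, 2]))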
SWIFT can also be stopped by creating an empty file called ``stop`` in the
directory where the restart files are written (i.e. the directory specified by
the parameter ``subdir``). This will make SWIFT dump a fresh set of restart files
For example, to write restart files every 5 hours with a maximum run time of 24
hours, after which a shell command will be run, one would use:
.. code:: YAML

  Restarts:
    onexit:           0
    subdir:           restart    # Sub-directory of the directory where SWIFT is run
    basename:         swift
    delta_hours:      5.0
    stop_steps:       100
    max_run_time:     24.0       # In hours
    lustre_OST_count: 48         # System has 48 Lustre OSTs to distribute the files over
    resubmit_on_exit: 1
    resubmit_command: ./resub.sh
which stops these from being done at the scale of the leaf cells, of which
there can be a large number. In this case cells with gravity tasks must be at
least 4 levels above the leaf cells (when possible).
To control the depth at which the ghost tasks are placed, there are two
parameters (one for the gas, one for the stars). These specify the maximum
number of particles allowed in such a task before splitting into finer ones. A
similar parameter exists for the cooling tasks, which can be useful to tweak for
models in which the cooling operations are expensive. These three parameters
are:
.. code:: YAML
  engine_max_parts_per_ghost:   1000
  engine_max_sparts_per_ghost:  1000
  engine_max_parts_per_cooling: 10000
Extra space is required when particles are created in the system (to the time
The type of partitioning attempted is controlled by the::

  DomainDecomposition:
    initial_type:
parameter, which can have the values *memory*, *edgememory*, *region*, *grid* or
*vectorized*:
* *edgememory*
This is the default if METIS or ParMETIS is available. It performs a
partition based on the memory use of all the particles in each cell.
The total memory per cell is used to weight the cell vertex and all the
associated edges. This attempts to equalize the memory used by all the
ranks but with some consideration given to the need to not cut dense
regions (by also minimizing the edge cut). How successful this attempt is
depends on the granularity of cells and particles and the number of ranks;
clearly, if most of the particles are in one cell or a small region of the
volume, balance is impossible or difficult. Having more top-level cells makes
it easier to calculate a good distribution (but this comes at the cost of
greater overheads).
* *memory*
This is like *edgememory*, but doesn't include any edge weights, so it should
balance the particle memory use per rank more exactly (note that effects like
the number of cells per rank will also play a role, as that changes the need
for foreign cells).
* *region*
The one other METIS/ParMETIS option is "region". This attempts to assign equal
numbers of cells to each rank, with the surface area of the regions minimised.
If ParMETIS and METIS are not available, two other options are possible, but
will give a poorer partition:
at some arbitrary steps, and indeed do better than the initial partition
earlier in the run. This can be done using *fixed cost* repartitioning.
Fixed costs are output during each repartition step into the file
`partition_fixed_costs.h`, which should be created by a test run of your
full simulation (possibly with a smaller volume, but with all the physics
enabled). This file can then be used to replace the same file found in the
`src/` directory and SWIFT should then be recompiled. Once you have that, you
to control whether they are used or not. If enabled, these will be used to
repartition after the second step, which will generally give as good a
repartition immediately as you get at the first unforced repartition.
Also once these have been enabled you can change the ``trigger`` value to
numbers greater than 2, and repartitioning will be forced every ``trigger``
steps. This latter option is probably only useful for developers, but tuning
the second step to use fixed costs can give some improvements.
Structure finding (VELOCIraptor)
--------------------------------
This section describes the behaviour of the on-the-fly structure
finding using the VELOCIraptor library (see
:ref:`VELOCIraptor_interface`). The section is named
``StructureFinding`` and also governs the behaviour of the
structure finding code when invoked at snapshot dumping time via
the parameter ``Snapshots:invoke_stf``.
The main parameters are:
* The VELOCIraptor parameter file to use for the run:
``config_file_name``,
* The directory in which the structure catalogs will be written: ``basename``.
Both these parameters must always be specified when running SWIFT with
on-the-fly calls to the structure finding code. In particular, when
VELOCIraptor is run only when snapshots are written, nothing more is
necessary and one would use:
.. code:: YAML
  Snapshots:
    invoke_stf: 1   # We want VELOCIraptor to be called when snapshots are dumped.
    # ...
    # Rest of the snapshots properties

  StructureFinding:
    config_file_name: my_stf_configuration_file.cfg   # See the VELOCIraptor manual for the content of this file.
    basename: ./haloes/                               # Write the catalogs in this sub-directory
If one additionally wants to call VELOCIraptor at times not linked with
snapshots, additional parameters need to be supplied.
The time of the first call is controlled by the two following options:
* Time of the first call to VELOCIraptor (non-cosmological runs): ``time_first``,
* Scale-factor of the first call to VELOCIraptor (cosmological runs): ``scale_factor_first``.
One of those two parameters has to be provided depending on the type of run. In
the case of non-cosmological runs, the time of the first call is expressed
in the internal units of time. Users also have to provide the difference in time
(or scale-factor) between consecutive outputs:
* Time difference between consecutive outputs: ``delta_time``.
In non-cosmological runs this is also expressed in internal units. For
cosmological runs, the scale-factor of the previous call is *multiplied* by
this value to obtain the scale-factor of the next call. This implies that the
outputs are equally spaced in :math:`\log(a)` (see :ref:`Output_list_label` to
have calls not regularly spaced in time).
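As a quick illustration of this rule (plain Python, with the same example
values as in the YAML block further down):

.. code-block:: python

    scale_factor_first = 0.1   # StructureFinding:scale_factor_first
    delta_time = 1.1           # StructureFinding:delta_time

    a = scale_factor_first
    while a < 1.0:
        print(f"VELOCIraptor call at a = {a:.4f}")
        a *= delta_time        # consecutive outputs equally spaced in log(a)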
Since VELOCIraptor produces many small output files when running with MPI,
it can be useful to make a separate directory for each output time:
* Base name of directory created for each VELOCIraptor output: ``subdir_per_output``
(default: empty string).
If this is set then a new directory is created each time VELOCIraptor is run.
The directory name will be ``subdir_per_output`` followed by the same output
number used in the filenames. Note that this directory is relative to the ``subdir`` parameter
from the Snapshots section if that is set.
By default this is an empty string, which means that all VELOCIraptor outputs will
be written to a single directory.
Showing all the parameters for a basic cosmological test-case, one would have:
.. code:: YAML
  StructureFinding:
    config_file_name:   my_stf_configuration_file.cfg   # See the VELOCIraptor manual for the content of this file.
    basename:           haloes   # Base name for VELOCIraptor output files
    subdir_per_output:  stf      # Make a stf_XXXX subdirectory for each output
    scale_factor_first: 0.1      # Scale-factor of the first output
    delta_time:         1.1      # Delta log-a between outputs
Gravity Force Checks
--------------------
By default, when the code is configured with ``--enable-gravity-force-checks``,
the "exact" forces of all active gparts are computed during each timestep.
To give a bit more control over this, you can choose to only perform the exact
force computation during timesteps when all gparts are active, and/or only
during timesteps when a snapshot is being dumped, i.e.,
.. code:: YAML
  ForceChecks:
    only_when_all_active: 1   # Only compute exact forces during timesteps when all gparts are active.
    only_at_snapshots:    1   # Only compute exact forces during timesteps when a snapshot is being dumped.
If ``only_when_all_active: 1`` and ``only_at_snapshots: 1`` are enabled
together, and not all gparts are active during the timestep of the snapshot
dump, the exact force computation is performed on the first timestep after
that snapshot output at which all gparts are active.
Neutrinos
---------
The ``Neutrino`` section of the parameter file controls the behaviour of
neutrino particles (``PartType6``). This assumes that massive neutrinos have
been specified in the ``Cosmology`` section described above. Random
Fermi-Dirac momenta will be generated if ``generate_ics`` is used. The
:math:`\delta f` method for shot noise reduction can be activated with
``use_delta_f``. Finally, a random seed for the Fermi-Dirac momenta can
be set with ``neutrino_seed``.
For more details on the neutrino implementation, refer to :ref:`Neutrinos`.
A complete specification of the model looks like
.. code:: YAML
  Neutrino:
    generate_ics:  1     # Replace neutrino particle velocities with random Fermi-Dirac momenta at the start
    use_delta_f:   1     # Use the delta-f method for shot noise reduction
    neutrino_seed: 1234  # A random seed used for the Fermi-Dirac momenta
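For intuition only, the momenta drawn by ``generate_ics`` follow a Fermi-Dirac
distribution; a minimal rejection-sampling sketch (not SWIFT's implementation,
which works in the appropriate physical units) is:

.. code-block:: python

    import numpy as np

    rng = np.random.default_rng(1234)   # plays the role of neutrino_seed

    def sample_fermi_dirac(n, p_max=20.0, f_max=0.5):
        """Draw n dimensionless momenta with density p^2 / (exp(p) + 1)."""
        samples = []
        while len(samples) < n:
            p = rng.uniform(0.0, p_max)
            if rng.uniform(0.0, f_max) < p * p / (np.exp(p) + 1.0):
                samples.append(p)
        return np.array(samples)

    print(sample_fermi_dirac(10000).mean())   # close to the exact mean ~3.15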
------------------------
.. [#f1] The thorough reader (or overly keen SWIFT tester) would find that the speed of light is :math:`c=1.8026\times10^{12}\,\rm{fur}\,\rm{ftn}^{-1}`, Newton's constant becomes :math:`G_N=4.896735\times10^{-4}~\rm{fur}^3\,\rm{fir}^{-1}\,\rm{ftn}^{-2}` and Planck's constant turns into :math:`h=4.851453\times 10^{-34}~\rm{fur}^2\,\rm{fir}\,\rm{ftn}^{-1}`.
.. [#f2] which would translate into a constant :math:`G_N=1.5517771\times10^{-9}~cm^{3}\,g^{-1}\,s^{-2}` if expressed in the CGS system.
.. [#f3] This feature only makes sense for non-cosmological runs for which the
         internal time unit is such that when rounded to the nearest integer a
         sensible number is obtained. A use-case for this feature would be to
         compare runs over the same physical time but with different numbers of
         snapshots. Snapshots at a given time would always have the same set of
         digits irrespective of the number of snapshots produced before.
.. [#f6] The mapping is 0 --> gas, 1 --> dark matter, 2 --> background dark
         matter, 3 --> sinks, 4 --> stars, 5 --> black holes, 6 --> neutrinos.
.. [#f4] https://wiki.lustre.org/Main_Page
.. [#f5] We add a per-output random integer to the OST value such that we don't
         generate a bias towards low OSTs. This averages the load over all OSTs
         over the course of a run even if the number of OSTs does not divide the
         number of files and vice-versa.
.. Planetary EoS
Jacob Kegerreis, 14th July 2022
.. _planetary_eos:
Planetary Equations of State
============================
Configuring SWIFT with the ``--with-equation-of-state=planetary`` and
``--with-hydro=planetary`` options enables the use of multiple
equations of state (EoS).
Every SPH particle then requires and carries the additional ``MaterialID`` flag
from the initial conditions file. This flag indicates the particle's material
and which EoS it should use.
If you have another EoS that you would like us to add, then just let us know!
It is important to check that the EoS you use are appropriate
for the conditions in the simulation that you run.
Please follow the original sources of these EoS for more information and
to check the regions of validity. If an EoS sets particles to have a pressure
of zero, then particles may end up overlapping, especially if the gravitational
softening is very small.
So far, we have implemented several Tillotson, ANEOS, SESAME,
and Hubbard \& MacFarlane (1980) materials, with more on the way.
Custom materials in SESAME-style tables can also be provided.
The material's ID is set by a somewhat arbitrary base type ID
(multiplied by 100) plus an individual value, matching our code for making
planetary initial conditions, `WoMa <https://github.com/srbonilla/WoMa>`_:
+ Ideal gas: ``0``

  + Default (set :math:`\gamma` using ``--with-adiabatic-index``, default 5/3): ``0``

+ Tillotson (Melosh, 2007): ``1``

  + Iron: ``100``
  + Granite: ``101``
  + Water: ``102``
  + Basalt: ``103``
  + Ice: ``104``
  + Custom user-provided parameters: ``190``, ``191``, ..., ``199``

+ Hubbard \& MacFarlane (1980): ``2``

  + Hydrogen-helium atmosphere: ``200``
  + Ice H2O-CH4-NH3 mix: ``201``
  + Rock SiO2-MgO-FeS-FeO mix: ``202``

+ SESAME (and others in similar-style tables): ``3``

  + Iron (2140): ``300``
  + Basalt (7530): ``301``
  + Water (7154): ``302``
  + Senft \& Stewart (2008) water: ``303``
  + AQUA, Haldemann, J. et al. (2020) water: ``304``
  + Chabrier, G. et al. (2019) Hydrogen: ``305``
  + Chabrier, G. et al. (2019) Helium: ``306``
  + Chabrier \& Debras (2021) H/He mixture Y=0.245 (Jupiter): ``307``

+ ANEOS (in SESAME-style tables): ``4``

  + Forsterite (Stewart et al. 2019): ``400``
  + Iron (Stewart, zenodo.org/record/3866507): ``401``
  + Fe85Si15 (Stewart, zenodo.org/record/3866550): ``402``

+ Custom (in SESAME-style tables): ``9``

  + User-provided custom material(s): ``900``, ``901``, ..., ``909``
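Decoding an ID back to its base type is then just integer division by 100.
A tiny (hypothetical) helper to illustrate the scheme:

.. code-block:: python

    BASE_TYPES = {0: "Ideal gas", 1: "Tillotson",
                  2: "Hubbard & MacFarlane (1980)", 3: "SESAME",
                  4: "ANEOS", 9: "Custom"}

    def base_type(material_id):
        """Return the base-type name encoded in a planetary material ID."""
        return BASE_TYPES[material_id // 100]

    print(base_type(402))   # ANEOS     (Fe85Si15)
    print(base_type(104))   # Tillotson (ice)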
The data files for the tabulated EoS can be downloaded using
the ``examples/Planetary/EoSTables/get_eos_tables.sh`` script.
To enable one or multiple EoS, the corresponding ``planetary_use_*:``
flag(s) must be changed from ``0`` to ``1`` in the parameter file for a
simulation, along with the path to any table files, which are set by the
``planetary_*_table_file:`` parameters,
as detailed in :ref:`Parameters_eos` and ``examples/parameter_example.yml``.
Unlike the EoS for an ideal or isothermal gas, these more complicated materials
do not always include transformations between the internal energy,
temperature, and entropy. At the moment, we have implemented
:math:`P(\rho, u)` and :math:`c_s(\rho, u)` (and more in some cases),
which is sufficient for the :ref:`planetary_sph` hydro scheme,
but some materials may thus currently be incompatible with
e.g. entropy-based schemes.
The Tillotson sound speed was derived using
:math:`c_s^2 = \left. ( \partial P / \partial \rho ) \right|_S`
as described in
`Kegerreis et al. (2019) <https://doi.org/10.1093/mnras/stz1606>`_.
Note that there is a typo in the sign of
:math:`du = T dS - P dV = T dS + (P / \rho^2) d\rho` in the appendix,
but the correct version was used in the actual derivation.
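Spelling out that derivation (our restatement, using the corrected relation
:math:`\left. du/d\rho \right|_S = P/\rho^2` in the chain rule):

.. math::

   c_s^2 = \left. \frac{\partial P}{\partial \rho} \right|_S
         = \left. \frac{\partial P}{\partial \rho} \right|_u
         + \frac{P}{\rho^2} \left. \frac{\partial P}{\partial u} \right|_\rho .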
The ideal gas uses the same equations detailed in :ref:`equation_of_state`.
The format of the data files for SESAME, ANEOS, and similarly tabulated EoS
follows the SESAME 301 (etc.) style. The file contents are:
.. code-block:: python
    # header (12 lines)
    version_date                                            (YYYYMMDD)
    num_rho  num_T
    rho[0]   rho[1]  ...  rho[num_rho-1]                    (kg/m^3)
    T[0]     T[1]    ...  T[num_T-1]                        (K)
    u[0, 0]                P[0, 0]    c[0, 0]    s[0, 0]    (J/kg, Pa, m/s, J/K/kg)
    u[1, 0]                ...        ...        ...
    ...                    ...        ...        ...
    u[num_rho-1, 0]        ...        ...        ...
    u[0, 1]                ...        ...        ...
    ...                    ...        ...        ...
    u[num_rho-1, num_T-1]  ...        ...        s[num_rho-1, num_T-1]
The ``version_date`` must match the value in the ``sesame.h`` ``SESAME_params``
objects, so we can ensure that any version updates work with the git repository.
This is ignored for custom materials.
The header contains a first line that gives the material name, followed by the
same 11 lines printed here to describe the contents.
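A minimal sketch of reading such a table (a hypothetical stand-alone reader,
not SWIFT's internal one; it assumes each axis is written on a single line,
with density varying fastest in the u, P, c, s block as laid out above):

.. code-block:: python

    import numpy as np

    def read_sesame_style_table(filename):
        with open(filename) as f:
            lines = f.readlines()[12:]          # skip the 12-line header
        version_date = int(lines[0].split()[0])
        num_rho, num_T = (int(x) for x in lines[1].split())
        rho = np.array(lines[2].split(), dtype=float)   # kg m^-3
        T = np.array(lines[3].split(), dtype=float)     # K
        data = np.loadtxt(lines[4:])                    # u, P, c_s, s columns
        # reshape so that [i_T, i_rho] indexes each 2D quantity
        u, P, c, s = (data[:, j].reshape(num_T, num_rho) for j in range(4))
        return version_date, rho, T, u, P, c, s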
.. Planetary SPH
Jacob Kegerreis, 13th March 2020
.. _planetary_hydro:
Planetary Hydro Scheme
======================
This scheme is based on :ref:`minimal` but also allows multiple materials,
meaning that different SPH particles can be assigned different
`equations of state <equations_of_state.html>`_ (EoS).
Every SPH particle then requires and carries the additional ``MaterialID`` flag
from the initial conditions file. This flag indicates the particle's material
and which EoS it should use.
The Balsara viscosity switch is used by default, but can be disabled by
compiling SWIFT with ``make CFLAGS=-DPLANETARY_SPH_NO_BALSARA``.
Note: to access the boundary-improvement method presented in Ruiz-Bonilla+2022,
use the ``planetary_imbalance_RB22`` git branch and compile with
``--with-hydro=planetary-gdf``. However, we instead recommend using the REMIX
SPH scheme, as it has effectively replaced this method.
.. _planetary_remix_hydro:
REMIX SPH
======================
REMIX is an SPH scheme designed to alleviate effects that typically suppress
mixing and instability growth at density discontinuities in SPH simulations
(Sandnes et al. 2025), and also includes the same multiple EoS options as the
base Planetary scheme. For more information on what is included in the REMIX
scheme and how to configure SWIFT to use REMIX, see: :ref:`remix_sph`.
.. Planetary Simulations
Jacob Kegerreis, 14th July 2022
.. _planetary:
Planetary Simulations
=====================
SWIFT is also designed for running planetary simulations,
such as giant impacts, as introduced in
`Kegerreis et al. (2019) <https://doi.org/10.1093/mnras/stz1606>`_,
as well as any non-planetary simulations with more complicated equations of
state and/or multiple materials, etc.
Active development is ongoing of more features, supporting tools, and examples,
so please let us know if you are interested in using SWIFT
or have any implementation requests.
You can find an example of creating initial conditions and running an impact
simulation in ``swiftsim/examples/Planetary/`` under ``DemoImpactInitCond/``
and ``DemoImpact/``. Check out their ``README.md`` files for more info.
The tabulated equation of state files can be downloaded there using
``EoSTables/get_eos_tables.sh``.
To run planetary or similar simulations with a traditional SPH formulation,
SWIFT should be configured to use the planetary hydrodynamics scheme and
equations of state: ``--with-hydro=planetary`` and
``--with-equation-of-state=planetary``. These allow for multiple materials to be
used, chosen from the several available equations of state.
Planetary simulations can also be carried out using the advanced
:ref:`remix_sph` scheme, which improves the treatment of mixing and material
interfaces (Sandnes et al. 2025).
To configure with REMIX, use ``--with-hydro=remix`` and
``--with-equation-of-state=planetary``. Note that REMIX simulations require
initial particle densities in the initial conditions. We also recommend
configuring with ``--with-kernel=wendland-C2`` and with ``resolution_eta: 1.487``
in the parameter file for improved hydrodynamic behaviour and since this is the
configuration used for the validation simulations of Sandnes et al. (2025).
.. toctree::
   :maxdepth: 1
   :caption: More information:

   Hydro Scheme <hydro_scheme>
   Equations of State <equations_of_state>
   Initial Conditions <initial_conditions>
   Removed Particles <removed_particles>