Compare revisions

Changes are shown as if the source revision was being merged into the target revision.
Showing with 1424 additions and 169 deletions.
doc/RTD/source/HydroSchemes/hydro.png (image, 1010 KiB)
@@ -2,13 +2,31 @@

Josh Borrow 4th April 2018

.. _hydro:

Hydrodynamics Schemes
=====================

This section of the documentation includes information on the hydrodynamics
schemes available in SWIFT, as well as how to implement your own.
Depending on the scheme used, the algorithm will need either 2
(e.g. GADGET-2) or 3 (e.g. GIZMO and SPHENIX) interaction loops.
Here we show the task dependencies for the hydrodynamics assuming 3 loops.
In the case of a 2-loop scheme, SWIFT removes the gradient loop and the extra ghost.

.. figure:: hydro.png
   :width: 400px
   :align: center
   :figclass: align-center
   :alt: Task dependencies for the hydrodynamics.

   This figure shows the task dependencies for the hydrodynamics assuming a scheme with the gradient loop.
   The first tasks to be executed are at the top (those without any incoming links); execution then follows
   the links down to the last tasks, which have no outgoing links.
   For the hydrodynamics tasks (in blue), the rectangles represent (from top to bottom) the density, gradient and force loops.
   As this graph was created manually, the task dependencies might not reflect a real run, depending on the physics simulated.
   This was done with SWIFT v0.9.0.
.. toctree::
   :maxdepth: 2
   :caption: Contents:

@@ -19,7 +37,10 @@ schemes available in SWIFT, as well as how to implement your own.

   hopkins_sph
   anarchy_sph
   sphenix_sph
   gasoline_sph
   phantom_sph
   remix_sph
   adaptive_softening
   gizmo
   shadowswift
   adding_your_own
.. REMIX SPH
   Thomas Sandnes, 13th May 2025

.. _remix_sph:

REMIX SPH
==============================================

.. toctree::
   :maxdepth: 2
   :hidden:
   :caption: Contents:
REMIX is an SPH scheme designed to alleviate effects that typically suppress
mixing and instability growth at density discontinuities in SPH simulations
(Sandnes et al. 2025). REMIX addresses this problem by directly targeting sources
of kernel smoothing error and discretisation error, resulting in a generalised,
material-independent formulation that improves the treatment both of
discontinuities within a single material, for example in an ideal gas, and of
interfaces between dissimilar materials. The scheme combines:
+ An evolved density estimate to avoid the kernel smoothing error in the
  standard SPH integral density estimate;
+ Thermodynamically consistent, conservative equations of motion, with
  free functions chosen to limit zeroth-order error;
+ Linear-order reproducing kernels with grad-h terms and a vacuum interface
  treatment;
+ A "kernel normalising term" to avoid potential accumulation of error in
  the evolved density estimate, such that densities are ensured to remain
  representative of the distribution of particle masses in the simulation volume;
+ Advanced artificial viscosity and diffusion schemes with linear reconstruction
  of quantities to particle midpoints, and a set of novel improvements to
  effectively switch between treatments for shock-capturing under compression and
  noise-smoothing in shearing regions.
To configure with this scheme, use

.. code-block:: bash

    ./configure --with-hydro=remix --with-equation-of-state=planetary
This scheme allows multiple materials,
meaning that different SPH particles can be assigned different
`equations of state <equations_of_state.html>`_ (EoS).
Every SPH particle then requires and carries the additional ``MaterialID`` flag
from the initial conditions file. This flag indicates the particle's material
and which EoS it should use. Note that configuring with
``--with-equation-of-state=planetary`` is required for this scheme, although
for simulations that use a single, ideal gas EoS, setting all MaterialIDs to
``0`` and including
.. code-block:: yaml

    EoS:
        planetary_use_idg_def: 1

in the parameter file are the only EoS-related additions needed compared with
other non-Planetary hydro schemes. Note also that since densities are evolved
in time, initial particle densities must be provided in the initial conditions.
We additionally recommend configuring with ``--with-kernel=wendland-C2`` and with

.. code-block:: yaml

    SPH:
        resolution_eta: 1.487

in the parameter file for improved hydrodynamic behaviour; this is also the
configuration used for the validation simulations of Sandnes et al. (2025).
The current implementation of the REMIX hydro scheme has been validated for
planetary applications and various hydrodynamic test cases, and does not include
all necessary functionality for e.g. cosmological simulations.
Default parameters used in the artificial viscosity and diffusion schemes and the
normalising term (see Sandnes et al. 2025) are:
.. code-block:: c

    #define const_remix_visc_alpha 1.5f
    #define const_remix_visc_beta 3.f
    #define const_remix_visc_epsilon 0.1f
    #define const_remix_visc_a 2.0f / 3.0f
    #define const_remix_visc_b 1.0f / 3.0f
    #define const_remix_difn_a_u 0.05f
    #define const_remix_difn_b_u 0.95f
    #define const_remix_difn_a_rho 0.05f
    #define const_remix_difn_b_rho 0.95f
    #define const_remix_norm_alpha 1.0f
    #define const_remix_slope_limiter_exp_denom 0.04f
These can be changed in ``src/hydro/REMIX/hydro_parameters.h``.
.. ShadowSWIFT (Moving mesh hydrodynamics)
   Yolan Uyttenhove September 2023

ShadowSWIFT (moving mesh hydrodynamics)
=======================================

.. warning::
    The moving mesh hydrodynamics solver is currently in the process of being merged into master and will **NOT**
    work on the master branch. To use it, compile the code using the ``moving_mesh`` branch.
This is an implementation of the moving-mesh finite-volume method for hydrodynamics in SWIFT.
To use this scheme, a Riemann solver is also needed. Configure SWIFT as follows:
.. code-block:: bash

    ./configure --with-hydro="shadowswift" --with-riemann-solver="hllc"
Current status
~~~~~~~~~~~~~~
Due to the completely different task structure compared to SPH hydrodynamics, currently only a subset of the features of
SWIFT is supported in this scheme.
- Hydrodynamics is fully supported in 1D, 2D and 3D and over MPI.
- Both self-gravity and external potentials are supported.
- Cosmological time-integration is supported.
- Cooling and chemistry are supported, with the exception of the ``GEAR_diffusion`` chemistry scheme. Metals are
  properly advected according to mass fluxes.
- Choice between periodic, reflective, open, inflow and vacuum boundary conditions (for non-periodic boundary
  conditions, the desired variant must be selected in ``const.h``). Additionally, reflective boundary conditions
  are applied to SWIFT's boundary particles. Configure with ``--with-boundary-particles=<N>`` to use this (e.g. to
  simulate walls).
Caveats
~~~~~~~
These are currently the main limitations of the ShadowSWIFT hydro scheme:
- Unlike SPH, the cells of the moving mesh must form a partition of the entire simulation volume. This means that there
  cannot be empty SWIFT cells, and vacuum must be explicitly represented by zero (or negligible) mass particles.
- Most other subgrid physics, most notably star formation and stellar feedback, are not supported yet.
- No MHD schemes are supported.
- No radiative-transfer schemes are supported.
.. Initial Conditions
   Josh Borrow, 5th April 2018

.. _Initial_Conditions_label:

Initial Conditions
==================
@@ -53,12 +55,14 @@ file format for compatibility reasons.

+---------------------+------------------------+----------------------------------------+
| ``/PartType2/``     | Background Dark Matter | ``swift_type_dark_matter_background``  |
+---------------------+------------------------+----------------------------------------+
| ``/PartType3/``     | Sinks                  | ``swift_type_sink``                    |
+---------------------+------------------------+----------------------------------------+
| ``/PartType4/``     | Stars                  | ``swift_type_star``                    |
+---------------------+------------------------+----------------------------------------+
| ``/PartType5/``     | Black Holes            | ``swift_type_black_hole``              |
+---------------------+------------------------+----------------------------------------+
| ``/PartType6/``     | Neutrino Dark Matter   | ``swift_type_neutrino``                |
+---------------------+------------------------+----------------------------------------+

The last column in the table gives the ``enum`` value from ``part_type.h``
corresponding to a given entry in the files.
.. Light Cones
   John Helly 29th April 2021

.. _lightcone_adding_outputs_label:
Adding New Types of Output
~~~~~~~~~~~~~~~~~~~~~~~~~~~
New particle properties can be added to the particle light cones as follows:
* Add a field to the ``lightcone_<type>_data`` struct in ``lightcone_particle_io.h`` to store the new quantity
* Modify the ``lightcone_store_<type>`` function in ``lightcone_particle_io.c`` to set the new struct field from the particle data
* In ``lightcone_io_make_output_fields()``, add a call to ``lightcone_io_make_output_field()`` to define the new output

Here, ``<type>`` is the particle type: ``gas``, ``dark_matter``, ``stars``, ``black_hole`` or ``neutrino``.
To add a new type of HEALPIX map:
* Add a function to compute the quantity in ``lightcone_map_types.c``. See ``lightcone_map_total_mass()`` for an example.
* Add a new entry to the ``lightcone_map_types`` array in ``lightcone_map_types.h``. This should specify the name of the new map type, a pointer to the function to compute the quantity, and the units of the quantity. The last entry in the array is not used and must have a NULL function pointer to act as an end marker.
.. Light Cones
   John Helly 29th April 2021

.. _lightcone_algorithm_description_label:
Light Cone Output Algorithm
~~~~~~~~~~~~~~~~~~~~~~~~~~~
In cosmological simulations it is possible to specify the location of
an observer in the simulation box and have SWIFT output information
about particles in the simulation as they cross the observer's past
light cone.
Whenever a particle is drifted the code checks if any periodic copy of
the particle crosses the lightcone during the drift, and if so that
copy of the particle is buffered for output. As an optimization, at the
start of each time step the code computes which periodic copies of the
simulation box could contribute to the light cone and only those copies
are searched. When drifting the particles in a particular cell the list of
replications is further narrowed down using the spatial extent of the
cell.
Particles can be output directly to HDF5 files or accumulated to healpix
maps corresponding to spherical shells centred on the observer.
.. Light Cones
   John Helly 29th April 2021

.. _Light_Cones_label:

Light Cone Outputs
==================

This section describes the light cone outputs and related parameters.

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   algorithm_description
   lightcone_particle_output
   lightcone_healpix_maps
   running_with_lightcones
   adding_outputs
.. Light Cones
   John Helly 29th April 2021

.. _lightcone_healpix_maps_label:
Light Cone HEALPix Maps
~~~~~~~~~~~~~~~~~~~~~~~
SWIFT can accumulate particle properties to HEALPix maps as they
cross the observer's past light cone. Each map corresponds to a
spherical shell centred on the observer. When a particle crosses
the lightcone its distance from the observer is calculated and the
particle's contribution is added to a buffer so that at the end of
the time step it can be added to the corresponding HEALPix map.
Maps can be generated for multiple concentric shells and multiple
quantities can be accumulated for each shell. The HEALPix map for a
shell is allocated and zeroed out when the simulation first reaches
a redshift where particles could contribute to that map. The map is
written out and deallocated when the simulation advances to a point
where there can be no further contributions. In MPI runs the pixel
data for the maps are distributed across all MPI ranks.
Updates to the maps are buffered in order to avoid the need for
communication during the time step. At the end of the step, if any
MPI rank has a large number of updates buffered, all pending
updates are applied to the pixel data.
For gas particles, the HEALPix maps are smoothed using a projected
version of the same kernel used for the hydro calculations. Other
particle types are not smoothed.
The code writes one output file for each spherical shell. In MPI mode
all ranks write to the same file using parallel HDF5. If maps of
multiple quantities are being made they will be written to a single
file as separate 1D datasets with one element per pixel.
.. Light Cones
   John Helly 29th June 2021

.. _lightcone_particle_output_label:
Light Cone Particle Output
~~~~~~~~~~~~~~~~~~~~~~~~~~
SWIFT can output particles to HDF5 output files (similar to the
snapshots) as they cross the observer's light cone. During each time
step, any particles which cross the light cone are added to a buffer.
If this buffer is large at the end of the step then its contents
are written to an output file. In MPI runs each MPI rank writes its
own output file and decides independently when to flush its particle
buffer.
A new output file is started whenever restart files are written. This
allows the code to automatically continue from the point of the restart
dump if the run is interrupted. Any files written after the restart
dump will be overwritten when the simulation is resumed, preventing
duplication of particles in the light cone output.
The output files have names of the form ``basename_XXXX.Y.hdf5``, where
``XXXX`` numbers the files written by a single MPI rank and ``Y`` is the
index of the MPI rank.
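For example, with basename ``lightcone0`` the second file written by MPI rank
3 would be named ``lightcone0_0001.3.hdf5`` (assuming, for illustration, that
the file counter starts from zero).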
The output files contain one HDF5 group for each particle type. Within
each group there are datasets corresponding to particle properties in
a similar format to the snapshots.
.. Light Cones
   John Helly 29th April 2021

.. _lightcone_running_label:
Running SWIFT with Light Cone Output
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To produce light cone particle output, SWIFT must be configured
with ``--enable-lightcone``. Additionally, making HEALPix maps
requires the HEALPix C library. If using MPI then parallel HDF5
is also required.
One lightcone is produced for each ``LightconeX`` section in the
parameter file, where ``X`` ranges from 0 to 7. This allows generation of up
to 8 light cones. See :ref:`Parameters_light_cone` for details.
SWIFT must be run with the ``--lightcone`` flag to activate light
cone outputs, otherwise the Lightcone sections in the parameter file
are ignored.
Logger Output
=============

The logger is a particle-based output (similar to a snapshot) that takes into account the large differences in timescale between particles.
If you have any questions, a dedicated channel is available in SWIFT's Slack.

To use it, you will need the configuration option ``--enable-logger``.

Currently the logger is implemented only for Gadget2 and the default gravity / stars, but it can easily be extended to the other schemes by adding the logger structure to the particles (see ``src/hydro/Gadget2/hydro_part.h``).
The main parameters of the logger are ``Logger:delta_step`` and ``Logger:index_mem_frac``, which define the time accuracy of the logger and the number of index files, respectively.
The first parameter sets the number of active steps a particle takes between writes, and the second sets the total storage size of the index files as a fraction of the dump file.

Unfortunately, the API is not fully developed yet; if you wish to dump another field, you will need to work around this by replacing a field in the ``logger_log_part`` function.

For reading, a Python wrapper is available through the configuration option ``--with-python``. Once compiled, you can use the script ``logger/examples/reader_example.py``.
The first argument is the basename of the index file and the second is the requested time.
During the first read, the library modifies the dump file; it should therefore not be killed and may take a bit longer than usual.
.. Neutrinos
   Willem Elbers, 7 April 2021

.. _neutrinos:
Neutrino implementation
=======================
SWIFT can also accurately model the effects of massive neutrinos in
cosmological simulations. At the background level, massive neutrinos
and other relativistic species can be included by specifying their
number and masses in the cosmology section of the parameter file
(see :ref:`Parameters_cosmology`).
At the perturbation level, neutrinos can be included as a separate particle
species (``PartType6``). To facilitate this, SWIFT implements the
:math:`\delta f` method for shot noise suppression (`Elbers et al. 2020
<https://ui.adsabs.harvard.edu/abs/2020arXiv201007321E/>`_). The method
works by statistically weighting the particles during the simulation,
with weights computed from the Liouville equation using current and
initial momenta. The method can be activated by specifying
``Neutrino:use_delta_f`` in the parameter file.
The implementation of the :math:`\delta f` method in SWIFT assumes a
specific method for generating the initial neutrino momenta (see below).
This makes it possible to reproduce the initial momentum when it is
needed without increasing the memory footprint of the neutrino particles.
If perturbed initial conditions are not needed, the initial momenta can
be generated internally by specifying ``Neutrino:generate_ics`` in the
parameter file. This will assign ``PartType6`` particles to each
neutrino mass specified in the cosmology and generate new velocities
based on the homogeneous (unperturbed) Fermi-Dirac distribution. In
this case, placeholder neutrino particles should be provided in the
initial conditions with arbitrary masses and velocities, distributed
uniformly in the box. Placeholders can be spawned with the python
script ``tools/spawn_neutrinos.py``.
Relativistic Drift
------------------
At high redshift, neutrino particles move faster than the speed of light
if the usual Newtonian expressions are used. To rectify this, SWIFT
implements a relativistic drift correction. In this convention, the
internal velocity variable (see theory/Cosmology) is
:math:`v^i=a^2u^i=a^2\dot{x}^i\gamma^{-1}`, where :math:`u^i` is the
spatial part of the 4-velocity, :math:`a` the scale factor, and
:math:`x^i` a comoving position vector. The conversion factor to the
coordinate 3-velocity is :math:`\gamma=ac/\sqrt{a^2c^2+v^2}`. This
factor is applied to the neutrino particles throughout the simulation.
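As an illustration, a minimal, self-contained C sketch of this conversion
factor is shown below (variable names and the value of :math:`c` in internal
units are our own choices for the example, not SWIFT's actual implementation):

.. code-block:: c

    #include <math.h>
    #include <stdio.h>

    /* gamma = a*c / sqrt(a^2 c^2 + v^2): converts the internal velocity
     * v^i = a^2 u^i to the coordinate 3-velocity, as defined above. */
    static double relativistic_drift_factor(const double a, const double c,
                                            const double v2) {
      return a * c / sqrt(a * a * c * c + v2);
    }

    int main(void) {
      const double c = 1.0;  /* speed of light in (hypothetical) internal units */
      const double a = 0.01; /* scale factor, z = 99 */
      const double v = 0.5;  /* magnitude of the internal velocity variable */

      /* In the non-relativistic limit v << a*c, the factor tends to 1. */
      printf("gamma = %.6f\n", relativistic_drift_factor(a, c, v * v));
      return 0;
    }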
Generating Fermi-Dirac momenta
------------------------------
The implementation of the :math:`\delta f` method in SWIFT assumes that
neutrinos were initially assigned a Fermi-Dirac momentum using the following
method. Each particle has a fixed 64-bit unsigned integer :math:`\ell` given
by the particle ID [#f1]_ (plus an optional seed: ``Neutrino:neutrino_seed``).
This number is transformed into a floating point number :math:`u\in(0,1)`,
using the following pseudo-code based on splitmix64:
.. code-block:: none

    m = l + 0x9E3779B97F4A7C15
    m = (m ^ (m >> 30)) * 0xBF58476D1CE4E5B9
    m = (m ^ (m >> 27)) * 0x94D049BB133111EB
    m = m ^ (m >> 31)
    u = (m + 0.5) / (UINT64_MAX + 1)
This is subsequently transformed into a Fermi-Dirac momentum
:math:`q = F^{-1}(u)` by evaluating the quantile function. To generate
neutrino particle initial conditions with perturbations, one first generates
momenta from the unperturbed Fermi-Dirac distribution using the above method
and then applies perturbations in any suitable manner.
When using the :math:`\delta f` method, SWIFT also assumes that ``PartType6``
particles are assigned to all :math:`N_\nu` massive species present in the
cosmology, such that the particle with fixed integer :math:`\ell` corresponds
to species :math:`i = \ell\; \% \;N_\nu\in[0,N_\nu-1]`.
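For illustration, the mapping from particle ID to :math:`u` and to the species
index can be written as the following self-contained C sketch (the helper name
is hypothetical, the optional seed is omitted, and the inversion of the
quantile function :math:`F^{-1}` is not shown):

.. code-block:: c

    #include <stdint.h>
    #include <stdio.h>

    /* Map a particle ID to u in (0,1) using the splitmix64 finaliser
     * described above. */
    static double uniform_from_id(uint64_t l) {
      uint64_t m = l + 0x9E3779B97F4A7C15ULL;
      m = (m ^ (m >> 30)) * 0xBF58476D1CE4E5B9ULL;
      m = (m ^ (m >> 27)) * 0x94D049BB133111EBULL;
      m = m ^ (m >> 31);
      /* (m + 0.5) / 2^64, so that u is never exactly 0 or 1. */
      return ((double)m + 0.5) / 18446744073709551616.0;
    }

    int main(void) {
      const uint64_t id = 42; /* particle ID, no extra seed */
      const int N_nu = 2;     /* number of massive neutrino species */

      printf("u = %.15f, species = %d\n", uniform_from_id(id),
             (int)(id % (uint64_t)N_nu));
      return 0;
    }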
The sampled Fermi-Dirac speeds and neutrino masses are written into the
snapshot files as ``SampledSpeeds`` and ``MicroscopicMasses``.
Mesh Neutrinos
--------------
There are two additional implementations of neutrino physics. The first
is an option to only apply the delta-f weighting scheme on the mesh. In
this case, particle neutrinos participate like dark matter in the remaining
gravity calculations. This mode can be activated with
``Neutrino:use_delta_f_mesh_only``.
The second option is an implementation of the linear response method,
once again on the mesh only, which requires a separate data file with
transfer functions. Example settings in the parameter file for this mode
are:

.. code:: YAML

    Neutrino:
      use_linear_response: 1                    # Option to use the linear response method
      transfer_functions_filename: perturb.hdf5 # For linear response neutrinos, path to an hdf5 file with transfer functions, redshifts, and wavenumbers
      dataset_redshifts: Redshifts              # For linear response neutrinos, name of the dataset with the redshifts (a vector of length N_z)
      dataset_wavenumbers: Wavenumbers          # For linear response neutrinos, name of the dataset with the wavenumbers (a vector of length N_k)
      dataset_delta_cdm: Functions/d_cdm        # For linear response neutrinos, name of the dataset with the cdm density transfer function (N_z x N_k)
      dataset_delta_baryon: Functions/d_b       # For linear response neutrinos, name of the dataset with the baryon density transfer function (N_z x N_k)
      dataset_delta_nu: Functions/d_ncdm[0]     # For linear response neutrinos, name of the dataset with the neutrino density transfer function (N_z x N_k)
      fixed_bg_density: 1                       # For linear response neutrinos, whether to use a fixed present-day background density
In this example, the code reads an HDF5 file "perturb.hdf5" with transfer
functions. The file must contain a vector with redshifts of length :math:`N_z`,
a vector with wavenumbers :math:`N_k`, and three arrays with dimensions
:math:`N_z \times N_k` of density transfer functions for cdm, baryons, and
neutrinos respectively. It is recommended to store the units of the wavenumbers
as an attribute at "Units/Unit length in cgs (U_L)". The ``fixed_bg_density``
flag determines whether the linear response scales as :math:`\Omega_\nu(a)`
or the present-day value :math:`\Omega_{\nu,0}`, either of which may be
appropriate depending on the particle initial conditions. An HDF5 file
can be generated using classy with the script ``tools/create_perturb_file.py``.
The linear response mode currently only supports degenerate mass models
with a single neutrino transfer function.
Background Neutrinos Only
-------------------------
It is also possible to run without neutrino perturbations, even when
specifying neutrinos in the background cosmology. This mode can be
activated with ``Neutrino:use_model_none``.
.. [#f1] Currently, it is not guaranteed that a particle ID is unique.
@@ -34,3 +34,8 @@ In order to add a new scheme, you will need to:

   ``nobase_noinst_HEADERS``, add your new header files.
6. Update the documentation. Add your equations/documentation to ``doc/RTD``.

.. toctree::
   :caption: Table of Contents

   sink_adding_new_scheme
.. Adding new schemes
   Darwin Roduit, 16 October 2024

.. _new_option_sink:

How to add your sink scheme
---------------------------

Here, we provide comprehensive information to guide you in adding your own sink scheme to SWIFT. To better understand how to add new schemes within SWIFT, read the general information provided on the :ref:`new_option` page.
The default sink scheme is empty and gives you an idea of the minimum required fields and functions for the code to compile. The GEAR sink module has the base functions plus some extra ones for its operations. It can provide you with a working example. However, it can only work with the GEAR feedback module since it relies on IMF properties that are only located there.
As discussed in the GEAR sink :ref:`sink_GEAR_model_summary`, the physics relies on the following tasks: sink formation, gas and sink particle flagging, gas swallowing, sink swallowing and star formation. You do not need to care about the tasks, only the core functions within the sink module. However, you may need to locate where the code calls these functions. The file ``src/runner_others.c`` contains the ``runner_do_star_formation_sink()`` and ``runner_do_sink_formation()``. These functions are responsible for generating stars out of sinks and sinks from gas particles. The other general task-related functions are in ``src/runner_sinks.c``.
The following presents the most essential functions you need to implement. This will give you an idea of the workload.
Sink formation
~~~~~~~~~~~~~~
Before forming a sink, the code loops over all gas and sink particles to gather data about the candidate particle's neighbourhood. This is performed in the ``sink_prepare_part_sink_formation_gas_criteria()`` and ``sink_prepare_part_sink_formation_sink_criteria()`` functions. For instance, in GEAR, we compute the total potential energy, thermal energy, etc.
Then, to decide if we can turn a gas particle into a sink particle, the function ``sink_is_forming()`` is called. Before forming a sink particle, there is a call to ``sink_should_convert_to_sink()``. This function determines whether the gas particle must transform into a sink. Both functions return either 0 or 1.
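As a purely illustrative sketch (the struct and field names below are
hypothetical, and a real criterion such as GEAR's combines several checks
based on the energies gathered in the preparation loops), a minimal
``sink_is_forming()``-style function could look like:

.. code-block:: c

    /* Hypothetical gas-particle struct; the real SWIFT structs differ. */
    struct gas_part_example {
      float rho; /* gas density */
    };

    /* Return 1 if the gas particle should turn into a sink, 0 otherwise.
     * A bare density threshold stands in for the full set of criteria. */
    static int sink_is_forming_example(const struct gas_part_example *p,
                                       const float density_threshold) {
      return (p->rho > density_threshold) ? 1 : 0;
    }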
Gas-sink density interactions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The first interaction task to be run for the sinks is the density task. This task updates the smoothing length for the sink particle, unless a fixed cutoff radius is being used (coming soon). It can also calculate the contributions made by neighbouring gas particles to the density, sound speed, velocity etc. at the location of the sink. Code for these interactions should be added to ``sink_iact.h/runner_iact_nonsym_sinks_gas_density()``.
Once the contributions of all neighbouring gas particles have been calculated, the density calculation is completed by the sink density ghost task. You can set what this task does with the functions ``sink_end_density()`` and ``sink_prepare_swallow()`` in ``sink.h``.
The ``sink_end_density()`` function completes the calculation of the smoothing length (coming soon), and this is where you can finish density-based calculations by e.g. dividing mass-weighted contributions to the velocity field by the total density in the kernel. For examples of this, see the equivalent task for the black hole particles.
The ``sink_prepare_swallow()`` task is where you can calculate density-based quantities that you might need to use in swallowing interactions later. For example, a Bondi-Hoyle accretion prescription should calculate an accretion rate and target mass to be accreted here.
Gas and sink flagging: finding whom to eat
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before accreting the gas/sink particles, the sink needs to look for eligible particles. The gas swallow interactions are performed within ``runner_iact_nonsym_sinks_gas_swallow()`` and the sink swallow in ``runner_iact_nonsym_sinks_sink_swallow()``.
Gas and sink swallowing: updating the sink properties
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When the sink swallows gas particles, it updates its internal properties based on the gas particles' properties. The ``sink_swallow_part()`` function takes care of this.
Similarly, when the sink swallows sink particles, it updates its properties from the to-be-swallowed sink particles. The ``sink_swallow_sink()`` performs the update.
There is no more to it than that: the code properly removes the swallowed particles.
Star formation
~~~~~~~~~~~~~~
The most important function is ``sink_spawn_star()``. It controls whether the code should continue creating stars and must return 0 or 1.
``sink_copy_properties_to_star()`` does what its name suggests. This function is also responsible for adequately initialising the stars' properties. In GEAR, we give new positions and velocities within this function.
The following three functions allow you to update your sink particles (e.g. their masses) before, during and after the star formation loop: ``sink_update_sink_properties_before_star_formation()``, ``sink_update_sink_properties_during_star_formation()`` and ``sink_update_sink_properties_after_star_formation()``.
These functions are located in ``sink/Default/sink.h``.
@@ -15,16 +15,61 @@ instead of, the lossless gzip compression filter.

**These compression filters are lossy, meaning that they modify the
data written to disk**

.. warning::
    The filters will reduce the accuracy of the data stored. No check is
    made inside SWIFT to verify that the applied filters make sense. Poor
    choices can lead to all the values of a given array being reduced to 0 or
    Inf, or to too much accuracy being lost for the data to be useful. The
    onus is entirely on the user to choose wisely how they want to compress
    their data.

The filters are not applied when using parallel-hdf5.

The name of any filter applied is carried by each individual field in
the snapshot using the meta-data attribute ``Lossy compression
filter``.

.. warning::
    Starting with HDF5 version 1.14.4, filters which compress the data
    by more than 2x are flagged as problematic (see their
    `doc <https://docs.hdfgroup.org/hdf5/v1_14/group___f_a_p_l.html#gafa8e677af3200e155e9208522f8e05c0>`_
    ). SWIFT can nevertheless write files with them by setting the
    appropriate file-level flags. However, some tools (such as
    ``h5py``) may *not* be able to read these fields.

The available filters are listed below.
N-bit filters for long long integers
------------------------------------

The N-bit filter takes a ``long long`` and saves only the most
significant N bits.

This can be used in cases similar to the particle IDs. For instance,
if they cover the range :math:`[1, 10^{10}]` then 64 bits is too many
and a lot of disk space is wasted storing the 0s. In this case
:math:`\left\lceil{\log_2(10^{10})}\right\rceil + 1 = 35` bits are
sufficient (the extra "+1" is for the sign bit).

SWIFT implements 6 variants of this filter:

* ``Nbit32`` stores the 32 most significant bits (Numbers up to
  :math:`2\times10^{9}`, comp. ratio: 2)
* ``Nbit36`` stores the 36 most significant bits (Numbers up to
  :math:`3.4\times10^{10}`, comp. ratio: 1.78)
* ``Nbit40`` stores the 40 most significant bits (Numbers up to
  :math:`5.4\times10^{11}`, comp. ratio: 1.6)
* ``Nbit44`` stores the 44 most significant bits (Numbers up to
  :math:`8.7\times10^{12}`, comp. ratio: 1.45)
* ``Nbit48`` stores the 48 most significant bits (Numbers up to
  :math:`1.4\times10^{14}`, comp. ratio: 1.33)
* ``Nbit56`` stores the 56 most significant bits (Numbers up to
  :math:`3.6\times10^{16}`, comp. ratio: 1.14)
Note that if the data written to disk requires more than the N
bits then part of the information written to the snapshot will be
lost. SWIFT **does not apply any verification** before applying the
filter.
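For example, the bit count quoted above can be reproduced with a small C
helper (our own illustration, not part of SWIFT):

.. code-block:: c

    #include <math.h>
    #include <stdio.h>

    /* Bits needed by the N-bit filter to store IDs up to max_id without
     * loss: ceil(log2(max_id)) + 1, the "+1" being the sign bit. */
    static int nbit_bits_needed(const double max_id) {
      return (int)ceil(log2(max_id)) + 1;
    }

    int main(void) {
      /* IDs up to 1e10 need 35 bits, matching the example above. */
      printf("bits needed for 1e10: %d\n", nbit_bits_needed(1e10));
      return 0;
    }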
Scaling filters for floating-point numbers
------------------------------------------

@@ -67,6 +112,8 @@ SWIFT implements 4 variants of this filter:

* ``DScale1`` scales by :math:`10^1`
* ``DScale2`` scales by :math:`10^2`
* ``DScale3`` scales by :math:`10^3`
* ``DScale4`` scales by :math:`10^4`
* ``DScale5`` scales by :math:`10^5`
* ``DScale6`` scales by :math:`10^6`

An example application is to store the positions with ``pc`` accuracy in
@@ -74,8 +121,8 @@ simulations that use ``Mpc`` as their base unit by using the ``DScale6``
filter.

The compression rate of these filters depends on the data. On an
EAGLE-like simulation (100 Mpc box), compressing the positions from ``Mpc`` to
``pc`` (via ``DScale6``) leads to a rate of around 2.2x.
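As a worked example of the scaling itself (our illustration; see the HDF5
scale-offset filter documentation for the exact rounding behaviour), with
``DScale6`` a coordinate of :math:`1.2345678901\,\rm{Mpc}` is multiplied by
:math:`10^6` and stored as the rounded integer :math:`1234568`, i.e. the
value is kept to the nearest :math:`10^{-6}\,\rm{Mpc} = 1\,\rm{pc}`.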
Modified floating-point representation filters
----------------------------------------------

@@ -109,7 +156,7 @@ but with 0s in the bits of the mantissa that were not stored on disk, hence

changing the result from what was stored originally before compression.

These filters offer a fixed compression ratio and a fixed relative
accuracy. The available options in SWIFT for a ``float`` (32 bits) output are:

+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+
@@ -126,15 +173,34 @@ accuracy. The available options in SWIFT are:

| ``HalfFloat``   | 10           | 5            | 3.31 digits | :math:`[6.1\times 10^{-5}, 6.5\times 10^{4}]`     | 2x                |
+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+

The same for a ``double`` (64 bits) output:

+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+
| Filter name     | :math:`n(a)` | :math:`n(b)` | Accuracy    | Range                                             | Compression ratio |
+=================+==============+==============+=============+===================================================+===================+
| No filter       | 52           | 11           | 15.9 digits | :math:`[2.2\times 10^{-308}, 1.8\times 10^{308}]` | ---               |
+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+
| ``DMantissa21`` | 21           | 11           | 6.62 digits | :math:`[2.2\times 10^{-308}, 1.8\times 10^{308}]` | 1.93x             |
+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+
| ``DMantissa13`` | 13           | 11           | 4.21 digits | :math:`[2.2\times 10^{-308}, 1.8\times 10^{308}]` | 2.56x             |
+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+
| ``DMantissa9``  | 9            | 11           | 3.01 digits | :math:`[2.2\times 10^{-308}, 1.8\times 10^{308}]` | 3.05x             |
+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+

The accuracy given in the table corresponds to the number of decimal digits
that can be correctly stored. The "no filter" row is displayed for
comparison purposes.
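The quoted accuracies follow directly from the mantissa size: with
:math:`n(a)` stored mantissa bits plus the implicit leading bit, the number
of reliable decimal digits is roughly :math:`\left(n(a)+1\right)\log_{10}(2)`;
for instance, ``DMantissa21`` gives :math:`22\times\log_{10}(2)\approx 6.62`
digits.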
In the first table, the first two filters are useful to keep the same range as a
standard ``float`` but with a reduced accuracy of 3 or 4 decimal digits. The last
two are the two standard reduced-precision options fitting within 16 bits: one
with a much reduced relative accuracy and one with a much reduced representable
range.

The compression filters for the ``double`` quantities are useful if the values one
wants to store fall outside the exponent range of ``float`` numbers but only a
lower relative precision is necessary.

An example application is to store the densities with the ``FMantissa9``
filter as we rarely need more than 3 decimal digits of accuracy for this
@@ -5,7 +5,8 @@

Output List
~~~~~~~~~~~

In the sections ``Snapshots``, ``Statistics``, ``StructureFinding``,
``LineOfSight``, ``PowerSpectrum``, and ``FOF`` you can
specify the options ``output_list_on`` and ``output_list``, which receive an int
and a filename. The ``output_list_on`` option enables or disables the output
list, and ``output_list`` is the filename containing the output times. With the file
@@ -43,6 +44,12 @@ straight after having read the ICs. Similarly, SWIFT will also *not*

write a snapshot at the end of a simulation unless a snapshot at the
final time is specified in the list.

Note that if a simulation is restarted using check-point files, the
list of outputs will be re-read. This means that it must be found on
the disk at the same place as it was when the simulation was first
started. It also implies that the content of the file can be altered
if the need for additional snapshots suddenly arises.
.. _Output_selection_label:

Output Selection
~~~~~~~~~~~~~~~~

@@ -54,7 +61,7 @@ available for a given configuration of SWIFT by running

output.yml``. The file generated contains the list of fields that a
simulation running with this config would output in each snapshot. It
also lists the description string of each field and the unit
conversion string to go from internal co-moving units to physical
CGS. Entries in the file look like:

.. code:: YAML

    SmoothingLengths_Gas: on  # Co-moving smoothing lengths (FWHM of the kernel) of the particles : a U_L [ cm ]
    ...

This can also be used to set the outputs produced by the
:ref:`fof_stand_alone_label`.

For cosmological simulations, users can optionally add the ``--cosmology`` flag
to generate the field names appropriate for such a run.
@@ -111,7 +121,6 @@ example, look like:

    Masses_Gas: off
    Velocities_Gas: DScale1
    Densities_Gas: FMantissa9

For convenience, there is also the option to set a default output status for
@@ -122,7 +131,7 @@ field for each particle type:

.. code:: YAML

    Default:
      Standard_Gas: off
      Standard_DM: off
      Standard_DMBackground: off

@@ -130,6 +139,39 @@ field for each particle type:

      Standard_BH: on  # Not strictly necessary, on is already the default
Additionally, users can use the different sections to specify an alternative
base name and sub-directory for the snapshots corresponding to a given
selection:

.. code:: YAML

    Default:
      basename: bh
      subdir: snip

This will put the outputs corresponding to this selection into
a sub-directory called ``snip`` and have the files themselves called
``bh_0000.hdf5``, where the number corresponds to the global number of
snapshots. The counter is global and is not reset for each type of selection.

If the basename or sub-directory keywords are omitted then the code will use the
default values specified in the ``Snapshots`` section of the main parameter file.

The sub-directories are created when writing the first snapshot of a given
category; the onus is hence on the user to ensure correct writing permissions
ahead of that time.

Finally, it is possible to specify individual sub-sampling ratios for each
output selection:

.. code:: YAML

    Default:
      subsample: [0, 1, 0, 0, 0, 0, 1]  # Sub-sample the DM and neutrinos
      subsample_fractions: [0, 0.01, 0, 0, 0, 0, 0.1]

If these keywords are omitted then the code will use the default values
specified in the ``Snapshots`` section of the main parameter file.
Combining Output Lists and Output Selection
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -161,6 +203,9 @@ cousins. To do this, we will define two top-level sections in our

    Metal_Mass_Fractions_Gas: off
    Element_Mass_Fractions_Gas: off
    Densities_Gas: FMantissa9
    basename: snip
    subsample: [0, 1, 0, 0, 0, 0, 1]  # Sub-sample the DM and neutrinos
    subsample_fractions: [0, 0.01, 0, 0, 0, 0, 0.1]
    ...

To then select which outputs are 'snapshots' and which are 'snipshots', you
@@ -179,3 +224,38 @@ This will enable your simulation to perform partial dumps only at the outputs

labelled as ``Snipshot``. The name of the output selection that corresponds
to your choice in the output list will be written to the snapshot header as
``Header/SelectOutput``.

Note that if the name used in the ``Select Output`` column does not
exist as a section in the output selection YAML file, SWIFT will exit
with an error message.
Using non-regular snapshot numbers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In some cases it may be helpful to have snapshot numbers that do not
simply increase by one each time. This could be used to encode the
simulation time in the filename, for instance. To achieve this, a third
column can be added to the output list, giving the snapshot labels to
use for each output::

    # Redshift, Select Output, Label
    100.0, Snapshot, 100
    90.0, Snapshot, 90
    1.0, Snapshot, 1
    ...

The label has to be an integer. This will lead to the following
snapshots being produced:

.. code:: bash

    snap_100.hdf5
    snap_90.hdf5
    snap_1.hdf5

This assumes the snapshot basename (either global or set for the
``Snapshot`` output selection) was set to ``snap``.

Note that to specify labels, the ``Select Output`` column needs to be
specified.
@@ -33,9 +33,9 @@ parameters:

.. code:: YAML

    Cosmology:      # Planck13
      Omega_cdm:    0.2587481
      Omega_lambda: 0.693
      Omega_b:      0.0482519
      h:            0.6777
      a_begin:      0.0078125  # z = 127
@@ -53,13 +53,15 @@ a compulsory parameter is missing an error will be raised at

start-up.

Finally, SWIFT outputs two YAML files at the start of a run. The first one,
``used_parameters.yml``, contains all the parameters that were used for this
run, **including all the optional parameters left unspecified with their
default values**. This file can be used to start an exact copy of the run. The
second file, ``unused_parameters.yml``, contains all the values that were not
read from the parameter file. This can be used to simplify the parameter file
or check that nothing important was ignored (for instance because the code is
not configured to use some options). Note that on restart a new file
``used_parameters.yml.stepno`` is created and any changed parameters will be
written to it.
The rest of this page describes all the SWIFT parameters, split by
section. A list of all the possible parameters is kept in the file

@@ -146,7 +148,7 @@ cosmological model. The expanded :math:`\Lambda\rm{CDM}` parameters governing the

background evolution of the Universe need to be specified here. These are:

* The reduced Hubble constant: :math:`h`: ``h``,
* The cold dark matter density parameter :math:`\Omega_{\rm cdm}`: ``Omega_cdm``,
* The cosmological constant density parameter :math:`\Omega_\Lambda`: ``Omega_lambda``,
* The baryon density parameter :math:`\Omega_b`: ``Omega_b``,
* The radiation density parameter :math:`\Omega_r`: ``Omega_r``.
@@ -171,24 +173,49 @@ w_0 + w_a (1 - a)`. The two parameters in the YAML file are:

If unspecified these parameters default to the default
:math:`\Lambda\rm{CDM}` values of :math:`w_0 = -1` and :math:`w_a = 0`.
The radiation density :math:`\Omega_r` can also be specified by setting
an alternative optional parameter:
* The number of ultra-relativistic degrees of freedom :math:`N_{\rm{ur}}`:
``N_ur``.
The radiation density :math:`\Omega_r` is then automatically inferred from
:math:`N_{\rm{ur}}` and the present-day CMB temperature
:math:`T_{\rm{CMB},0}=2.7255` Kelvin. This parametrization cannot
be used together with :math:`\Omega_r`. If neither parameter is used, SWIFT
defaults to :math:`\Omega_r = 0`. Note that :math:`N_{\rm{ur}}` differs from
:math:`N_{\rm{eff}}`, the latter of which also includes massive neutrinos.
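For reference, the usual convention (assumed here; see the cosmology theory
documents for the exact implementation) is
:math:`\Omega_r = \Omega_\gamma\left(1 + \frac{7}{8}\left(\frac{4}{11}\right)^{4/3} N_{\rm ur}\right)`,
where :math:`\Omega_\gamma` is the photon density implied by
:math:`T_{\rm{CMB},0}` and :math:`h`.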
Massive neutrinos can be included by specifying the optional parameters:
* The number of massive neutrino species :math:`N_{\nu}`: ``N_nu``,
* A comma-separated list of neutrino masses in eV: ``M_nu_eV``,
* A comma-separated list of neutrino degeneracies: ``deg_nu``,
* The present-day neutrino temperature :math:`T_{\nu,0}`: ``T_nu_0``.
When including massive neutrinos, only ``N_nu`` and ``M_nu_eV`` are necessary.
By default, SWIFT will assume non-degenerate species and
:math:`T_{\nu,0}=(4/11)^{1/3}T_{\rm{CMB},0}`. Neutrinos do not contribute to
:math:`\Omega_m = \Omega_{\rm{cdm}} + \Omega_b` in our conventions.
For a Planck+13 cosmological model (ignoring radiation density as is For a Planck+13 cosmological model (ignoring radiation density as is
commonly done) and running from :math:`z=127` to :math:`z=0`, one would hence commonly done) and running from :math:`z=127` to :math:`z=0`, one would hence
use the following parameters: use the following parameters:
.. code:: YAML .. code:: YAML
Cosmology: Cosmology: # Planck13 (EAGLE flavour)
a_begin: 0.0078125 # z = 127 a_begin: 0.0078125 # z = 127
a_end: 1.0 # z = 0 a_end: 1.0 # z = 0
h: 0.6777 h: 0.6777
Omega_m: 0.307 Omega_cdm: 0.2587481
Omega_lambda: 0.693 Omega_lambda: 0.693
Omega_b: 0.0482519 Omega_b: 0.0482519
Omega_r: 0. # (Optional) Omega_r: 0. # (Optional)
w_0: -1.0 # (Optional) w_0: -1.0 # (Optional)
w_a: 0. # (Optional) w_a: 0. # (Optional)
When running a non-cosmological simulation (i.e. without the ``--cosmology`` run-time
flag) this section of the YAML file is entirely ignored.

.. _Parameters_gravity:
The accuracy of the gravity calculation is governed by the following four parameters:

* The accuracy criterion used in the adaptive MAC: :math:`\epsilon_{\rm fmm}`: ``epsilon_fmm``,
* The time-step size pre-factor :math:`\eta`: ``eta``,
The first three parameters govern the way the Fast-Multipole method tree-walk is
done (see the theory documents for full details). The ``MAC`` parameter can
take three values: ``adaptive``, ``geometric``, or ``gadget``. In the first
case, the tree recursion decision is based on the estimated accelerations that a
given tree node will produce, trying to recurse to levels where the fractional
contribution of the accelerations to the cell is less than :math:`\epsilon_{\rm
fmm}`. In the second case, a fixed Barnes-Hut-like opening angle
:math:`\theta_{\rm cr}` is used. The final case corresponds to the choice made
in the Gadget-4 code. It is an implementation using eq. 36 of `Springel et
al. (2021) <https://adsabs.harvard.edu/abs/2021MNRAS.506.2871S>`_.
The time-step of a given particle is given by :math:`\Delta t =
\sqrt{2\eta\epsilon_i/|\overrightarrow{a}_i|}`, where

<http://adsabs.harvard.edu/abs/2003MNRAS.338...14P>`_ recommend using
:math:`\eta=0.025`.
Two further parameters determine when the gravity tree is reconstructed:

* The tree rebuild frequency: ``rebuild_frequency``.
* The fraction of active particles to trigger a rebuild:
``rebuild_active_fraction``.
The tree rebuild frequency is an optional parameter defaulting to
:math:`0.01`. It is used to trigger the re-construction of the tree every
time a fraction of the particles have been integrated (kicked) forward in
time. The second parameter is also optional and determines a separate rebuild
criterion, based on the fraction of particles that is active at the
beginning of a step. This can be seen as a forward-looking version of the
first criterion, which can be useful for runs with very fast particles.
The second criterion is not used for values :math:`>1`, which is the default
assumption.
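As a sketch, enabling both rebuild criteria could look as follows (the value of
the second parameter is illustrative; recall that values :math:`>1` disable it):

.. code:: YAML

  Gravity:
    rebuild_frequency:       0.01   # Default value
    rebuild_active_fraction: 0.1    # Rebuild when 10% of the particles are active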
The last tree-related parameters are:
* Whether or not to use the approximate gravity from the FMM tree below the
  softening scale: ``use_tree_below_softening`` (default: 0)
* Whether or not the truncated force estimator in the adaptive tree-walk
  considers the exponential mesh-related cut-off:
  ``allow_truncation_in_MAC`` (default: 0)
These parameters default to good all-around choices. See the
theory documentation about their exact effects.
Simulations using periodic boundary conditions use additional parameters for the
Particle-Mesh part of the calculation. The last five are optional:

* The number of cells along each axis of the mesh :math:`N`: ``mesh_side_length``,
* Whether or not to use a distributed mesh when running over MPI: ``distributed_mesh`` (default: ``0``),
* Whether or not to use local patches instead of direct atomic operations to
write to the mesh in the non-MPI case (this is a performance tuning
parameter): ``mesh_uses_local_patches`` (default: ``1``),
* The mesh smoothing scale in units of the mesh cell-size :math:`a_{\rm
  smooth}`: ``a_smooth`` (default: ``1.25``),
* The scale above which the short-range forces are assumed to be 0 (in units of
For most runs, the default values can be used. Only the number of cells along
each axis needs to be specified. The remaining three values are best described
in the context of the full set of equations in the theory documents.
By default, SWIFT will replicate the mesh on each MPI rank. This means that a
single MPI reduction is used to ensure all ranks have a full copy of the density
field. Each node then solves for the potential in Fourier space independently of
the others. This is a fast option for small meshes. This technique is limited to
meshes with sizes :math:`N<1291` due to the limitations of MPI. Larger meshes need
to use the distributed version of the algorithm. The code then also needs to be
compiled with ``--enable-mpi-mesh-gravity``. That algorithm is slower for small
meshes but has no limits on the size of the mesh and truly huge Fourier
transforms can be performed without any problems. The only limitation is the
amount of memory on each node. The algorithm will use ``N^3 * 8 * 2 / M`` bytes
on each of the ``M`` MPI ranks.
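As a concrete illustration of this memory requirement, a :math:`N=2048` mesh
distributed over :math:`M=64` ranks would need :math:`2048^3 \times 16 / 64
\approx 2.1 \times 10^{9}` bytes, i.e. about 2 GB, on each rank.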
As a summary, here are the values used for the EAGLE :math:`100^3~{\rm Mpc}^3`
simulation:
    MAC:                       adaptive
    theta_cr:                  0.6
    epsilon_fmm:               0.001
    mesh_side_length:          2048
    distributed_mesh:          0
    comoving_DM_softening:     0.0026994  # 0.7 proper kpc at z=2.8.
    max_physical_DM_softening: 0.0007     # 0.7 proper kpc
    comoving_baryon_softening: 0.0026994  # 0.7 proper kpc at z=2.8.
can be either drawn randomly by setting the parameter ``generate_random_ids``

newly generated IDs do not clash with any other pre-existing particle. If this
option is set to :math:`0` (the default setting) then the new IDs are created in
increasing order from the maximal pre-existing value in the simulation, hence
preventing any clash. Finally, if the option
``particle_splitting_log_extra_splits`` is set, the code will log all the splits
that go beyond the maximal allowed (typically 64) in a file so that the split tree
for these particles can still be reconstructed.
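As a sketch, and assuming these options live in the same section as the
particle-splitting parameters shown in the full example below, logging the
extra splits would read:

.. code:: YAML

  SPH:
    particle_splitting:                  1
    particle_splitting_mass_threshold:   5e-3   # U_M
    particle_splitting_log_extra_splits: 1      # Log splits beyond the maximal allowed number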
The final set of parameters in this section determines the initial and minimum
temperatures of the particles.

The full section to start a typical cosmological run would be:
    minimal_temperature:               100    # U_T
    H_mass_fraction:                   0.755
    H_ionization_temperature:          1e4    # U_T
    particle_splitting:                1
    particle_splitting_mass_threshold: 5e-3   # U_M
.. _Parameters_Stars:

Stars
-----
The ``Stars`` section is used to set parameters that describe the Stars
calculations when doing feedback or enrichment. Note that if stars only act
gravitationally (i.e. SWIFT is run *without* ``--feedback``) no parameters
in this section are used.

The first four parameters are related to the neighbour search:
specified, SWIFT will start and use the birth times specified in the
ICs. If no values are given in the ICs, the stars' birth times will be
zeroed, which can cause issues depending on the type of run performed.
.. _Parameters_Sinks:
Sinks
-----
Currently, there are two models for the sink particles: the Default model and
the GEAR one. Their parameters are described below. To choose a model,
configure the code with ``--with-sink=<model>``, where ``<model>`` can be
``none`` or ``GEAR``. To run with sink particles, add the option ``--sinks``.

Below you will find the description of ``none``, which is the default model.
For the ``GEAR`` model, please refer to :ref:`sink_GEAR_model`.

By default, the code is configured with ``--with-sink=none``. The
``DefaultSink`` section is then used to set the parameters describing the
sinks in this model. The only parameter is the sink accretion radius (also
called cut-off radius): ``cut_off_radius``.
Note that this model does not create sink particles or accrete gas.
The full section is:
.. code:: YAML

  DefaultSink:
    cut_off_radius: 1e-3  # Cut-off radius of the sink particles (in internal units). This parameter should be adapted to the resolution.
.. _Parameters_time_integration:

Time Integration
----------------
the start and end times or scale factors from the parameter file.

* Dimensionless pre-factor of the maximal allowed displacement:
  ``max_dt_RMS_factor`` (default: ``0.25``)
* Whether or not only the gas particle masses should be considered for
the baryon component of the calculation: ``dt_RMS_use_gas_only`` (default: ``0``)
These values rarely need altering. The second parameter is only
meaningful if a subgrid model produces star (or other) particles with
masses substantially smaller than the gas masses. See the theory
documents for the precise meanings.
A full time-step section for a non-cosmological run would be:

Whilst for a cosmological run, one would need:

.. code:: YAML

  TimeIntegration:
    dt_max:              1e-4
    dt_min:              1e-10
    max_dt_RMS_factor:   0.25   # Default optional value
    dt_RMS_use_gas_only: 0      # Default optional value
.. _Parameters_ICs:
Finally, SWIFT also offers these options:

* Whether to replicate the box along each axis: ``replicate`` (default: ``1``).
* Whether to re-map the IDs to the range ``[0, N]`` and hence discard
  the original IDs from the IC file: ``remap_ids`` (default: ``0``).
The shift is expressed in internal units and will be written to the header of
the snapshots. The option to replicate the box is especially useful for
weak-scaling tests. When set to an integer >1, the box size is multiplied by
this integer along each axis and the particles are duplicated and shifted so
as to create exact copies of the simulation volume.
The remapping of IDs is especially useful in combination with the option to
generate increasing IDs when splitting gas particles as it allows for the
creation of a compact range of IDs beyond which the new IDs generated by
splitting can be safely drawn from. Note that, when ``remap_ids`` is
switched on, the ICs do not need to contain a ``ParticleIDs`` field.
Both replication and remapping explicitly overwrite any particle IDs
provided in the initial conditions. This may cause problems for runs
with neutrino particles, as some models assume that the particle
ID was used as a random seed for the Fermi-Dirac momentum. In this case,
the ``Neutrino:generate_ics`` option can be used to generate new initial
conditions based on the replicated or remapped IDs. See :ref:`Neutrinos`
for details.
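A minimal sketch combining these options (using the section names referred to
in this page and in :ref:`Neutrinos`) might look like:

.. code:: YAML

  InitialConditions:
    remap_ids: 1      # Overwrites any particle IDs provided in the ICs

  Neutrino:
    generate_ics: 1   # Draw new Fermi-Dirac momenta consistent with the new IDs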
* Name of a HDF5 group to copy from the ICs file(s): ``metadata_group_name`` (default: ``ICs_parameters``)
If the initial conditions generator writes a HDF5 group with the parameters
used to make the initial conditions, this group can be copied through to
the output snapshots by specifying its name.
The full section to start a DM+hydro run from Gadget DM-only ICs would
be:
    cleanup_velocity_factors:  1
    generate_gas_in_ics:       1
    cleanup_smoothing_lengths: 1
    metadata_group_name:       ICs_parameters
.. _Parameters_constants:

Physical Constants
------------------
For some idealised tests it can be useful to overwrite the value of some
physical constants; in particular the value of the gravitational constant and
vacuum permeability. SWIFT offers optional parameters to overwrite the values
of :math:`G_N` and :math:`\mu_0`.
.. code:: YAML

  PhysicalConstants:
    G:    1
    mu_0: 1
Note that this sets :math:`G` to the specified value in the internal system
of units. Setting a value of `1` when using the system of units (10^10 Msun,
Mpc, km/s) will mean that :math:`G_N=1` in these units [#f2]_ instead of the
normal value :math:`G_N=43.00927`. The same applies to :math:`\mu_0`.
This option is only used for specific tests and debugging. This entire
section of the YAML file can typically be left out. More constants may

parameter is the base name that will be used for all the outputs in the run:
This name will then be appended by an underscore and 4 digits followed by
``.hdf5`` (e.g. ``base_name_1234.hdf5``). The 4 digits are used to label the
different outputs, starting at ``0000``. In the default setup the digits simply
increase by one for each snapshot. (See :ref:`Output_list_label` to change that
behaviour.)
The time of the first snapshot is controlled by the two following options:

The location and naming of the snapshots is altered by the following options:

* Directory in which to write snapshots: ``subdir``.
  (default: empty string).
If this is set then the full path to the snapshot files will be generated
by taking this value and appending a slash and then the snapshot file name
described above - e.g. ``subdir/base_name_1234.hdf5``. The directory is
created if necessary. Note, however, that the sub-directories are created
when writing the first snapshot of a given category; the onus is hence on
the user to ensure correct writing permissions ahead of that time. Any
VELOCIraptor output produced by the run is also written to this directory.
When running the code with structure finding activated, it is often
useful to have a structure catalog written at the same simulation time

in the corresponding section of the YAML parameter file. When running with
_more_ calls to VELOCIraptor than snapshots, gaps between snapshot numbers will
be created to accommodate the intervening VELOCIraptor-only catalogs.
It is also possible to run the FOF algorithm just before writing each snapshot.
* Run FOF every time a snapshot is dumped: ``invoke_fof``
(default: ``0``).
See the section :ref:`Parameters_fof` for details of the FOF parameters.
It is also possible to run the power spectrum calculation just before writing
each snapshot.
* Run PS every time a snapshot is dumped: ``invoke_ps``
(default: ``0``).
See the section :ref:`Parameters_ps` for details of the power spectrum parameters.
When running over MPI, users have the option to split the snapshot over more
than one file. This can be useful if the parallel i/o on a given system is slow
but has the drawback of producing many files per time slice. This is activated

also that unlike other codes, SWIFT does *not* let the users choose the number of
individual files over which a snapshot is distributed. This is set by the number
of MPI ranks used in a given run. The individual files of snapshot 1234 will
have the name ``base_name_1234.x.hdf5`` where, when running on N MPI ranks, ``x``
runs from 0 to N-1. If HDF5 1.10.0 or a more recent version is available,
an additional meta-snapshot named ``base_name_1234.hdf5`` will be produced
that can be used as if it was a non-distributed snapshot. In this case, the
HDF5 library itself can figure out which file is needed when manipulating the
snapshot.
On Lustre filesystems [#f4]_ it is important to properly stripe files to
achieve a good writing and reading speed. If the parameter
``lustre_OST_checks`` is set and the lustre API is available, SWIFT will
determine the number of OSTs available and rank these by free space; it will
then set the `stripe count` of each file to `1` and choose an OST
`offset` so each rank writes to a different OST, unless there are more ranks
than OSTs, in which case the assignment wraps. In this way OSTs should be
filled evenly and written to using an optimal access pattern.
If the parameter is not set then the files will be created with the default
system policy (or whatever was set for the directory where the files are
written). This parameter has no effect on non-Lustre file systems.
Other parameters are also provided to handle the cases when individual OSTs do
not have sufficient free space to write a file (``lustre_OST_free``) and
when OSTs are closed for administrative reasons (``lustre_OST_test``), in which
case they cannot be written to. This is important as the `offset` assignment in
this case is not used by lustre, which picks the next writable OST; in our
scheme such OSTs would then be used more often than intended.
* Use the lustre API to assign a stripe and offset to the distributed snapshot
files:
``lustre_OST_checks`` (default: ``0``)
* Do not use OSTs that do not have a certain amount of free space in MiB.
Zero disables and -1 activates a guess based on the size of the process:
``lustre_OST_free`` (default: ``0``)
* Check OSTs can be written to and remove those from consideration:
``lustre_OST_test`` (default: ``0``)
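As a sketch, switching on all three checks for the snapshot files (the values
are illustrative) would read:

.. code:: YAML

  Snapshots:
    lustre_OST_checks: 1    # Assign stripes and offsets via the lustre API
    lustre_OST_free:   -1   # Guess the required free space from the process size
    lustre_OST_test:   1    # Skip OSTs that cannot be written to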
Users can optionally ask to randomly sub-sample the particles in the snapshots.
This is specified for each particle type individually:
* Whether to switch on sub-sampling: ``subsample``
* Fraction of particles of each type to keep in the outputs: ``subsample_fraction``
These are arrays of 7 elements defaulting to seven 0s if left unspecified. Each
entry corresponds to the particle type used in the initial conditions and
snapshots [#f3]_. The ``subsample`` array is made of ``0`` and ``1`` to indicate which
particle types to subsample. The other array is made of floats between ``0`` and ``1``
indicating the fraction of particles to keep in the outputs. Note that the
particles are selected randomly for each individual
snapshot. Particles can hence not be traced back from output to output when this
is switched on.
Users can optionally specify the level of compression used by the HDF5 library
using the parameter:

the SHUFFLE filter is also applied to get higher compression rates. Note that up
until HDF5 1.10.x this option is not available when using the MPI-parallel
version of the i/o routines.
When applying lossy compression (see :ref:`Compression_filters`), particles may
end up with positions that are marginally beyond the edge of the simulation
volume. A small vector perpendicular to the edge can be added to the particles
to alleviate this issue. This can be switched on by setting the parameter
``use_delta_from_edge`` (default: ``0``) to ``1`` and the buffer size from the
edge ``delta_from_edge`` (default: ``0.``). An example would be when using
Mega-parsec as the base unit and using a filter rounding to the nearest 10
parsec (``DScale5``). Adopting a buffer of 10pc (``delta_from_edge: 1e-5``) would
alleviate any possible issue of seeing particles beyond the simulation volume in
the snapshots. In all practical applications the shift would be much smaller
than the softening.
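The worked example above would, as a sketch, translate into:

.. code:: YAML

  Snapshots:
    use_delta_from_edge: 1
    delta_from_edge:     1e-5   # i.e. 10 pc when Mega-parsec is the base length unit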
Users can run a program after a snapshot is dumped to disk using the following
parameters:
* Use the extra command after snapshot creation: ``run_on_dump`` (default: ``0``)
* Command to run after snapshot creation: ``dump_command`` (default: nothing)
These are particularly useful should you wish to submit a job for postprocessing
the snapshot after it has just been created. Your script will be invoked with
two parameters, the snapshot base-name, and the snapshot number that was just
output as a zero-padded integer. For example, if the base-name is "eagle" and
snapshot 7 was just dumped, with ``dump_command`` set to ``./postprocess.sh``,
then SWIFT will run ``./postprocess.sh eagle 0007``.
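Continuing the example from this paragraph, a sketch of the corresponding
parameters would be:

.. code:: YAML

  Snapshots:
    run_on_dump:  1
    dump_command: ./postprocess.sh   # SWIFT will then run e.g.: ./postprocess.sh eagle 0007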
For some quantities, especially in the subgrid models, it can be advantageous to
start recording numbers at a fixed time before the dump of a snapshot. Classic
examples are an averaged star-formation rate or accretion rate onto BHs. For the
subgrid models that support it, the triggers can be specified by setting the
following parameters:
* for gas: ``recording_triggers_part`` (no default, array of size set by each subgrid model)
* for stars: ``recording_triggers_spart`` (no default, array of size set by each subgrid model)
* for BHs: ``recording_triggers_bpart`` (no default, array of size set by each subgrid model)
The time is specified in internal time units (See the :ref:`Parameters_units`
section) and a recording can be ignored by setting the parameter to ``-1``. Note
that the code will verify that the recording time is smaller than the gap
between consecutive snapshot dumps and, if the recording window is longer, it
will reduce it to the gap size between the snapshots.
Finally, it is possible to specify a different system of units for the snapshots
than the one that was used internally by SWIFT. The format is identical to the
one described above (See the :ref:`Parameters_units` section) and reads:

would have:
    time_first:          0.01
    delta_time:          0.005
    invoke_stf:          0
    invoke_fof:          1
    compression:         3
    distributed:         1
    lustre_OST_count:    48   # System has 48 Lustre OSTs to distribute the files over
    UnitLength_in_cgs:   1.   # Use cm in outputs
    UnitMass_in_cgs:     1.   # Use grams in outputs
    UnitVelocity_in_cgs: 1.   # Use cm/s in outputs
    UnitCurrent_in_cgs:  1.   # Use Ampere in outputs
    UnitTemp_in_cgs:     1.   # Use Kelvin in outputs
    subsample:           [0, 1, 0, 0, 0, 0, 1]        # Sub-sample the DM and neutrinos
    subsample_fraction:  [0, 0.01, 0, 0, 0, 0, 0.1]   # Write 1% of the DM parts and 10% of the neutrinos
    run_on_dump:         1
    dump_command:        ./submit_analysis.sh
    use_delta_from_edge: 1
    delta_from_edge:     1e-6  # Move particles away from the edge by 1e-6 of the length unit.
    recording_triggers_part:  [1.0227e-4, 1.0227e-5]  # Recording starts 100M and 10M years before a snapshot
    recording_triggers_spart: [-1, -1]                # No recording
    recording_triggers_bpart: [1.0227e-4, 1.0227e-5]  # Recording starts 100M and 10M years before a snapshot
Some additional specific options for the snapshot outputs are described in the
following pages:

* :ref:`Output_list_label` (to have snapshots not evenly spaced in time or with
  non-regular labels),
* :ref:`Output_selection_label` (to select what particle fields to write).
.. _Parameters_line_of_sight:
be processed by the ``SpecWizard`` tool

    range_when_shooting_down_y: 100.   # Range along the y-axis of LoS along y
    range_when_shooting_down_z: 100.   # Range along the z-axis of LoS along z
.. _Parameters_light_cone:
Light Cone Outputs
---------------------
One or more light cone outputs can be configured by including ``LightconeX`` sections
in the parameter file, where X is in the range 0-7. It is also possible to include a
``LightconeCommon`` section for parameters which are the same for all lightcones. The
parameters for each light cone are:
* Switch to enable or disable a lightcone: ``enabled``
This should be set to 1 to enable the corresponding lightcone or 0 to disable it.
Has no effect if specified in the LightconeCommon section.
* Directory in which to write light cone output: ``subdir``
All light cone output files will be written in the specified directory.
* Base name for particle and HEALPix map outputs: ``basename``.
Particles will be written to files ``<basename>_XXXX.Y.hdf5``, where XXXX numbers the files
written by a single MPI rank and Y is the MPI rank index. HEALPix maps are written to files
with names ``<basename>.shell_X.hdf5``, where X is the index of the shell. The basename must
be unique for each light cone so it cannot be specified in the LightconeCommon section.
See :ref:`lightcone_adding_outputs_label` for information on adding new output quantities.
* Location of the observer in the simulation box, in internal units: ``observer_position``
* Size of in-memory chunks used to store particles and map updates: ``buffer_chunk_size``
During each time step buffered particles and HEALPix map updates are stored in a linked
list of chunks of ``buffer_chunk_size`` elements. Additional chunks are allocated as needed.
The map update process is parallelized over chunks so the chunks should be small enough that
each MPI rank typically has more chunks than threads.
* Maximum amount of map updates (in MB) to send on each iteration: ``max_map_update_send_size_mb``
Flushing the map update buffer involves sending the updates to the MPI ranks with the affected
pixel data. Sending all updates at once can consume a large amount of memory so this parameter
allows updates to be applied over multiple iterations to reduce peak memory usage.
* Redshift range to output each particle type: ``z_range_for_<type>``
A two element array with the minimum and maximum redshift at which particles of type ``<type>``
will be output as they cross the lightcone. ``<type>`` can be Gas, DM, DMBackground, Stars, BH
or Neutrino. If this parameter is not present for a particular type then that type will not
be output.
* The number of buffered particles which triggers a write to disk: ``max_particles_buffered``
If an MPI rank has at least max_particles_buffered particles which have crossed the lightcone,
it will write them to disk at the end of the current time step.
* Size of chunks in the particle output file: ``hdf5_chunk_size``
This sets the HDF5 chunk size. Particle outputs must be chunked because the number of particles
which will be written out is not known when the file is created.
* Whether to use lossy compression in the particle outputs: ``particles_lossy_compression``
If this is 1 then the HDF5 lossy compression filter named in the definition of each particle
output field will be enabled. If this is 0 lossy compression is not applied.
* Whether to use lossless compression in the particle outputs: ``particles_gzip_level``
If this is non-zero the HDF5 deflate filter will be applied to lightcone particle output with
the compression level set to the specified value.
* HEALPix map resolution: ``nside``
* Name of the file with shell radii: ``radius_file``
This specifies the name of a file with the inner and outer radii of the shells used to make
HEALPix maps. It should be a text file with a one line header and then two comma separated columns
of numbers with the inner and outer radii. The units are determined by the header. The header must
be one of the following:
``# Minimum comoving distance, Maximum comoving distance``,
``# Minimum redshift, Maximum redshift``, or
``# Maximum expansion factor, Minimum expansion factor``. Comoving distances are in internal units.
The shells must be in ascending order of radius and must not overlap.
* Number of pending HEALPix map updates before the buffers are flushed: ``max_updates_buffered``
In MPI mode applying updates to the HEALPix maps requires communication and forces synchronisation
of all MPI ranks, so it is not done every time step. If any MPI rank has at least
``max_updates_buffered`` pending updates at the end of a time step, then all ranks will apply
their updates to the HEALPix maps.
* Which types of HEALPix maps to create: ``map_names_file``

This is the name of a file which specifies what quantities should be accumulated to HEALPix maps.
The possible map types are defined in the lightcone_map_types array in ``lightcone_map_types.h``.
See :ref:`lightcone_adding_outputs_label` if you'd like to add a new map type.

The file contains two columns: the first column is the name of the map type and the second is the
name of the compression filter to apply to it. See io_compression.c for the list of compression
filter names. Set the filter name to ``on`` to disable compression.

* Whether to distribute HEALPix maps over multiple files: ``distributed_maps``

If this is 0 then the code uses HDF5 collective writes to write each map to a single file. If this
is 1 then each MPI rank writes its part of the HEALPix map to a separate file.
* Whether to use lossless compression in the HEALPix map outputs: ``maps_gzip_level``
If this is non-zero the HDF5 deflate filter will be applied to the lightcone map output with
the compression level set to the specified value.
The following shows a full set of light cone parameters for the case where we're making two
light cones which only differ in the location of the observer:
.. code:: YAML

  LightconeCommon:

    # Common parameters
    subdir:                 lightcones
    buffer_chunk_size:      100000
    max_particles_buffered: 1000000
    hdf5_chunk_size:        10000

    # Redshift ranges for particle types
    z_range_for_Gas:          [0.0, 0.05]
    z_range_for_DM:           [0.0, 0.05]
    z_range_for_DMBackground: [0.0, 0.05]
    z_range_for_Stars:        [0.0, 0.05]
    z_range_for_BH:           [0.0, 0.05]
    z_range_for_Neutrino:     [0.0, 0.05]

    # Healpix map parameters
    nside:                       512
    radius_file:                 ./shell_radii.txt
    max_updates_buffered:        100000
    map_names_file:              map_names.txt
    max_map_update_send_size_mb: 1.0
    distributed_maps:            0

    # Compression options
    particles_lossy_compression: 0
    particles_gzip_level:        6
    maps_gzip_level:             6

  Lightcone0:
    enabled:           1
    basename:          lightcone0
    observer_position: [35.5, 78.12, 12.45]

  Lightcone1:
    enabled:           1
    basename:          lightcone1
    observer_position: [74.2, 10.80, 53.59]
An example of the radius file::

  # Minimum comoving distance, Maximum comoving distance
  0.0,   50.0
  50.0,  100.0
  150.0, 200.0
  200.0, 400.0
  400.0, 1000.0
An example of the map names file::

  TotalMass          on
  SmoothedGasMass    on
  UnsmoothedGasMass  on
  DarkMatterMass     on
.. _Parameters_eos:

Equation of State (EoS)
-----------------------

The ``EoS`` section contains options for the equations of state.
Multiple EoS can be used for :ref:`planetary`,
see :ref:`planetary_eos` for more information.
To enable one or multiple EoS, the corresponding ``planetary_use_*:``
flag(s) must be set to ``1`` in the parameter file for a simulation,
along with the path to any table files, which are set by the
``planetary_*_table_file:`` parameters.
For the (non-planetary) isothermal EoS, the ``isothermal_internal_energy:``
parameter sets the thermal energy per unit mass.

.. code:: YAML
  EoS:
    isothermal_internal_energy:    20.26784   # Thermal energy per unit mass for the case of isothermal equation of state (in internal units).
    barotropic_vacuum_sound_speed: 2e4        # Vacuum sound speed in the case of the barotropic equation of state (in internal units).
    barotropic_core_density:       1e-13      # Core density in the case of the barotropic equation of state (in internal units).
    # Select which planetary EoS material(s) to enable for use.
    planetary_use_idg_def:          0   # Default ideal gas, material ID 0
    planetary_use_Til_iron:         1   # Tillotson iron, material ID 100
    planetary_use_Til_granite:      1   # Tillotson granite, material ID 101
    planetary_use_Til_water:        0   # Tillotson water, material ID 102
    planetary_use_Til_basalt:       0   # Tillotson basalt, material ID 103
    planetary_use_HM80_HHe:         0   # Hubbard & MacFarlane (1980) hydrogen-helium atmosphere, material ID 200
    planetary_use_HM80_ice:         0   # Hubbard & MacFarlane (1980) H20-CH4-NH3 ice mix, material ID 201
    planetary_use_HM80_rock:        0   # Hubbard & MacFarlane (1980) SiO2-MgO-FeS-FeO rock mix, material ID 202
    planetary_use_SESAME_iron:      0   # SESAME iron 2140, material ID 300
    planetary_use_SESAME_basalt:    0   # SESAME basalt 7530, material ID 301
    planetary_use_SESAME_water:     0   # SESAME water 7154, material ID 302
    planetary_use_SS08_water:       0   # Senft & Stewart (2008) SESAME-like water, material ID 303
    planetary_use_ANEOS_forsterite: 0   # ANEOS forsterite (Stewart et al. 2019), material ID 400
    planetary_use_ANEOS_iron:       0   # ANEOS iron (Stewart 2020), material ID 401
    planetary_use_ANEOS_Fe85Si15:   0   # ANEOS Fe85Si15 (Stewart 2020), material ID 402
    # Tabulated EoS file paths.
    planetary_HM80_HHe_table_file:         ./EoSTables/HM80_HHe.txt
    planetary_HM80_ice_table_file:         ./EoSTables/HM80_ice.txt
    planetary_HM80_rock_table_file:        ./EoSTables/HM80_rock.txt
    planetary_SESAME_iron_table_file:      ./EoSTables/SESAME_iron_2140.txt
    planetary_SESAME_basalt_table_file:    ./EoSTables/SESAME_basalt_7530.txt
    planetary_SESAME_water_table_file:     ./EoSTables/SESAME_water_7154.txt
    planetary_SS08_water_table_file:       ./EoSTables/SS08_water.txt
    planetary_ANEOS_forsterite_table_file: ./EoSTables/ANEOS_forsterite_S19.txt
    planetary_ANEOS_iron_table_file:       ./EoSTables/ANEOS_iron_S20.txt
    planetary_ANEOS_Fe85Si15_table_file:   ./EoSTables/ANEOS_Fe85Si15_S20.txt
.. _Parameters_ps:
Power Spectra Calculation
-------------------------
SWIFT can compute a variety of auto- and cross-power spectra at user-specified
intervals. The behaviour of this output type is governed by the ``PowerSpectrum``
section of the parameter file. The calculation is performed on a regular grid
(typically of size 256^3) and foldings are used to extend the range probed to
smaller scales.
The options are:
* The size of the base grid to perform the PS calculation:
``grid_side_length``.
* The number of grid foldings to use: ``num_folds``.
* The factor by which to fold at each iteration: ``fold_factor`` (default: 4)
* The order of the window function: ``window_order`` (default: 3)
* Whether or not to correct the placement of the centre of the k-bins for small k values: ``shift_centre_small_k_bins`` (default: 1)
The window order sets the way the particle properties get assigned to the mesh.
Order 1 corresponds to the nearest-grid-point (NGP), order 2 to cloud-in-cell
(CIC), and order 3 to triangular-shaped-cloud (TSC). Higher-order schemes are not
implemented.
Finally, the quantities for which a PS should be computed are specified as a
list of pairs of values for the parameter ``requested_spectra``. Auto-spectra
are specified by using the same type for both pair members. The available values
are listed in the following table:
+---------------------+---------------------------------------------------+
| Name                | Description                                       |
+=====================+===================================================+
| ``matter``          | Mass density of all matter                        |
+---------------------+---------------------------------------------------+
| ``cdm``             | Mass density of all dark matter                   |
+---------------------+---------------------------------------------------+
| ``gas``             | Mass density of all gas                           |
+---------------------+---------------------------------------------------+
| ``starBH``          | Mass density of all stars and BHs                 |
+---------------------+---------------------------------------------------+
| ``neutrino``        | Mass density of all neutrinos                     |
+---------------------+---------------------------------------------------+
| ``neutrino1``       | Mass density of a random half of the neutrinos    |
+---------------------+---------------------------------------------------+
| ``neutrino2``       | Mass density of the other half of the neutrinos   |
+---------------------+---------------------------------------------------+
| ``pressure``        | Electron pressure                                 |
+---------------------+---------------------------------------------------+
A dark matter mass density auto-spectrum is specified as ``cdm-cdm`` and a gas
density - electron pressure cross-spectrum as ``gas-pressure``.
The ``neutrino1`` and ``neutrino2`` selections are based on the particle IDs and
are mutually exclusive. The particles selected in each half are different in
each output. Note that neutrino PS can only be computed when neutrinos are
simulated using particles.
SWIFT uses bins of integer :math:`k`, with bins :math:`[0.5,1.5]`, :math:`[1.5,2.5]` etc. The
representative :math:`k` values used to be assigned to the bin centres (so k=1, 2, etc), which
are then converted to physical :math:`k` by a factor of :math:`2\pi/L`, with :math:`L` the box
size. For the first few bins, only a few modes contribute to each bin. It is then advantageous
to move the "centre" of the bin to the actual location corresponding to the mean of the
contributing modes. The :math:`k` label of the bin is thus shifted by a small amount. The way
to calculate these shifts is to consider a 3D cube of :math:`(k_x,k_y,k_z)` cells and check
which cells fall inside a spherical shell with boundaries :math:`(i+0.5,i+1.5)`, then calculate
the average :math:`k=\sqrt{k_x^2+k_y^2+k_z^2}`. For :math:`i=0`, there are 6 cells with
:math:`k=1` and 12 cells with :math:`k=\sqrt{2}`, so the weighted centre becomes
:math:`(6 \times 1 + 12 \times \sqrt{2}) / 18 = 1.2761424`. Note that only the first 7 (22)
bins require a correction larger than 1 (0.1) percent. We apply a correction to the first 128
terms. This correction is activated when ``shift_centre_small_k_bins`` is switched on (the
default behaviour).
An example of a valid power-spectrum section of the parameter file looks like:
.. code:: YAML

  PowerSpectrum:
    grid_side_length:  256
    num_folds:         3
    requested_spectra: ["matter-matter", "cdm-cdm", "cdm-matter"]   # Total-matter and CDM auto-spectra + CDM-total cross-spectrum
Some additional specific options for the power-spectra outputs are described in the
following pages:
* :ref:`Output_list_label` (to have PS not evenly spaced in time)
.. _Parameters_fof:

Friends-Of-Friends (FOF)
------------------------

the MPI-rank. SWIFT writes one file per MPI rank. If the ``save`` option has
been activated, the previous set of restart files will be named
``basename_000000.rst.prev``.
On Lustre filesystems [#f4]_ it is important to properly stripe files to
achieve a good writing and reading speed. If the parameter
``lustre_OST_checks`` is set and the lustre API is available, SWIFT will
determine the number of OSTs available and rank these by free space; it will
then set the `stripe count` of each restart file to `1` and choose an OST
`offset` so each rank writes to a different OST, unless there are more ranks
than OSTs, in which case the assignment wraps. In this way OSTs should be
filled evenly and written to using an optimal access pattern.
If the parameter is not set then the files will be created with the default
system policy (or whatever was set for the directory where the files are
written). This parameter has no effect on non-Lustre file systems.
Other parameters are also provided to handle the cases when individual OSTs do
not have sufficient free space to write a restart file (``lustre_OST_free``) and
when OSTs are closed for administrative reasons (``lustre_OST_test``), in which
case they cannot be written to. This is important as the `offset` assignment in
this case is not used by lustre, which picks the next writable OST; in our
scheme such OSTs would then be used more often than intended.
* Use the lustre API to assign a stripe and offset to restart files:
``lustre_OST_checks`` (default: ``0``)
* Do not use OSTs that do not have a certain amount of free space in MiB.
Zero disables and -1 activates a guess based on the size of the process:
``lustre_OST_free`` (default: ``0``)
* Check OSTs can be written to and remove those from consideration:
``lustre_OST_test`` (default: ``0``)
SWIFT can also be stopped by creating an empty file called ``stop`` in the
directory where the restart files are written (i.e. the directory specified by
the parameter ``subdir``). This will make SWIFT dump a fresh set of restart file
hours after which a shell command will be run, one would use:
    delta_hours:      5.0
    stop_steps:       100
    max_run_time:     24.0   # In hours
    lustre_OST_count: 48     # System has 48 Lustre OSTs to distribute the files over
    resubmit_on_exit: 1
    resubmit_command: ./resub.sh
necessary and one would use:
    invoke_stf: 1   # We want VELOCIraptor to be called when snapshots are dumped.
    # ...
    # Rest of the snapshots properties

  StructureFinding:
    config_file_name: my_stf_configuration_file.cfg   # See the VELOCIraptor manual for the content of this file.
    basename:         ./haloes/                       # Write the catalogs in this sub-directory
If one additionally wants to call VELOCIraptor at times not linked with
snapshots, the additional parameters need to be supplied.
and all the gparts are not active during the timestep of the snapshot dump, the
exact forces computation is performed on the first timestep at which all the
gparts are active after that snapshot output timestep.
Neutrinos
---------
The ``Neutrino`` section of the parameter file controls the behaviour of
neutrino particles (``PartType6``). This assumes that massive neutrinos have
been specified in the ``Cosmology`` section described above. Random
Fermi-Dirac momenta will be generated if ``generate_ics`` is used. The
:math:`\delta f` method for shot noise reduction can be activated with
``use_delta_f``. Finally, a random seed for the Fermi-Dirac momenta can
be set with ``neutrino_seed``.
For more details on the neutrino implementation, refer to :ref:`Neutrinos`.
A complete specification of the model looks like
.. code:: YAML

  Neutrino:
    generate_ics:  1      # Replace neutrino particle velocities with random Fermi-Dirac momenta at the start
    use_delta_f:   1      # Use the delta-f method for shot noise reduction
    neutrino_seed: 1234   # A random seed used for the Fermi-Dirac momenta
------------------------

.. [#f2] which would translate into a constant :math:`G_N=1.5517771\times10^{-9}~cm^{3}\,g^{-1}\,s^{-2}` if expressed in the CGS system.
.. [#f3] The mapping is 0 --> gas, 1 --> dark matter, 2 --> background dark
         matter, 3 --> sinks, 4 --> stars, 5 --> black holes, 6 --> neutrinos.

.. [#f4] https://wiki.lustre.org/Main_Page

.. [#f5] We add a per-output random integer to the OST value such that we don't
         generate a bias towards low OSTs. This averages the load over all OSTs
         over the course of a run even if the number of OSTs does not divide the
         number of files and vice-versa.
.. Planetary EoS
   Jacob Kegerreis, 14th July 2022
.. _planetary_eos:

Planetary Equations of State
============================

Configuring SWIFT with the ``--with-equation-of-state=planetary`` and
``--with-hydro=planetary`` options enables the use of multiple
equations of state (EoS).
Every SPH particle then requires and carries the additional ``MaterialID`` flag
from the initial conditions file. This flag indicates the particle's material
and which EoS it should use.
If you have another EoS that you would like us to add, then just let us know!
It is important to check that the EoS you use are appropriate
for the conditions in the simulation that you run.
Please follow the original sources of these EoS for more information and
to check the regions of validity. If an EoS sets particles to have a pressure
of zero, then particles may end up overlapping, especially if the gravitational
softening is very small.
So far, we have implemented several Tillotson, ANEOS, SESAME,
and Hubbard \& MacFarlane (1980) materials, with more on the way.
Custom materials in SESAME-style tables can also be provided.
The material's ID is set by a somewhat arbitrary base type ID
(multiplied by 100) plus an individual value, matching our code for making
planetary initial conditions, `WoMa <https://github.com/srbonilla/WoMa>`_:
+ Ideal gas: ``0``

  + Default (set :math:`\gamma` using ``--with-adiabatic-index``, default 5/3): ``0``

+ Tillotson (Melosh, 2007): ``1``

  + Iron: ``100``
  + Granite: ``101``
  + Water: ``102``
  + Basalt: ``103``
  + Ice: ``104``
  + Custom user-provided parameters: ``190``, ``191``, ..., ``199``

+ Hubbard \& MacFarlane (1980): ``2``

  + Hydrogen-helium atmosphere: ``200``
  + Ice H2O-CH4-NH3 mix: ``201``
  + Rock SiO2-MgO-FeS-FeO mix: ``202``

+ SESAME (and others in similar-style tables): ``3``

  + Iron (2140): ``300``
  + Basalt (7530): ``301``
  + Water (7154): ``302``
  + Senft \& Stewart (2008) water: ``303``
  + AQUA, Haldemann, J. et al. (2020) water: ``304``
  + Chabrier, G. et al. (2019) Hydrogen: ``305``
  + Chabrier, G. et al. (2019) Helium: ``306``
  + Chabrier & Debras (2021) H/He mixture Y=0.245 (Jupiter): ``307``

+ ANEOS (in SESAME-style tables): ``4``

  + Forsterite (Stewart et al. 2019): ``400``
  + Iron (Stewart, zenodo.org/record/3866507): ``401``
  + Fe85Si15 (Stewart, zenodo.org/record/3866550): ``402``

+ Custom (in SESAME-style tables): ``9``

  + User-provided custom material(s): ``900``, ``901``, ..., ``909``
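
As a concrete illustration of this numbering (a sketch, not SWIFT's internal
code), the base type is recovered from a material ID by integer division:

.. code-block:: python

    # Illustrative only: how a material ID encodes its base EoS type.
    def base_type(mat_id):
        """Return the base type ID, e.g. 1 (Tillotson) for mat_id 101."""
        return mat_id // 100

    assert base_type(101) == 1  # Tillotson granite
    assert base_type(307) == 3  # Chabrier & Debras H/He, SESAME-style table
    assert base_type(900) == 9  # user-provided custom material
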
The data files for the tabulated EoS can be downloaded using
the ``examples/Planetary/EoSTables/get_eos_tables.sh`` script.

To enable one or multiple EoS, the corresponding ``planetary_use_*:``
flag(s) must be set from ``0`` to ``1`` in the parameter file for a simulation,
along with the path to any table files, which are set by the
``planetary_*_table_file:`` parameters,
as detailed in :ref:`Parameters_eos` and ``examples/parameter_example.yml``.

Unlike the EoS for an ideal or isothermal gas, these more complicated materials
do not always include transformations between the internal energy,
temperature, and entropy. At the moment, we have implemented
:math:`P(\rho, u)` and :math:`c_s(\rho, u)` (and more in some cases),
which is sufficient for the :ref:`planetary_sph` hydro scheme,
but some materials may thus currently be incompatible with
e.g. entropy-based schemes.
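
For tabulated materials this means, for example, that evaluating
:math:`P(\rho, u)` involves inverting :math:`u` on the :math:`(\rho, T)` grid
(see the table format later on this page). A simplified sketch of the idea,
not SWIFT's interpolation scheme, which also interpolates in density:

.. code-block:: python

    # Sketch only: evaluate P(rho, u) from tables of u and P on a
    # (T, rho) grid, assuming u rises monotonically with T at fixed rho.
    import numpy as np

    def pressure_rho_u(rho_i, u_i, rho, u, P):
        # nearest density column; a real code interpolates in rho too
        i_rho = np.argmin(np.abs(rho - rho_i))

        # invert u(T) at this density, then interpolate P linearly in u
        return np.interp(u_i, u[:, i_rho], P[:, i_rho])
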
The Tillotson sound speed was derived using
:math:`c_s^2 = \left. ( \partial P / \partial \rho ) \right|_S`
as described in
`Kegerreis et al. (2019) <https://doi.org/10.1093/mnras/stz1606>`_.
Note that there is a typo in the sign of
:math:`du = T dS - P dV = T dS + (P / \rho^2) d\rho` in the appendix,
but the correct version was used in the actual derivation.
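
For reference, combining this definition with the chain rule for
:math:`P(\rho, u)` and the relation
:math:`\left. du \right|_S = (P / \rho^2) \, d\rho` quoted above gives

.. math::
    c_s^2 = \left. \frac{\partial P}{\partial \rho} \right|_S
          = \left. \frac{\partial P}{\partial \rho} \right|_u
          + \frac{P}{\rho^2} \left. \frac{\partial P}{\partial u} \right|_\rho \;,

which is one way to evaluate the adiabatic derivative when, as here, a
material is implemented as :math:`P(\rho, u)`.
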
The ideal gas uses the same equations detailed in :ref:`equation_of_state`.

The format of the data files for SESAME, ANEOS, and similar-EoS tables
follows the SESAME 301 (etc.) style. The file contents are:

.. code-block:: python

    # header (12 lines)
    version_date                                               (YYYYMMDD)
    num_rho  num_T
    rho[0]   rho[1]  ...  rho[num_rho-1]                       (kg/m^3)
    T[0]     T[1]    ...  T[num_T-1]                           (K)
    u[0, 0]                P[0, 0]    c[0, 0]    s[0, 0]       (J/kg, Pa, m/s, J/K/kg)
    u[1, 0]                ...        ...        ...
    ...                    ...        ...        ...
    u[num_rho-1, 0]        ...        ...        ...
    u[0, 1]                ...        ...        ...
    ...                    ...        ...        ...
    u[num_rho-1, num_T-1]  ...        ...        s[num_rho-1, num_T-1]

The ``version_date`` must match the value in the ``sesame.h`` ``SESAME_params``
objects, so we can ensure that any version updates work with the git repository.
This is ignored for custom materials.

The header contains a first line that gives the material name, followed by the
same 11 lines printed here to describe the contents.
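
As a rough illustration of this layout, the sketch below loads such a table
with ``numpy``. It is not the reader used by SWIFT itself, and it assumes
purely whitespace-separated values and a hypothetical file name:

.. code-block:: python

    import numpy as np

    def load_sesame_table(filename):
        """Sketch: parse a SESAME-style EoS table laid out as above."""
        with open(filename) as f:
            lines = f.readlines()

        # 12 header lines, then the version date and the table dimensions
        version_date = int(lines[12].split()[0])
        num_rho, num_T = (int(x) for x in lines[13].split()[:2])

        # Remaining numbers: the rho array, the T array, then
        # num_rho * num_T rows of (u, P, c_s, s), with rho varying fastest
        values = np.array(" ".join(lines[14:]).split(), dtype=float)
        rho = values[:num_rho]
        T = values[num_rho:num_rho + num_T]
        table = values[num_rho + num_T:].reshape(num_T, num_rho, 4)

        # table[i_T, i_rho] = (u, P, c_s, s) at (rho[i_rho], T[i_T])
        u, P, c_s, s = np.moveaxis(table, -1, 0)
        return version_date, rho, T, u, P, c_s, s

    # Example: ver, rho, T, u, P, c_s, s = load_sesame_table("my_table.txt")
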
Planetary Hydro Scheme
======================

This scheme is based on :ref:`minimal` but also allows multiple materials,
meaning that different SPH particles can be assigned different
`equations of state <equations_of_state.html>`_ (EoS).
Every SPH particle then requires and carries the additional ``MaterialID`` flag
from the initial conditions file. This flag indicates the particle's material
and which EoS it should use.

The Balsara viscosity switch is used by default, but can be disabled by
compiling SWIFT with ``make CFLAGS=-DPLANETARY_SPH_NO_BALSARA``.

Note: to access the boundary-improvement method presented in Ruiz-Bonilla+2022,
use the ``planetary_imbalance_RB22`` git branch and compile with
``--with-hydro=planetary-gdf``. However, we instead recommend using the REMIX
SPH scheme, as it has effectively replaced this method.

.. _planetary_remix_hydro:

REMIX SPH
=========

REMIX is an SPH scheme designed to alleviate effects that typically suppress
mixing and instability growth at density discontinuities in SPH simulations
(Sandnes et al. 2025). It also includes the same multiple EoS options as the
base Planetary scheme. For more information on what is included in the REMIX
scheme and how to configure SWIFT to use it, see :ref:`remix_sph`.