.. Adding Hydro Schemes
Matthieu Schaller, 23/02/2024
.. _adaptive_softening:
Adaptive Softening
==================
.. toctree::
:maxdepth: 2
:hidden:
:caption: Contents:
We implement the method of Price & Monaghan (2007). This adds correction terms to
the SPH equations of motion to account for the change in energy and momentum
caused by the change in gravitational potential generated by the varying
softening. The softening length is tied to the gas smoothing length and thus
adapts to changes in the density field. To use adaptive softening, configure with
.. code-block:: bash
./configure --with-adaptive-softening=yes
The adaptive softening scheme is implemented for all the SPH schemes above but
only for the Wendland-C2 kernel as it is the kernel used for the gravity
softening throughout the code.
......@@ -2,7 +2,7 @@
Josh Borrow 4th April 2018
.. _hydro:
Hydrodynamics Schemes
=====================
......@@ -39,6 +39,8 @@ In case the case of a 2 loop scheme, SWIFT removes the gradient loop and the ext
sphenix_sph
gasoline_sph
phantom_sph
remix_sph
adaptive_softening
gizmo
shadowswift
adding_your_own
.. REMIX SPH
Thomas Sandnes, 13th May 2025
.. _remix_sph:
REMIX SPH
==============================================
.. toctree::
:maxdepth: 2
:hidden:
:caption: Contents:
REMIX is an SPH scheme designed to alleviate effects that typically suppress
mixing and instability growth at density discontinuities in SPH simulations
(Sandnes et al. 2025). REMIX addresses this problem by directly targeting sources
of kernel smoothing error and discretisation error, resulting in a generalised,
material-independent formulation that improves the treatment both of
discontinuities within a single material, for example in an ideal gas, and of
interfaces between dissimilar materials. The scheme combines:
+ An evolved density estimate to avoid the kernel smoothing error in the
standard SPH integral density estimate;
+ Thermodynamically consistent, conservative equations of motion, with
free functions chosen to limit zeroth-order error;
+ Linear-order reproducing kernels with grad-h terms and a vacuum interface
treatment;
+ A "kernel normalising term" to avoid potential accumulation of error in
the evolved density estimate, such that densities are ensured to remain
representative of the distribution of particle masses in the simulation volume;
+ Advanced artificial viscosity and diffusion schemes with linear reconstruction
of quantities to particle midpoints, and a set of novel improvements to
effectively switch between treatments for shock-capturing under compression and
noise-smoothing in shearing regions.
To configure with this scheme, use
.. code-block:: bash
./configure --with-hydro=remix --with-equation-of-state=planetary
This scheme allows multiple materials,
meaning that different SPH particles can be assigned different
`equations of state <equations_of_state.html>`_ (EoS).
Every SPH particle then requires and carries the additional ``MaterialID`` flag
from the initial conditions file. This flag indicates the particle's material
and which EoS it should use. Note that configuring with
``--with-equation-of-state=planetary`` is required for this scheme, although
for simulations that use a single, ideal gas EoS, setting all MaterialIDs to
``0`` and including
.. code-block:: yaml
EoS:
planetary_use_idg_def: 1
in the parameter file are the only EoS-related additions needed compared with
other non-Planetary hydro schemes. Note also that since densities are evolved in
time, initial particle densities are required in initial conditions.
We additionally recommend configuring with ``--with-kernel=wendland-C2`` and with
.. code-block:: yaml
SPH:
resolution_eta: 1.487
in the parameter file, both for improved hydrodynamic behaviour and because this is
the configuration used for the validation simulations of Sandnes et al. (2025).
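Combining the options above (no flags beyond those already named in this section),
a typical REMIX configure line would therefore be:

.. code-block:: bash

    ./configure --with-hydro=remix --with-equation-of-state=planetary --with-kernel=wendland-C2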
The current implementation of the REMIX hydro scheme has been validated for
planetary applications and various hydrodynamic test cases, and does not include
all necessary functionality for e.g. cosmological simulations.
Default parameters used in the artificial viscosity and diffusion schemes and the
normalising term (see Sandnes et al. 2025) are:
.. code-block:: c
#define const_remix_visc_alpha 1.5f
#define const_remix_visc_beta 3.f
#define const_remix_visc_epsilon 0.1f
#define const_remix_visc_a 2.0f / 3.0f
#define const_remix_visc_b 1.0f / 3.0f
#define const_remix_difn_a_u 0.05f
#define const_remix_difn_b_u 0.95f
#define const_remix_difn_a_rho 0.05f
#define const_remix_difn_b_rho 0.95f
#define const_remix_norm_alpha 1.0f
#define const_remix_slope_limiter_exp_denom 0.04f
These can be changed in ``src/hydro/REMIX/hydro_parameters.h``.
.. ShadowSWIFT (Moving mesh hydrodynamics)
Yolan Uyttenhove September 2023
ShadowSWIFT (moving mesh hydrodynamics)
=======================================
.. warning::
The moving mesh hydrodynamics solver is currently in the process of being merged into master and will **NOT**
work on the master branch. To use it, compile the code using the ``moving_mesh`` branch.
This is an implementation of the moving-mesh finite-volume method for hydrodynamics in SWIFT.
To use this scheme, a Riemann solver is also needed. Configure SWIFT as follows:
.. code-block:: bash
./configure --with-hydro="shadowswift" --with-riemann-solver="hllc"
Current status
~~~~~~~~~~~~~~
Due to the completely different task structure compared to SPH hydrodynamics, currently only a subset of the features of
SWIFT is supported in this scheme.
- Hydrodynamics is fully supported in 1D, 2D and 3D and over MPI.
- Both self-gravity and external potentials are supported.
- Cosmological time-integration is supported.
- Cooling and chemistry are supported, with the exception of the ``GEAR_diffusion`` chemistry scheme. Metals are
  properly advected according to mass fluxes.
- Choice between periodic, reflective, open, inflow and vacuum boundary conditions (for non-periodic boundary
conditions, the desired variant must be selected in ``const.h``). Additionally, reflective boundary conditions
are applied to SWIFT's boundary particles. Configure with ``--with-boundary-particles=<N>`` to use this (e.g. to
simulate walls).
Caveats
~~~~~~~
These are currently the main limitations of the ShadowSWIFT hydro scheme:
- Unlike SPH, the cells of the moving mesh must form a partition of the entire simulation volume. This means that there
  cannot be empty SWIFT cells, and vacuum must be explicitly represented by zero (or negligible) mass particles.
- Most other subgrid physics, most notably star formation and stellar feedback, is not supported yet.
- No MHD schemes are supported.
- No radiative-transfer schemes are supported.
......@@ -34,3 +34,8 @@ In order to add a new scheme, you will need to:
``nobase_noinst_HEADERS``, add your new header files.
6. Update the documentation. Add your equations/documentation to ``doc/RTD``.
.. toctree::
:caption: Table of Contents
sink_adding_new_scheme
.. Adding new schemes
Darwin Roduit, 16 October 2024
.. _new_option_sink:
How to add your sink scheme
-------------------------------
Here, we provide comprehensive information to guide you in adding your own sink scheme to SWIFT. To better understand how to add new schemes within SWIFT, read the general information provided on the :ref:`new_option` page.
The default sink scheme is empty and gives you an idea of the minimum required fields and functions for the code to compile. The GEAR sink module has the base functions plus some extra ones for its operations. It can provide you with a working example. However, it can only work with the GEAR feedback module since it relies on IMF properties that are only located there.
As discussed in the GEAR sink :ref:`sink_GEAR_model_summary`, the physics relies on the following tasks: sink formation, gas and sink particle flagging, gas swallowing, sink swallowing and star formation. You do not need to care about the tasks, only the core functions within the sink module. However, you may need to locate where the code calls these functions. The file ``src/runner_others.c`` contains the ``runner_do_star_formation_sink()`` and ``runner_do_sink_formation()``. These functions are responsible for generating stars out of sinks and sinks from gas particles. The other general task-related functions are in ``src/runner_sinks.c``.
The following presents the most essential functions you need to implement. This will give you an idea of the workload.
Sink formation
~~~~~~~~~~~~~~
Before forming a sink, the code loops over all gas and sink particles to gather data about the sink candidate's neighbourhood. This is performed in the ``sink_prepare_part_sink_formation_gas_criteria()`` and ``sink_prepare_part_sink_formation_sink_criteria()`` functions. For instance, in GEAR, we compute the total potential energy, thermal energy, etc.
Then, to decide if we can turn a gas particle into a sink particle, the function ``sink_is_forming()`` is called. Before forming a sink particle, there is a call to ``sink_should_convert_to_sink()``. This function determines whether the gas particle must transform into a sink. Both functions return either 0 or 1.
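As a purely illustrative sketch (the real functions in ``src/sink/Default/sink.h`` take
SWIFT-specific structs as arguments; the simplified signature and criteria below are
hypothetical), such a check boils down to comparing the gathered neighbourhood data
against your model's thresholds and returning 0 or 1:

.. code-block:: c

    /* Illustrative sketch only: the actual sink_is_forming() in
     * src/sink/Default/sink.h operates on SWIFT particle structs. The
     * arguments and criteria here are hypothetical placeholders. */
    static int sink_is_forming_sketch(const double gas_density,
                                      const double density_threshold,
                                      const double e_kin, const double e_therm,
                                      const double e_pot) {

      /* Require the gas to be dense enough... */
      if (gas_density < density_threshold) return 0;

      /* ...and the candidate region to be gravitationally bound. */
      if (e_kin + e_therm + e_pot >= 0.) return 0;

      return 1;
    }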
Gas-sink density interactions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The first interaction task to be run for the sinks is the density task. This task updates the smoothing length for the sink particle, unless a fixed cutoff radius is being used (coming soon). It can also calculate the contributions made by neighbouring gas particles to the density, sound speed, velocity, etc. at the location of the sink. Code for these interactions should be added to ``sink_iact.h/runner_iact_nonsym_sinks_gas_density()``.
Once the contributions of all neighbouring gas particles have been calculated, the density calculation is completed by the sink density ghost task. You can set what this task does with the functions ``sink_end_density()`` and ``sink_prepare_swallow()`` in ``sink.h``.
The ``sink_end_density()`` function completes the calculation of the smoothing length (coming soon), and this is where you can finish density-based calculations by e.g. dividing mass-weighted contributions to the velocity field by the total density in the kernel. For examples of this, see the equivalent task for the black hole particles.
The ``sink_prepare_swallow()`` task is where you can calculate density-based quantities that you might need to use in swallowing interactions later. For example, a Bondi-Hoyle accretion prescription should calculate an accretion rate and target mass to be accreted here.
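As an example of the kind of quantity computed here, a minimal Bondi-Hoyle rate
estimate could look as follows (a sketch only: the function name and arguments are
hypothetical rather than SWIFT API, and the inputs are assumed to have been gathered
during the density loop):

.. code-block:: c

    #include <math.h>

    /* Sketch of a Bondi-Hoyle accretion-rate estimate that could be evaluated
     * in sink_prepare_swallow(). Names are hypothetical, not the SWIFT API.
     * Mdot = 4 * pi * G^2 * M^2 * rho / (c_s^2 + v_rel^2)^(3/2) */
    static double bondi_hoyle_rate_sketch(const double G, const double M_sink,
                                          const double rho_gas,
                                          const double c_sound,
                                          const double v_rel) {

      const double v2 = c_sound * c_sound + v_rel * v_rel;
      return 4. * M_PI * G * G * M_sink * M_sink * rho_gas / pow(v2, 1.5);
    }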
Gas and sink flagging: finding whom to eat
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before accreting the gas/sink particles, the sink needs to look for eligible particles. The gas swallow interactions are performed within ``runner_iact_nonsym_sinks_gas_swallow()`` and the sink swallow in ``runner_iact_nonsym_sinks_sink_swallow()``.
Gas and sink swallowing: updating the sink properties
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When the sink swallows gas particles, it updates its internal properties based on the gas particles' properties. The ``sink_swallow_part()`` function takes care of this.
Similarly, when the sink swallows sink particles, it updates its properties from the to-be-swallowed sink particles. The ``sink_swallow_sink()`` function performs the update.
There is nothing more to it: the code takes care of properly removing the swallowed particles.
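Schematically, swallowing a gas particle is a mass- and momentum-conserving update of
the sink. The sketch below uses hypothetical struct layouts, not the actual SWIFT
``sink`` and ``part`` types:

.. code-block:: c

    /* Illustrative sketch of the conservation laws applied when a sink swallows
     * a gas particle. The struct layouts and field names are hypothetical. */
    struct sink_sketch { double mass, v[3]; };
    struct gas_sketch  { double mass, v[3]; };

    static void sink_swallow_part_sketch(struct sink_sketch *s,
                                         const struct gas_sketch *p) {

      const double m_new = s->mass + p->mass;

      /* Conserve linear momentum: new velocity is the centre-of-mass velocity. */
      for (int i = 0; i < 3; i++)
        s->v[i] = (s->mass * s->v[i] + p->mass * p->v[i]) / m_new;

      /* Add the swallowed mass to the sink. */
      s->mass = m_new;
    }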
Star formation
~~~~~~~~~~~~~~
The most important function is ``sink_spawn_star()``. It controls whether the code should continue creating stars and must return 0 or 1.
The ``sink_copy_properties_to_star()`` function does what its name says. It is also responsible for adequately initialising the stars' properties. In GEAR, we assign new positions and velocities within this function.
The following three functions allow you to update your sink particles (e.g. their masses) before, during and after the star formation loop: ``sink_update_sink_properties_before_star_formation()``, ``sink_update_sink_properties_during_star_formation()`` and ``sink_update_sink_properties_after_star_formation()``.
These functions are located in ``sink/Default/sink.h``.
......@@ -15,11 +15,12 @@ instead of, the lossless gzip compression filter.
**These compression filters are lossy, meaning that they modify the
data written to disk**
*The filters will reduce the accuracy of the data stored. No check is
made inside SWIFT to verify that the applied filters make sense. Poor
choices can lead to all the values of a given array reduced to 0, Inf,
or to have lost too much accuracy to be useful. The onus is entirely
on the user to choose wisely how they want to compress their data.*
.. warning::
The filters will reduce the accuracy of the data stored. No check is
made inside SWIFT to verify that the applied filters make sense. Poor
choices can lead to all the values of a given array reduced to 0, Inf,
or to have lost too much accuracy to be useful. The onus is entirely
on the user to choose wisely how they want to compress their data.
The filters are not applied when using parallel-hdf5.
......@@ -27,6 +28,14 @@ The name of any filter applied is carried by each individual field in
the snapshot using the meta-data attribute ``Lossy compression
filter``.
.. warning::
Starting with HDF5 version 1.14.4, filters which compress the data
by more than 2x are flagged as problematic (see their
`doc <https://docs.hdfgroup.org/hdf5/v1_14/group___f_a_p_l.html#gafa8e677af3200e155e9208522f8e05c0>`_
). SWIFT can nevertheless write files with them by setting the
appropriate file-level flags. However, some tools (such as
``h5py``) may *not* be able to read these fields.
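To check which filter, if any, was applied to a given field, the dataset's storage
properties can be inspected with the ``h5dump`` tool shipped with HDF5 (the snapshot
and dataset names below are just examples):

.. code-block:: bash

    # Print the header and storage properties (including the FILTERS block) of one field.
    h5dump -H -p -d "/PartType0/Coordinates" output_0000.hdf5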
The available filters are listed below.
N-bit filters for long long integers
......@@ -171,6 +180,8 @@ Same for a ``double`` (64 bits) output:
+=================+==============+==============+=============+===================================================+===================+
| No filter | 52 | 11 | 15.9 digits | :math:`[2.2\times 10^{-308}, 1.8\times 10^{308}]` | --- |
+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+
| ``DMantissa21`` | 21 | 11 | 6.62 digits | :math:`[2.2\times 10^{-308}, 1.8\times 10^{308}]` | 1.93x |
+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+
| ``DMantissa13`` | 13 | 11 | 4.21 digits | :math:`[2.2\times 10^{-308}, 1.8\times 10^{308}]` | 2.56x |
+-----------------+--------------+--------------+-------------+---------------------------------------------------+-------------------+
| ``DMantissa9`` | 9 | 11 | 3.01 digits | :math:`[2.2\times 10^{-308}, 1.8\times 10^{308}]` | 3.05x |
......
......@@ -74,6 +74,9 @@ CGS. Entries in the file look like:
SmoothingLengths_Gas: on # Co-moving smoothing lengths (FWHM of the kernel) of the particles : a U_L [ cm ]
...
This can also be used to set the outputs produced by the
:ref:`fof_stand_alone_label`.
For cosmological simulations, users can optionally add the ``--cosmology`` flag
to generate the field names appropriate for such a run.
......
......@@ -466,7 +466,10 @@ can be either drawn randomly by setting the parameter ``generate_random_ids``
newly generated IDs do not clash with any other pre-existing particle. If this
option is set to :math:`0` (the default setting) then the new IDs are created in
increasing order from the maximal pre-existing value in the simulation, hence
preventing any clash.
preventing any clash. Finally, if the option
``particle_splitting_log_extra_splits`` is set, the code will log all the splits
that go beyond the maximal allowed (typically 64) in a file so that the split tree
for these particles can still be reconstructed.
The final set of parameters in this section determine the initial and minimum
temperatures of the particles.
......@@ -503,7 +506,7 @@ The full section to start a typical cosmological run would be:
minimal_temperature: 100 # U_T
H_mass_fraction: 0.755
H_ionization_temperature: 1e4 # U_T
particle_splitting: 1
particle_splitting_mass_threshold: 5e-3 # U_M
.. _Parameters_Stars:
......@@ -514,7 +517,7 @@ Stars
The ``Stars`` section is used to set parameters that describe the Stars
calculations when doing feedback or enrichment. Note that if stars only act
gravitationally (i.e. SWIFT is run *without* ``--feedback``) no parameters
in this section are used.
The first four parameters are related to the neighbour search:
......@@ -576,6 +579,26 @@ specified, SWIFT will start and use the birth times specified in the
ICs. If no values are given in the ICs, the stars' birth times will be
zeroed, which can cause issues depending on the type of run performed.
.. _Parameters_Sinks:
Sinks
-----
Currently, there are two models for the sink particles: the Default model and the GEAR one. Their parameters are described below. To choose a model, configure the code with ``--with-sink=<model>``, where ``<model>`` can be ``none`` (the Default model) or ``GEAR``. To run with sink particles, add the option ``--sinks``.
Below you will find the description of the ``none`` model, which is the default. For the ``GEAR`` model, please refer to :ref:`sink_GEAR_model`.
By default, the code is configured with ``--with-sink=none``. The ``DefaultSink`` section is then used to set the parameters describing the sinks in this model. The only parameter is the sink accretion radius (also called the cut-off radius): ``cut_off_radius``.
Note that this model does not create sink particles or accrete gas.
The full section is:
.. code:: YAML
DefaultSink:
cut_off_radius: 1e-3 # Cut-off radius of the sink particles (in internal units). This parameter should be adapted to the resolution.
.. _Parameters_time_integration:
Time Integration
......@@ -623,7 +646,7 @@ the start and end times or scale factors from the parameter file.
``max_dt_RMS_factor`` (default: ``0.25``)
* Whether or not only the gas particle masses should be considered for
the baryon component of the calculation: ``dt_RMS_use_gas_only`` (default: ``0``)
These values rarely need altering. The second parameter is only
meaningful if a subgrid model produces star (or other) particles with
masses substantially smaller than the gas masses. See the theory
......@@ -648,7 +671,7 @@ Whilst for a cosmological run, one would need:
dt_min: 1e-10
max_dt_RMS_factor: 0.25 # Default optional value
dt_RMS_use_gas_only: 0 # Default optional value
.. _Parameters_ICs:
Initial Conditions
......@@ -693,7 +716,7 @@ Finally, SWIFT also offers these options:
* Whether to replicate the box along each axis: ``replicate`` (default: ``1``).
* Whether to re-map the IDs to the range ``[0, N]`` and hence discard
the original IDs from the IC file: ``remap_ids`` (default: ``0``).
The shift is expressed in internal units and will be written to the header of
the snapshots. The option to replicate the box is especially useful for
weak-scaling tests. When set to an integer >1, the box size is multiplied by
......@@ -858,24 +881,42 @@ that can be used as if it was a non-distributed snapshot. In this case, the
HDF5 library itself can figure out which file is needed when manipulating the
snapshot.
On Lustre filesystems [#f4]_ it is important to properly stripe files to achieve
a good writing speed. If the parameter ``lustre_OST_count`` is set to the number
of OSTs present on the system, then SWIFT will set the `stripe count` of each
distributed file to `1` and set each file's `stripe index` to the MPI rank
generating it modulo the OST count [#f5]_. If the parameter is not set then the
files will be created with the default system policy (or whatever was set for
the directory where the files are written). This parameter has no effect on
non-Lustre file systems and no effect if distributed snapshots are not used.
* The number of Lustre OSTs to distribute the single-striped distributed
snapshot files over: ``lustre_OST_count`` (default: ``0``)
On Lustre filesystems [#f4]_ it is important to properly stripe files to
achieve a good writing and reading speed. If the parameter
``lustre_OST_checks`` is set and the Lustre API is available, SWIFT will
determine the number of OSTs available and rank them by free space; it will
then set the `stripe count` of each file to `1` and choose an OST
`offset` so that each rank writes to a different OST, unless there are more ranks
than OSTs, in which case the assignment wraps around. In this way the OSTs should be
filled evenly and written to using an optimal access pattern.
If the parameter is not set then the files will be created with the default
system policy (or whatever was set for the directory where the files are
written). This parameter has no effect on non-Lustre file systems.
Other parameters are also provided to handle the cases where individual OSTs do
not have sufficient free space to write a file (``lustre_OST_free``) and where
OSTs are closed for administrative reasons and hence cannot be written to
(``lustre_OST_test``). This is important as Lustre then ignores the requested
`offset` and picks the next writable OST, so in our scheme such OSTs would be
used more often than intended.
* Use the lustre API to assign a stripe and offset to the distributed snapshot
files:
``lustre_OST_checks`` (default: ``0``)
* Do not use OSTs that do not have a certain amount of free space in MiB.
Zero disables and -1 activates a guess based on the size of the process:
``lustre_OST_free`` (default: ``0``)
* Check OSTs can be written to and remove those from consideration:
``lustre_OST_test`` (default: ``0``)
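As a sketch only (the values are illustrative, and we assume these options sit
alongside the ``distributed`` switch in the ``Snapshots`` section; the same three
parameters are also available for the restart files, as described further down):

.. code:: YAML

    Snapshots:
      distributed: 1         # Write one snapshot file per MPI rank
      lustre_OST_checks: 1   # Use the Lustre API to assign stripe counts and OST offsets
      lustre_OST_free: 1024  # Skip OSTs with less than 1024 MiB free (0 disables, -1 guesses)
      lustre_OST_test: 1     # Check that OSTs are writable and drop those that are not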
Users can optionally ask to randomly sub-sample the particles in the snapshots.
This is specified for each particle type individually:
* Whether to switch on sub-sampling: ``subsample``
* The fraction of particles to keep when sub-sampling is switched on: ``subsample_fraction``
These are arrays of 7 elements defaulting to seven 0s if left unspecified. Each
entry corresponds to the particle type used in the initial conditions and
......@@ -885,7 +926,7 @@ indicating the fraction of particles to keep in the outputs. Note that the
selection of particles is made randomly for each individual
snapshot. Particles can hence not be traced back from output to output when this
is switched on.
Users can optionally specify the level of compression used by the HDF5 library
using the parameter:
......@@ -1103,7 +1144,7 @@ output field will be enabled. If this is 0 lossy compression is not applied.
* Whether to use lossless compression in the particle outputs: ``particles_gzip_level``
If this is non-zero the HDF5 deflate filter will be applied to lightcone particle output with
the compression level set to the specified value.
* HEALPix map resolution: ``nside``
......@@ -1144,7 +1185,7 @@ filter names. Set the filter name to ``on`` to disable compression.
* Whether to use lossless compression in the HEALPix map outputs: ``maps_gzip_level``
If this is non-zero the HDF5 deflate filter will be applied to the lightcone map output with
the compression level set to the specified value.
The following shows a full set of light cone parameters for the case where we're making two
light cones which only differ in the location of the observer:
......@@ -1158,7 +1199,7 @@ light cones which only differ in the location of the observer:
buffer_chunk_size: 100000
max_particles_buffered: 1000000
hdf5_chunk_size: 10000
# Redshift ranges for particle types
z_range_for_Gas: [0.0, 0.05]
z_range_for_DM: [0.0, 0.05]
......@@ -1166,7 +1207,7 @@ light cones which only differ in the location of the observer:
z_range_for_Stars: [0.0, 0.05]
z_range_for_BH: [0.0, 0.05]
z_range_for_Neutrino: [0.0, 0.05]
# Healpix map parameters
nside: 512
radius_file: ./shell_radii.txt
......@@ -1191,7 +1232,7 @@ light cones which only differ in the location of the observer:
enabled: 1
basename: lightcone1
observer_position: [74.2, 10.80, 53.59]
An example of the radius file::
......@@ -1217,11 +1258,11 @@ Equation of State (EoS)
The ``EoS`` section contains options for the equations of state.
Multiple EoS can be used for :ref:`planetary`,
see :ref:`planetary_eos` for more information.
To enable one or multiple EoS, the corresponding ``planetary_use_*:``
flag(s) must be set to ``1`` in the parameter file for a simulation,
along with the path to any table files, which are set by the
``planetary_*_table_file:`` parameters.
For the (non-planetary) isothermal EoS, the ``isothermal_internal_energy:``
......@@ -1230,7 +1271,9 @@ parameter sets the thermal energy per unit mass.
.. code:: YAML
EoS:
isothermal_internal_energy: 20.26784 # Thermal energy per unit mass for the case of isothermal equation of state (in internal units).
barotropic_vacuum_sound_speed: 2e4 # Vacuum sound speed in the case of the barotropic equation of state (in internal units).
barotropic_core_density: 1e-13 # Core density in the case of the barotropic equation of state (in internal units).
# Select which planetary EoS material(s) to enable for use.
planetary_use_idg_def: 0 # Default ideal gas, material ID 0
planetary_use_Til_iron: 1 # Tillotson iron, material ID 100
......@@ -1278,6 +1321,7 @@ The options are:
* The number of grid foldings to use: ``num_folds``.
* The factor by which to fold at each iteration: ``fold_factor`` (default: 4)
* The order of the window function: ``window_order`` (default: 3)
* Whether or not to correct the placement of the centre of the k-bins for small k values: ``shift_centre_small_k_bins`` (default: 1)
The window order sets the way the particle properties get assigned to the mesh.
Order 1 corresponds to the nearest-grid-point (NGP), order 2 to cloud-in-cell
......@@ -1317,6 +1361,20 @@ are mutually exclusive. The particles selected in each half are different in
each output. Note that neutrino PS can only be computed when neutrinos are
simulated using particles.
SWIFT uses bins of integer :math:`k`, with bins :math:`[0.5,1.5]`, :math:`[1.5,2.5]`, etc. The
representative :math:`k` values used to be assigned to the bin centres (so :math:`k=1, 2`, etc.), which are
then transformed to physical :math:`k` by a factor :math:`2\pi/L`. For the first few bins, only a
few modes contribute to each bin. It is then advantageous to move the "centre" of the bin to the
actual location corresponding to the mean of the contributing modes. The :math:`k` label of the bin
is thus shifted by a small amount. The way to calculate these shifts is to consider a 3D cube of
:math:`(k_x,k_y,k_z)` cells and check which cells fall inside a spherical shell with boundaries
:math:`(i+0.5,i+1.5)`, then calculate the average :math:`k=\sqrt{k_x^2+k_y^2+k_z^2}`. For
:math:`i=0` there are 6 cells with :math:`k=1` and 12 cells with :math:`k=\sqrt{2}`, so the weighted
:math:`k` becomes :math:`(6 \times 1 + 12 \times \sqrt{2}) / 18 = 1.2761424`. Note that only the first 7 (22) bins require a
correction larger than 1 (0.1) percent. We apply the correction to the first 128 bins. This
correction is activated when ``shift_centre_small_k_bins`` is switched on (the default
behaviour).
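The counting argument above can be reproduced with a few lines of standalone code.
The sketch below is not SWIFT source; it simply loops over the integer modes and
prints the naive and weighted centres of the first few bins:

.. code-block:: c

    #include <math.h>
    #include <stdio.h>

    /* Sketch: mean |k| of the integer modes (kx, ky, kz) falling in the shell
     * (i + 0.5, i + 1.5), i.e. the corrected "centre" of power-spectrum bin i. */
    int main(void) {
      const int k_max = 16;
      double sum_k[8] = {0.}, count[8] = {0.};

      for (int kx = -k_max; kx <= k_max; kx++)
        for (int ky = -k_max; ky <= k_max; ky++)
          for (int kz = -k_max; kz <= k_max; kz++) {
            const double k = sqrt((double)(kx * kx + ky * ky + kz * kz));
            const int i = (int)floor(k + 0.5) - 1; /* shell (i+0.5, i+1.5) */
            if (i >= 0 && i < 8) {
              sum_k[i] += k;
              count[i] += 1.;
            }
          }

      for (int i = 0; i < 8; i++) /* bin 0 prints 1.2761424, as quoted above */
        printf("bin %d: naive centre %d, weighted centre %.7f\n", i, i + 1,
               sum_k[i] / count[i]);
      return 0;
    }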
An example of a valid power-spectrum section of the parameter file looks like:
.. code:: YAML
......@@ -1390,17 +1448,35 @@ the MPI-rank. SWIFT writes one file per MPI rank. If the ``save`` option has
been activated, the previous set of restart files will be named
``basename_000000.rst.prev``.
On Lustre filesystems [#f4]_ it is important to properly stripe files to achieve
a good writing and reading speed. If the parameter ``lustre_OST_count`` is set
to the number of OSTs present on the system, then SWIFT will set the `stripe
count` of each restart file to `1` and set each file's `stripe index` to the MPI
rank generating it modulo the OST count [#f5]_. If the parameter is not set then
the files will be created with the default system policy (or whatever was set
for the directory where the files are written). This parameter has no effect on
non-Lustre file systems.
On Lustre filesystems [#f4]_ it is important to properly stripe files to
achieve a good writing and reading speed. If the parameter
``lustre_OST_checks`` is set and the Lustre API is available, SWIFT will
determine the number of OSTs available and rank them by free space; it will
then set the `stripe count` of each restart file to `1` and choose an OST
`offset` so that each rank writes to a different OST, unless there are more ranks
than OSTs, in which case the assignment wraps around. In this way the OSTs should be
filled evenly and written to using an optimal access pattern.
* The number of Lustre OSTs to distribute the single-striped restart files over:
``lustre_OST_count`` (default: ``0``)
If the parameter is not set then the files will be created with the default
system policy (or whatever was set for the directory where the files are
written). This parameter has no effect on non-Lustre file systems.
Other parameters are also provided to handle the cases where individual OSTs do
not have sufficient free space to write a restart file (``lustre_OST_free``) and
where OSTs are closed for administrative reasons and hence cannot be written to
(``lustre_OST_test``). This is important as Lustre then ignores the requested
`offset` and picks the next writable OST, so in our scheme such OSTs would be
used more often than intended.
* Use the lustre API to assign a stripe and offset to restart files:
``lustre_OST_checks`` (default: ``0``)
* Do not use OSTs that do not have a certain amount of free space in MiB.
Zero disables and -1 activates a guess based on the size of the process:
``lustre_OST_free`` (default: ``0``)
* Check OSTs can be written to and remove those from consideration:
``lustre_OST_test`` (default: ``0``)
SWIFT can also be stopped by creating an empty file called ``stop`` in the
directory where the restart files are written (i.e. the directory specified by
......@@ -1808,11 +1884,11 @@ necessary and one would use:
invoke_stf: 1 # We want VELOCIraptor to be called when snapshots are dumped.
# ...
# Rest of the snapshots properties
StructureFinding:
config_file_name: my_stf_configuration_file.cfg # See the VELOCIraptor manual for the content of this file.
basename: ./haloes/ # Write the catalogs in this sub-directory
If one additionally wants to call VELOCIraptor at times not linked with
snapshots, additional parameters need to be supplied.
......@@ -1892,7 +1968,7 @@ Fermi-Dirac momenta will be generated if ``generate_ics`` is used. The
``use_delta_f``. Finally, a random seed for the Fermi-Dirac momenta can
be set with ``neutrino_seed``.
For more details on the neutrino implementation, refer to :ref:`Neutrinos`.
A complete specification of the model looks like
.. code:: YAML
......@@ -1904,7 +1980,7 @@ A complete specification of the model looks like
------------------------
.. [#f1] The thorough reader (or overly keen SWIFT tester) would find that the speed of light is :math:`c=1.8026\times10^{12}\,\rm{fur}\,\rm{ftn}^{-1}`, Newton's constant becomes :math:`G_N=4.896735\times10^{-4}~\rm{fur}^3\,\rm{fir}^{-1}\,\rm{ftn}^{-2}` and Planck's constant turns into :math:`h=4.851453\times 10^{-34}~\rm{fur}^2\,\rm{fir}\,\rm{ftn}^{-1}`.
......
......@@ -13,6 +13,8 @@ Every SPH particle then requires and carries the additional ``MaterialID`` flag
from the initial conditions file. This flag indicates the particle's material
and which EoS it should use.
If you have another EoS that you would like us to add, then just let us know!
It is important to check that the EoS you use are appropriate
for the conditions in the simulation that you run.
Please follow the original sources of these EoS for more information and
......@@ -24,15 +26,18 @@ So far, we have implemented several Tillotson, ANEOS, SESAME,
and Hubbard \& MacFarlane (1980) materials, with more on the way.
Custom materials in SESAME-style tables can also be provided.
The material's ID is set by a somewhat arbitrary base type ID
(multiplied by 100) plus an individual value:
(multiplied by 100) plus an individual value, matching our code for making
planetary initial conditions, `WoMa <https://github.com/srbonilla/WoMa>`_:
+ Ideal gas: ``0``
+ Default (\\(\\gamma\\) set using ``--with-adiabatic-index``, default 5/3): ``0``
+ Default (Set :math:`\gamma` using ``--with-adiabatic-index``, default 5/3): ``0``
+ Tillotson (Melosh, 2007): ``1``
+ Iron: ``100``
+ Granite: ``101``
+ Water: ``102``
+ Basalt: ``103``
+ Ice: ``104``
+ Custom user-provided parameters: ``190``, ``191``, ..., ``199``
+ Hubbard \& MacFarlane (1980): ``2``
+ Hydrogen-helium atmosphere: ``200``
+ Ice H20-CH4-NH3 mix: ``201``
......@@ -42,6 +47,10 @@ The material's ID is set by a somewhat arbitrary base type ID
+ Basalt (7530): ``301``
+ Water (7154): ``302``
+ Senft \& Stewart (2008) water: ``303``
+ AQUA, Haldemann, J. et al. (2020) water: ``304``
+ Chabrier, G. et al. (2019) Hydrogen: ``305``
+ Chabrier, G. et al. (2019) Helium: ``306``
+ Chabrier & Debras (2021) H/He mixture Y=0.245 (Jupiter): ``307``
+ ANEOS (in SESAME-style tables): ``4``
+ Forsterite (Stewart et al. 2019): ``400``
+ Iron (Stewart, zenodo.org/record/3866507): ``401``
......@@ -53,7 +62,7 @@ The data files for the tabulated EoS can be downloaded using
the ``examples/Planetary/EoSTables/get_eos_tables.sh`` script.
To enable one or multiple EoS, the corresponding ``planetary_use_*:``
flag(s) must be set to ``1`` in the parameter file for a simulation,
flag(s) must be set from ``0`` to ``1`` in the parameter file for a simulation,
along with the path to any table files, which are set by the
``planetary_*_table_file:`` parameters,
as detailed in :ref:`Parameters_eos` and ``examples/parameter_example.yml``.
......@@ -61,18 +70,18 @@ as detailed in :ref:`Parameters_eos` and ``examples/parameter_example.yml``.
Unlike the EoS for an ideal or isothermal gas, these more complicated materials
do not always include transformations between the internal energy,
temperature, and entropy. At the moment, we have implemented
\\(P(\\rho, u)\\) and \\(c_s(\\rho, u)\\) (and more in some cases),
:math:`P(\rho, u)` and :math:`c_s(\rho, u)` (and more in some cases),
which is sufficient for the :ref:`planetary_sph` hydro scheme,
but some materials may thus currently be incompatible with
e.g. entropy-based schemes.
The Tillotson sound speed was derived using
\\(c_s^2 = \\left. ( \\partial P / \\partial \\rho ) \\right|_S \\)
:math:`c_s^2 = \left. ( \partial P / \partial \rho ) \right|_S`
as described in
`Kegerreis et al. (2019) <https://doi.org/10.1093/mnras/stz1606>`_.
Note that there is a typo in the sign of
\\(du = T dS - P dV = T dS + (P / \\rho^2) d\\rho \\) in the appendix;
the correct version was used in the actual derivation.
:math:`du = T dS - P dV = T dS + (P / \rho^2) d\rho` in the appendix,
but the correct version was used in the actual derivation.
The ideal gas uses the same equations detailed in :ref:`equation_of_state`.
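For quick reference, the standard ideal-gas relations used there express the pressure
and sound speed in terms of the density and specific internal energy (restated here
from standard thermodynamics rather than from the SWIFT source):

.. math::

    P = (\gamma - 1)\,\rho\,u, \qquad c_s = \sqrt{\gamma\,(\gamma - 1)\,u},

where :math:`\gamma` is the adiabatic index chosen at configure time.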
......
......@@ -16,5 +16,18 @@ and which EoS it should use.
The Balsara viscosity switch is used by default, but can be disabled by
compiling SWIFT with ``make CFLAGS=-DPLANETARY_SPH_NO_BALSARA``.
Note: the boundary-improvement methods presented in Ruiz-Bonilla+2022 are
in the process of being improved for release with better documentation etc soon.
Note: to access the boundary-improvement method presented in Ruiz-Bonilla+2022,
use the ``planetary_imbalance_RB22`` git branch and compile with
``--with-hydro=planetary-gdf``. However, we instead recommend using the REMIX
SPH scheme, as it has effectively replaced this method.
.. _planetary_remix_hydro:
REMIX SPH
======================
REMIX is an SPH scheme designed to alleviate effects that typically suppress
mixing and instability growth at density discontinuities in SPH simulations
(Sandnes et al. 2025), and also includes the same multiple EoS options as the base
Planetary scheme. For more information on what is included in the REMIX
scheme and how to configure SWIFT to use REMIX, see: :ref:`remix_sph`.
......@@ -7,24 +7,36 @@ Planetary Simulations
=====================
SWIFT is also designed for running planetary simulations
such as of giant impacts, as introduced in Kegerreis+2019,
and any other types of simulations with more complicated equations of state
such as of giant impacts, as introduced in
`Kegerreis et al. (2019) <https://doi.org/10.1093/mnras/stz1606>`_,
and of any non-planetary simulations with more complicated equations of state
and/or multiple materials, etc.
Active development is ongoing of more features and examples,
Active development is ongoing of more features, supporting tools, and examples,
so please let us know if you are interested in using SWIFT
or have any implementation requests.
You can find an example simulation in ``swiftsim/examples/Planetary/``
under ``EarthImpact/``, as well as some hydro tests for comparison with other
schemes. The tabulated equation of state files can be downloaded using
You can find an example of creating initial conditions and running an impact
simulation in ``swiftsim/examples/Planetary/`` under ``DemoImpactInitCond/``
and ``DemoImpact/``. Check out their `README.md` files for more info.
The tabulated equation of state files can be downloaded there using
``EoSTables/get_eos_tables.sh``.
Planetary simulations are currently intended to be run with
SWIFT configured to use the planetary hydrodynamics scheme and equations of state:
``--with-hydro=planetary`` and ``--with-equation-of-state=planetary``.
These allow for multiple materials to be used,
chosen from the several available equations of state.
To run planetary or similar simulations with a traditional SPH formulation,
SWIFT should be configured to use the planetary hydrodynamics scheme and
equations of state: ``--with-hydro=planetary`` and
``--with-equation-of-state=planetary``. These allow for multiple materials to be
used, chosen from the several available equations of state.
Planetary simulations can also be carried out using the advanced
:ref:`remix_sph` scheme, which improves the treatment of mixing and material
interfaces (Sandnes et al. 2025).
To configure with REMIX, use ``--with-hydro=remix`` and
``--with-equation-of-state=planetary``. Note that REMIX simulations require
initial particle densities in the initial conditions. We also recommend
configuring with ``--with-kernel=wendland-C2`` and with ``resolution_eta: 1.487``
in the parameter file, both for improved hydrodynamic behaviour and because this is the
configuration used for the validation simulations of Sandnes et al. (2025).
.. toctree::
:maxdepth: 1
......
......@@ -30,7 +30,7 @@ Compiling for GEAR RT
you to select a hydro Riemann solver, e.g ``--with-riemann-solver=hllc``.
- The thermochemistry requires the `grackle <https://github.com/grackle-project/grackle>`_
library version 3.2. Grackle is a chemistry and cooling library presented in
library version above 3.2.1. [#f4]_ Grackle is a chemistry and cooling library presented in
`B. Smith et al. 2017 <https://ui.adsabs.harvard.edu/abs/2017MNRAS.466.2217S>`_.
Please note that the current implementation is not (yet) as
advanced as the :ref:`GEAR subgrid model grackle cooling <gear_grackle_cooling>`,
......@@ -38,6 +38,17 @@ Compiling for GEAR RT
grackle cooling in combination with GEAR RT. You can however follow the Grackle
installation instructions documented there.
.. warning::
(As of 2023) Grackle is under active development, and its API is subject
to change in the future. For convenience, a frozen version is hosted as a fork
on github here: https://github.com/mladenivkovic/grackle-swift .
The version available there is tried, tested, and ensured to work with
GEAR-RT.
Additionally, that repository hosts files necessary to install that specific
version of grackle with spack.
Compulsory Runtime Parameters
......@@ -468,3 +479,10 @@ useful:
`I. Iliev et al. 2006 <https://ui.adsabs.harvard.edu/abs/2006MNRAS.369.1625I>`_
paper, but that is very specialized and shouldn't have much use in real
applications.
.. [#f4] Grackle version 3.2.1 still contained a bug related to the use of their
"threadsafe functions" that could lead to disastrous outcomes. That was fixed
in commit `a59489f`. So versions after 3.2.1 should work as expected.
To be safe, we recommend you use the forked grackle repository specifically
intended to freeze a stable version for use with SWIFT. You can find that fork
on github: https://github.com/mladenivkovic/grackle-swift .
.. Radiative Transfer Scheme Requirements
Mladen Ivkovic 05.2021
.. _rt_general:
.. warning::
The radiative transfer schemes are still in development and are not useable
at this moment. This page is currently a placeholder to document new
features and requirements as the code grows.
Radiative Transfer in SWIFT
---------------------------
We currently support two methods for radiative transfer in SWIFT:
:ref:`GEAR-RT <rt_GEAR>` and :ref:`SPHM1RT <rt_SPHM1>`. They are both
moment-based methods. For documentation on each of these methods, please
refer to their respective pages.
This page contains some general documentation on running SWIFT with RT, which
is valid irrespective of the exact method used.
Requirements
~~~~~~~~~~~~
To be able to run with radiative transfer, you'll need to run swift with the
following flags:
.. code::
swift --radiation --hydro --feedback --stars --self-gravity [and/or] --external-gravity
Some notes on these runtime flags:
- The radiation data is coupled to the gas data, so you can't run without
``--hydro``. (Also, the whole point of these schemes is the interaction between
gas and radiation, so why would you want to?)
- Currently the only source of radiation are stars, so you need ``--stars``.
If you want to run without radiative sources, still run the code with
``--stars``, even if you don't have any stars in your initial conditions.
- Running with ``--stars`` requires some form of gravity, be it self-gravity or
external gravity. Since we need stars, we inherit this requirement. If you want
no gravity, run with ``--external-gravity`` and set the external potential to
zero.
- We need ``--feedback`` in order to have meaningful smoothing lengths for
stars. However, you don't need any specific feedback model; it'll work even if
you configured ``--with-feedback=none``.
.. _rt_subcycling:
RT Sub-Cycling
~~~~~~~~~~~~~~
SWIFT allows the solution of the radiative transfer steps (both
photon propagation and thermochemistry) to be sub-cycled with respect to the
hydrodynamics time steps. In essence, you can tell SWIFT to run up to X radiative
transfer steps during a single hydrodynamics step for all particles in the
simulation. The aim is to avoid wasting time on unnecessary hydrodynamics updates,
which typically allow for much larger time steps than the radiation due to the
propagation speed of the respective advected quantity.
You will need to provide an upper limit on how many RT sub-cycles per hydro
step you want to allow. That is governed by the
.. code:: yaml
TimeIntegration:
max_nr_rt_subcycles: 128 # maximal number of RT sub-cycles per hydro step
parameter, which is mandatory for any RT runs. To turn off sub-cycling and
couple the radiative transfer and the hydrodynamics time steps one-to-one,
set this parameter to either 0 or 1.
Due to the discretization of individual particle time steps in time bins
with a factor of 2 difference in time step size from a lower to a higher
time bin, the ``max_nr_rt_subcycles`` parameter itself is required to be
a power of 2 as well.
Note that this parameter will set an upper limit to the number of sub-cycles
per hydro step. If the ratio of hydro-to-RT time step is greater than what
``max_nr_rt_subcycles`` allows for, then the hydro time step will be reduced
to fit the maximal threshold. If it is smaller, the particle will simply do
fewer sub-cycles.
Reading the output of time-step information
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When running with :ref:`sub-cycling enabled <rt_subcycling>`, additional time
step information is written.
Firstly, whenever you run with a SWIFT compiled with any RT method, it will also
create a file called ``rtsubcycles.txt``, which contains analogous data to the
``timesteps.txt`` file. However, if you run without sub-cycling (i.e. with
``TimeIntegration:max_nr_rt_subcycles`` set to ``0`` or ``1``), **then the file
will remain empty** (save for the header information). That's because its contents
would be redundant - all the data will be identical to what will be written in
``timesteps.txt``.
Secondly, when running with RT and sub-cycling enabled, the output to ``STDOUT``
contains additional information. More concretely, it changes from something like this:
.. code-block::
[00003.0] engine_dump_snapshot: Dumping snapshot at t=0.000000e+00
[00003.1] engine_print_stats: Saving statistics at t=0.000000e+00
# Step Time Scale-factor Redshift Time-step Time-bins Updates g-Updates s-Updates sink-Updates b-Updates Wall-clock time [ms] Props Dead time [ms]
0 0.000000e+00 1.0000000 0.0000000 0.000000e+00 1 56 18000 31000 13000 0 0 2610.609 281 101.971
1 1.220703e-05 1.0000000 0.0000000 1.220703e-05 43 43 11 11 0 0 0 61.686 1 1.324
2 2.441406e-05 1.0000000 0.0000000 1.220703e-05 43 44 12685 12685 0 0 0 1043.433 0 35.461
3 3.662109e-05 1.0000000 0.0000000 1.220703e-05 43 43 11 11 0 0 0 51.340 1 1.628
4 4.882813e-05 1.0000000 0.0000000 1.220703e-05 43 45 18000 18000 0 0 0 1342.531 0 36.831
5 6.103516e-05 1.0000000 0.0000000 1.220703e-05 43 43 11 11 0 0 0 48.412 1 1.325
6 7.324219e-05 1.0000000 0.0000000 1.220703e-05 43 44 12685 12685 0 0 0 1037.307 0 34.718
7 8.544922e-05 1.0000000 0.0000000 1.220703e-05 43 43 11 11 0 0 0 47.791 1 1.362
8 9.765625e-05 1.0000000 0.0000000 1.220703e-05 43 46 18000 18004 4 0 0 1410.851 0 35.005
9 1.098633e-04 1.0000000 0.0000000 1.220703e-05 43 43 11 11 0 0 0 48.322 1 1.327
10 1.220703e-04 1.0000000 0.0000000 1.220703e-05 43 44 12685 12685 0 0 0 1109.944 0 33.691
To something like this:
.. code-block::
[rt-sc] 0 0.000000e+00 1.000000 0.000000 1.220703e-05 1 56 18000 - - - - - - -
[rt-sc] 1 1.220703e-05 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 0.683 - 0.170
[rt-sc] 2 2.441406e-05 1.000000 0.000000 1.220703e-05 43 44 12685 - - - - 100.543 - 5.070
[rt-sc] 3 3.662109e-05 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 0.762 - 0.208
[rt-sc] 4 4.882813e-05 1.000000 0.000000 1.220703e-05 43 45 18000 - - - - 124.011 - 6.396
[rt-sc] 5 6.103516e-05 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 0.831 - 0.254
[rt-sc] 6 7.324219e-05 1.000000 0.000000 1.220703e-05 43 44 12685 - - - - 107.172 - 5.572
[rt-sc] 7 8.544922e-05 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 0.759 - 0.227
[00003.0] engine_dump_snapshot: Dumping snapshot at t=0.000000e+00
[00003.1] engine_print_stats: Saving statistics at t=0.000000e+00
# Step Time Scale-factor Redshift Time-step Time-bins Updates g-Updates s-Updates sink-Updates b-Updates Wall-clock time [ms] Props Dead time [ms]
0 0.000000e+00 1.0000000 0.0000000 0.000000e+00 1 56 18000 31000 13000 0 0 2941.254 281 120.261
[rt-sc] 0 9.765625e-05 1.000000 0.000000 1.220703e-05 43 46 18000 - - - - - - -
[rt-sc] 1 1.098633e-04 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 0.990 - 0.417
[rt-sc] 2 1.220703e-04 1.000000 0.000000 1.220703e-05 43 44 12685 - - - - 104.155 - 5.744
[rt-sc] 3 1.342773e-04 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 0.765 - 0.176
[rt-sc] 4 1.464844e-04 1.000000 0.000000 1.220703e-05 43 45 18000 - - - - 125.237 - 5.605
[rt-sc] 5 1.586914e-04 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 0.856 - 0.282
[rt-sc] 6 1.708984e-04 1.000000 0.000000 1.220703e-05 43 44 12685 - - - - 112.171 - 5.251
[rt-sc] 7 1.831055e-04 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 0.861 - 0.241
1 9.765625e-05 1.0000000 0.0000000 9.765625e-05 46 46 4 8 4 0 0 546.225 1 24.648
[rt-sc] 0 1.953125e-04 1.000000 0.000000 1.220703e-05 43 47 18000 - - - - - - -
[rt-sc] 1 2.075195e-04 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 0.842 - 0.212
[rt-sc] 2 2.197266e-04 1.000000 0.000000 1.220703e-05 43 44 12685 - - - - 126.674 - 6.295
[rt-sc] 3 2.319336e-04 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 0.797 - 0.289
[rt-sc] 4 2.441406e-04 1.000000 0.000000 1.220703e-05 43 45 18000 - - - - 142.086 - 5.511
[rt-sc] 5 2.563477e-04 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 0.919 - 0.196
[rt-sc] 6 2.685547e-04 1.000000 0.000000 1.220703e-05 43 44 12685 - - - - 131.550 - 5.896
[rt-sc] 7 2.807617e-04 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 0.809 - 0.186
2 1.953125e-04 1.0000000 0.0000000 9.765625e-05 46 47 27 43 16 0 0 558.226 0 27.711
[rt-sc] 0 2.929688e-04 1.000000 0.000000 1.220703e-05 43 46 18000 - - - - - - -
[rt-sc] 1 3.051758e-04 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 0.738 - 0.207
[rt-sc] 2 3.173828e-04 1.000000 0.000000 1.220703e-05 43 44 12685 - - - - 122.572 - 5.170
[rt-sc] 3 3.295898e-04 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 1.063 - 0.345
[rt-sc] 4 3.417969e-04 1.000000 0.000000 1.220703e-05 43 45 18000 - - - - 147.110 - 5.409
[rt-sc] 5 3.540039e-04 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 1.091 - 0.350
[rt-sc] 6 3.662109e-04 1.000000 0.000000 1.220703e-05 43 44 12685 - - - - 134.273 - 6.561
[rt-sc] 7 3.784180e-04 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 0.825 - 0.298
3 2.929688e-04 1.0000000 0.0000000 9.765625e-05 46 46 4 8 4 0 0 557.164 0 24.760
Here's what's going on here:
- All lines beginning with the prefix ``[rt-sc]`` are time step data of RT
sub-cycling steps, i.e. of sub-cycles.
- The sub-cycling index follows the prefix ``[rt-sc]``. For example, a line
beginning with ``[rt-sc] 2`` is the sub-cycle with index ``2`` of some time step.
- The "sub-cycle" with index ``0`` is the one performed alongside all other tasks
(e.g. hydro, gravity, stellar feedback, etc.) All other sub-cycle indices
indicate actual sub-cycles, i.e. actual intermediate steps where only radiative
transfer is being solved (or put differently: where only RT tasks are being launched).
- The sub-cycling lines are written to ``STDOUT`` *before* the line of the full
time step data. More precisely, in the above example, time step ``1`` with all its
RT sub-cycles is written to ``STDOUT`` as this block:
.. code-block::
# Step Time Scale-factor Redshift Time-step Time-bins Updates g-Updates s-Updates sink-Updates b-Updates Wall-clock time [ms] Props Dead time [ms]
[ ... some lines omitted ... ]
[rt-sc] 0 9.765625e-05 1.000000 0.000000 1.220703e-05 43 46 18000 - - - - - - -
[rt-sc] 1 1.098633e-04 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 0.990 - 0.417
[rt-sc] 2 1.220703e-04 1.000000 0.000000 1.220703e-05 43 44 12685 - - - - 104.155 - 5.744
[rt-sc] 3 1.342773e-04 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 0.765 - 0.176
[rt-sc] 4 1.464844e-04 1.000000 0.000000 1.220703e-05 43 45 18000 - - - - 125.237 - 5.605
[rt-sc] 5 1.586914e-04 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 0.856 - 0.282
[rt-sc] 6 1.708984e-04 1.000000 0.000000 1.220703e-05 43 44 12685 - - - - 112.171 - 5.251
[rt-sc] 7 1.831055e-04 1.000000 0.000000 1.220703e-05 43 43 11 - - - - 0.861 - 0.241
1 9.765625e-05 1.0000000 0.0000000 9.765625e-05 46 46 4 8 4 0 0 546.225 1 24.648
- Let's have a closer look at the written data:
- In step ``1``, 4 hydro particles, 8 gravity particles, and 4 star particles
were updated. (You can see that in the last line.)
- The integration of the full step was performed over a time step with
size ``9.765625e-05``. **That is valid for all physics except radiative
transfer.** The RT was integrated 8 times with a time step size of ``1.220703e-05``.
You can see this in the sub-cycling output lines.
  - Each RT sub-cycling line also tells you the minimal and maximal time bin
    that was worked on, as well as how many hydro particles underwent RT
    updates.
  - RT sub-cycles only ever update radiative transfer. There will never be any
    gravity, star, sink, or black hole particle updates during a sub-cycle.
- Since the sub-cycle with index ``0`` is performed alongside all other physics
during the main step, the isolated wall-clock time and dead time fields are
not available for it.
  - The wall-clock time and dead time fields in the full step line (the one starting
    *without* ``[rt-sc]``) include the data of the sub-cycles for this step as well
    (see the parsing sketch below).
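If you need to post-process this output, the sub-cycle lines can be handled in
the same way as the regular time step lines. Below is a minimal, hedged sketch
in Python (assuming the column layout shown above; the log file name
``timesteps.txt`` is hypothetical) that sums up the wall-clock and dead times
of the sub-cycles belonging to each full step:

.. code-block:: python

    # Minimal sketch: sum the wall-clock and dead times of the RT sub-cycles
    # that belong to each full time step. Assumes the column layout shown
    # above; the log file name is hypothetical.

    def to_ms(token):
        """Convert a timing column to float; sub-cycle 0 uses '-' as a placeholder."""
        return 0.0 if token == "-" else float(token)

    subcycle_wallclock_ms = 0.0
    subcycle_dead_ms = 0.0

    with open("timesteps.txt") as logfile:
        for line in logfile:
            tokens = line.split()
            if not tokens or tokens[0].startswith("#"):
                continue
            if tokens[0] == "[rt-sc]":
                # Wall-clock time is the third-to-last column, dead time the last.
                subcycle_wallclock_ms += to_ms(tokens[-3])
                subcycle_dead_ms += to_ms(tokens[-1])
            else:
                # Full step line: its sub-cycles were printed just above it.
                step = int(tokens[0])
                print(f"step {step}: sub-cycles took {subcycle_wallclock_ms:.3f} ms "
                      f"wall-clock ({subcycle_dead_ms:.3f} ms dead time)")
                subcycle_wallclock_ms = 0.0
                subcycle_dead_ms = 0.0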
.. RT Subcycling
Mladen Ivkovic 07.2022
.. _rt_subcycling:
RT Subcycling
-------------
.. warning::
   The radiative transfer schemes are still in development and are not usable
   at this moment. This page is currently a placeholder to document new
   features and requirements as the code grows.
SWIFT allows the radiative transfer steps (both photon propagation and
thermochemistry) to be sub-cycled with respect to the hydrodynamics time
steps. In other words, you can tell SWIFT to run up to X radiative transfer
steps during a single hydrodynamics step for all particles in the simulation.
The aim is to avoid wasting time on unnecessary hydrodynamics updates, since
hydrodynamics typically allows for much larger time steps than radiation due
to the propagation speed of the respective advected quantity.
You will need to provide an upper limit on how many RT subcycles per hydro
step you want to allow. That is governed by the
.. code:: yaml
TimeIntegration:
max_nr_rt_subcycles: 128 # maximal number of RT subcycles per hydro step
parameter, which is mandatory for any RT runs. To turn off subcycling and
couple the radiative transfer and the hydrodynamics time steps one-to-one,
set this parameter to either 0 or 1.
Due to the discretization of individual particle time steps in time bins
with a factor of 2 difference in time step size from a lower to a higher
time bin, the ``max_nr_rt_subcycles`` parameter itself is required to be
a power of 2 as well.
Note that this parameter sets an upper limit on the number of subcycles per
hydro step. If the ratio of the hydro to the RT time step is greater than what
``max_nr_rt_subcycles`` allows for, the hydro time step is reduced to fit the
maximal threshold. If it is smaller, the particle simply does fewer subcycles.
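To illustrate the logic described above (this is a sketch for illustration
only, not the actual SWIFT implementation), the relation between the hydro
time step, the RT time step, and ``max_nr_rt_subcycles`` can be written as:

.. code-block:: python

    # Illustrative sketch only. Time steps live in bins that differ by factors
    # of two, so the number of sub-cycles per hydro step is itself a power of
    # two, capped by max_nr_rt_subcycles.

    def rt_subcycling(dt_hydro, dt_rt, max_nr_rt_subcycles):
        """Return the (possibly reduced) hydro time step and the number of RT sub-cycles."""
        if max_nr_rt_subcycles <= 1:
            # Sub-cycling disabled: hydro and RT time steps are coupled one-to-one.
            return dt_rt, 1

        ratio = int(round(dt_hydro / dt_rt))  # a power of two by construction
        if ratio > max_nr_rt_subcycles:
            # Reduce the hydro step so that the cap on the number of sub-cycles holds.
            dt_hydro = dt_rt * max_nr_rt_subcycles
            ratio = max_nr_rt_subcycles

        return dt_hydro, ratio

    # Example matching the log shown earlier: a hydro step of 9.765625e-05 and
    # an RT step of 1.220703e-05 give 8 sub-cycles, well below a cap of 128.
    print(rt_subcycling(9.765625e-05, 1.220703e-05, 128))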
......@@ -21,9 +21,8 @@ schemes.
:maxdepth: 2
:caption: Contents:
requirements
RT_general
GEAR_RT
SPHM1_RT
RT_subcycling
RT_notes_for_developers
additional_tools
.. Radiative Transfer Scheme Requirements
Mladen Ivkovic 05.2021
.. _rt_requirements:
Requirements
------------
.. warning::
   The radiative transfer schemes are still in development and are not usable
   at this moment. This page is currently a placeholder to document new
   features and requirements as the code grows.
To be able to run with radiative transfer, you'll need to run swift with the
following flags:
.. code::
swift --radiation --hydro --feedback --stars --self-gravity [and/or] --external-gravity
Some notes:
- The radiation data is coupled to the gas data, so you can't run without
``--hydro``. (Also the whole point of these schemes is the interaction between
gas and radiation, so why would you want to?)
- Currently the only sources of radiation are stars, so you need ``--stars``.
  If you want to run without radiative sources, still run the code with
  ``--stars``, even if you don't have any stars in your initial conditions.
- Running with ``--stars`` requires some form of gravity, be it self-gravity or
external gravity. Since we need stars, we inherit this requirement. If you want
no gravity, run with ``--external-gravity`` and set the external potential to
zero.
- We need ``--feedback`` in order to have meaningful smoothing lengths for
  stars. However, you don't need any specific feedback model; it'll work even
  if you configured ``--with-feedback=none``.
......@@ -249,6 +249,12 @@ obtained by running SWIFT with the ``-o`` runtime option (See
:ref:`Output_selection_label` for details). Each field contains a short
description attribute giving a brief summary of what the quantity represents.
Note that the HDF5 names of some fields differ between snapshots and the
GADGET-2 format used for initial condition files (see
:ref:`Initial_Conditions_label`): the initial conditions mix singular and
plural names, whereas in snapshot files all names are plural by default
(e.g. ``InternalEnergies`` in snapshots versus ``InternalEnergy`` in initial
conditions).
All the individual arrays created by SWIFT have had the Fletcher 32 check-sum
filter applied by the HDF5 library when writing them. This means that any
eventual data corruption on the disks will be detected and reported by the
......@@ -267,7 +273,9 @@ part designed for users to directly read and in part for machine
reading of the information. Each field contains the exponent of the
scale-factor, reduced Hubble constant [#f2]_ and each of the 5 base units
that is required to convert the field values to physical CGS
units. The base assumption is that all fields are written in the
co-moving frame (see below for exceptions).
These fields are:
+----------------------+---------------------------------------+
| Meta-data field name | Description |
......@@ -324,6 +332,12 @@ case of the densities and assuming the usual system of units
In the case of a non-cosmological simulation, these two expressions
are identical since :math:`a=1`.
In some special cases, the fields cannot be meaningfully expressed as
co-moving quantities. In these exceptional circumstances, we set the
value of the attribute ``Value stored as physical`` to ``1`` and additionally
set the attribute ``Property can be converted to comoving`` to ``0``.
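As an illustration, a snapshot field can be converted to physical CGS units
along the following lines. This is a hedged sketch: the exact attribute and
dataset names used here (the ``Cosmology`` group, ``a-scale exponent``,
``h-scale exponent``, the CGS conversion factor) as well as the file name are
assumptions and should be checked against your own snapshots with ``h5ls`` or
``h5dump``.

.. code-block:: python

    # Hedged sketch: convert a (by default co-moving) snapshot field to
    # physical CGS units by applying the CGS conversion factor and the powers
    # of the scale-factor a and reduced Hubble constant h described above.
    # Attribute/dataset names and the file name are assumptions.
    import h5py

    with h5py.File("output_0000.hdf5", "r") as snap:
        a = snap["Cosmology"].attrs["Scale-factor"][0]  # assumed attribute name
        h = snap["Cosmology"].attrs["h"][0]             # assumed attribute name

        field = snap["PartType0/Densities"]             # any particle field
        cgs = field.attrs[
            "Conversion factor to CGS (not including cosmological corrections)"][0]
        a_exp = field.attrs["a-scale exponent"][0]      # assumed attribute name
        h_exp = field.attrs["h-scale exponent"][0]      # assumed attribute name

        # Physical CGS values of the field.
        densities_cgs = field[...] * cgs * a**a_exp * h**h_exp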
Particle splitting metadata
---------------------------
......@@ -357,12 +371,16 @@ the combination of ``ProgenitorID`` and this binary tree corresponds to a
fully traceable, unique, identifier for every particle in the simulation volume.
Note that we can only track 64 splitting events for a given particle, and after
this the binary tree is meaningless. In practice, however, such a high number of
splitting events is extremely unlikely to occur. The logging of extra splits can
optionally be activated. When particles reach 64 splits, their tree information
is reset but the status prior to the reset is stored in a log file allowing for
the reconstruction of the full history even in the cases where the maximum is
reached.
An example is provided in ``examples/SubgridTests/ParticleSplitting``, with a
figure showing how one particle is split (eventually) into 16 descendants that
makes use of this metadata.
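As a hedged illustration of how this meta-data can be used, the sketch below
combines the progenitor ID and the binary split tree into a unique identifier
for every gas particle. The dataset names (``ProgenitorIDs``, ``SplitTrees``,
``SplitCounts``) and the file name are assumptions and should be checked
against your own snapshots.

.. code-block:: python

    # Hedged sketch: the (progenitor ID, split tree) pair uniquely identifies
    # each particle, as described above. Dataset names and the file name are
    # assumptions.
    import h5py

    with h5py.File("output_0000.hdf5", "r") as snap:
        gas = snap["PartType0"]
        progenitors = gas["ProgenitorIDs"][...]  # ID before any splitting
        trees = gas["SplitTrees"][...]           # binary tree of split decisions
        counts = gas["SplitCounts"][...]         # number of splits undergone

    n_split = int((counts > 0).sum())
    unique_ids = set(zip(progenitors.tolist(), trees.tolist()))
    print(f"{n_split} particles have been split at least once")
    print(f"{len(unique_ids)} unique identifiers for {len(progenitors)} particles")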
Quick access to particles via hash-tables
-----------------------------------------
......
......@@ -44,3 +44,4 @@ A full list of all relevant parameters of the model is in ``params.rst``. We als
numerics
params
output
variable_heating_temperatures
......@@ -10,22 +10,41 @@ In order to launch jets, we introduce a jet reservoir that functions identically
These kicks are handed out to particles in a symmetric way with respect to the spin vector of the BH. :math:`N_\mathrm{j}/2` particles are kicked from the 'upper' hemisphere relative to the spin vector, and the other half from the lower hemisphere. The particles to be kicked can be any in the smoothing kernel. We include four different choices for which particles are kicked: 1) the closest to the BH, 2) the farthest from the BH, 3) the ones of minimal density, and 4) the ones closest to the spin axis, in terms of angular distance. Note that these sortings are done for each hemisphere separately.
The particles chosen are always given velocities based on the same algorithm, regardless of their positions in the kernel. We perform the actual kicks in the following way. Velocity kicks are chosen at random from a cone around the current spin vector with a (half-)opening angle of :math:`\theta_\mathrm{j}`. In particular, we first choose the kick direction around the z-axis as :math:`\hat{v}_\mathrm{kick}=(\sin\theta\cos\phi,\hspace{0.3mm}\sin\theta\sin\phi,\hspace{0.3mm}\cos \theta)`. Here, :math:`\cos\theta` is chosen uniformly from the interval :math:`[\cos\theta_\mathrm{j},1]`, and :math:`\sin\theta=\sqrt{1-\cos^2\theta}`. :math:`\phi` is chosen uniformly from :math:`[0,2\pi]`. This random vector, now representing a random kick within a cone around the z-axis, is rotated into the frame of the spin vector so that the cone points in the right direction. For particles being kicked from the 'negative' side of the BH hemisphere, the final kick vector is simply multiplied by :math:`-1`.

We increase the particle's velocity in the chosen kick direction by an amount :math:`\vert \Delta \vec{v} \vert` such that its energy increases by :math:`(1/2)m_\mathrm{gas}v_\mathrm{jet}^2`. For this reason, :math:`\vert \Delta \vec{v} \vert< v_\mathrm{jet}` generally holds. We calculate :math:`\vert \Delta \vec{v} \vert` from the equation

.. math::
    (\vec{v}_0+\Delta\vec{v})^2 = \vert \vec{v}_0\vert ^2 + v_\mathrm{j}^2,

which follows from conservation of energy. This vector equation can be solved to yield the necessary magnitude of the velocity increase that should be applied (in the chosen kick direction):

.. math::
    \vert \Delta\vec{v}\vert = \sqrt{v_\mathrm{j}^2 + v_\mathrm{0,j}^2} - v_\mathrm{0,j},

where :math:`v_\mathrm{0,j} = \vec{v}_0\cdot\hat{v}_\mathrm{kick}` is the component of the initial velocity along the chosen kick direction.
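The sketch below (an illustration in Python, not the SWIFT implementation)
shows one way of realising this procedure: a random direction is drawn inside
the cone around the spin axis, and the velocity increase follows the
energy-conserving formula above.

.. code-block:: python

    # Illustrative sketch of the jet kick described above: draw a random
    # direction inside a cone of half-opening angle theta_j around the spin
    # axis and raise the particle's kinetic energy by (1/2) m v_jet^2.
    import numpy as np

    def jet_kick(v0, spin_axis, v_jet, theta_j, rng):
        """Return the updated velocity of a kicked particle."""
        # Random unit vector inside a cone around the z-axis.
        cos_t = rng.uniform(np.cos(theta_j), 1.0)
        sin_t = np.sqrt(1.0 - cos_t**2)
        phi = rng.uniform(0.0, 2.0 * np.pi)
        kick = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

        # Rotate the cone from the z-axis onto the spin axis (Rodrigues' formula).
        z = np.array([0.0, 0.0, 1.0])
        s = spin_axis / np.linalg.norm(spin_axis)
        v = np.cross(z, s)
        c = np.dot(z, s)
        if np.linalg.norm(v) > 1e-12:
            vx = np.array([[0.0, -v[2], v[1]],
                           [v[2], 0.0, -v[0]],
                           [-v[1], v[0], 0.0]])
            R = np.eye(3) + vx + vx @ vx / (1.0 + c)
            kick = R @ kick
        elif c < 0.0:
            kick = -kick  # spin axis anti-parallel to z

        # Energy-conserving magnitude: |dv| = sqrt(v_jet^2 + v0j^2) - v0j,
        # with v0j the component of v0 along the kick direction.
        v0j = np.dot(v0, kick)
        dv = np.sqrt(v_jet**2 + v0j**2) - v0j
        return v0 + dv * kick

    # Example usage (arbitrary numbers, code units):
    rng = np.random.default_rng(42)
    v_new = jet_kick(np.array([10.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                     1.0e4, np.deg2rad(7.5), rng)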
Black hole time steps
---------------------
Given the changes to the BH model in the form of AGN jets and BH spin evolution, a few additional time-step criteria need to be implemented. The minimum of these time-steps is taken to actually evolve the BH, alongside the other time-steps already used for the BH in the code. We introduce a jet-related time-step that is given by:
.. math::
\Delta t_\mathrm{jet}=\frac{\Delta E_\mathrm{jet}}{P_\mathrm{jet}}.
This time-step ensures that the BH is woken up by the time it needs to 'hand out' a pair of kicks. In the above equation, :math:`P_\mathrm{jet}` is the current, instantaneous jet power, while :math:`\Delta E_\mathrm{jet}=2\times m_\mathrm{ngb}v_\mathrm{jet}^2` is the energy to be handed out to a pair of particles, with :math:`m_\mathrm{ngb}` the average gas particle mass in the BH's kernel, and :math:`v_\mathrm{jet}` the target jet velocity.
We also introduce two time-steps related to the angular momentum of the BH. The first of these ensures that the magnitude of spin does not change too much over a single time-step, and it is given by
.. math::
\Delta t_\mathrm{a}=0.1\frac{\vert a\vert M_\mathrm{BH}}{s \dot{M}_\mathrm{BH,0}},
where :math:`s` is the spinup/spindown function. The numerical factor :math:`0.1` quantifies how finely we want to evolve spin; it ensures that the value of spin changes no more than :math:`10` per cent (relative to the current value) over the next time-step.
We also introduce a time-step related to the redirection of the spin vector. Since the spin vector may be redirected very quickly relative to its magnitude (due to LT torques), this criterion is separate from the one mentioned above. This time-step is given by
.. math::
\Delta t_\mathrm{a}=0.1\frac{M_\mathrm{warp}J_\mathrm{BH}}{\dot{M}_\mathrm{BH,0}J_\mathrm{warp}\sin\theta},
where :math:`\theta` is the angle between the current BH spin vector and the angular momentum of gas in the accretion disc on large scales. The numerical prefactor is again present to ensure a fine enough evolution of the spin vector direction. In particular, in the case that the spin vector and the gas angular momentum are perpendicular (:math:`\sin\theta=1`), this criterion leads to a change of no more than :math:`\approx5^\circ` in the spin vector direction per time-step.
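For clarity, the three additional time-step criteria introduced above can be
summarised as in the following illustrative sketch (variable names are
hypothetical, and the actual BH time-step also includes the other criteria
already present in the code):

.. code-block:: python

    # Illustrative sketch of the additional BH time-step criteria described
    # above; all quantities are assumed to be in consistent internal units.

    def jet_and_spin_timestep(P_jet, m_ngb, v_jet,
                              a, M_BH, s, Mdot_BH0,
                              M_warp, J_BH, J_warp, sin_theta):
        """Return the most restrictive of the jet and spin-evolution time-steps."""
        # Time until the jet reservoir has grown enough to kick a pair of particles.
        delta_E_jet = 2.0 * m_ngb * v_jet**2
        dt_jet = delta_E_jet / P_jet

        # Limit the relative change of the spin magnitude to 10 per cent;
        # |s| keeps the time-step positive for spin-down as well.
        dt_spin_magnitude = 0.1 * abs(a) * M_BH / (abs(s) * Mdot_BH0)

        # Limit the change of the spin direction (important for warped thin discs).
        dt_spin_direction = 0.1 * M_warp * J_BH / (Mdot_BH0 * J_warp * sin_theta)

        return min(dt_jet, dt_spin_magnitude, dt_spin_direction)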