Compare revisions

Changes are shown as if the source revision was being merged into the target revision.
Commits on Source (500)
Showing with 1446 additions and 556 deletions
@@ -39,12 +39,14 @@ examples/*/*/*.rst
examples/*/*/*.hdf5
examples/*/*/*.csv
examples/*/*/*.dot
examples/*/*/cell_hierarchy.html
examples/**/cell_hierarchy.html
examples/*/*/energy.txt
examples/*/*/task_level.txt
examples/**/task_level.txt
examples/*/*/timesteps_*.txt
examples/*/*/SFR.txt
examples/*/*/partition_fixed_costs.h
examples/**/timesteps.txt
examples/**/SFR.txt
examples/**/statistics.txt
examples/**/partition_fixed_costs.h
examples/*/*/memuse_report-step*.dat
examples/*/*/memuse_report-step*.log
examples/*/*/restart/*
@@ -56,7 +58,6 @@ examples/*/*/used_parameters.yml
examples/*/*/unused_parameters.yml
examples/*/*/fof_used_parameters.yml
examples/*/*/fof_unused_parameters.yml
examples/*/*/partition_fixed_costs.h
examples/*/*.mpg
examples/*/*/gravity_checks_*.dat
examples/*/*/coolingtables.tar.gz
@@ -65,6 +66,9 @@ examples/*/*/yieldtables.tar.gz
examples/*/*/yieldtables
examples/*/*/photometry.tar.gz
examples/*/*/photometry
examples/*/*/plots
examples/*/*/snapshots
examples/*/*/restart
examples/Cooling/CoolingRates/cooling_rates
examples/Cooling/CoolingRates/cooling_element_*.dat
examples/Cooling/CoolingRates/cooling_output.dat
@@ -73,16 +77,15 @@ examples/SubgridTests/CosmologicalStellarEvolution/StellarEvolutionSolution*
examples/SmallCosmoVolume/SmallCosmoVolume_DM/power_spectra
examples/SmallCosmoVolume/SmallCosmoVolume_cooling/snapshots/
examples/SmallCosmoVolume/SmallCosmoVolume_hydro/snapshots/
examples/SmallCosmoVolume/SmallCosmoVolume_cooling/CloudyData_UVB=HM2012.h5
examples/SmallCosmoVolume/SmallCosmoVolume_cooling/CloudyData_UVB=HM2012_shielded.h5
examples/GEAR/AgoraDisk/CloudyData_UVB=HM2012.h5
examples/GEAR/AgoraDisk/CloudyData_UVB=HM2012_shielded.h5
examples/GEAR/AgoraDisk/chemistry-AGB+OMgSFeZnSrYBaEu-16072013.h5
examples/GEAR/AgoraCosmo/CloudyData_UVB=HM2012_shielded.h5
examples/GEAR/AgoraCosmo/POPIIsw.h5
examples/GEAR/ZoomIn/CloudyData_UVB=HM2012.h5
examples/GEAR/ZoomIn/POPIIsw.h5
examples/GEAR/ZoomIn/snap/
examples/**/CloudyData_UVB=HM2012.h5
examples/**/CloudyData_UVB=HM2012_shielded.h5
examples/**/CloudyData_UVB=HM2012_high_density.h5
examples/**/chemistry-AGB+OMgSFeZnSrYBaEu-16072013.h5
examples/**/POPIIsw.h5
examples/**/GRACKLE_INFO
examples/**/snap/
examples/SinkParticles/HomogeneousBox/snapshot_0003restart.hdf5
tests/testActivePair
tests/testActivePair.sh
@@ -222,6 +225,7 @@ theory/Multipoles/mac_potential.pdf
theory/Cosmology/cosmology.pdf
theory/Cooling/eagle_cooling.pdf
theory/Gizmo/gizmo-implementation-details/gizmo-implementation-details.pdf
theory/RadiativeTransfer/GEARRT/GEARRT.pdf
m4/libtool.m4
m4/ltoptions.m4
@@ -13,7 +13,7 @@ Josh Borrow joshua.borrow@durham.ac.uk
Loic Hausammann loic.hausammann@epfl.ch
Yves Revaz yves.revaz@epfl.ch
Jacob Kegerreis jacob.kegerreis@durham.ac.uk
Mladen Ivkovic mladen.ivkovic@epfl.ch
Mladen Ivkovic mladen.ivkovic@durham.ac.uk
Stuart McAlpine stuart.mcalpine@helsinki.fi
Folkert Nobels nobels@strw.leidenuniv.nl
John Helly j.c.helly@durham.ac.uk
@@ -25,4 +25,10 @@ Sylvia Ploeckinger ploeckinger@lorentz.leidenuniv.nl
Willem Elbers willem.h.elbers@durham.ac.uk
TK Chan chantsangkeung@gmail.com
Marcel van Daalen daalen@strw.leidenuniv.nl
Filip Husko filip.husko@durham.ac.uk
\ No newline at end of file
Filip Husko filip.husko@durham.ac.uk
Orestis Karapiperis karapiperis@lorentz.leidenuniv.nl
Stan Verhoeve s06verhoeve@gmail.com
Nikyta Shchutskyi shchutskyi@lorentz.leidenuniv.nl
Will Roper w.roper@sussex.ac.uk
Darwin Roduit darwin.roduit@alumni.epfl.ch
Jonathan Davies j.j.davies@ljmu.ac.uk
The SWIFT source code uses a variation of the 'Google' formatting style.
The script 'format.sh' in the root directory applies the clang-format-13
The script 'format.sh' in the root directory applies the clang-format-18
tool with our style choices to all the SWIFT C source files. Please apply
the formatting script to the files before submitting a merge request.
@@ -99,7 +99,7 @@ before you can build it.
- HDF5:
A HDF5 library (v. 1.8.x or higher) is required to read and
An HDF5 library (v. 1.10.x or higher) is required to read and
write particle data. One of the commands "h5cc" or "h5pcc"
should be available. If "h5pcc" is located then a parallel
HDF5 built for the version of MPI located should be
@@ -191,7 +191,7 @@ before you can build it.
==================
The SWIFT source code uses a variation of 'Google' style. The script
'format.sh' in the root directory applies the clang-format-13 tool with our
'format.sh' in the root directory applies the clang-format-18 tool with our
style choices to all the SWIFT C source files. Please apply the formatting
script to the files before submitting a merge request.
@@ -37,15 +37,15 @@ MYFLAGS =
# Add the source directory and the non-standard paths to the included library headers to CFLAGS
AM_CFLAGS = -I$(top_srcdir)/src -I$(top_srcdir)/argparse $(HDF5_CPPFLAGS) \
$(GSL_INCS) $(FFTW_INCS) $(NUMA_INCS) $(GRACKLE_INCS) $(OPENMP_CFLAGS) \
$(CHEALPIX_CFLAGS)
$(GSL_INCS) $(FFTW_INCS) $(NUMA_INCS) $(GRACKLE_INCS) \
$(CHEALPIX_CFLAGS) $(LUSTREAPI_CFLAGS)
AM_LDFLAGS = $(HDF5_LDFLAGS)
# Extra libraries.
EXTRA_LIBS = $(GSL_LIBS) $(HDF5_LIBS) $(FFTW_LIBS) $(NUMA_LIBS) $(PROFILER_LIBS) \
$(TCMALLOC_LIBS) $(JEMALLOC_LIBS) $(TBBMALLOC_LIBS) $(GRACKLE_LIBS) \
$(CHEALPIX_LIBS)
$(CHEALPIX_LIBS) $(LUSTREAPI_LIBS)
# MPI libraries.
MPI_LIBS = $(PARMETIS_LIBS) $(METIS_LIBS) $(MPI_THREAD_LIBS) $(FFTW_MPI_LIBS)
@@ -6,7 +6,7 @@
/____/ |__/|__/___/_/ /_/
SPH With Inter-dependent Fine-grained Tasking
Version : 1.0.0
Version : 2025.04
Website: www.swiftsim.com
Twitter: @SwiftSimulation
@@ -18,7 +18,7 @@ More general information about SWIFT is available on the project
[webpages](http://www.swiftsim.com).
For information on how to _run_ SWIFT, please consult the onboarding guide
available [here](http://www.swiftsim.com/onboarding.pdf). This includes
available [here](https://swift.strw.leidenuniv.nl/onboarding.pdf). This includes
dependencies, and a few examples to get you going.
We suggest that you use the latest release branch of SWIFT, rather than the
@@ -55,7 +55,7 @@ experimentation with various values is highly encouraged. Each problem will
likely require different values and the sensitivity to the details of the
physical model is something left to the users to explore.
Acknowledgment & Citation
Acknowledgement & Citation
-------------------------
The SWIFT code was last described in this paper:
@@ -66,7 +66,7 @@ their results.
In order to keep track of usage and measure the impact of the software, we
kindly ask users publishing scientific results using SWIFT to add the following
sentence to the acknowledgment section of their papers:
sentence to the acknowledgement section of their papers:
"The research in this paper made use of the SWIFT open-source
simulation code (http://www.swiftsim.com, Schaller et al. 2018)
@@ -81,7 +81,7 @@ Contribution Guidelines
-----------------------
The SWIFT source code uses a variation of the 'Google' formatting style.
The script 'format.sh' in the root directory applies the clang-format-10
The script 'format.sh' in the root directory applies the clang-format-18
tool with our style choices to all the SWIFT C source files. Please apply
the formatting script to the files before submitting a pull request.
@@ -106,7 +106,7 @@ Runtime parameters
/____/ |__/|__/___/_/ /_/
SPH With Inter-dependent Fine-grained Tasking
Version : 1.0.0
Version : 2025.04
Website: www.swiftsim.com
Twitter: @SwiftSimulation
@@ -105,20 +105,13 @@ int argparse_help_cb(struct argparse *self,
const struct argparse_option *option);
// built-in option macros
#define OPT_END() \
{ ARGPARSE_OPT_END, 0, NULL, NULL, 0, NULL, 0, 0 }
#define OPT_BOOLEAN(...) \
{ ARGPARSE_OPT_BOOLEAN, __VA_ARGS__ }
#define OPT_BIT(...) \
{ ARGPARSE_OPT_BIT, __VA_ARGS__ }
#define OPT_INTEGER(...) \
{ ARGPARSE_OPT_INTEGER, __VA_ARGS__ }
#define OPT_FLOAT(...) \
{ ARGPARSE_OPT_FLOAT, __VA_ARGS__ }
#define OPT_STRING(...) \
{ ARGPARSE_OPT_STRING, __VA_ARGS__ }
#define OPT_GROUP(h) \
{ ARGPARSE_OPT_GROUP, 0, NULL, NULL, h, NULL, 0, 0 }
#define OPT_END() {ARGPARSE_OPT_END, 0, NULL, NULL, 0, NULL, 0, 0}
#define OPT_BOOLEAN(...) {ARGPARSE_OPT_BOOLEAN, __VA_ARGS__}
#define OPT_BIT(...) {ARGPARSE_OPT_BIT, __VA_ARGS__}
#define OPT_INTEGER(...) {ARGPARSE_OPT_INTEGER, __VA_ARGS__}
#define OPT_FLOAT(...) {ARGPARSE_OPT_FLOAT, __VA_ARGS__}
#define OPT_STRING(...) {ARGPARSE_OPT_STRING, __VA_ARGS__}
#define OPT_GROUP(h) {ARGPARSE_OPT_GROUP, 0, NULL, NULL, h, NULL, 0, 0}
#define OPT_HELP() \
OPT_BOOLEAN('h', "help", NULL, "show this help message and exit", \
argparse_help_cb, 0, 0)
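/* Editor's sketch (not part of this diff): the OPT_* macros above
 * initialise `struct argparse_option` entries, assuming the argparse API
 * vendored by SWIFT. A typical options table then looks like this: */
static int verbose = 0;
static struct argparse_option options[] = {
    OPT_HELP(),  /* the built-in -h/--help boolean option */
    OPT_BOOLEAN('v', "verbose", &verbose, "be verbose", NULL, 0, 0),
    OPT_END(),   /* terminates the table */
};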
This diff is collapsed.
This diff is collapsed.
@@ -24,14 +24,14 @@ While the initial graph is showing all the tasks/dependencies, the next ones are
Task dependencies for a single cell
-----------------------------------
There is an option to additionally write the dependency graphs of the task dependencies for a single cell.
You can select which cell to write using the ``Scheduler:dependency_graph_cell: cellID`` parameter, where ``cellID`` is the cell ID of type long long.
This feature will create an individual file for each step specified by the ``Scheduler:dependency_graph_frequency`` and, unlike the full task graph, an individual file for each MPI rank that has this cell.
Using this feature has several requirements:
- You need to compile SWIFT including either ``--enable-debugging-checks`` or ``--enable-cell-graph``. Otherwise, cells won't have IDs.
- There is a limit on how many cell IDs SWIFT can handle while enforcing them to be reproduceably unique. That limit is up to 32 top level cells in any dimension, and up to 16 levels of depth. If any of these thresholds are exceeded, the cells will still have unique cell IDs, but the actual IDs will most likely vary between any two runs.
- There is a limit on how many cell IDs SWIFT can handle while enforcing them to be reproducibly unique. That limit is up to 32 top level cells in any dimension, and up to 16 levels of depth. If any of these thresholds are exceeded, the cells will still have unique cell IDs, but the actual IDs will most likely vary between any two runs.
To plot the task dependencies, you can use the same script as before: ``tools/plot_task_dependencies.py``. The dependency graph now may have some tasks with a pink-ish background colour: These tasks represent dependencies that are unlocked by some other task which is executed for the requested cell, but the cell itself doesn't have an (active) task of that type itself in that given step.
@@ -39,15 +39,15 @@ To plot the task dependencies, you can use the same script as before: ``tools/pl
Task levels
-----------------
At the beginning of each simulation the file ``task_level_0.txt`` is generated.
It contains the counts of all tasks at all levels (depths) in the tree.
The depths and counts of the tasks can be plotted with the script ``tools/plot_task_levels.py``.
It will display the individual tasks on the x-axis, the number of each task at a given level on the y-axis, and the level is shown as the colour of the plotted point.
Additionally, the script can write out in brackets next to each tasks's name on the x-axis on how many different levels the task exists using the ``--count`` flag.
Additionally, the script can write out in brackets next to each task's name on the x-axis on how many different levels the task exists using the ``--count`` flag.
Finally, in some cases the counts for different levels of a task may be very close to each other and overlap on the plot, making them barely visible.
This can be alleviated by using the ``--displace`` flag:
It will displace the plot points w.r.t. the y-axis in an attempt to make them more visible; however, the counts won't be exact in that case.
If you wish to have more task level plots, you can use the parameter ``Scheduler:task_level_output_frequency``.
It defines how many steps are done in between two task level output dumps.
@@ -139,9 +139,9 @@ Each line of the logs contains the following information:
activation: 1 if record for the start of a request, 0 if request completion
tag: MPI tag of the request
size: size, in bytes, of the request
sum: sum, in bytes, of all requests that are currently not logged as complete
The stic values should be synchronized between ranks as all ranks have a
The stic values should be synchronised between ranks as all ranks have a
barrier in place to make sure they start the step together, so should be
suitable for matching between ranks. The unique keys to associate records
between ranks (so that the MPI_Isend and MPI_Irecv pairs can be identified)
@@ -161,27 +161,27 @@ on which the additional task data will be dumped. Swift will then create ``threa
and ``thread_info-step<nr>.dat`` files. Similarly, for threadpool related tools, you need to compile
swift with ``--enable-threadpool-debugging`` and then run it with ``-Y <interval>``.
For the analysis and plotting scripts listed below, you need to provide the **\*info-step<nr>.dat**
files as a cmdline argument, not the ``*stats-step<nr>.dat`` files.
A short summary of the scripts in ``tools/task_plots/``:
- ``analyse_tasks.py``:
The output is an analysis of the task timings, including deadtime per thread
and step, total amount of time spent for each task type, for the whole step
and per thread and the minimum and maximum times spent per task type.
- ``analyse_threadpool_tasks.py``:
The output is an analysis of the threadpool task timings, including
deadtime per thread and step, total amount of time spent for each task type, for the
whole step and per thread and the minimum and maximum times spent per task type.
- ``iplot_tasks.py``:
An interactive task plot, showing what thread was doing what task and for
how long for a step. **Needs python2 and the tkinter module**.
- ``plot_tasks.py``:
Creates a task plot image, showing what thread was doing what task and for how long.
- ``plot_threadpool.py``:
- ``iplot_tasks.py``:
An interactive task plot, showing what thread was doing what task and for
how long for a step. **Needs the tkinter module**.
- ``plot_tasks.py``:
Creates a task plot image, showing what thread was doing what task and for how long.
- ``plot_threadpool.py``:
Creates a threadpool plot image, showing what thread was doing what threadpool call and for
how long.
For more details on the scripts as well as further options, look at the documentation at the top
@@ -189,7 +189,7 @@ of the individual scripts and call them with the ``-h`` flag.
Task data is also dumped when using MPI and the scripts above can be used on
that as well; some offer the ability to process all ranks, and others to
select individual ranks.
It is also possible to process a complete run of task data from all the
available steps using the ``process_plot_tasks.py`` and
@@ -205,6 +205,8 @@ by using the size of the task data files to schedule parallel processes more
effectively (the ``--weights`` argument).
.. _dumperThread:
Live internal inspection using the dumper thread
------------------------------------------------
@@ -236,49 +238,81 @@ than once. For a non-MPI run the file is simply called ``.dump``, note for MPI
you need to create one file per rank, so ``.dump.0``, ``.dump.1`` and so on.
Deadlock Detector
---------------------------
When configured with ``--enable-debugging-checks``, the parameter
.. code-block:: yaml
Scheduler:
deadlock_waiting_time_s: 300.
can be specified. It defines the time (in seconds) the scheduler should wait
for a new task to be executed during a simulation step (specifically: during a
call to ``engine_launch()``). After this time passes without any new tasks being
run, the scheduler assumes that the code has deadlocked. It then dumps the same
diagnostic data as :ref:`the dumper thread <dumperThread>` (active tasks, queued
tasks, and memuse/MPIuse reports, if SWIFT was configured with the corresponding
flags) and aborts.
A value of zero or a negative value for ``deadlock_waiting_time_s`` disables the
deadlock detector.
You are well advised to err on the high side when choosing the
``deadlock_waiting_time_s`` parameter. A value of the order of
several (tens of) minutes is recommended. Too small a value might cause your run to
erroneously crash and burn despite not really being deadlocked, just slow or
badly balanced.
Neighbour search statistics
---------------------------
One of the core algorithms in SWIFT is an iterative neighbour search
whereby we try to find an appropriate radius around a particle's
position so that the weighted sum over neighbouring particles within
that radius is equal to some target value. The most obvious example of
this iterative neighbour search is the SPH density loop, but various
sub-grid models employ a very similar iterative neighbour search. The
computational cost of this iterative search is significantly affected by
the number of iterations that is required, and it can therefore be
useful to analyse the progression of the iterative scheme in detail.
When configured with ``--enable-ghost-statistics=X``, SWIFT will be
compiled with additional diagnostics that statistically track the number
of iterations required to find a converged answer. Here, ``X`` is a
fixed number of bins to use to collect the required statistics
(``ghost`` refers to the fact that the iterations take place inside the
ghost tasks). In practice, this means that every cell in the SWIFT tree
will be equipped with an additional ``struct`` containing three sets of
``X`` bins (one set for each iterative neighbour loop: hydro, stellar
feedback, AGN feedback). For each bin ``i``, we store the number of
particles that required updating during iteration ``i``, the number of
particles that could not find a single neighbouring particle, the
minimum and maximum smoothing length of all particles that required
updating, and the sum of all their search radii and all their search
radii squared. This allows us to calculate the upper and lower limits,
as well as the mean and standard deviation on the search radius for each
iteration and for each cell. Note that there could be more iterations
required than the number of bins ``X``; in this case the additional
iterations will be accumulated in the final bin. At the end of each time
step, a text file is produced (one per MPI rank) that contains the
information for all cells that had any relevant activity. This text file
is named ``ghost_stats_ssss_rrrr.txt``, where ``ssss`` is the step
counter for that time step and ``rrrr`` is the MPI rank.
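As a rough sketch of the bookkeeping described above (the actual ``struct`` in
the SWIFT source may differ, and the names here are illustrative), each bin
could look like the following, with the mean and standard deviation recovered
from the two stored sums:

.. code-block:: c

   #include <math.h>

   /* One bin per iteration; each cell carries three arrays of X such bins
      (hydro, stellar feedback, AGN feedback). */
   struct ghost_stats_bin {
     long long count;        /* particles updated in this iteration */
     long long no_neighbour; /* particles that found no neighbour */
     float h_min, h_max;     /* min/max smoothing length in this bin */
     double sum_h, sum_h2;   /* sum of search radii and radii squared */
   };

   /* Mean and standard deviation of the search radius in a bin. */
   void ghost_stats_bin_moments(const struct ghost_stats_bin *b,
                                double *mean, double *stddev) {
     *mean = b->sum_h / (double)b->count;
     *stddev = sqrt(b->sum_h2 / (double)b->count - *mean * *mean);
   }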
The script ``tools/plot_ghost_stats.py`` takes one or multiple
``ghost_stats.txt`` files and computes global statistics for all the
cells in those files. The script also takes the name of an output file
where it will save those statistics as a set of plots, and an optional
label that will be displayed as the title of the plots. Note that there
are no restrictions on the number of input files or how they relate;
different files could represent different MPI ranks, but also different
time steps or even different simulations (which would make little
sense). It is up to the user to make sure that the input is actually
relevant.
@@ -52,18 +52,19 @@ following bibtex citation block:
@ARTICLE{2023arXiv230513380S,
author = {{Schaller}, Matthieu and others},
title = "{Swift: A modern highly-parallel gravity and smoothed particle hydrodynamics solver for astrophysical and cosmological applications}",
journal = {arXiv e-prints},
keywords = {Astrophysics - Instrumentation and Methods for Astrophysics, Astrophysics - Cosmology and Nongalactic Astrophysics, Astrophysics - Earth and Planetary Astrophysics, Astrophysics - Astrophysics of Galaxies, Computer Science - Distributed, Parallel, and Cluster Computing},
year = 2023,
title = "{SWIFT: A modern highly-parallel gravity and smoothed particle hydrodynamics solver for astrophysical and cosmological applications}",
journal = {\mnras},
keywords = {software: simulations, methods: numerical, software: public release, Astrophysics - Instrumentation and Methods for Astrophysics, Astrophysics - Cosmology and Nongalactic Astrophysics, Astrophysics - Earth and Planetary Astrophysics, Astrophysics - Astrophysics of Galaxies, Computer Science - Distributed, Parallel, and Cluster Computing},
year = 2024,
month = may,
eid = {arXiv:2305.13380},
pages = {arXiv:2305.13380},
doi = {10.48550/arXiv.2305.13380},
volume = {530},
number = {2},
pages = {2378-2419},
doi = {10.1093/mnras/stae922},
archivePrefix = {arXiv},
eprint = {2305.13380},
primaryClass = {astro-ph.IM},
adsurl = {https://ui.adsabs.harvard.edu/abs/2023arXiv230513380S},
adsurl = {https://ui.adsabs.harvard.edu/abs/2024MNRAS.530.2378S},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@@ -101,5 +102,4 @@ code. This corresponds to the following bibtex citation block:
When using models or parts of the code whose details were introduced in other
papers, we kindly ask that the relevant work is properly acknowledged and
cited. This includes the :ref:`subgrid`, the :ref:`planetary` extensions, the
hydrodynamics and radiative transfer implementations, or the particle-based
:ref:`neutrinos`.
:ref:`hydro` and :ref:`rt`, or the particle-based :ref:`neutrinos`.
@@ -7,20 +7,20 @@
Equations of State
==================
Currently, SWIFT offers two different gas equations of state (EoS)
implemented: ``ideal`` and ``isothermal``; as well as a variety of EoS for
"planetary" materials. The EoS describe the relations between our
main thermodynamical variables: the internal energy per unit mass
(\\(u\\)), the mass density (\\(\\rho\\)), the entropy (\\(A\\)) and
the pressure (\\(P\\)).
Currently, SWIFT has three different gas equations of state (EoS)
implemented: ``ideal``, ``isothermal``, and ``barotropic``; as well as a variety
of EoS for "planetary" materials. The EoS describe the relations between our
main thermodynamical variables: the internal energy per unit mass :math:`u`, the
mass density :math:`\rho`, the entropy :math:`A` and the pressure :math:`P`.
It is selected at configure time via the option ``--with-equation-of-state``.
Gas EoS
-------
We write the adiabatic index as \\(\\gamma \\) and \\( c_s \\) denotes
We write the adiabatic index as :math:`\gamma` and :math:`c_s` denotes
the speed of sound. The adiabatic index can be changed at configure
time by choosing one of the allowed values of the option
``--with-adiabatic-index``. The default value is \\(\\gamma = 5/3 \\).
``--with-adiabatic-index``. The default value is :math:`\gamma = 5/3`.
The tables below give the expression for the thermodynamic quantities
on each row entry as a function of the gas density and the
@@ -29,27 +29,38 @@ thermodynamical quantity given in the header of each column.
.. csv-table:: Ideal Gas
:header: "Variable", "A", "u", "P"
"A", "", "\\( \\left( \\gamma - 1 \\right) u \\rho^{1-\\gamma} \\)", "\\(P \\rho^{-\\gamma} \\)"
"u", "\\( A \\frac{ \\rho^{ \\gamma - 1 } }{\\gamma - 1 } \\)", "", "\\(\\frac{1}{\\gamma - 1} \\frac{P}{\\rho}\\)"
"P", "\\( A \\rho^\\gamma \\)", "\\( \\left( \\gamma - 1\\right) u \\rho \\)", ""
"\\(c_s\\)", "\\(\\sqrt{ \\gamma \\rho^{\\gamma - 1} A}\\)", "\\(\\sqrt{ u \\gamma \\left( \\gamma - 1 \\right) } \\)", "\\(\\sqrt{ \\frac{\\gamma P}{\\rho} }\\)"
"A", "", :math:`\left( \gamma - 1 \right) u \rho^{1-\gamma}`, :math:`P \rho^{-\gamma}`
"u", :math:`A \frac{ \rho^{ \gamma - 1 } }{\gamma - 1 }`, "", :math:`\frac{1}{\gamma - 1} \frac{P}{\rho}`
"P", :math:`A \rho^\gamma`, :math:`\left( \gamma - 1\right) u \rho`, ""
:math:`c_s`, :math:`\sqrt{ \gamma \rho^{\gamma - 1} A}`, :math:`\sqrt{ u \gamma \left( \gamma - 1 \right) }`, :math:`\sqrt{ \frac{\gamma P}{\rho} }`
.. csv-table:: Isothermal Gas
:header: "Variable", "A", "u", "P"
:header: "Variable", "-", "-", "-"
"A", "", "\\(\\left( \\gamma - 1 \\right) u \\rho^{1-\\gamma}\\)", ""
"A", "", :math:`\left( \gamma - 1 \right) u \rho^{1-\gamma}`, ""
"u", "", "const", ""
"P", "", "\\(\\left( \\gamma - 1\\right) u \\rho \\)", ""
"\\( c_s\\)", "", "\\(\\sqrt{ u \\gamma \\left( \\gamma - 1 \\right) } \\)", ""
Note that when running with an isothermal equation of state, the value
of the tracked thermodynamic variable (e.g. the entropy in a
"P", "", :math:`\left( \gamma - 1\right) u \rho`, ""
:math:`c_s`, "", :math:`\sqrt{ u \gamma \left( \gamma - 1 \right) }`, ""
.. csv-table:: Barotropic Gas
:header: "Variable", "-", "-", "-"
"A", "", :math:`\rho^{1-\gamma} c_0^2 \sqrt{1 + \left( \frac{\rho}{\rho_c} \right) }`, ""
"u", "", :math:`\frac{1}(\gamma -1)c_0^2 \sqrt{1 + \left( \frac{\rho}{\rho_c} \right) }`, ""
"P", "", :math:`\rho c_0^2 \sqrt{1 + \left( \frac{\rho}{\rho_c} \right) }`, ""
:math:`c_s`, "", :math:`\sqrt{ c_0^2 \sqrt{1 + \left( \frac{\rho}{\rho_c} \right) }}`, ""
Note that when running with an isothermal or barotropic equation of state, the
value of the tracked thermodynamic variable (e.g. the entropy in a
density-entropy scheme or the internal energy in a density-energy SPH
formulation) written to the snapshots is meaningless. The pressure,
however, is always correct in all scheme.
formulation) written to the snapshots is meaningless. The pressure, however, is
always correct in all schemes.
For the isothermal equation of state, the internal energy is specified at
runtime via the parameter file. In the case of the barotropic gas, the vacuum
sound speed :math:`c_0` and core density :math:`\rho_c` are similarly specified.
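For illustration, here is a minimal C sketch of the ideal-gas relations
tabulated above, assuming the default :math:`\gamma = 5/3` (the function names
are illustrative, not SWIFT's internal API):

.. code-block:: c

   #include <math.h>

   static const double gas_gamma = 5. / 3.; /* default adiabatic index */

   /* P = (gamma - 1) u rho */
   double gas_pressure_from_internal_energy(double rho, double u) {
     return (gas_gamma - 1.) * u * rho;
   }

   /* A = (gamma - 1) u rho^(1 - gamma) */
   double gas_entropy_from_internal_energy(double rho, double u) {
     return (gas_gamma - 1.) * u * pow(rho, 1. - gas_gamma);
   }

   /* c_s = sqrt(gamma (gamma - 1) u) */
   double gas_soundspeed_from_internal_energy(double u) {
     return sqrt(gas_gamma * (gas_gamma - 1.) * u);
   }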
Planetary EoS
@@ -66,7 +77,7 @@ See :ref:`new_option` for a full list of required changes.
You will need to provide an ``equation_of_state.h`` file containing: the
definition of ``eos_parameters``, IO functions and transformations between the
different variables: \\(u(\\rho, A)\\), \\(u(\\rho, P)\\), \\(P(\\rho,A)\\),
\\(P(\\rho, u)\\), \\(A(\\rho, P)\\), \\(A(\\rho, u)\\), \\(c_s(\\rho, A)\\),
\\(c_s(\\rho, u)\\) and \\(c_s(\\rho, P)\\). See other equation of state files
different variables: :math:`u(\rho, A)`, :math:`u(\rho, P)`, :math:`P(\rho,A)`,
:math:`P(\rho, u)`, :math:`A(\rho, P)`, :math:`A(\rho, u)`, :math:`c_s(\rho, A)`,
:math:`c_s(\rho, u)` and :math:`c_s(\rho, P)`. See other equation of state files
for implementation details.
@@ -344,7 +344,105 @@ follows the definitions of `Creasey, Theuns & Bower (2013)
<https://adsabs.harvard.edu/abs/2013MNRAS.429.1922C>`_ equations (16) and (17).
The potential is implemented along the x-axis.
12. MWPotential2014 (``MWPotential2014``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This potential is based on ``galpy``'s ``MWPotential2014`` from `Jo Bovy (2015) <https://ui.adsabs.harvard.edu/abs/2015ApJS..216...29B>`_ and consists of an NFW potential for the halo, an axisymmetric Miyamoto-Nagai potential for the disk and a bulge modelled by a spherical power law with an exponential cut-off. The bulge is given by the density:
:math:`\rho(r) = A \left( \frac{r_1}{r} \right)^\alpha \exp \left( - \frac{r^2}{r_c^2} \right)`,
where :math:`A` is an amplitude, :math:`r_1` is a reference radius for amplitude, :math:`\alpha` is the inner power and :math:`r_c` is the cut-off radius.
The resulting potential is:
:math:`\Phi_{\mathrm{MW}}(R, z) = f_1 \Phi_{\mathrm{NFW}} + f_2 \Phi_{\mathrm{MN}} + f_3 \Phi_{\text{bulge}}`,
where :math:`R^2 = x^2 + y^2` is the projected radius and :math:`f_1`, :math:`f_2` and :math:`f_3` are three coefficients that adjust the strength of each individual component.
The parameters of the model are:
.. code:: YAML
MWPotential2014Potential:
useabspos: 0 # 0 -> positions based on centre, 1 -> absolute positions
position: [0.,0.,0.] # Location of centre of potential with respect to centre of the box (if 0) otherwise absolute (if 1) (internal units)
timestep_mult: 0.005 # Dimensionless pre-factor for the time-step condition, basically determines the fraction of the orbital time we use to do the time integration
epsilon: 0.001 # Softening size (internal units)
concentration: 9.823403437774843 # concentration of the Halo
M_200_Msun: 147.41031542774076e10 # M200 of the galaxy disk (in M_sun)
H: 1.2778254614201471 # Hubble constant in units of km/s/Mpc
Mdisk_Msun: 6.8e10 # Mass of the disk (in M_sun)
Rdisk_kpc: 3.0 # Effective radius of the disk (in kpc)
Zdisk_kpc: 0.280 # Scale-height of the disk (in kpc)
amplitude_Msun_per_kpc3: 1.0e10 # Amplitude of the bulge (in M_sun/kpc^3)
r_1_kpc: 1.0 # Reference radius for amplitude of the bulge (in kpc)
alpha: 1.8 # Exponent of the power law of the bulge
r_c_kpc: 1.9 # Cut-off radius of the bulge (in kpc)
potential_factors: [0.4367419745056084, 1.002641971008805, 0.022264787598364262] # Coefficients that adjust the strength of the halo (1st component), the disk (2nd component) and the bulge (3rd component)
Note that the default value of the "Hubble constant" here seems odd. As it
enters multiplicatively with the :math:`f_1` term, the absolute normalisation is
actually not important.
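For reference, a small C sketch of the bulge density profile defined above (the
function name is hypothetical; SWIFT works with the resulting potential rather
than this density):

.. code-block:: c

   #include <math.h>

   /* rho(r) = A (r_1 / r)^alpha exp(-r^2 / r_c^2) */
   double bulge_density(double r, double A, double r_1, double alpha,
                        double r_c) {
     return A * pow(r_1 / r, alpha) * exp(-(r * r) / (r_c * r_c));
   }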
Dynamical friction
..................
This potential can be supplemented by a dynamical friction force, following Chandrasekhar's dynamical friction formula,
where the velocity distribution function is assumed to be Maxwellian (Binney & Tremaine 2008, eq. 8.7):
:math:`\frac{\rm{d} \vec{v}_{\rm M}}{\rm{d} t}=-\frac{4\pi G^2M_{\rm sat}\rho \ln \Lambda}{v^3_{\rm{M}}} \left[ \rm{erf}(X) - \frac{2 X}{\sqrt\pi} e^{-X^2} \right] \vec{v}_{\rm M}`,
with:
:math:`X = \frac{v_{\rm{M}}}{\sqrt{2} \sigma}`, :math:`\sigma` being the radius-dependent velocity dispersion of the galaxy.
The latter is computed using the Jeans equations, assuming a spherical component. It is provided by a polynomial fit of order 16.
The velocity dispersion is floored to :math:`\sigma_{\rm min}`, a free parameter.
:math:`\ln \Lambda` is the Coulomb parameter.
:math:`M_{\rm sat}` is the mass of the in-falling satellite on which the dynamical friction is supposed to act.
To prevent very high values of the dynamical friction that can occur at the centre of the model, the acceleration is multiplied by:
:math:`\rm{max} \left(0, \rm{erf}\left( 2\, \frac{ r-r_{\rm{core}} }{r_{\rm{core}}} \right) \right)`
This can also mimic the decrease of the dynamical friction due to a core.
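Putting the pieces together, a hedged C sketch of the resulting acceleration
magnitude (function and variable names are illustrative; the density ``rho``
and dispersion ``sigma`` at the satellite's position are assumed known):

.. code-block:: c

   #include <math.h>

   /* Magnitude of the Chandrasekhar dynamical friction deceleration,
      including the erf-based core suppression described above. All
      quantities are in consistent internal units. */
   double df_acceleration(double G, double M_sat, double rho, double lnLambda,
                          double v, double sigma, double r, double r_core) {
     const double X = v / (M_SQRT2 * sigma);
     const double maxwellian = erf(X) - 2. * X / sqrt(M_PI) * exp(-X * X);
     const double a_df =
         4. * M_PI * G * G * M_sat * rho * lnLambda / (v * v) * maxwellian;

     /* Suppression factor: max(0, erf(2 (r - r_core) / r_core)) */
     const double supp = fmax(0., erf(2. * (r - r_core) / r_core));
     return a_df * supp;
   }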
The additional parameters for the dynamical friction are:
.. code:: YAML
with_dynamical_friction: 0 # Are we running with dynamical friction? 0 -> no, 1 -> yes
df_lnLambda: 5.0 # Coulomb logarithm
df_sigma_floor_km_p_s : 10.0 # Minimum velocity dispersion for the velocity dispersion model
df_satellite_mass_in_Msun : 1.0e10 # Satellite mass in solar mass
df_core_radius_in_kpc: 10 # Radius below which the dynamical friction vanishes.
df_polyfit_coeffs00: -2.96536595e-31 # Polynomial fit coefficient for the velocity dispersion model (order 16)
df_polyfit_coeffs01: 8.88944631e-28 # Polynomial fit coefficient for the velocity dispersion model (order 15)
df_polyfit_coeffs02: -1.18280578e-24 # Polynomial fit coefficient for the velocity dispersion model (order 14)
df_polyfit_coeffs03: 9.29479457e-22 # Polynomial fit coefficient for the velocity dispersion model (order 13)
df_polyfit_coeffs04: -4.82805265e-19 # Polynomial fit coefficient for the velocity dispersion model (order 12)
df_polyfit_coeffs05: 1.75460211e-16 # Polynomial fit coefficient for the velocity dispersion model (order 11)
df_polyfit_coeffs06: -4.59976540e-14 # Polynomial fit coefficient for the velocity dispersion model (order 10)
df_polyfit_coeffs07: 8.83166045e-12 # Polynomial fit coefficient for the velocity dispersion model (order 9)
df_polyfit_coeffs08: -1.24747700e-09 # Polynomial fit coefficient for the velocity dispersion model (order 8)
df_polyfit_coeffs09: 1.29060404e-07 # Polynomial fit coefficient for the velocity dispersion model (order 7)
df_polyfit_coeffs10: -9.65315026e-06 # Polynomial fit coefficient for the velocity dispersion model (order 6)
df_polyfit_coeffs11: 5.10187806e-04 # Polynomial fit coefficient for the velocity dispersion model (order 5)
df_polyfit_coeffs12: -1.83800281e-02 # Polynomial fit coefficient for the velocity dispersion model (order 4)
df_polyfit_coeffs13: 4.26501444e-01 # Polynomial fit coefficient for the velocity dispersion model (order 3)
df_polyfit_coeffs14: -5.78038064e+00 # Polynomial fit coefficient for the velocity dispersion model (order 2)
df_polyfit_coeffs15: 3.57956721e+01 # Polynomial fit coefficient for the velocity dispersion model (order 1)
df_polyfit_coeffs16: 1.85478908e+02 # Polynomial fit coefficient for the velocity dispersion model (order 0)
df_timestep_mult : 0.1 # Dimensionless pre-factor for the time-step condition for the dynamical friction force
How to implement your own potential
-----------------------------------
@@ -19,8 +19,20 @@ friends (its *friends-of-friends*). This creates networks of linked particles
which are called *groups*. The size (or length) of
a group is the number of particles in that group. If a particle does not
find any other particle within ``l`` then it forms its own group of
size 1. For a given distribution of particles the resulting list of
groups is unique and unambiguously defined.
size 1. **For a given distribution of particles the resulting list of
groups is unique and unambiguously defined.**
In our implementation, particles are split into three separate categories that
determine their behaviour in the FOF code:
- ``linkable`` particles which behave as described above.
- ``attachable`` particles which can `only` form a link with the `nearest` ``linkable`` particle they find.
- And the others which are ignored entirely.
The category of each particle type is specified at run time in the parameter
file. The classic scenario for the two categories is to run FOF on the dark
matter particles (i.e. they are `linkable`) and then attach the gas, stars and
black holes to their nearest DM (i.e. the baryons are `attachable`).
Small groups are typically discarded; the final catalogue only contains
objects with a length above a minimal threshold, typically of the
@@ -36,20 +48,25 @@ domain decomposition and tree structure that is created for the other
parts of the code. The tree can be easily used to find neighbours of
particles within the linking length.
Depending on the application, the choice of linking length and
minimal group size can vary. For cosmological applications, bound
structures (dark matter haloes) are traditionally identified using a
linking length expressed as :math:`0.2` of the mean inter-particle
separation :math:`d` in the simulation which is given by :math:`d =
\sqrt[3]{\frac{V}{N}}`, where :math:`N` is the number of particles in
the simulation and :math:`V` is the simulation (co-moving)
volume. Usually only dark matter particles are considered for the
number :math:`N`. Other particle types are linked but do not
participate in the calculation of the linking length. Experience shows
that this produces groups that are similar to the commonly adopted
(but much more complex) definition of virialised haloes. A minimal
group length of :math:`32` is often adopted in order to get a robust
catalogue of haloes and compute a good halo mass function.
Depending on the application, the choice of linking length and minimal group
size can vary. For cosmological applications, bound structures (dark matter
haloes) are traditionally identified using a linking length expressed as
:math:`0.2` of the mean inter-particle separation :math:`d` in the simulation
which is given by :math:`d = \sqrt[3]{\frac{V}{N}}`, where :math:`N` is the
number of particles in the simulation and :math:`V` is the simulation
(co-moving) volume. Experience shows that this produces groups that are similar
to the commonly adopted (but much more complex) definition of virialised
haloes. A minimal group length of :math:`32` is often adopted in order to get a
robust catalogue of haloes and compute a good halo mass function. Usually only
dark matter particles are considered for the number :math:`N`. In practice, the
mean inter-particle separation is evaluated based on the cosmology adopted in
the simulation. We use: :math:`d=\sqrt[3]{\frac{m_{\rm DM}}{\Omega_{\rm cdm}
\rho_{\rm crit}}}` for simulations with baryonic particles and
:math:`d=\sqrt[3]{\frac{m_{\rm DM}}{(\Omega_{\rm cdm} + \Omega_{\rm b})
\rho_{\rm crit}}}` for DMO simulations. In both cases, :math:`m_{\rm DM}` is the
mean mass of the DM particles. Using this definition (rather than basing it on
:math:`N`) makes the code robust to zoom-in scenarios where the entire volume is
not filled with particles.
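A small sketch of this separation in C (names are illustrative; ``m_dm`` is the
mean DM particle mass and ``rho_crit`` the critical density, both in internal
units):

.. code-block:: c

   #include <math.h>

   /* Mean inter-particle separation following the two definitions above. */
   double fof_mean_separation(double m_dm, double Omega_cdm, double Omega_b,
                              double rho_crit, int is_dmo) {
     const double Omega = is_dmo ? (Omega_cdm + Omega_b) : Omega_cdm;
     return cbrt(m_dm / (Omega * rho_crit));
   }

   /* The linking length is then l = 0.2 * d in the cosmological case. */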
For non-cosmological applications of the FOF algorithm, the choice of
the linking length is more difficult and left to the user. The choice
@@ -10,8 +10,9 @@ The main purpose of the on-the-fly FOF is to identify haloes during a
cosmological simulation in order to seed some of them with black holes
based on physical considerations.
**In this mode, no group catalogue is written to the disk. The resulting list
of haloes is only used internally by SWIFT.**
.. warning::
In this mode, no group catalogue is written to the disk. The resulting list
of haloes is only used internally by SWIFT.
Note that a catalogue can nevertheless be written after every seeding call by
setting the optional parameter ``dump_catalogue_when_seeding``.
@@ -20,8 +20,14 @@ absolute value using the parameter ``absolute_linking_length``. This is
expressed in internal units. This value will be ignored (and the ratio of
the mean inter-particle separation will be used) when set to ``-1``.
The categories of particles are specified using the ``linking_types`` and
``attaching_types`` arrays. They have one entry per particle type in SWIFT
(currently 7) and specify for each type, using ``1`` or ``0``,
whether or not the given particle type is in this category. Types not present
in either category are ignored entirely.
The second important parameter is the minimal size of groups to retain in
the catalogues. This is given in terms of number of particles (of all types)
the catalogues. This is given in terms of number of *linking* particles
via the parameter ``min_group_size``. When analysing simulations, to
identify haloes, the common practice is to set this to ``32`` in order to
not plague the catalogue with too many small, likely unbound, structures.
@@ -98,10 +104,12 @@ A full FOF section of the YAML parameter file looks like:
time_first: 0.2 # Time of first FoF black hole seeding calls.
delta_time: 1.005 # Time between consecutive FoF black hole seeding calls.
min_group_size: 256 # The minimum no. of particles required for a group.
linking_types: [0, 1, 0, 0, 0, 0, 0] # Which particle types to consider for linking (here only DM)
attaching_types: [1, 0, 0, 0, 1, 1, 0] # Which particle types to consider for attaching (here gas, stars, and BHs)
linking_length_ratio: 0.2 # Linking length in units of the main inter-particle separation.
seed_black_holes_enabled: 0 # Do not seed black holes when running FOF
black_hole_seed_halo_mass_Msun: 1.5e10 # Minimal halo mass in which to seed a black hole (in solar masses).
dump_catalogue_when_seeding: 0 # (Optional) Write a FOF catalogue when seeding black holes. Defaults to 0 if unspecified.
absolute_linking_length: -1. # (Optional) Absolute linking length (in internal units).
group_id_default: 2147483647 # (Optional) Sets the group ID of particles in groups below the minimum size.
group_id_offset: 1 # (Optional) Sets the offset of group ID labelling. Defaults to 1 if unspecified.
seed_black_holes_enabled: 0 # Do not seed black holes when running FOF
@@ -11,17 +11,17 @@ compiled by configuring the code with the option
``--enable-stand-alone-fof``. The ``fof`` and ``fof_mpi`` executables
will then be generated alongside the regular SWIFT ones.
The executable takes a parameter file as an argument. It will then
read the snapshot specified in the parameter file and extract all
the dark matter particles by default. FOF is then run on these
particles and a catalogue of groups is written to disk. Additional
particle types can be read and processed by the stand-alone FOF
code by adding any of the following runtime parameters to the
command line:
The executable takes a parameter file as an argument. It will then read the
snapshot specified in the parameter file (specified as an initial condition
file) and extract all the dark matter particles by default. FOF is then run on
these particles and a catalogue of groups is written to disk. Additional
particle types can be read and processed by the stand-alone FOF code by adding
any of the following runtime parameters to the command line:
* ``--hydro``: Read and process the gas particles,
* ``--stars``: Read and process the star particles,
* ``--black-holes``: Read and process the black hole particles,
* ``--sinks``: Read and process the sink particles,
* ``--cosmology``: Consider cosmological terms.
Running with cosmology is necessary when using a linking length based
@@ -34,3 +34,13 @@ internal units). The FOF code will also write a snapshot with an
additional field for each particle. This contains the ``GroupID`` of
each particle and can be used to find all the particles in a given
halo and to link them to the information stored in the catalogue.
The particle fields written to the snapshot can be modified using the
:ref:`Output_selection_label` options.
.. warning::
Note that since not all particle properties are read in stand-alone
mode, not all particle properties will be written to the snapshot generated
by the stand-alone FOF.
@@ -31,16 +31,19 @@ To compile SWIFT, you will need the following libraries:
HDF5
~~~~
Version 1.8.x or higher is required. Input and output files are stored as HDF5
Version 1.10.x or higher is required. Input and output files are stored as HDF5
and are compatible with the existing GADGET-2 specification. Please consider
using a build of parallel-HDF5, as SWIFT can leverage this when writing and
reading snapshots. We recommend using HDF5 > 1.10.x as this is *vastly superior*
reading snapshots. We recommend using HDF5 >= 1.12.x as this is *vastly superior*
in parallel.
HDF5 is widely available through system package managers.
MPI
~~~
A recent implementation of MPI, such as Open MPI (v2.x or higher), is required,
or any library that implements at least the MPI 3 standard.
MPI implementations are widely available through system package managers.
Running SWIFT on Omni-Path architectures with Open MPI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -53,19 +56,25 @@ with ``--mca btl vader,self -mca mtl psm``.
Libtool
~~~~~~~
The build system depends on libtool.
The build system depends on libtool. Libtool is widely available through system
package managers.
FFTW
~~~~
Version 3.3.x or higher is required for periodic gravity.
Version 3.3.x or higher is required for periodic gravity. FFTW is widely available
through system package managers or on http://fftw.org/.
ParMETIS or METIS
~~~~~~~~~~~~~~~~~
One is required for domain decomposition and load balancing.
One of these libraries is required for domain decomposition and load balancing.
Source code for these libraries is available
`here for METIS <https://github.com/KarypisLab/METIS>`_ and
`here for ParMETIS <https://github.com/KarypisLab/ParMETIS>`_ .
GSL
~~~
The GSL is required for cosmological integration.
The GSL is required for cosmological integration. GSL is widely available through
system package managers.
Optional Dependencies
@@ -87,17 +96,33 @@ You can build documentation for SWIFT with DOXYGEN.
Python
~~~~~~
To run the examples, you will need python 3 and some of the standard scientific libraries (numpy, matplotlib).
Some examples make use of the `swiftsimio <https://swiftsimio.readthedocs.io/en/latest/>`_ library.
To run the examples, you will need python 3 and some of the standard scientific
libraries (numpy, matplotlib). Some examples make use of the
`swiftsimio <https://swiftsimio.readthedocs.io/en/latest/>`_ library.
GRACKLE
~~~~~~~
GRACKLE cooling is implemented in SWIFT. If you wish to take advantage of it, you will need it installed.
GRACKLE cooling is implemented in SWIFT. If you wish to take advantage of it, you
will need it installed. It can be found `here <https://github.com/grackle-project/grackle>`_.
.. warning::
(As of 2023) Grackle is under active development, and the API is subject
to changes in the future. For convenience, a frozen version is hosted as a fork
on github here: https://github.com/mladenivkovic/grackle-swift .
The version available there is tried and tested and ensured to work with
SWIFT.
Additionally, that repository hosts files necessary to install that specific
version of grackle with spack.
HEALPix C library
~~~~~~~~~~~~~~~~~~~
This is required for making light cone HEALPix maps. Note that by default HEALPix builds a static library which cannot be used to build the SWIFT shared library. Either HEALPix must be built as a shared library or -fPIC must be added to the C compiler flags when HEALPix is being configured.
This is required for making light cone HEALPix maps. Note that by default HEALPix
builds a static library which cannot be used to build the SWIFT shared library.
Either HEALPix must be built as a shared library or -fPIC must be added to the C
compiler flags when HEALPix is being configured.
CFITSIO
~~~~~~~
@@ -9,7 +9,7 @@ to get up and running with some examples, and then build your own initial condit
for running.
Also, you might want to consult our onboarding guide (available at
http://www.swiftsim.com/onboarding.pdf) if you would like something to print out
https://swift.strw.leidenuniv.nl/onboarding.pdf) if you would like something to print out
and keep on your desk.
.. toctree::