.. _GettingStarted:
What is the Motivation for SWIFT?
=================================
SWIFT is designed with parallelism from the very start. It takes a bottom-up approach, with a task-based
parallel infrastructure at its very core.
.. toctree::
   :maxdepth: 1
.. _SWIFTGravity:
Gravity
=============================
SWIFT implements the calculation of gravitational forces using a version of TreePM.
Matthieu - can you update this part? I know you have implemented a Multipole method here
with some SWIFT specific modifications. Could you write a brief overview of the idea?
.. toctree::
   :maxdepth: 1
.. _SWIFTSPH:
Smoothed Particle Hydrodynamics
===============================
SWIFT simulates the universe using SPH.
.. toctree::
   :maxdepth: 1
.. _SWIFTPhysics:
What Physics does SWIFT Simulate?
=================================
.. toctree::
   :maxdepth: 1

   SPH/sph.rst
   Gravity/gravity.rst
# -*- coding: utf-8 -*-
#
# SWIFT documentation build configuration file, created by
# sphinx-quickstart on Thu Nov 5 15:57:05 2015.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.todo', 'sphinx.ext.mathjax']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'SWIFT'
copyright = u'2015: John Regan, James Willis, Stefan Arridge, Matthieu Schaller'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.1'
# The full version, including alpha/beta/rc tags.
release = '0.1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'SWIFTdoc'
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'SWIFT.tex', u'SWIFT Documentation',
u'John Regan, James Willis, Stefan Arridge, Matthieu Schaller', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'swift', u'SWIFT Documentation',
[u'John Regan, James Willis, Stefan Arridge, Matthieu Schaller'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'SWIFT', u'SWIFT Documentation',
u'John Regan, James Willis, Stefan Arridge, Matthieu Schaller', 'SWIFT', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
.. SWIFT documentation master file, created by
   sphinx-quickstart on Thu Nov 5 15:57:05 2015.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.
Welcome to SWIFT's documentation!
=================================
This is a first attempt at writing some high-level documentation for SWIFT.
It will be augmented with, and linked to, the already available documentation.
Contents:
.. toctree::
   :maxdepth: 2

   Motivation/index.rst
   Innovation/index.rst
   Physics/index.rst
   DeveloperGuide/developerguide.rst
   FAQ/index.rst
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
SPHINXPROJ = SWIFT-documentation
SOURCEDIR = source
BUILDDIR = build
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
SWIFT Documentation
===================
This is the main documentation for SWIFT that can be found on ReadTheDocs
or at `swiftsim.com/docs`.
You will need the `sphinx` and `sphinx-autobuild` Python packages (pip install
them!) to build the documentation to HTML, as well as the `sphinx_rtd_theme`
package, which is used as the theme.
To build the documentation (from this directory), `make html` and the output
files will be created in `build/html/`.
To build the latex manual (from this directory), `make latexpdf` and the output
files will be `build/latex/SWIFT-user-manual.pdf`.
Please consider adding documentation when you add code!
.. AnalysisTools
Loic Hausammann 20th March 2019
Peter W. Draper 28th March 2019
Mladen Ivkovic 18th March 2021
Bert Vandenbroucke 31st February 2022
.. _Analysis_Tools:
Analysis Tools
==============
Task dependencies
-----------------
At the beginning of each simulation the file ``dependency_graph_0.csv`` is generated and can be transformed into a ``dot`` and a ``png`` file with the script ``tools/plot_task_dependencies.py``.
It requires the ``dot`` tool, which is available in the Graphviz library.
The script can also generate a list of function calls for each task with the option ``--with-calls`` (this list may be incomplete) and indicate at which level in the tree each task is run with the option ``--with-levels`` (a larger simulation will provide more accurate levels).
You can convert the ``dot`` file into a ``png`` with the following command
``dot -Tpng dependency_graph.dot -o dependency_graph.png`` or directly read it with the python module ``xdot`` with ``python -m xdot dependency_graph.dot``.
If you wish to have more dependency graphs, you can use the parameter ``Scheduler:dependency_graph_frequency``, which defines how many steps are done between two graphs.
While the initial graph shows all tasks/dependencies, subsequent ones show only the active tasks/dependencies.
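The corresponding parameter block would look like this (the value of 128 steps is illustrative):

.. code-block:: yaml

   Scheduler:
     dependency_graph_frequency: 128  # steps between two graph dumps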
Task dependencies for a single cell
-----------------------------------
There is an option to additionally write the dependency graph for a single cell.
You can select which cell to write using the ``Scheduler:dependency_graph_cell: cellID`` parameter, where ``cellID`` is the cell ID of type long long.
This feature creates an individual file at each step specified by ``Scheduler:dependency_graph_frequency`` and, unlike the full task graph, an individual file for each MPI rank that holds this cell.
Using this feature has several requirements:
- You need to compile SWIFT including either ``--enable-debugging-checks`` or ``--enable-cell-graph``. Otherwise, cells won't have IDs.
- There is a limit on how many cell IDs SWIFT can handle while enforcing them to be reproducibly unique. That limit is up to 32 top level cells in any dimension, and up to 16 levels of depth. If any of these thresholds are exceeded, the cells will still have unique cell IDs, but the actual IDs will most likely vary between any two runs.
To plot the task dependencies, you can use the same script as before: ``tools/plot_task_dependencies.py``. The dependency graph may now have some tasks with a pink-ish background colour: these tasks represent dependencies that are unlocked by some other task which is executed for the requested cell, while the cell itself has no (active) task of that type in that given step.
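Putting the pieces together, a hedged sketch of the parameter block (the cell ID is a hypothetical example):

.. code-block:: yaml

   Scheduler:
     dependency_graph_frequency: 128  # steps between two dumps
     dependency_graph_cell: 42        # hypothetical ID of the cell to trace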
Task levels
-----------------
At the beginning of each simulation the file ``task_level_0.txt`` is generated.
It contains the counts of all tasks at all levels (depths) in the tree.
The depths and counts of the tasks can be plotted with the script ``tools/plot_task_levels.py``.
It displays the individual tasks on the x-axis, the number of each task at a given level on the y-axis, and shows the level as the colour of the plotted point.
Additionally, with the ``--count`` flag the script writes, in brackets next to each task's name on the x-axis, the number of different levels on which the task exists.
Finally, in some cases the counts for different levels of a task may be very close to each other and overlap on the plot, making them barely visible.
This can be alleviated with the ``--displace`` flag:
it displaces the plotted points along the y-axis to make them more visible, although the counts won't be exact in that case.
If you wish to have more task level plots, you can use the parameter ``Scheduler:task_level_output_frequency``.
It defines how many steps are done in between two task level output dumps.
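For example (illustrative value):

.. code-block:: yaml

   Scheduler:
     task_level_output_frequency: 128  # steps between two task level dumps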
Cell graph
----------
An interactive graph of the cells is available with the configuration option ``--enable-cell-graph``. During a
run, SWIFT will generate a ``cell_hierarchy_*.csv`` file per MPI rank at the frequency given by the parameter
``--cell-dumps=n``. The script ``tools/make_cell_hierarchy.py`` can be used to collate the files produced by
different MPI ranks and convert them into a web page that shows an interactive cell hierarchy. The script
takes the names of all the files you want to include as input, and requires an output prefix that will be used
to name the output files ``prefix.csv`` and ``prefix.html``. If the prefix path contains directories that do
not exist, the script will create those.
The output files cannot be directly viewed from a browser, because they require a server connection to
interactively load the data. You can either copy them over to a server, or set up a local server yourself. The
latter can also be done directly by the script by using the optional parameter ``--serve``.
When running a large simulation, loading the data may take a while (a few seconds for EAGLE_6). Your browser
should not hang, but it will appear idle. For really large simulations, the browser will give up and will
probably display an error message.
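As a sketch, collating the dumps of a 2-rank run could look like this (the file names are illustrative and the exact name of the output-prefix option may differ; check the script's ``-h`` output):

.. code-block:: bash

   python tools/make_cell_hierarchy.py cell_hierarchy_0000.csv cell_hierarchy_0001.csv \
       --output viz/run1 --serve

This would produce ``viz/run1.csv`` and ``viz/run1.html`` and serve the latter locally.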
If you wish to add some information to the graph, you can do it by modifying the files ``src/space.c`` and
``tools/data/cell_hierarchy.html``. In the first one, you will need to modify the calls to ``fprintf`` in the
functions ``space_write_cell_hierarchy`` and ``space_write_cell``. Here the code is simply writing CSV files
containing all the required information about the cells. In the second file, you will need to find the
function ``mouseover`` and add the field that you have created. You can also increase the size of the bubble
through the style parameter ``height``.
Memory usage reports
--------------------
the step, and the total memory still in use per label. Note this includes
memory still active from previous steps and the total memory is also continued
from the previous dump.
MPI task communication reports
------------------------------
When SWIFT is configured with the ``--enable-mpiuse-reports`` flag, it will
log all asynchronous MPI communications made to send particle updates
between nodes in support of the tasks.
The output files are called ``mpiuse_report-rank<m>-step<n>.dat``, i.e. one
per rank per step. These have a line for each communication request, either
an ``MPI_Irecv`` or an ``MPI_Isend``, and a line for the subsequent completion
(successful ``MPI_Test``).
Each line of the logs contains the following information:
.. code-block:: none

   stic: ticks since the start of this step
   etic: ticks since the start of the simulation
   dtic: ticks that the request was active
   step: current step
   rank: current rank
   otherrank: rank that the request was sent to or expected from
   type itype: task type as string and enum
   subtype isubtype: task subtype as string and enum
   activation: 1 if record for the start of a request, 0 if request completion
   tag: MPI tag of the request
   size: size, in bytes, of the request
   sum: sum, in bytes, of all requests that are currently not logged as complete
The stic values should be synchronised between ranks, as all ranks have a
barrier in place to make sure they start the step together, so they are
suitable for matching records between ranks. The unique keys to associate
records between ranks (so that the ``MPI_Isend`` and ``MPI_Irecv`` pairs can be
identified) are "otherrank/rank/subtype/tag/size" for sends and
"rank/otherrank/subtype/tag/size" for receives. When matching, ignore step 0.
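To post-process these logs, a minimal parsing sketch is shown below. It assumes whitespace-separated columns in exactly the order listed above; verify against a real report file before relying on it.

```python
# Column order as documented above; "type" and "subtype" are strings,
# everything else is an integer.
FIELDS = ["stic", "etic", "dtic", "step", "rank", "otherrank",
          "type", "itype", "subtype", "isubtype",
          "activation", "tag", "size", "sum"]
INT_FIELDS = set(FIELDS) - {"type", "subtype"}

def parse_mpiuse_line(line):
    """Parse one record of an mpiuse_report-rank<m>-step<n>.dat file."""
    rec = dict(zip(FIELDS, line.split()))
    for key in INT_FIELDS:
        rec[key] = int(rec[key])
    return rec

def pairing_key(rec, is_send):
    # Key used to pair MPI_Isend/MPI_Irecv records between ranks, following
    # the "otherrank/rank/subtype/tag/size" (send) and
    # "rank/otherrank/subtype/tag/size" (recv) convention described above.
    if is_send:
        return (rec["otherrank"], rec["rank"], rec["subtype"], rec["tag"], rec["size"])
    return (rec["rank"], rec["otherrank"], rec["subtype"], rec["tag"], rec["size"])
```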
Task and Threadpool Plots and Analysis Tools
--------------------------------------------
A variety of plotting tools for tasks and threadpools is available in ``tools/task_plots/``.
To use the task analysis tools, you need to compile SWIFT with ``--enable-task-debugging``
and then run it with ``-y <interval>``, where ``<interval>`` is the interval between time steps
on which the additional task data will be dumped. SWIFT will then create ``thread_stats-step<nr>.dat``
and ``thread_info-step<nr>.dat`` files. Similarly, for the threadpool-related tools, you need to compile
SWIFT with ``--enable-threadpool-debugging`` and then run it with ``-Y <interval>``.
For the analysis and plotting scripts listed below, you need to provide the ``*info-step<nr>.dat``
files as a command-line argument, not the ``*stats-step<nr>.dat`` files.
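A hedged sketch of the full round trip (the parameter file name and the run-time flags other than ``-y`` are illustrative; which steps get dumped depends on your run):

.. code-block:: bash

   ./configure --enable-task-debugging
   make
   # dump task data every 10 time steps
   ./swift --hydro --threads=8 -y 10 parameters.yml
   # analyse one of the resulting dumps
   python tools/task_plots/analyse_tasks.py thread_info-step10.dat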
A short summary of the scripts in ``tools/task_plots/``:
- ``analyse_tasks.py``:
  The output is an analysis of the task timings, including dead time per
  thread and step, the total amount of time spent on each task type (for the
  whole step and per thread), and the minimum and maximum times spent per
  task type.
- ``analyse_threadpool_tasks.py``:
  The output is an analysis of the threadpool task timings, including dead
  time per thread and step, the total amount of time spent on each task type
  (for the whole step and per thread), and the minimum and maximum times
  spent per task type.
- ``iplot_tasks.py``:
  An interactive task plot, showing what thread was doing what task and for
  how long during a step. **Needs the tkinter module**.
- ``plot_tasks.py``:
  Creates a task plot image, showing what thread was doing what task and for
  how long.
- ``plot_threadpool.py``:
  Creates a threadpool plot image, showing what thread was doing what
  threadpool call and for how long.
For more details on the scripts as well as further options, look at the documentation at the top
of the individual scripts and call them with the ``-h`` flag.
Task data is also dumped when using MPI, and the tools above can be used on
it as well; some offer the ability to process all ranks, while others select
individual ranks.
It is also possible to process a complete run of task data from all the
available steps using the ``process_plot_tasks.py`` and
``process_plot_tasks_MPI.py`` scripts, as appropriate.
These scripts have one required argument: a time limit to use on the horizontal
time axis. When set to 0, this limit is determined by the data for each step,
making it very hard to compare relative sizes of different steps.
The optional ``--files`` argument allows more control over which steps are
included in the analysis. Large numbers of tasks can be analysed more
efficiently by using multiple processes (the optional ``--nproc`` argument),
and if sufficient memory is available, the parallel analysis can be optimised
by using the size of the task data files to schedule parallel processes more
effectively (the ``--weights`` argument).
.. _dumperThread:
Live internal inspection using the dumper thread
------------------------------------------------
If the configuration option ``--enable-dumper`` is used, an extra thread is
created that polls for the existence of local files called ``.dump<.rank>``.
When one is found, this triggers dumps of the current state of various
internal queues and loggers, depending on what is enabled.
Without any other options this will dump logs of the current tasks in the
queues (these are those ready to run when time and all conflicts allow) and
all the tasks that are expected to run this step (those which are active in
the current time step). If ``memuse-reports`` is enabled the currently logged
memory use is also dumped and if ``mpiuse-reports`` is enabled the MPI
communications performed this step are dumped. As part of this dump a report
about MPI messages which have been logged but not completed is also made to
the terminal. These are useful when diagnosing MPI deadlocks.
The active tasks are dumped to files ``task_dump-step<n>.dat`` or
``task_dump_MPI-step<n>.dat_<rank>`` when using MPI.
Similarly the currently queued tasks are dumped to files
``queue_dump-step<n>.dat`` or ``queue_dump_MPI-step<n>.dat_<rank>``.
Memory use logs are written to files ``memuse-error-report-rank<n>.txt``.
The MPI logs follow the pattern using ``mpiuse-error-report-rank<n>.txt``.
Once seen, the ``.dump<.rank>`` files are deleted, so dumping can be done more
than once. For a non-MPI run the file is simply called ``.dump``; note that for
MPI you need to create one file per rank, so ``.dump.0``, ``.dump.1`` and so on.
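The trigger files can simply be created with ``touch``:

```shell
# non-MPI run: trigger a single dump
touch .dump

# a 2-rank MPI run needs one trigger file per rank
touch .dump.0 .dump.1
```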
Deadlock Detector
---------------------------
When configured with ``--enable-debugging-checks``, the parameter
.. code-block:: yaml

   Scheduler:
     deadlock_waiting_time_s: 300.
can be specified. It specifies the time (in seconds) the scheduler should wait
for a new task to be executed during a simulation step (specifically: during a
call to ``engine_launch()``). After this time passes without any new tasks being
run, the scheduler assumes that the code has deadlocked. It then dumps the same
diagnostic data as :ref:`the dumper thread <dumperThread>` (active tasks, queued
tasks, and memuse/MPIuse reports, if swift was configured with the corresponding
flags) and aborts.
A value of zero or a negative value for ``deadlock_waiting_time_s`` disables
the deadlock detector.
You are well advised to err on the high side when choosing the
``deadlock_waiting_time_s`` parameter. A value on the order of several (tens
of) minutes is recommended. Too small a value might cause your run to crash
erroneously despite not actually being deadlocked, merely slow or badly
balanced.
Neighbour search statistics
---------------------------
One of the core algorithms in SWIFT is an iterative neighbour search
whereby we try to find an appropriate radius around a particle's
position so that the weighted sum over neighbouring particles within
that radius is equal to some target value. The most obvious example of
this iterative neighbour search is the SPH density loop, but various
sub-grid models employ a very similar iterative neighbour search. The
computational cost of this iterative search is significantly affected by
the number of iterations that is required, and it can therefore be
useful to analyse the progression of the iterative scheme in detail.
When configured with ``--enable-ghost-statistics=X``, SWIFT will be
compiled with additional diagnostics that statistically track the number
of iterations required to find a converged answer. Here, ``X`` is a
fixed number of bins to use to collect the required statistics
(``ghost`` refers to the fact that the iterations take place inside the
ghost tasks). In practice, this means that every cell in the SWIFT tree
will be equipped with an additional ``struct`` containing three sets of
``X`` bins (one set for each iterative neighbour loop: hydro, stellar
feedback, AGN feedback). For each bin ``i``, we store the number of
particles that required updating during iteration ``i``, the number of
particles that could not find a single neighbouring particle, the
minimum and maximum smoothing length of all particles that required
updating, and the sum of all their search radii and all their search
radii squared. This allows us to calculate the upper and lower limits,
as well as the mean and standard deviation on the search radius for each
iteration and for each cell. Note that there could be more iterations
required than the number of bins ``X``; in this case the additional
iterations will be accumulated in the final bin. At the end of each time
step, a text file is produced (one per MPI rank) that contains the
information for all cells that had any relevant activity. This text file
is named ``ghost_stats_ssss_rrrr.txt``, where ``ssss`` is the step
counter for that time step and ``rrrr`` is the MPI rank.
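The stored per-bin moments translate into the quoted statistics in the usual
way; a minimal sketch (variable names are illustrative, the actual bookkeeping
lives in the ghost statistics code):

.. code-block:: python

   import math

   def bin_stats(N, S1, S2):
       """Mean and standard deviation of the search radius in one bin,
       given the particle count N, the sum of radii S1 and the sum of
       squared radii S2 stored for that bin."""
       if N == 0:
           return (0.0, 0.0)
       mean = S1 / N
       var = max(S2 / N - mean * mean, 0.0)  # guard against round-off
       return (mean, math.sqrt(var))

   # Example: three particles with search radii 1, 2 and 3 in one bin.
   radii = [1.0, 2.0, 3.0]
   N, S1, S2 = len(radii), sum(radii), sum(r * r for r in radii)
   mean, std = bin_stats(N, S1, S2)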
The script ``tools/plot_ghost_stats.py`` takes one or multiple
``ghost_stats.txt`` files and computes global statistics for all the
cells in those files. The script also takes the name of an output file
where it will save those statistics as a set of plots, and an optional
label that will be displayed as the title of the plots. Note that there
are no restrictions on the number of input files or on how they relate to each
other; different files could represent different MPI ranks, but also different
time steps or even different simulations (in which case combining them would
make little sense). It is up to the user to make sure that the input is actually
relevant.
Continuous Simulation Data Stream (CSDS)
========================================
The CSDS is a particle-based output (an alternative to snapshots) that takes into account the large differences in timescales between particles.
If you have any questions, a slack channel is available for it in SWIFT's slack.
To run it, you will need to use the configuration option ``--enable-csds`` and the run time argument ``--csds``.
Currently the CSDS is implemented only for GEAR, Gadget2 and the default modules, but can be easily extended to the other schemes by adding the CSDS structure to the particles and implementing the IO functions (see ``src/hydro/Gadget2/hydro_part.h``, ``src/hydro/Gadget2/hydro_csds.c`` and ``src/hydro/Gadget2/hydro_csds.h``).
The main parameters of the CSDS are ``CSDS:delta_step`` and ``CSDS:index_mem_frac``, which define the time accuracy of the CSDS and the number of index files, respectively.
The first parameter defines the number of active steps a particle takes between two writes, and the second defines the total storage size of the index files as a fraction of the dump file.
For reading, the python wrapper is available through the configuration option ``--with-python``.
We recommend running the SedovBlast_3D example with the CSDS and then using the example ``csds/examples/reader_example.py``.
This file is kept up to date with the most recent changes and includes a call to all the existing functions.
If you need some extra information, a doc string is provided for the class ``csds.Reader`` and all its methods.
If you wish to obtain a snapshot from the CSDS, a script is available in ``csds/examples/create_snapshot.py``.
.. Citing SWIFT
Matthieu Schaller, 5th April 2020
Disclaimer, Citing SWIFT & Giving Credit
========================================
First of all, thank you for using SWIFT for your projects!
Licensing
~~~~~~~~~
SWIFT is licensed under the LGPL version 3.0. A copy of the license is provided
in the file ``COPYING`` at the root of the repository. It can also be found
online `here <https://www.gnu.org/licenses/lgpl-3.0-standalone.html>`_ as an
extension to the `GPL <https://www.gnu.org/licenses/gpl-3.0-standalone.html>`_.
.. figure:: https://www.gnu.org/graphics/lgplv3-with-text-154x68.png
:width: 100px
:alt: LGPL v3.0 logo
Disclaimer
~~~~~~~~~~
We would like to emphasise that SWIFT comes without any warranty of accuracy,
correctness or efficiency. As mentioned in the license, the software comes
`as-is` and the onus is on the user to get meaningful results. Whilst the
authors will endeavour to answer questions related to using the code, we
recommend users build and maintain their own copies. This documentation contains
the most basic information to get started. Reading it and possibly also the
source code is the best way to start running simulations.
Users are responsible for understanding what the code is doing and for the
results of their simulation runs.
Note also that the values of the parameters given in the examples are only
indicative. We recommend users experiment by themselves and a campaign of
experimentation with various values is highly encouraged. Each problem will
likely require different values and the sensitivity to the details of the
physical model is something left to the users to explore.
Acknowledgment & Citation
~~~~~~~~~~~~~~~~~~~~~~~~~
The SWIFT code was last described in this `publication
<https://ui.adsabs.harvard.edu/abs/2023arXiv230513380S/abstract>`_, which
introduced the core solver, the numerical methods used as well as many
extensions. We ask users running SWIFT for their research to please cite this
paper when they present their results. This paper is best referenced by the
following bibtex citation block:
.. code-block:: bibtex
@ARTICLE{2023arXiv230513380S,
author = {{Schaller}, Matthieu and others},
title = "{SWIFT: A modern highly-parallel gravity and smoothed particle hydrodynamics solver for astrophysical and cosmological applications}",
journal = {\mnras},
keywords = {software: simulations, methods: numerical, software: public release, Astrophysics - Instrumentation and Methods for Astrophysics, Astrophysics - Cosmology and Nongalactic Astrophysics, Astrophysics - Earth and Planetary Astrophysics, Astrophysics - Astrophysics of Galaxies, Computer Science - Distributed, Parallel, and Cluster Computing},
year = 2024,
month = may,
volume = {530},
number = {2},
pages = {2378-2419},
doi = {10.1093/mnras/stae922},
archivePrefix = {arXiv},
eprint = {2305.13380},
primaryClass = {astro-ph.IM},
adsurl = {https://ui.adsabs.harvard.edu/abs/2024MNRAS.530.2378S},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
In order to keep track of usage and measure the impact of the software, we
kindly ask users publishing scientific results using SWIFT to add the following
sentence to the acknowledgment section of their papers:
.. code-block:: text
The research in this paper made use of the SWIFT open-source simulation code
(http://www.swiftsim.com, Schaller et al. 2018) version X.Y.Z.
with the version number set to the version used for the simulations and the
reference pointing to the `ASCL entry <https://ascl.net/1805.020>`_ of the
code. This corresponds to the following bibtex citation block:
.. code-block:: bibtex
@MISC{2018ascl.soft05020S,
author = {{Schaller}, Matthieu and others},
title = "{SWIFT: SPH With Inter-dependent Fine-grained Tasking}",
keywords = {Software },
howpublished = {Astrophysics Source Code Library},
year = 2018,
month = may,
eid = {ascl:1805.020},
pages = {ascl:1805.020},
archivePrefix = {ascl},
eprint = {1805.020},
adsurl = {https://ui.adsabs.harvard.edu/abs/2018ascl.soft05020S},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
When using models or parts of the code whose details were introduced in other
papers, we kindly ask that the relevant work is properly acknowledged and
cited. This includes the :ref:`subgrid`, the :ref:`planetary` extensions, the
:ref:`hydro` and :ref:`rt`, or the particle-based :ref:`neutrinos`.
For instance, just running the ``swift`` binary will not use any SPH or gravity;
the particles will just sit still!
Below is a list of the command line options and when they should be used. The same list
can be found by typing ``./swift -h``:
.. code-block:: none
-h, --help show this help message and exit
-b, --feedback Run with stars feedback.
-c, --cosmology Run with cosmological time integration.
--temperature Run with temperature calculation.
-C, --cooling Run with cooling (also switches on --temperature).
-D, --drift-all Always drift all particles even the ones
far from active particles. This emulates
Gadget-[23] and GIZMO's default behaviours.
-F, --star-formation Run with star formation.
-g, --external-gravity Run with an external gravitational potential.
-G, --self-gravity Run with self-gravity.
-M, --multipole-reconstruction Reconstruct the multipoles every time-step.
-s, --hydro Run with hydrodynamics.
-S, --stars Run with stars.
-B, --black-holes Run with black holes.
-k, --sinks Run with sink particles.
-u, --fof Run Friends-of-Friends algorithm to
perform black hole seeding.
--lightcone Generate lightcone outputs.
-x, --velociraptor Run with structure finding.
--line-of-sight Run with line-of-sight outputs.
--limiter Run with time-step limiter.
--sync Run with time-step synchronization
of particles hit by feedback events.
--csds Run with the Continuous Simulation Data
Stream (CSDS).
-R, --radiation Run with radiative transfer.
--power Run with power spectrum outputs.
Simulation meta-options:
--quick-lyman-alpha Run with all the options needed for the
quick Lyman-alpha model. This is equivalent
to --hydro --self-gravity --stars --star-formation
--cooling.
--eagle Run with all the options needed for the
EAGLE model. This is equivalent to --hydro
--limiter --sync --self-gravity --stars
--star-formation --cooling --feedback
--black-holes --fof.
--gear Run with all the options needed for the
GEAR model. This is equivalent to --hydro
--limiter --sync --self-gravity --stars
--star-formation --cooling --feedback.
--agora Run with all the options needed for the
AGORA model. This is equivalent to --hydro
--limiter --sync --self-gravity --stars
--star-formation --cooling --feedback.
Control options:
-a, --pin Pin runners using processor affinity.
--nointerleave Do not interleave memory allocations across
NUMA regions.
-d, --dry-run Dry run. Read the parameter file, allocates
memory but does not read the particles
from ICs. Exits before the start of time
read from the parameter file. Can be used
more than once {sec:par:value}.
-r, --restart Continue using restart files.
-t, --threads=<int> The number of task threads to use on each
MPI rank. Defaults to 1 if not specified.
--pool-threads=<int> The number of threads to use on each MPI
rank for the threadpool operations.
Defaults to the numbers of task threads
if not specified.
-T, --timers=<int> Print timers every time-step.
-v, --verbose=<int> Run in verbose mode, in MPI mode 2 outputs
from all ranks.
-y, --task-dumps=<int> Time-step frequency at which task graphs
are dumped.
--cell-dumps=<int> Time-step frequency at which cell graphs
are dumped.
-Y, --threadpool-dumps=<int> Time-step frequency at which threadpool
tasks are dumped.
--dump-tasks-threshold=<flt> Fraction of the total step's time spent
in a task to trigger a dump of the task plot
on this step.
See the file examples/parameter_example.yml for an example of parameter file.
.. Equations of State
Loic Hausammann, 6th April 2018
Jacob Kegerreis, 3rd February 2019
Jacob Kegerreis, 13th March 2020
.. _equation_of_state:
Equations of State
==================
Currently, SWIFT offers three different gas equations of state (EoS):
``ideal``, ``isothermal``, and ``barotropic``, as well as a variety of EoS for
"planetary" materials. The EoS describe the relations between our main
thermodynamical variables: the internal energy per unit mass :math:`u`, the
mass density :math:`\rho`, the entropy :math:`A` and the pressure :math:`P`.
The EoS is selected at configure time via the option
``--with-equation-of-state``.
Gas EoS
-------
We write the adiabatic index as :math:`\gamma` and :math:`c_s` denotes
the speed of sound. The adiabatic index can be changed at configure
time by choosing one of the allowed values of the option
``--with-adiabatic-index``. The default value is :math:`\gamma = 5/3`.
The tables below give the expression for the thermodynamic quantities
on each row entry as a function of the gas density and the
thermodynamical quantity given in the header of each column.
.. csv-table:: Ideal Gas
:header: "Variable", "A", "u", "P"
"A", "", "\\( \\left( \\gamma - 1 \\right) u \\rho^{1-\\gamma} \\)", "\\(P \\rho^{-\\gamma} \\)"
"u", "\\( A \\frac{ \\rho^{ \\gamma - 1 } }{\\gamma - 1 } \\)", "", "\\(\\frac{1}{\\gamma - 1} \\frac{P}{\\rho}\\)"
"P", "\\( A \\rho^\\gamma \\)", "\\( \\left( \\gamma - 1\\right) u \\rho \\)", ""
"\\(c_s\\)", "\\(\\sqrt{ \\gamma \\rho^{\\gamma - 1} A}\\)", "\\(\\sqrt{ u \\gamma \\left( \\gamma - 1 \\right) } \\)", "\\(\\sqrt{ \\frac{\\gamma P}{\\rho} }\\)"
"A", "", :math:`\left( \gamma - 1 \right) u \rho^{1-\gamma}`, :math:`P \rho^{-\gamma}`
"u", :math:`A \frac{ \rho^{ \gamma - 1 } }{\gamma - 1 }`, "", :math:`\frac{1}{\gamma - 1} \frac{P}{\rho}`
"P", :math:`A \rho^\gamma`, :math:`\left( \gamma - 1\right) u \rho`, ""
:math:`c_s`, :math:`\sqrt{ \gamma \rho^{\gamma - 1} A}`, :math:`\sqrt{ u \gamma \left( \gamma - 1 \right) }`, :math:`\sqrt{ \frac{\gamma P}{\rho} }`
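The entries of the ideal-gas table can be cross-checked numerically; the short
sketch below (with the default :math:`\gamma = 5/3` and illustrative values for
the state variables) verifies that the conversions are mutually consistent:

.. code-block:: python

   gamma = 5.0 / 3.0          # default adiabatic index
   rho, u = 0.5, 2.0          # density and internal energy per unit mass

   P = (gamma - 1.0) * u * rho                        # P(rho, u)
   A = (gamma - 1.0) * u * rho ** (1.0 - gamma)       # A(rho, u)
   u_back = A * rho ** (gamma - 1.0) / (gamma - 1.0)  # u(rho, A): recovers u
   cs2_from_u = u * gamma * (gamma - 1.0)             # c_s^2 from u
   cs2_from_P = gamma * P / rho                       # c_s^2 from P: identical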
.. csv-table:: Isothermal Gas
:header: "Variable", "A", "u", "P"
:header: "Variable", "-", "-", "-"
"A", "", "\\(\\left( \\gamma - 1 \\right) u \\rho^{1-\\gamma}\\)", ""
"A", "", :math:`\left( \gamma - 1 \right) u \rho^{1-\gamma}`, ""
"u", "", "const", ""
"P", "", "\\(\\left( \\gamma - 1\\right) u \\rho \\)", ""
"\\( c_s\\)", "", "\\(\\sqrt{ u \\gamma \\left( \\gamma - 1 \\right) } \\)", ""
"P", "", :math:`\left( \gamma - 1\right) u \rho`, ""
:math:`c_s`, "", :math:`\sqrt{ u \gamma \left( \gamma - 1 \right) }`, ""
.. csv-table:: Barotropic Gas
:header: "Variable", "-", "-", "-"
"A", "", :math:`\rho^{1-\gamma} c_0^2 \sqrt{1 + \left( \frac{\rho}{\rho_c} \right) }`, ""
"u", "", :math:`\frac{1}{\gamma -1}c_0^2 \sqrt{1 + \left( \frac{\rho}{\rho_c} \right) }`, ""
"P", "", :math:`\rho c_0^2 \sqrt{1 + \left( \frac{\rho}{\rho_c} \right) }`, ""
:math:`c_s`, "", :math:`\sqrt{ c_0^2 \sqrt{1 + \left( \frac{\rho}{\rho_c} \right) }}`, ""
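Note that the barotropic expressions above imply :math:`c_s^2 = P/\rho` at
every density, with :math:`c_s \rightarrow c_0` in the low-density limit. A
quick numerical check (illustrative values):

.. code-block:: python

   import math

   c0, rho_c = 0.2, 1.0e5   # vacuum sound speed and core density (illustrative)
   rho = 2.0e4

   factor = math.sqrt(1.0 + rho / rho_c)
   P = rho * c0 ** 2 * factor        # P(rho)
   cs = math.sqrt(c0 ** 2 * factor)  # c_s(rho)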
.. csv-table:: Continuous Barotropic Gas
:header: "Variable", "-", "-", "-"
"A", "", "", ""
"u", "", "", ""
"P", "", ":math:`\max\left\{c^2_{{\rm s}, 0} \rho \left[1 + \left(\frac{\rho}{\rho_c}\right)^{\gamma - 1} \right], P_{\rm ext}\right\}`", ""
:math:`c_s`, "", ":math:`\sqrt{\gamma\max\left\{c^2_{{\rm s}, 0} \rho \left[1 + \left(\frac{\rho}{\rho_c}\right)^{\gamma - 1} \right], P_{\rm ext}\right\}/\rho}`", ""
Note that when running with an isothermal or barotropic equation of state, the
value of the tracked thermodynamic variable (e.g. the entropy in a
density-entropy scheme or the internal energy in a density-energy SPH
formulation) written to the snapshots is meaningless. The pressure, however, is
always correct in all schemes.
For the isothermal equation of state, the internal energy is specified at
runtime via the parameter file. In the case of the barotropic gas, the vacuum
sound speed :math:`c_0` and core density :math:`\rho_c` are similarly specified.
Planetary EoS
-------------
Configuring SWIFT with the ``--with-equation-of-state=planetary`` and
``--with-hydro=planetary`` options enables the use of multiple EoS.
Every SPH particle then requires and carries the additional ``MaterialID`` flag
from the initial conditions file. This flag indicates the particle's material
and which EoS it should use.
So far, we have implemented several Tillotson, SESAME, and Hubbard \& MacFarlane
(1980) materials, with more on their way.
The material's ID is set by a base type ID (multiplied by 100), plus a minor
type:
+ Tillotson (Melosh, 2007): ``1``
+ Iron: ``100``
+ Granite: ``101``
+ Water: ``102``
+ Hubbard \& MacFarlane (1980): ``2``
+ Hydrogen-helium atmosphere: ``200``
+ Ice H2O-CH4-NH3 mix: ``201``
+ Rock SiO2-MgO-FeS-FeO mix: ``202``
+ SESAME (and similar): ``3``
+ Iron (2140): ``300``
+ Basalt (7530): ``301``
+ Water (7154): ``302``
+ Senft \& Stewart (2008) water (in a SESAME-style table): ``303``
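The ID scheme above is easy to encode and decode; for instance (a sketch, not
code from the SWIFT sources):

.. code-block:: python

   def material_id(base_type, minor_type):
       """Combine a base type and a minor type into a material ID."""
       return base_type * 100 + minor_type

   def base_type(mat_id):
       """Recover the base type (EoS family) from a material ID."""
       return mat_id // 100

   tillotson_granite = material_id(1, 1)  # Tillotson granite
   sesame_water = material_id(3, 2)       # SESAME water (7154)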
Unlike the EoS for an ideal or isothermal gas, these more complicated materials
do not always include transformations between the internal energy,
temperature, and entropy. At the moment, we have only implemented
:math:`P(\rho, u)` and :math:`c_s(\rho, u)`.
This is sufficient for the simple :ref:`planetary_sph` hydrodynamics scheme,
but makes these materials currently incompatible with other entropy-based
schemes.
The Tillotson sound speed was derived using
:math:`c_s^2 = \left. \dfrac{\partial P}{\partial \rho} \right|_S`
as described in Kegerreis et al. (2019).
The table files for the HM80 and SESAME-style EoS can be downloaded using
the ``examples/EoSTables/get_eos_tables.sh`` script.
See :ref:`planetary_eos`.
How to Implement a New Equation of State
......@@ -89,7 +85,7 @@ See :ref:`new_option` for a full list of required changes.
You will need to provide an ``equation_of_state.h`` file containing: the
definition of ``eos_parameters``, IO functions and transformations between the
different variables: :math:`u(\rho, A)`, :math:`u(\rho, P)`, :math:`P(\rho,A)`,
:math:`P(\rho, u)`, :math:`A(\rho, P)`, :math:`A(\rho, u)`, :math:`c_s(\rho, A)`,
:math:`c_s(\rho, u)` and :math:`c_s(\rho, P)`. See other equation of state files
for implementation details.
.. External potentials in SWIFT
Folkert Nobels, 25th October 2018
Alejandro Benitez-Llambay, October 2019
Matthieu Schaller, December 2021
External Potentials
===================
Implemented External Potentials
-------------------------------
Currently there are several potentials implemented in SWIFT. On this page we
give a short overview of the potentials that are implemented in the code. They
are all switched on at configuration time via the argument
``--with-ext-potential=XYZ`` and get their own independent section in the
parameter file. The name of the potential to pass to the configuration script is
given in the parenthesis of the titles below; for instance
``--with-ext-potential=hernquist``.
1. No potential (``none``)
^^^^^^^^^^^^^^^^^^^^^^^^^^
This is the default setup. No external potential is used.
There are no parameters associated with this model.
2. Constant acceleration (``constant``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This "potential" just applies a constant acceleration vector at every position
:math:`\vec{x}` in the simulation volume. This is very common in idealised test
cases like the Rayleigh-Taylor instability or engineering applications relying
on Earth's constant gravity field.
* :math:`\phi(\vec{x}) = -\vec{g} \cdot \vec{x}`
* :math:`\vec{a}(\vec{x}) = \vec{g}`
The only parameter of this model is the vector :math:`\vec{g}` given in `cgs`
units:
.. code:: YAML
ConstantPotential:
g_cgs: [0., 0., -9.81] # Earth acceleration along z-axis (cgs units)
3. Point mass potential (``point-mass``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A simple single point mass that can be placed at any position :math:`\vec{p}`.
The position can be specified either absolutely or with respect to the centre of
the simulation volume.
* :math:`\phi(\vec{x}) = -\displaystyle\frac{G_{\rm N}m}{|r|}`
* :math:`\vec{a}(\vec{x}) = -\displaystyle\frac{G_{\rm N}m}{|r|^3}\vec{r}`,
where :math:`\vec{r} = \vec{x} - \vec{p}`.
The code also imposes an extra limit on the size of the particles'
time-step. The time-step has to be shorter than :math:`\Delta t_{pot} = C
|\vec{a}(\vec{x})| / |\dot{\vec{a}}(\vec{x})|`. This criterion is designed to
make sure the changes in accelerations remain small. The jerk
(:math:`\dot{\vec{a}}\equiv\frac{d}{dt}\vec{a}`) is calculated from the
positions of the particles and only includes the contribution of the external
potential itself. The other criteria (CFL, self-gravity, ...) are applied on top
of this criterion. The dimensionless constant :math:`C` defaults to `FLT_MAX` if
left unspecified, effectively making SWIFT run without this time-step criterion.
The parameters of the model are:
.. code:: YAML
PointMassPotential:
position: [3., 4., 5.] # Location of the point mass (internal units)
useabspos: 1 # Use absolute positions (0 for relative to centre)
mass: 5.972e24 # Mass of the point (internal units)
timestep_mult: 0.1 # (Optional) The dimensionless constant C in the time-step condition
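For a particle on a circular orbit around the point mass, the jerk criterion
above reduces to :math:`\Delta t_{pot} = C/\omega`, with :math:`\omega` the
orbital angular frequency, since the acceleration vector rotates at constant
magnitude. A small sketch (illustrative values, :math:`G_{\rm N} = m = 1`):

.. code-block:: python

   import math

   G, M, C = 1.0, 1.0, 0.1   # gravity constant, point mass, timestep_mult
   r = 2.0                   # orbital radius
   v = math.sqrt(G * M / r)  # circular velocity
   omega = v / r             # orbital angular frequency

   def acc(t):
       """Acceleration along the circular orbit at time t."""
       x, y = r * math.cos(omega * t), r * math.sin(omega * t)
       s = -G * M / (x * x + y * y) ** 1.5
       return (s * x, s * y)

   # Finite-difference estimate of the jerk |da/dt|.
   h = 1.0e-6
   a0, a1 = acc(0.0), acc(h)
   jerk = math.hypot((a1[0] - a0[0]) / h, (a1[1] - a0[1]) / h)
   dt = C * math.hypot(*a0) / jerk  # the time-step limit C |a| / |da/dt|

which recovers the analytic value :math:`C/\omega` to high accuracy.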
4. Plummer potential (``point-mass-softened``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A single point mass with a fixed softening length using a Plummer shape that can
be placed at any position :math:`\vec{p}`. The position can be specified either
absolutely or with respect to the centre of the simulation volume.
* :math:`\phi(\vec{x}) = -\displaystyle\frac{G_{\rm N}m}{\sqrt{|r|^2 + \epsilon^2}}`
* :math:`\vec{a}(\vec{x}) = -\displaystyle\frac{G_{\rm N}m}{(|r|^2 + \epsilon^2)^{3/2}}\vec{r}`,
where :math:`\vec{r} = \vec{x} - \vec{p}`.
The code also imposes an extra limit on the size of the particles'
time-step. The time-step has to be shorter than :math:`\Delta t_{pot} = C
|\vec{a}(\vec{x})| / |\dot{\vec{a}}(\vec{x})|`. This criterion is designed to
make sure the changes in accelerations remain small. The jerk
(:math:`\dot{\vec{a}}\equiv\frac{d}{dt}\vec{a}`) is calculated from the
positions of the particles and only includes the contribution of the external
potential itself. The other criteria (CFL, self-gravity, ...) are applied on top
of this criterion. The dimensionless constant :math:`C` defaults to `FLT_MAX` if
left unspecified, effectively making SWIFT run without this time-step criterion.
The parameters of the model are:
.. code:: YAML
PointMassPotential:
position: [3., 4., 5.] # Location of the point mass (internal units)
useabspos: 1 # Use absolute positions (0 for relative to centre)
mass: 5.972e24 # Mass of the point (internal units)
softening: 0.01 # Softening length (internal units)
timestep_mult: 0.1 # (Optional) The dimensionless constant C in the time-step condition
5. Isothermal potential (``isothermal``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
An isothermal potential which corresponds to a density profile which is
:math:`\propto r^{-2}` and a potential which is logarithmic. This potential is
entirely set by its centre :math:`\vec{p}` and the (radially-constant) rotation
velocity of the potential :math:`v_{\rm rot}`. The centre of the potential is softened.
* :math:`\phi(\vec{x}) = v_{\rm rot}^2 \log(\sqrt{|r|^2 + \epsilon^2})`
* :math:`\vec{a}(\vec{x}) = -\displaystyle\frac{v_{\rm rot}^2} {|r|^2 + \epsilon^2}\vec{r}`,
where :math:`\vec{r} = \vec{x} - \vec{p}`.
The code also imposes an extra limit on the size of the particles'
time-step. The time-step has to be shorter than :math:`\Delta t_{pot} = C
|\vec{a}(\vec{x})| / |\dot{\vec{a}}(\vec{x})|`. This criterion is designed to
make sure the changes in accelerations remain small. The jerk
(:math:`\dot{\vec{a}}\equiv\frac{d}{dt}\vec{a}`) is calculated from the
positions of the particles and only includes the contribution of the external
potential itself. The other criteria (CFL, self-gravity, ...) are applied on top
of this criterion. The dimensionless constant :math:`C` defaults to `FLT_MAX` if
left unspecified, effectively making SWIFT run without this time-step criterion.
The parameters of the model are:
.. code:: YAML
IsothermalPotential:
position: [3., 4., 5.] # Location of the centre of the profile (internal units)
useabspos: 1 # Use absolute positions (0 for relative to centre)
vrot: 200 # Rotation velocity of the profile (internal units)
softening: 0.01 # Softening length (internal units)
timestep_mult: 0.1 # (Optional) The dimensionless constant C in the time-step condition
6. Hernquist potential (``hernquist``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We can set up a potential as given by Hernquist (1990):
:math:`\Phi(r) = - \frac{G_{\rm N}M}{r+a},`
where :math:`M` is the total Hernquist mass and :math:`a` is the
Hernquist-equivalent scale radius of the potential. The potential can be set at any
position in the box. It adds an additional time step constraint that limits
the time step to a maximum of a specified fraction of the circular orbital
time at the current radius of the particle. The other criteria
(CFL, self-gravity, ...) are applied on top of this criterion. For example, a
fraction of 0.01 means that an orbital period would be resolved by 100 time steps.
In the most basic version, the Hernquist potential can be run by providing
only the Hernquist mass, scale length, softening length and fraction of the
orbital time for the time stepping. In this case the model parameters may
look something like:
.. code:: YAML
HernquistPotential:
useabspos: 0 # 0 -> positions based on centre, 1 -> absolute positions
position: [0.,0.,0.] # Location of centre of isothermal potential with respect to centre of the box (if 0) otherwise absolute (if 1) (internal units)
mass: 1e12 # Mass of the Hernquist potential
scalelength: 2.0 # scale length a (internal units)
timestep_mult: 0.01 # Dimensionless pre-factor for the time-step condition, determines the fraction of the orbital time we use to do the time integration; fraction of 0.01 means we resolve an orbit with 100 timesteps
epsilon: 0.2 # Softening size (internal units)
Besides the basic version, it is also possible to run the Hernquist
potential for idealised disk galaxies that follow the approach of
`Springel, Di Matteo & Hernquist (2005)
<https://ui.adsabs.harvard.edu/abs/2005MNRAS.361..776S/abstract>`_. This
potential, however, uses a corrected value of the formulation that improves
the match with the NFW profile (below) with the same M200 (Nobels+ in prep).
Contrary to the basic setup above (``idealizeddisk: 0``), the idealised disk
setup runs by specifying one of :math:`M_{200}`, :math:`V_{200}`, or
:math:`R_{200}`, plus the concentration and the reduced Hubble constant.
In this case, we do not provide the ``mass`` and ``scalelength`` parameters, but
``M200`` (or ``V200``, or ``R200``) and the ``concentration`` :math:`c`, as well as the
reduced Hubble constant :math:`h`, to define the potential. The parameters of the model may look something like:
.. code:: YAML
HernquistPotential:
useabspos: 0 # 0 -> positions based on centre, 1 -> absolute positions
position: [0.,0.,0.] # Location of centre of isothermal potential with respect to centre of the box (if 0) otherwise absolute (if 1) (internal units)
idealizeddisk: 1 # Run with an idealized galaxy disk
M200: 137.0 # M200 of the galaxy disk
h: 0.704 # reduced Hubble constant (value does not specify the used units!)
concentration: 9.0 # concentration of the Halo
diskfraction: 0.040 # Disk mass fraction
bulgefraction: 0.0 # Bulge mass fraction
timestep_mult: 0.01 # Dimensionless pre-factor for the time-step condition, determines the fraction of the orbital time we use to do the time integration; fraction of 0.01 means we resolve an orbit with 100 timesteps
epsilon: 0.2 # Softening size (internal units)
The user should specify one out of 'M200', 'V200', or 'R200' to define
the potential. The reduced Hubble constant is then used to determine the
other two. Then, the scale radius is calculated as :math:`R_s = R_{200}/c`,
where :math:`c` is the concentration, and the Hernquist-equivalent scale-length
is calculated as:
:math:`a = \frac{b+\sqrt{b}}{1-b} R_{200}`,
where :math:`b = \frac{2}{c^2}(\ln(1+c) - \frac{c}{1+c})`.
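For reference, with :math:`c = 9` these relations give :math:`R_s \simeq
0.11\,R_{200}` and a Hernquist-equivalent scale length :math:`a \simeq
0.23\,R_{200}`; a sketch of the conversion (illustrative values):

.. code-block:: python

   import math

   c, R200 = 9.0, 1.0  # concentration and virial radius (illustrative)

   R_s = R200 / c      # NFW scale radius
   b = 2.0 / c ** 2 * (math.log(1.0 + c) - c / (1.0 + c))
   a = (b + math.sqrt(b)) / (1.0 - b) * R200  # Hernquist-equivalent scale length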
Two examples using the Hernquist potential can be found in ``swiftsim/examples/GravityTests/``.
The ``Hernquist_radialinfall`` example puts 5 particles with zero velocity in a Hernquist
potential, resulting in radial orbits. The ``Hernquist_circularorbit`` example puts three
particles on a circular orbit in a Hernquist potential, one at the inner region, one at the
maximal circular velocity, and one in the outer region. To run these examples, SWIFT must
be configured with the flag ``--with-ext-potential=hernquist``, or ``hernquist-sdmh05``
(see below).
7. Hernquist SDMH05 potential (``hernquist-sdmh05``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is the same potential as Hernquist with the difference being the
way that the idealised disk part is calculated. This potential uses
exactly the same approach as `Springel, Di Matteo & Hernquist (2005)
<https://ui.adsabs.harvard.edu/abs/2005MNRAS.361..776S/abstract>`_,
which means that ICs generated with the original `MakeNewDisk` code can
be used with this potential. Contrary to the updated Hernquist
potential (above), it is not possible to have an identically matched
NFW potential in this case.
The parameters needed for this potential are the same set of variables as
above, i.e. 'mass' and 'scalelength' when we don't use the idealised
disk, and 'concentration' plus one out of 'M200', 'V200', or 'R200' if
we do. As one of the three is provided, the reduced Hubble constant is
used to calculate the other two. Then, the scale radius is calculated
using the NFW definition, :math:`R_s = R_{200}/c`, and the Hernquist-
equivalent scale length is given by
:math:`a = R_s \sqrt{2(\ln(1+c) - \frac{c}{1+c})}.`
Runs that provide e.g. M200 and c (using idealised disk) are thus equivalent
to providing mass and scale length if calculated by the above equation (without
idealized disk).
8. Navarro-Frenk-White potential (``nfw``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is the most commonly used potential for describing dark matter halos; it
is given by:
:math:`\Phi(r) = - \frac{4\pi G_{\rm N} \rho_0 R_s^3}{r} \ln \left( 1+
\frac{r}{R_s} \right).`
This potential has as free parameters the concentration of the DM halo, the
virial mass (:math:`M_{200}`) and the critical density. The potential adds an
additional time-step constraint that limits the time step to a maximum of a
specified fraction of the orbital time. The potential in the centre and the
enclosed mass at :math:`R_{200}` are identical to those of the Hernquist
potential discussed above.
The parameters of the model are:
.. code:: YAML
NFWPotential:
useabspos: 0 # 0 -> positions based on centre, 1 -> absolute positions
position: [0.,0.,0.] # Location of centre of isothermal potential with respect to centre of the box (if 0) otherwise absolute (if 1) (internal units)
M200: 137.0 # M200 of the galaxy disk
h: 0.704 # reduced Hubble constant (value does not specify the used units!)
concentration: 9.0 # concentration of the Halo
diskfraction: 0.040 # Disk mass fraction
bulgefraction: 0.0 # Bulge mass fraction
timestep_mult: 0.01 # Dimensionless pre-factor for the time-step condition, basically determines the fraction of the orbital time we use to do the time integration
epsilon: 0.2 # Softening size (internal units)
9. NFW potential + Miyamoto-Nagai potential (``nfw-mn``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This combines an NFW potential (identical to ``nfw``)
plus an axisymmetric Miyamoto-Nagai potential. The Miyamoto-Nagai potential is given by:
:math:`\Phi(R,z) = - \frac{G_{\rm N} M_{d}}{\sqrt{R^2 + \left ( R_d + \sqrt{z^2 + Z_d^2} \right )^2}}`,
where :math:`R^2 = x^2 + y^2` is the projected radius and :math:`M_d`, :math:`R_d`, :math:`Z_d` are the
mass, scalelength and scaleheight of the disk (in internal units), respectively.
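The Miyamoto-Nagai formula above is straightforward to evaluate; the short Python sketch below does so (the unit system is an illustrative assumption). Far from the disk the potential approaches that of a point mass of mass :math:`M_d`:

```python
import math

G = 4.300917270e-6  # kpc (km/s)^2 / Msun (assumed units)

def miyamoto_nagai(R, z, M_d, R_d, Z_d):
    """Miyamoto-Nagai disk potential at cylindrical (R, z).
    M_d, R_d, Z_d: disk mass, scale-length and scale-height."""
    return -G * M_d / math.sqrt(R**2 + (R_d + math.sqrt(z**2 + Z_d**2))**2)
```

The potential is symmetric about the mid-plane (it depends on :math:`z` only through :math:`z^2`).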
The parameters of the model are:
.. code:: YAML
NFW_MNPotential:
useabspos: 0 # 0 -> positions based on centre, 1 -> absolute positions
position: [0.,0.,0.] # Location of centre of isothermal potential with respect to centre of the box (if 0) otherwise absolute (if 1) (internal units)
M_200: 137.0 # M200 of the halo in internal units
critical_density: 123.4 # Critical density of the universe in internal units
concentration: 9.0 # concentration of the NFW halo
Mdisk: 3.0 # Mass of the disk in internal units
Rdisk: 3.0 # Disk scale-length in internal units
Zdisk: 3.0 # Disk scale-height in internal units
timestep_mult: 0.01 # Dimensionless pre-factor for the time-step condition, basically determines the fraction of the orbital time we use to do the time integration
10. Sine wave (``sine-wave``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This "potential" is designed for specific idealised tests. It is a basic (not
really physical!) sinusoidal wave along the x-axis with a unit wavelength and a
tunable amplitude and growth time.
* :math:`\phi(\vec{x}) = A \cos\left(2 \pi x_{0}\right) / 2\pi`
* :math:`a_0(\vec{x}) = A \sin\left(2 \pi x_{0}\right)`,
where the 0 subscript indicates the x-component. The y- and z-components are zero.
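As a quick consistency check (a Python sketch, not SWIFT code), the acceleration above is indeed minus the gradient of the potential:

```python
import math

def phi(x, A=2.5):
    """Sine-wave 'potential' along x (amplitude A, unit wavelength)."""
    return A * math.cos(2.0 * math.pi * x) / (2.0 * math.pi)

def accel(x, A=2.5):
    """Corresponding x-acceleration."""
    return A * math.sin(2.0 * math.pi * x)

# a = -dphi/dx: verify with a central finite difference
x, h = 0.3, 1e-6
num = -(phi(x + h) - phi(x - h)) / (2.0 * h)
```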
The amplitude :math:`A` can grow linearly to its maximum over a fixed
time-scale.
Optionally, a constant maximal time-step size can be used with this potential.
The parameters of the model are:
.. code:: YAML
SineWavePotential:
amplitude: 2.5 # The amplitude of the wave (internal units)
growth_time: 1.2 # Time for the amplitude to grow to its final value (internal units)
timestep_limit: 5.0 # (Optional) The max time-step of the particles (internal units)
11. Disc Patch (``disc-patch``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A potential corresponding to a vertical slice through a galactic disk. This
follows the definitions of `Creasey, Theuns & Bower (2013)
<https://adsabs.harvard.edu/abs/2013MNRAS.429.1922C>`_ equations (16) and (17).
The potential is implemented along the x-axis.
12. MWPotential2014 (``MWPotential2014``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This potential is based on ``galpy``'s ``MWPotential2014`` from `Jo Bovy (2015) <https://ui.adsabs.harvard.edu/abs/2015ApJS..216...29B>`_ and consists of an NFW potential for the halo, an axisymmetric Miyamoto-Nagai potential for the disk and a bulge modelled by a spherical power law with an exponential cut-off. The bulge is given by the density:
:math:`\rho(r) = A \left( \frac{r_1}{r} \right)^\alpha \exp \left( - \frac{r^2}{r_c^2} \right)`,
where :math:`A` is an amplitude, :math:`r_1` is a reference radius for amplitude, :math:`\alpha` is the inner power and :math:`r_c` is the cut-off radius.
The resulting potential is:
:math:`\Phi_{\mathrm{MW}}(R, z) = f_1 \Phi_{\mathrm{NFW}} + f_2 \Phi_{\mathrm{MN}} + f_3 \Phi_{\text{bulge}}`,
where :math:`R^2 = x^2 + y^2` is the projected radius and :math:`f_1`, :math:`f_2` and :math:`f_3` are three coefficients that adjust the strength of each individual component.
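The bulge density above is easy to evaluate; the sketch below uses the default parameter values from the list further down (in Msun/kpc\ :sup:`3` and kpc, which is an assumption about the intended units):

```python
import math

def bulge_density(r, A=1.0e10, r_1=1.0, alpha=1.8, r_c=1.9):
    """Power-law bulge with exponential cut-off:
    rho(r) = A * (r_1/r)^alpha * exp(-(r/r_c)^2)."""
    return A * (r_1 / r) ** alpha * math.exp(-(r / r_c) ** 2)
```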
The parameters of the model are:
.. code:: YAML
MWPotential2014Potential:
useabspos: 0 # 0 -> positions based on centre, 1 -> absolute positions
position: [0.,0.,0.] # Location of centre of potential with respect to centre of the box (if 0) otherwise absolute (if 1) (internal units)
timestep_mult: 0.005 # Dimensionless pre-factor for the time-step condition, basically determines the fraction of the orbital time we use to do the time integration
epsilon: 0.001 # Softening size (internal units)
concentration: 9.823403437774843 # concentration of the Halo
M_200_Msun: 147.41031542774076e10 # M200 of the halo (in M_sun)
H: 1.2778254614201471 # Hubble constant in units of km/s/Mpc
Mdisk_Msun: 6.8e10 # Mass of the disk (in M_sun)
Rdisk_kpc: 3.0 # Effective radius of the disk (in kpc)
Zdisk_kpc: 0.280 # Scale-height of the disk (in kpc)
amplitude_Msun_per_kpc3: 1.0e10 # Amplitude of the bulge (in M_sun/kpc^3)
r_1_kpc: 1.0 # Reference radius for amplitude of the bulge (in kpc)
alpha: 1.8 # Exponent of the power law of the bulge
r_c_kpc: 1.9 # Cut-off radius of the bulge (in kpc)
potential_factors: [0.4367419745056084, 1.002641971008805, 0.022264787598364262] #Coefficients that adjust the strength of the halo (1st component), the disk (2nd component) and the bulge (3rd component)
Note that the default value of the "Hubble constant" here seems odd. As it
enters multiplicatively with the :math:`f_1` term, the absolute normalisation is
actually not important.
Dynamical friction
..................
This potential can be supplemented by a dynamical friction force, following
Chandrasekhar's dynamical friction formula, where the velocity distribution
function is assumed to be Maxwellian (Binney & Tremaine 2008, eq. 8.7):
:math:`\frac{\rm{d} \vec{v}_{\rm M}}{\rm{d} t}=-\frac{4\pi G^2M_{\rm sat}\rho \ln \Lambda}{v^3_{\rm{M}}} \left[ \rm{erf}(X) - \frac{2 X}{\sqrt\pi} e^{-X^2} \right] \vec{v}_{\rm M}`,
with :math:`X = \frac{v_{\rm{M}}}{\sqrt{2} \sigma}`, where :math:`\sigma` is the radius-dependent velocity dispersion of the galaxy.
The latter is computed using the Jeans equations, assuming a spherical component, and is provided as a polynomial fit of order 16.
The velocity dispersion is floored at :math:`\sigma_{\rm min}`, a free parameter.
:math:`\ln \Lambda` is the Coulomb logarithm.
:math:`M_{\rm sat}` is the mass of the in-falling satellite on which the dynamical friction acts.
To prevent the very high values of the dynamical friction that can occur at the centre of the model, the acceleration is multiplied by:
:math:`\rm{max} \left(0, \rm{erf}\left( 2\, \frac{ r-r_{\rm{core}} }{r_{\rm{core}}} \right) \right)`
This can also mimic the decrease of the dynamical friction due to a core.
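The two expressions above can be sketched in Python (function names and the Msun/kpc/km/s unit choice are illustrative, not SWIFT's):

```python
import math

def df_deceleration(v, sigma, M_sat, rho, lnLambda, G=4.300917270e-6):
    """Magnitude of the Chandrasekhar dynamical-friction deceleration
    for a Maxwellian background (sketch in assumed Msun/kpc/km/s units)."""
    X = v / (math.sqrt(2.0) * sigma)
    bracket = math.erf(X) - 2.0 * X / math.sqrt(math.pi) * math.exp(-X**2)
    return 4.0 * math.pi * G**2 * M_sat * rho * lnLambda * bracket / v**2

def core_suppression(r, r_core):
    """Multiplier that damps the friction inside the core radius:
    max(0, erf(2 (r - r_core) / r_core))."""
    return max(0.0, math.erf(2.0 * (r - r_core) / r_core))
```

Note that the suppression factor vanishes at and below :math:`r_{\rm core}` and rapidly approaches 1 outside it.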
The additional parameters for the dynamical friction are:
.. code:: YAML
with_dynamical_friction: 0 # Are we running with dynamical friction ? 0 -> no, 1 -> yes
df_lnLambda: 5.0 # Coulomb logarithm
df_sigma_floor_km_p_s : 10.0 # Minimum velocity dispersion for the velocity dispersion model
df_satellite_mass_in_Msun : 1.0e10 # Satellite mass in solar mass
df_core_radius_in_kpc: 10 # Radius below which the dynamical friction vanishes.
df_polyfit_coeffs00: -2.96536595e-31 # Polynomial fit coefficient for the velocity dispersion model (order 16)
df_polyfit_coeffs01: 8.88944631e-28 # Polynomial fit coefficient for the velocity dispersion model (order 15)
df_polyfit_coeffs02: -1.18280578e-24 # Polynomial fit coefficient for the velocity dispersion model (order 14)
df_polyfit_coeffs03: 9.29479457e-22 # Polynomial fit coefficient for the velocity dispersion model (order 13)
df_polyfit_coeffs04: -4.82805265e-19 # Polynomial fit coefficient for the velocity dispersion model (order 12)
df_polyfit_coeffs05: 1.75460211e-16 # Polynomial fit coefficient for the velocity dispersion model (order 11)
df_polyfit_coeffs06: -4.59976540e-14 # Polynomial fit coefficient for the velocity dispersion model (order 10)
df_polyfit_coeffs07: 8.83166045e-12 # Polynomial fit coefficient for the velocity dispersion model (order 9)
df_polyfit_coeffs08: -1.24747700e-09 # Polynomial fit coefficient for the velocity dispersion model (order 8)
df_polyfit_coeffs09: 1.29060404e-07 # Polynomial fit coefficient for the velocity dispersion model (order 7)
df_polyfit_coeffs10: -9.65315026e-06 # Polynomial fit coefficient for the velocity dispersion model (order 6)
df_polyfit_coeffs11: 5.10187806e-04 # Polynomial fit coefficient for the velocity dispersion model (order 5)
df_polyfit_coeffs12: -1.83800281e-02 # Polynomial fit coefficient for the velocity dispersion model (order 4)
df_polyfit_coeffs13: 4.26501444e-01 # Polynomial fit coefficient for the velocity dispersion model (order 3)
df_polyfit_coeffs14: -5.78038064e+00 # Polynomial fit coefficient for the velocity dispersion model (order 2)
df_polyfit_coeffs15: 3.57956721e+01 # Polynomial fit coefficient for the velocity dispersion model (order 1)
df_polyfit_coeffs16: 1.85478908e+02 # Polynomial fit coefficient for the velocity dispersion model (order 0)
df_timestep_mult : 0.1 # Dimensionless pre-factor for the time-step condition for the dynamical friction force
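Assuming the coefficients are applied as a standard polynomial in radius (the radial units of the fit are not stated here), the naming above maps directly onto ``numpy.polyval``'s highest-order-first convention: ``df_polyfit_coeffs00`` multiplies :math:`r^{16}` and ``df_polyfit_coeffs16`` is the constant term. A sketch:

```python
import numpy as np

# coefficients as listed above, highest order (r^16) first
coeffs = [-2.96536595e-31, 8.88944631e-28, -1.18280578e-24,
          9.29479457e-22, -4.82805265e-19, 1.75460211e-16,
          -4.59976540e-14, 8.83166045e-12, -1.24747700e-09,
          1.29060404e-07, -9.65315026e-06, 5.10187806e-04,
          -1.83800281e-02, 4.26501444e-01, -5.78038064e+00,
          3.57956721e+01, 1.85478908e+02]

def sigma_of_r(r, sigma_floor=10.0):
    """Velocity dispersion from the order-16 fit, floored at sigma_min."""
    return max(np.polyval(coeffs, r), sigma_floor)
```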
How to implement your own potential
-----------------------------------
friends (its *friends-of-friends*). This creates networks of linked particles
which are called *groups*. The size (or length) of
a group is the number of particles in that group. If a particle does not
find any other particle within ``l`` then it forms its own group of
size 1. **For a given distribution of particles the resulting list of
groups is unique and unambiguously defined.**
In our implementation, particles are split into three separate categories that
determine their behaviour in the FOF code:
- ``linkable`` particles which behave as described above.
- ``attachable`` particles which can `only` form a link with the `nearest` ``linkable`` particle they find.
- And the others which are ignored entirely.
The category of each particle type is specified at run time in the parameter
file. The classic scenario for the two categories is to run FOF on the dark
matter particles (i.e. they are `linkable`) and then attach the gas, stars and
black holes to their nearest DM (i.e. the baryons are `attachable`).
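The behaviour of the two categories can be illustrated with a toy, quadratic-cost FOF sketch in Python (SWIFT's actual implementation uses its neighbour-finding tree and is far more efficient; all names here are hypothetical):

```python
import math
from itertools import combinations

def fof(linkable, attachable, l):
    """Toy O(N^2) friends-of-friends: link 'linkable' positions within
    distance l using union-find, then attach each 'attachable' position
    to the group of its nearest linkable particle (if within l)."""
    n = len(linkable)
    parent = list(range(n))

    def find(i):
        # find the root of i's group, with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # link every pair of linkable particles closer than l
    for i, j in combinations(range(n), 2):
        if math.dist(linkable[i], linkable[j]) <= l:
            parent[find(i)] = find(j)

    groups = [find(i) for i in range(n)]

    # attachables only join the group of their nearest linkable neighbour
    attached = []
    for p in attachable:
        d, g = min((math.dist(p, q), groups[i]) for i, q in enumerate(linkable))
        attached.append(g if d <= l else None)
    return groups, attached
```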
Small groups are typically discarded; the final catalogue only contains
objects with a length above a minimal threshold, typically of the
domain decomposition and tree structure that is created for the other
parts of the code. The tree can be easily used to find neighbours of
particles within the linking length.
Depending on the application, the choice of linking length and minimal group
size can vary. For cosmological applications, bound structures (dark matter
haloes) are traditionally identified using a linking length expressed as
:math:`0.2` of the mean inter-particle separation :math:`d` in the simulation
which is given by :math:`d = \sqrt[3]{\frac{V}{N}}`, where :math:`N` is the
number of particles in the simulation and :math:`V` is the simulation
(co-moving) volume. Experience shows that this produces groups that are similar
to the commonly adopted (but much more complex) definition of virialised
haloes. A minimal group length of :math:`32` is often adopted in order to get a
robust catalogue of haloes and compute a good halo mass function. Usually only
dark matter particles are considered for the number :math:`N`. In practice, the
mean inter-particle separation is evaluated based on the cosmology adopted in
the simulation. We use: :math:`d=\sqrt[3]{\frac{m_{\rm DM}}{\Omega_{\rm cdm}
\rho_{\rm crit}}}` for simulations with baryonic particles and
:math:`d=\sqrt[3]{\frac{m_{\rm DM}}{(\Omega_{\rm cdm} + \Omega_{\rm b})
\rho_{\rm crit}}}` for DMO simulations. In both cases, :math:`m_{\rm DM}` is the
mean mass of the DM particles. Using this definition (rather than basing it on
:math:`N`) makes the code robust to zoom-in scenarios where the entire volume is
not filled with particles.
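The two definitions above can be sketched as a small Python helper (unit handling is left to the caller; the function name is illustrative):

```python
def mean_separation(m_dm, omega_cdm, omega_b, rho_crit, dmo=False):
    """Mean inter-particle separation used for the FOF linking length.
    With baryonic particles, only Omega_cdm enters; for DMO runs the
    DM particles represent all matter, so Omega_cdm + Omega_b is used."""
    omega = (omega_cdm + omega_b) if dmo else omega_cdm
    return (m_dm / (omega * rho_crit)) ** (1.0 / 3.0)

# the linking length is then l = 0.2 * mean_separation(...)
```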
For non-cosmological applications of the FOF algorithm, the choice of
the linking length is more difficult and left to the user. The choice
The main purpose of the on-the-fly FOF is to identify haloes during a
cosmological simulation in order to seed some of them with black holes
based on physical considerations.
.. warning::
In this mode, no group catalogue is written to the disk. The resulting list
of haloes is only used internally by SWIFT.
Note that a catalogue can nevertheless be written after every seeding call by
setting the optional parameter ``dump_catalogue_when_seeding``.
Once the haloes have been identified by the FOF code, SWIFT will iterate
over the list of groups and will check whether each halo obeys the
minimal group length of order 500 is beneficial over the more traditional
value of 32 as it will reduce the number of haloes to consider by about a
factor 10 (assuming a normal cosmology) whilst still being far enough from
the actual user-defined mass limit to be safe.
On-the-fly Friends-Of-Friends for snapshot output
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It may be desirable to include FOF group membership information for each
particle in the output snapshots even when black hole seeding is not in use.
This can be achieved by setting the ``invoke_fof`` parameter in the
``Snapshots`` section of the parameter file.
FOF will be run just before each snapshot is written and the snapshot will
include a dataset which specifies which FOF group each particle belongs to.
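A minimal sketch of the corresponding parameter section (the ``basename`` value is illustrative, not a required name):

```yaml
Snapshots:
  basename: output   # illustrative; keep your existing snapshot options
  invoke_fof: 1      # run FOF before each snapshot and write group membership
```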
absolute value using the parameter ``absolute_linking_length``. This is
expressed in internal units. This value will be ignored (and the ratio of
the mean inter-particle separation will be used) when set to ``-1``.
The categories of particles are specified using the ``linking_types`` and
``attaching_types`` arrays. Each array has one entry per particle type in
SWIFT (currently 7) and specifies with a ``1`` or a ``0`` whether the given
particle type belongs to that category. Types not present in either category
are ignored entirely.
The second important parameter is the minimal size of groups to retain in
the catalogues. This is given in terms of number of *linking* particles
via the parameter ``min_group_size``. When analysing simulations, to
identify haloes, the common practice is to set this to ``32`` in order to
not plague the catalogue with too many small, likely unbound, structures.
minimal halo mass (see below).
------------------------
In the case of black hole seeding, we run the FOF module on-the-fly during
a cosmological simulation. Black hole seeding is enabled with the following
parameter:
* Enable seeding of black holes in FOF groups: ``seed_black_holes_enabled``
This should be set to 1 to enable or 0 to disable black hole seeding. The
time of the first FOF call for seeding is controlled by the following two
options:
* Time of the first FOF call (non-cosmological runs): ``time_first``,
* Scale-factor of the first FOF call (cosmological runs):
halo (group) mass considered for seeding black holes. This is specified by
the parameter ``black_hole_seed_halo_mass_Msun`` which is expressed in
solar masses.
There are two ways to invoke FOF on the fly for purposes other than black hole
seeding:
Firstly, one can switch on the ``invoke_fof`` parameter in the
``Snapshots`` section of the parameter file. This will produce a catalogue every
time the code writes a snapshot.
The second option is to set ``dump_catalogue_when_seeding`` in the ``FOF``
section. This will force the code to write a catalogue every time the BH seeding
code is run.
------------------------
In the case of the stand-alone module, the five seeding parameters
described above are ignored but an additional one needs to be
specified. This is the name of the file in which the catalogue of groups will
be written. This is specified by the parameter ``fof_output``. The linking
A full FOF section of the YAML parameter file looks like:
time_first: 0.2 # Time of first FoF black hole seeding calls.
delta_time: 1.005 # Time between consecutive FoF black hole seeding calls.
min_group_size: 256 # The minimum no. of particles required for a group.
linking_types: [0, 1, 0, 0, 0, 0, 0] # Which particle types to consider for linking (here only DM)
attaching_types: [1, 0, 0, 0, 1, 1, 0] # Which particle types to consider for attaching (here gas, stars, and BHs)
linking_length_ratio: 0.2 # Linking length in units of the mean inter-particle separation.
seed_black_holes_enabled: 0 # Do not seed black holes when running FOF
black_hole_seed_halo_mass_Msun: 1.5e10 # Minimal halo mass in which to seed a black hole (in solar masses).
dump_catalogue_when_seeding: 0 # (Optional) Write a FOF catalogue when seeding black holes. Defaults to 0 if unspecified.
absolute_linking_length: -1. # (Optional) Absolute linking length (in internal units).
group_id_default: 2147483647 # (Optional) Sets the group ID of particles in groups below the minimum size.
group_id_offset: 1 # (Optional) Sets the offset of group ID labelling. Defaults to 1 if unspecified.
compiled by configuring the code with the option
``--enable-stand-alone-fof``. The ``fof`` and ``fof_mpi`` executables
will then be generated alongside the regular SWIFT ones.
The executable takes a parameter file as an argument. It will then read the
snapshot specified in the parameter file (specified as an initial condition
file) and extract all the dark matter particles by default. FOF is then run on
these particles and a catalogue of groups is written to disk. Additional
particle types can be read and processed by the stand-alone FOF code by adding
any of the following runtime parameters to the command line:
* ``--hydro``: Read and process the gas particles,
* ``--stars``: Read and process the star particles,
* ``--black-holes``: Read and process the black hole particles,
* ``--sinks``: Read and process the sink particles,
* ``--cosmology``: Consider cosmological terms.
Running with cosmology is necessary when using a linking length based
on the mean inter-particle separation. For analysis where an absolute
linking length is used, this is not necessary.
The group catalogues contain a unique identifier for each group as
well as the number of particles in each group and their total mass (in
internal units). The FOF code will also write a snapshot with an
additional field for each particle. This contains the ``GroupID`` of
each particle and can be used to find all the particles in a given
halo and to link them to the information stored in the catalogue.
The particle fields written to the snapshot can be modified using the
:ref:`Output_selection_label` options.
.. warning::
Note that since not all particle properties are read in stand-alone
mode, not all particle properties will be written to the snapshot generated
by the stand-alone FOF.
Compiling SWIFT
===============
Compilers
---------
SWIFT is written in C99 and as such requires a C compiler that supports
that standard.
SWIFT has been tested with the Intel, GCC, and LLVM (clang) C compilers.
We suggest:
+ Intel >= 2018
+ GCC >= 8.2.0
+ LLVM >= 7.0.0
We have specific issues with the following compilers:
+ GCC 7.3.0 with the ``-mskylake-avx512`` flag.
Dependencies
------------
To compile SWIFT, you will need the following libraries:
HDF5
~~~~
Version 1.10.x or higher is required. Input and output files are stored as HDF5
and are compatible with the existing GADGET-2 specification. Please consider
using a build of parallel-HDF5, as SWIFT can leverage this when writing and
reading snapshots. We recommend using HDF5 >= 1.12.x as this is *vastly superior*
in parallel.
HDF5 is widely available through system package managers.
MPI
~~~
A recent implementation of MPI, such as Open MPI (v2.x or higher), is required,
or any library that implements at least the MPI 3 standard.
MPI implementations are widely available through system package managers.
Running SWIFT on OmniPath architectures with Open MPI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
with ``--mca btl vader,self -mca mtl psm``.
Libtool
~~~~~~~
The build system depends on libtool. Libtool is widely available through system
package managers.
FFTW
~~~~
Version 3.3.x or higher is required for periodic gravity. FFTW is widely available
through system package managers or on http://fftw.org/.
ParMETIS or METIS
~~~~~~~~~~~~~~~~~
One of these libraries is required for domain decomposition and load balancing.
The source code for these libraries is available
`here for METIS <https://github.com/KarypisLab/METIS>`_ and
`here for ParMETIS <https://github.com/KarypisLab/ParMETIS>`_.
GSL
~~~
The GSL is required for cosmological integration. GSL is widely available through
system package managers.
Optional Dependencies
---------------------
There are also the following *optional* dependencies.
libNUMA
~~~~~~~
libNUMA is used to pin threads (but see INSTALL.swift).
TCmalloc/Jemalloc
~~~~~~~~~~~~~~~~~
You can build documentation for SWIFT with DOXYGEN.
Python
~~~~~~
To run the examples, you will need python 3 and some of the standard scientific
libraries (numpy, matplotlib). Some examples make use of the
`swiftsimio <https://swiftsimio.readthedocs.io/en/latest/>`_ library.
GRACKLE
~~~~~~~
GRACKLE cooling is implemented in SWIFT. If you wish to take advantage of it, you
will need it installed. It can be found `here <https://github.com/grackle-project/grackle>`_.
.. warning::
(As of 2023) Grackle is under active development, and its API is subject
to change in the future. For convenience, a frozen version is hosted as a fork
on github here: https://github.com/mladenivkovic/grackle-swift .
The version available there is tried, tested, and ensured to work with
SWIFT.
Additionally, that repository hosts files necessary to install that specific
version of grackle with spack.
HEALPix C library
~~~~~~~~~~~~~~~~~~~
This is required for making light cone HEALPix maps. Note that by default HEALPix
builds a static library, which cannot be used to build the SWIFT shared library.
Either HEALPix must be built as a shared library, or ``-fPIC`` must be added to
the C compiler flags when HEALPix is being configured.
CFITSIO
~~~~~~~
This may be required as a dependency of HEALPix.
Initial Setup
-------------
We use autotools for setup. To get a basic running version of the code use:
.. code-block:: bash
./configure
make
The executable binaries are found in the top-level directory.
MacOS Specific Oddities
~~~~~~~~~~~~~~~~~~~~~~~
MacOS, so it is best to leave it out. To configure:
./configure --disable-compiler-warnings --disable-doxygen-doc
When using the clang compiler, the hand-written vectorized routines
have to be disabled. This is done at configuration time by adding
the flag ``--disable-hand-vec``.
Trouble Finding Libraries
~~~~~~~~~~~~~~~~~~~~~~~~~
HDF5 library,
.. code-block:: bash
./configure --with-hdf5=/path/to/hdf5_root
More information about what needs to be provided to these flags is given in
``./configure --help``.