Commit e3d33e15 authored by Josh Borrow

Merge branch 'updated_homepage' into 'master'

Updated homepage text for all 3 sections

See merge request !11
# Astronomer
Want to get started using SWIFT? Check out the onboarding guide available
[here](onboarding.pdf). SWIFT can be used as a drop-in replacement for
Gadget-2: initial conditions in HDF5 format for Gadget can be read
directly by SWIFT. The only difference is the parameter file, which
will need to be adapted for SWIFT.
SWIFT combines multiple numerical methods that are briefly outlined
here. The whole art is to efficiently couple them to
exploit modern computer architectures.
## Gravity
SWIFT uses the Fast Multipole Method (FMM) to calculate gravitational
forces between nearby particles. These forces are combined with
long-range forces provided by a mesh that captures both the periodic
nature of the calculation and the expansion of the simulated universe.
SWIFT currently uses a single fixed but time-variable softening length
for all the particles.
As well as this self-gravity mode, we also make many useful external
potentials available, such as galaxy haloes or stratified boxes that
are used in idealised problems.
Gravitational accuracy can be tuned through use of the opening angle
and the choice of a multipole order for the short-range gravity
calculation. The mesh forces are controlled by the cell size and
frequency of the update.
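The opening-angle criterion mentioned above can be sketched in a few lines (an illustrative 1D toy, not SWIFT's actual FMM implementation; the particle data, `theta` value, and function name are made up for this example):

```python
def gravity_on(p, cell_particles, theta=0.5, G=1.0):
    """1D toy: approximate the force per unit mass from a distant cell by
    its monopole (centre of mass) when the cell is seen under a small
    enough angle; otherwise 'open' the cell and sum over its particles."""
    M = sum(m for _, m in cell_particles)
    com = sum(x * m for x, m in cell_particles) / M
    xs = [x for x, _ in cell_particles]
    size = max(xs) - min(xs)
    d = abs(p - com)
    if size / d < theta:  # multipole accepted: a single interaction
        return -G * M * (p - com) / d**3
    # cell opened: fall back to direct summation over its particles
    return sum(-G * m * (p - x) / abs(p - x)**3 for x, m in cell_particles)
```

A larger `theta` accepts more multipoles and is faster but less accurate, which is the trade-off the runtime parameter exposes.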
## Cosmology
SWIFT implements a standard LCDM cosmology background expansion and
solves the equations in a comoving frame. We allow for equations of
state of dark-energy that evolve with scale-factor. The structure of
the code can easily allow for modified-gravity solvers or
self-interacting dark matter schemes to be implemented. These will be
part of future releases of the code.
Unlike other cosmological codes, SWIFT does not express quantities in
units of the reduced Hubble parameter. This reduces the possible
confusion created by this convention when using the data product but
requires users to convert their initial conditions (using a specific
mode of operation of SWIFT!) when taking them from a different code.
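The convention difference can be illustrated with a toy conversion (an assumption-laden sketch: the function name and units are made up, and SWIFT performs such conversions internally when asked):

```python
def strip_little_h(mass_in_msun_per_h, length_in_mpc_per_h, h):
    """Convert h-scaled quantities (e.g. Msun/h, Mpc/h), as used by many
    Gadget-style codes, into the h-free values that SWIFT works with."""
    return mass_in_msun_per_h / h, length_in_mpc_per_h / h
```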
## Hydrodynamics Schemes
There are many hydrodynamics schemes implemented in SWIFT, and SWIFT
is designed such that it should be simple for users to add their
own.
All the schemes can be combined with a time-step limiter inspired by
the method of [Durier & Dalla Vecchia
2012](http://adsabs.harvard.edu/abs/2012MNRAS.419..465D), which is
necessary to ensure energy conservation in simulations that involve
sudden injection of energy such as in feedback events.
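The idea behind such a limiter can be sketched as follows (an illustrative toy, not SWIFT's implementation; the factor of 4 is a typical choice for this kind of scheme, not a value quoted from the paper):

```python
def limit_timesteps(dt, neighbours, f_max=4.0):
    """Clamp each particle's time-step to at most f_max times the smallest
    time-step among its neighbours, so that a particle hit by a sudden
    energy injection 'wakes up' its surroundings promptly."""
    new_dt = list(dt)
    for i, nbrs in neighbours.items():
        dt_min = min(dt[j] for j in nbrs)
        new_dt[i] = min(new_dt[i], f_max * dt_min)
    return new_dt
```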
The four main modes are as follows:
### Minimal SPH
In this mode SWIFT uses the simplest energy-conserving SPH scheme that
can be written with no viscosity switches nor thermal diffusion
terms. It follows exactly the description in the review of the topic
by [Price 2012](http://adsabs.harvard.edu/abs/2012JCoPh.231..759P) and
is not optimised. This mode is used for education purposes or can
serve as a basis to help developers create other hydrodynamics
schemes.
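The core density estimate of such a scheme can be written compactly (a toy sketch with a fixed smoothing length; SWIFT adapts h per particle):

```python
import math

def w_cubic(r, h):
    """Cubic-spline (M4) SPH kernel in 3D, with support radius 2h."""
    q = r / h
    sigma = 1.0 / (math.pi * h**3)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(i, pos, mass, h):
    """Smoothed density at particle i: rho_i = sum_j m_j W(|r_ij|, h)."""
    xi = pos[i]
    return sum(m * w_cubic(math.dist(xi, xj), h) for xj, m in zip(pos, mass))
```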
### GADGET-2 SPH
SWIFT contains a 'backwards-compatible' [GADGET-2
SPH](http://adsabs.harvard.edu/abs/2002MNRAS.333..649S) mode, which
uses a standard [Monaghan
1977](http://adsabs.harvard.edu/abs/1977MNRAS.181..375G) artificial
viscosity scheme with a
[Balsara](https://www.ideals.illinois.edu/handle/2142/23836)
switch. Note that the GADGET-2 SPH scheme is implemented to be the
same as in the public release of GADGET-2, rather than the equations
as specified in the original paper, as there are some
differences. This is to enable users to use SWIFT as a drop-in
replacement for GADGET-2.
### Pressure-Entropy SPH
In SWIFT, the Pressure-Entropy and (in the future) Pressure-Energy
schemes from [Hopkins
2013](http://adsabs.harvard.edu/abs/2013MNRAS.428.2840H) are made
available for use. These schemes use a weighting factor of either
entropy or energy in the calculation of density, which has the effect
of promoting mixing between phases and reducing the spurious surface
tension at contact discontinuities that is present in a traditional
"Density-Entropy" scheme (such as the GADGET-2 one presented
above). This leads to much better behaviour in cases such as the
Kelvin-Helmholtz instability or the infamous ['blob'
test](http://adsabs.harvard.edu/abs/2007MNRAS.380..963A).
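The key change relative to a Density-Entropy scheme is which quantity is smoothed; a minimal sketch of the Pressure-Entropy estimate, with kernel values W_ij taken as precomputed inputs (function name and calling convention are made up for illustration):

```python
def smoothed_pressure(m, A, W, gamma=5.0 / 3.0):
    """Pressure-Entropy style estimate (after Hopkins 2013):
    P_i = (sum_j m_j A_j**(1/gamma) W_ij)**gamma,
    where A_j = P_j / rho_j**gamma is the entropy function."""
    s = sum(mj * Aj ** (1.0 / gamma) * Wij for mj, Aj, Wij in zip(m, A, W))
    return s ** gamma
```

Because the pressure itself is the smoothed quantity, it stays single-valued across a contact discontinuity even when the density jumps.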
### GIZMO (MFM)
SWIFT can also use the GIZMO scheme ([Hopkins
2015](http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1409.7395)),
also known as 'SPH-ALE' outside of astrophysics, implemented by Bert
Vandenbroucke. This scheme is a hybrid between a particle method and a
finite-volume method. Whilst particles are used to represent the
fluid, fluxes between them are computed and exchanged using Riemann
solvers and proper gradient reconstruction. This allows for a much
more accurate representation of the physics without any ad-hoc
switches for viscosity or thermal diffusion, but also comes at a
higher computational cost.
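The flux-exchange step can be illustrated with a standard approximate Riemann solver (an HLL sketch for the 1D Euler equations; this is a generic textbook solver, not the one used in GIZMO or SWIFT, which employ more sophisticated solvers and wave-speed estimates):

```python
import math

def hll_flux(rhoL, uL, pL, rhoR, uR, pR, gamma=5.0 / 3.0):
    """HLL approximate Riemann flux (mass, momentum, energy) between a
    left and a right fluid state, as exchanged across a face."""
    def flux(rho, u, p):
        E = p / (gamma - 1.0) + 0.5 * rho * u * u
        return (rho * u, rho * u * u + p, (E + p) * u)
    def cons(rho, u, p):
        return (rho, rho * u, p / (gamma - 1.0) + 0.5 * rho * u * u)
    cL = math.sqrt(gamma * pL / rhoL)   # left sound speed
    cR = math.sqrt(gamma * pR / rhoR)   # right sound speed
    sL = min(uL - cL, uR - cR)          # fastest left-going wave estimate
    sR = max(uL + cL, uR + cR)          # fastest right-going wave estimate
    FL, FR = flux(rhoL, uL, pL), flux(rhoR, uR, pR)
    if sL >= 0.0:
        return FL
    if sR <= 0.0:
        return FR
    UL, UR = cons(rhoL, uL, pL), cons(rhoR, uR, pR)
    return tuple((sR * fl - sL * fr + sL * sR * (ur - ul)) / (sR - sL)
                 for fl, fr, ul, ur in zip(FL, FR, UL, UR))
```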
<div class="videowrapper"><iframe width="100%" height="100%" src="https://www.youtube.com/embed/sce-AWTbXFI" frameborder="0" allowfullscreen></iframe>
</div>
## Subgrid models for galaxy formation
SWIFT implements two main models to study galaxy formation. These are
available in the public repository and different components (star
formation, cooling, feedback, etc.) can be mixed and matched for
comparison purposes.
### EAGLE model
The [EAGLE model](http://adsabs.harvard.edu/abs/2015MNRAS.446..521S)
of galaxy formation is available in SWIFT. This combines the cooling
of gas due to interaction with the UV and X-ray background radiation
of [Wiersma 2009](http://adsabs.harvard.edu/abs/2009MNRAS.393...99W),
the star-formation method of [Schaye
2008](http://adsabs.harvard.edu/abs/2008MNRAS.383.1210S), the stellar
evolution and gas enrichment model of [Wiersma
2009](http://adsabs.harvard.edu/abs/2009MNRAS.399..574W), feedback
from stars following [Dalla Vecchia
2012](http://adsabs.harvard.edu/abs/2012MNRAS.426..140D),
super-massive black-hole accretion following [Rosas-Guevara
2015](http://adsabs.harvard.edu/abs/2013arXiv1312.0598R) and
black-hole feedback following [Booth
2009](http://adsabs.harvard.edu/abs/2009MNRAS.398...53B). All these
modules have been ported from the Gadget-3 code to SWIFT and will
hence behave slightly differently.
### GEAR model
The [GEAR model](http://adsabs.harvard.edu/abs/2012ASPC..453..141R) is
available in SWIFT. This model uses the [GRACKLE
library](http://adsabs.harvard.edu/abs/2017MNRAS.466.2217S) for
cooling and is one of the many models that are part of the [AGORA
comparison
project](http://adsabs.harvard.edu/abs/2014ApJS..210...14K).
## Structure finder
SWIFT can be linked to the VELOCIraptor phase-space structure finder
to return haloes and sub-haloes while the simulation is running. This
on-the-fly processing allows for a much faster time-to-science than in
the classic method of post-processing simulations after they are run.
## Documentation and tests
There is a large amount of background reading material available in the
theory directory provided with SWIFT. You will need pdflatex to build
use, the results of which are available on our developer Wiki
[here](https://gitlab.cosma.dur.ac.uk/swift/swiftsim/wikis/hydro-tests).
# Computer Scientist
## Parallelisation strategy
SWIFT uses a hybrid MPI + threads parallelisation scheme with a
modified version of the publicly available lightweight tasking library
[QuickShed](https://gitlab.cosma.dur.ac.uk/swift/quicksched) as its
backbone. Communications between compute nodes are scheduled by the
library itself and use asynchronous calls to MPI to maximise the
overlap between communication and computation. The domain
decomposition itself is performed by splitting the graph of all the
compute tasks, using the METIS library, to minimise the number
of required MPI communications. The core calculations in SWIFT use
hand-written SIMD intrinsics to process multiple particles in parallel
and achieve maximal performance.
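The dependency-driven execution model can be sketched with Python's standard-library topological sorter (a serial toy with made-up task names; in SWIFT the ready tasks are picked up concurrently by worker threads via QuickSched):

```python
from graphlib import TopologicalSorter

def run_tasks(deps, work):
    """Run tasks as their dependencies complete; 'deps' maps each task
    to the set of tasks it depends on."""
    done_order = []
    ts = TopologicalSorter(deps)
    ts.prepare()
    while ts.is_active():
        for task in ts.get_ready():  # these could run on separate threads
            work(task)
            done_order.append(task)
            ts.done(task)
    return done_order
```

Because the scheduler only ever hands out tasks whose dependencies are satisfied, no global synchronisation barrier is needed between the phases of a time-step.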
## Strong- and weak-scaling
Cosmological simulations are typically very hard to scale to large
numbers of cores, due to the fact that information is needed from each
of the nodes to perform a given time-step. SWIFT uses smart domain
decomposition, vectorisation, and asynchronous communication to
provide a 36.7x speedup over the de-facto standard (the publicly
available GADGET-2 code) and near-perfect weak scaling even on
problems larger than presented in the published astrophysics
literature.
![SWIFT Scaling Plot](scalingplot.png) The left panel ("Weak Scaling")
shows how the run-time of a problem changes when the number of threads
is increased proportionally to the number of particles in the system
(i.e. a fixed 'load per thread'). The right panel ("Strong Scaling")
shows how the run-time changes for a fixed load as it is spread over
more threads. The right panel shows the 36.7x speedup that SWIFT
offers over GADGET-2. This uses a representative problem (a snapshot
of the [EAGLE](http://adsabs.harvard.edu/abs/2015MNRAS.446..521S)
simulation at late times) where the hierarchy of time-steps is very
deep and where most other codes struggle to harvest any scaling or
performance.
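For reference, the quantities plotted in such figures are simple ratios (an illustrative helper; the timings in the usage below are made up, not measured SWIFT numbers):

```python
def speedup_and_efficiency(t_serial, t_parallel, n_units):
    """Strong-scaling speedup S = t1/tn and parallel efficiency S/n;
    ideal strong scaling gives S = n and efficiency 1."""
    s = t_serial / t_parallel
    return s, s / n_units
```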
## I/O performance
SWIFT uses the parallel-hdf5 library to read and write snapshots
efficiently on distributed file systems. Through careful tuning of
Lustre parameters, SWIFT can write snapshots at the maximal disk
writing speed of a given system.
## What is SWIFT?
SWIFT is a hydrodynamics and gravity code for astrophysics and
cosmology. What does that even mean? It is a computer program designed
for running on supercomputers that simulates forces upon matter due to
two main things: gravity and hydrodynamics (forces that arise from
fluids such as viscosity). The creation and evolution of stars and
black holes is also modelled together with the effects they have on
their surroundings. This turns out to be quite a complicated problem
as we can't build computers large enough to simulate everything down
to the level of individual atoms. This implies that we need to
re-think the equations that describe the matter components and how
they interact with each other. In practice, we must solve the
equations that describe these problems numerically, which requires a
lot of computing power and fast computer code.
We use SWIFT to run simulations of Astrophysical objects, such as
galaxies or even the whole Universe. We do this to test theories
about what the Universe is made of and how it evolved from the Big
Bang up to the present day!
super-computers and took almost 50 days on a very large computer to
complete. SWIFT aims to remedy that by choosing to parallelise the
problem in a different way, by using better algorithms and by having a
more modular structure than other codes making it easier for users to
pick and choose what physical models they want to include in their
simulations.
The way that super-computers are built is not by having one huge
super-fast 'computer', but rather by having lots of regular computers
for much more flexibility. This cuts down on the time when a node is
sitting and waiting for work, which is just wasted time, electricity
and ultimately money!
One other computer technology that appeared in the last decade is
so-called vector instructions. These allow one given
computing core to process not just one number at a time (as in the
past) but up to 16 (or even more on some machines!) in parallel. This
means that a given compute core can solve the equations for 16 stars
(for instance) at a time and not just one. However, exploiting this
capability is hard and requires writing very detailed code. That is
rarely done in other codes but our extra efforts pay off and SWIFT can
solve the same equations as other software in significantly less time!
## What is SPH?
Smoothed Particle Hydrodynamics (SPH) is a numerical method for
approximating the forces between fluid elements (gas or
liquids). Let's say that we want to simulate some water and a wave
within it. Even a single liter of water has
100000000000000000000000000 particles in it. To store that much data
we would require a computer that has 100 trillion times as much storage
space as *all of the data on the internet*. It's clear that we need a
hope!
It turns out that we can represent the water by many fewer particles
if we can smooth over the gaps between them efficiently. Smoothed
Particle Hydrodynamics is the technique that we use to do that.
SPH was originally developed to solve problems in astrophysics but is
now a popular tool in industry, with applications that affect our
everyday life. Turbines are modelled with this technique to understand
how to harvest as much energy as possible from the wind. The method is
also used to understand how waves and tsunamis affect the shores,
allowing scientists to design effective defences for the population.