Welcome to the cosmological hydrodynamical code SWIFT
(SPH With Inter-dependent Fine-grained Tasking)

Website: www.swiftsim.com
Twitter: @SwiftSimulation

See INSTALL.swift for install instructions.

Usage: swift [OPTION]... PARAMFILE
       swift_mpi [OPTION]... PARAMFILE

Valid options are:
  -a        Pin runners using processor affinity.
  -c        Run with cosmological time integration.
  -C        Run with cooling.
  -d        Dry run. Read the parameter file and allocate memory, but do not
            read the particles from the ICs, and exit before the start of time
            integration. Allows the user to check the validity of the parameter
            and IC files as well as memory limits.
  -D        Always drift all particles, even those far from active particles.
            This emulates the default behaviour of Gadget-[23] and GIZMO.
  -e        Enable floating-point exceptions (debugging mode).
  -f {int}  Overwrite the CPU frequency (Hz) to be used for time measurements.
  -g        Run with an external gravitational potential.
  -G        Run with self-gravity.
  -n {int}  Execute a fixed number of time steps. When unset, use the time_end
            parameter to stop.
  -s        Run with hydrodynamics.
  -S        Run with stars.
  -t {int}  The number of threads to use on each MPI rank. Defaults to 1 if
            not specified.
  -v [12]   Increase the level of verbosity:
            1: MPI rank 0 writes
            2: all MPI ranks write
  -y {int}  Time-step frequency at which task graphs are dumped.
  -h        Print this help message and exit.

See the file examples/parameter_example.yml for an example parameter file.
Matthieu Schaller authored
Only repartition when required

Only repartition when the previous step processed some large fraction of all the particles, and then only when the loads between the ranks are out of balance. This is for several reasons:

* Repartitioning is expensive, so it should only be done when necessary.
* Frequent repartitioning with multi-dt is not necessary (for the EAGLE volumes, anyway).
* It is more representative to check the load balance when all tasks have been run.

The load balance is determined from the user CPU time per step (including the CPU time from all threads). We exclude the system time, as that is not down to processing and tends to even out the ranks artificially, much as elapsed time does (since we wait for all the MPI tasks to come together).

The allowed load imbalance is determined by the parameter `DomainDecomposition:trigger`. This can also be set to a number greater than one, in which case the old scheme of repartitioning every 'trigger' steps is used (previously, trigger was always 100).

See merge request !290
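The trigger logic described in this commit message can be sketched as follows. This is a minimal, hypothetical illustration, not SWIFT's actual internals: the function `should_repartition` and its arguments are invented names. It assumes the per-rank user CPU times for the last step have already been gathered, and repartitions either when the busiest rank exceeds the mean by more than the trigger fraction, or, when the trigger is greater than one, on the old fixed-interval scheme:

```c
#include <assert.h>

/* Hypothetical sketch of the repartition trigger; names do not match
 * SWIFT's real API.
 *
 * trigger  < 1: the allowed fractional load imbalance between ranks.
 * trigger >= 1: fall back to the old scheme of repartitioning every
 *               (int)trigger steps (previously hard-wired to 100).
 */
static int should_repartition(double trigger, int step,
                              const double *user_cpu_time, int nr_ranks) {
  if (trigger >= 1.0) {
    /* Old scheme: repartition at a fixed step interval. */
    return step % (int)trigger == 0;
  }

  /* New scheme: compare the busiest rank's user CPU time to the mean. */
  double sum = 0.0, max = 0.0;
  for (int i = 0; i < nr_ranks; i++) {
    sum += user_cpu_time[i];
    if (user_cpu_time[i] > max) max = user_cpu_time[i];
  }
  const double mean = sum / nr_ranks;

  /* Repartition only when the imbalance exceeds the allowed tolerance. */
  return (max - mean) / mean > trigger;
}
```

System time is deliberately excluded from `user_cpu_time` here, mirroring the reasoning above: waiting at MPI synchronisation points evens out elapsed and system time across ranks, so only user CPU time reflects the real per-rank workload.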
Repository contents:

* doc
* examples
* m4
* src
* tests
* theory
* .clang-format
* .gitignore
* AUTHORS
* CONTRIBUTING.md
* COPYING
* ChangeLog
* INSTALL.swift
* Makefile.am
* NEWS
* README
* autogen.sh
* configure.ac
* format.sh