diff --git a/.gitignore b/.gitignore index 8c3ede8f3125f5024c1fe01a405024d1cf5a7f19..a8be04764a48b1eb96cfac70483262e2a9a17dc7 100644 --- a/.gitignore +++ b/.gitignore @@ -31,6 +31,7 @@ examples/*/*.txt examples/*/*.dot examples/*/restart/* examples/*/used_parameters.yml +examples/*/unused_parameters.yml examples/*/*/*.xmf examples/*/*/*.png examples/*/*/*.mp4 @@ -41,6 +42,7 @@ examples/*/*/*.hdf5 examples/*/snapshots* examples/*/restart/* examples/*/*/used_parameters.yml +examples/*/*/unused_parameters.yml examples/*/*.mpg examples/*/gravity_checks_*.dat @@ -80,6 +82,7 @@ tests/test_nonsym_force_1_vec.dat tests/test_nonsym_force_2_vec.dat tests/potential.dat tests/testGreetings +tests/testSelectOutput tests/testReading tests/testSingle tests/testTimeIntegration @@ -102,6 +105,7 @@ tests/test125cells.sh tests/test125cellsPerturbed.sh tests/testParser.sh tests/testReading.sh +tests/testSelectOutput.sh tests/testAdiabaticIndex tests/testRiemannExact tests/testRiemannTRRS diff --git a/INSTALL.swift b/INSTALL.swift index 1782b75e34e2028110717d7873bc2c97365f8240..999a8d3655fa14a8ba1dcbd430b5146cc55ba791 100644 --- a/INSTALL.swift +++ b/INSTALL.swift @@ -96,18 +96,21 @@ SWIFT depends on a number of third party libraries that should be available before you can build it. - - HDF5: a HDF5 library (v. 1.8.x or higher) is required to read and - write particle data. One of the commands "h5cc" or "h5pcc" - should be available. If "h5pcc" is located them a parallel - HDF5 built for the version of MPI located should be - provided. If the command is not available then it can be - located using the "--with-hfd5" configure option. The value - should be the full path to the "h5cc" or "h5pcc" commands. - - - - MPI: to run on more than one node an MPI library that fully - supports MPI_THREAD_MULTIPLE. Before running configure the - "mpirun" command should be available in the shell. If your + - HDF5: + A HDF5 library (v. 1.8.x or higher) is required to read and + write particle data. One of the commands "h5cc" or "h5pcc" + should be available. If "h5pcc" is located then a parallel + HDF5 built for the version of MPI located should be + provided. If the command is not available then it can be + located using the "--with-hdf5" configure option. The value + should be the full path to the "h5cc" or "h5pcc" commands. + SWIFT makes effective use of parallel HDF5 when running on more than + one node, so this option is highly recommended. + + - MPI: + To run on more than one node an MPI library that fully + supports MPI_THREAD_MULTIPLE is required. Before running configure + the "mpirun" command should be available in the shell. If your command isn't called "mpirun" then define the "MPIRUN" environment variable, either in the shell or when running configure. @@ -116,57 +119,69 @@ before you can build it. much like the CC one. Use this when your MPI compiler has a none-standard name. - - GSL: To use cosmological time integration, a version of the GSL - must be available. + - GSL: + To use cosmological time integration, a version of the GSL + must be available. - - libtool: The build system relies on libtool as well as the other autotools. + - FFTW 3.x: + To run with periodic gravity forces, a build of the FFTW 3 + library must be available. Note that SWIFT does not make use + of the parallel capability of FFTW. Calculations are done by + single MPI nodes independently. - - Optional Dependencies - ===================== +- libtool: + The build system relies on libtool as well as the other autotools. 
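As an illustrative sketch only (the installation paths and launcher name below are placeholders, not part of this patch), the required libraries described above are typically pointed at on the configure line:

    ./configure --with-hdf5=/path/to/bin/h5pcc      # full path to the h5cc or h5pcc command
    MPIRUN=my-mpi-launcher ./configure              # only needed if your launcher is not called "mpirun"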
- - METIS: a build of the METIS library can be optionally used to - optimize the load between MPI nodes (requires an MPI - library). This should be found in the standard installation - directories, or pointed at using the "--with-metis" - configuration option. In this case the top-level - installation directory of the METIS build should be - given. Note to use METIS you should at least supply - "--with-metis". + Optional Dependencies + ===================== - - libNUMA: a build of the NUMA library can be used to pin the threads - to the physical core of the machine SWIFT is running - on. This is not always necessary as the OS scheduler may - do a good job at distributing the threads among the - different cores on each computing node. + - METIS: + a build of the METIS library can be optionally used to + optimize the load between MPI nodes (requires an MPI + library). This should be found in the standard installation + directories, or pointed at using the "--with-metis" + configuration option. In this case the top-level installation + directory of the METIS build should be given. Note to use + METIS you should supply at least "--with-metis". - - TCMalloc: a build of the TCMalloc library (part of gperftools) can - be used to obtain faster allocations than the standard C - malloc function part of glibc. The option "-with-tcmalloc" - should be passed to the configuration script to use it. +- libNUMA: + a build of the NUMA library can be used to pin the threads to + the physical core of the machine SWIFT is running on. This is + not always necessary as the OS scheduler may do a good job at + distributing the threads among the different cores on each + computing node. + - tcmalloc / jemalloc / TBBmalloc: + a build of the tcmalloc library (part of gperftools), jemalloc + or TBBmalloc can be used be used to obtain faster and more + scalable allocations than the standard C malloc function part + of glibc. Using one of these is highly recommended on systems + with many cores per node. One of the options + "--with-tcmalloc", "--with-jemalloc" or "--with-tbbmalloc" + should be passed to the configuration script to use it. - - gperftools: a build of gperftools can be used to obtain good - profiling of the code. The option "-with-profiler" - needs to be passed to the configuration script to use - it. + - gperftools: + a build of gperftools can be used to obtain good profiling of + the code. The option "--with-profiler" needs to be passed to + the configuration script to use it. + - DOXYGEN: + the doxygen library is required to create the SWIFT API + documentation. - - DOXYGEN: the doxygen library is required to create the SWIFT API - documentation. + - python: + Examples and solution script use python and rely on the numpy + library version 1.8.2 or higher. - - python: Examples and solution script use python and rely on the - numpy library version 1.8.2 or higher. SWIFT Coding style ================== -The SWIFT source code is using a variation of the 'Google' style. The -script 'format.sh' in the root directory applies the clang-format-3.8 -tool with our style choices to all the SWIFT C source file. Please -apply the formatting script to the files before submitting a merge -request. +The SWIFT source code uses a variation of 'Google' style. The script +'format.sh' in the root directory applies the clang-format-3.8 tool with our +style choices to all the SWIFT C source file. Please apply the formatting +script to the files before submitting a merge request. 
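A hedged sketch of how the optional dependencies and the formatting script described above are usually combined (the METIS installation prefix is a placeholder):

    ./configure --with-metis=/opt/metis --with-tcmalloc --with-profiler
    ./format.sh    # apply the clang-format style before submitting a merge request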
diff --git a/configure.ac b/configure.ac index 56018272debbb9e9ccb2f425eff45b97c358aec9..ff36f36aa2d0ac46aea54eb83cb8fa45601b9459 100644 --- a/configure.ac +++ b/configure.ac @@ -19,9 +19,6 @@ AC_INIT([SWIFT],[0.7.0],[https://gitlab.cosma.dur.ac.uk/swift/swiftsim]) swift_config_flags="$*" -# Need to define this, instead of using fifth argument of AC_INIT, until 2.64. -AC_DEFINE([PACKAGE_URL],["www.swiftsim.com"], [Package web pages]) - AC_COPYRIGHT AC_CONFIG_SRCDIR([src/space.c]) AC_CONFIG_AUX_DIR([.]) @@ -628,15 +625,51 @@ AC_SUBST([FFTW_LIBS]) AC_SUBST([FFTW_INCS]) AM_CONDITIONAL([HAVEFFTW],[test -n "$FFTW_LIBS"]) +# Check for -lprofiler usually part of the gperftools along with tcmalloc. +have_profiler="no" +AC_ARG_WITH([profiler], + [AS_HELP_STRING([--with-profiler=PATH], + [use cpu profiler library or specify the directory with lib @<:@yes/no@:>@] + )], + [with_profiler="$withval"], + [with_profiler="no"] +) +if test "x$with_profiler" != "xno"; then + if test "x$with_profiler" != "xyes" -a "x$with_profiler" != "x"; then + proflibs="-L$with_profiler -lprofiler" + else + proflibs="-lprofiler" + fi + AC_CHECK_LIB([profiler],[ProfilerFlush], + [have_profiler="yes" + AC_DEFINE([WITH_PROFILER],1,[Link against the gperftools profiling library.])], + [have_profiler="no"], $proflibs) + + if test "$have_profiler" = "yes"; then + PROFILER_LIBS="$proflibs" + else + PROFILER_LIBS="" + fi +fi +AC_SUBST([PROFILER_LIBS]) +AM_CONDITIONAL([HAVEPROFILER],[test -n "$PROFILER_LIBS"]) + +# Check for special allocators +have_special_allocator="no" + # Check for tcmalloc a fast malloc that is part of the gperftools. have_tcmalloc="no" AC_ARG_WITH([tcmalloc], - [AS_HELP_STRING([--with-tcmalloc], + [AS_HELP_STRING([--with-tcmalloc=PATH], [use tcmalloc library or specify the directory with lib @<:@yes/no@:>@] )], [with_tcmalloc="$withval"], [with_tcmalloc="no"] ) +if test "x$with_tcmalloc" != "xno" -a "x$have_special_allocator" != "xno"; then + AC_MSG_ERROR("Cannot activate more than one alternative malloc library") +fi + if test "x$with_tcmalloc" != "xno"; then if test "x$with_tcmalloc" != "xyes" -a "x$with_tcmalloc" != "x"; then tclibs="-L$with_tcmalloc -ltcmalloc" @@ -660,10 +693,17 @@ if test "x$with_tcmalloc" != "xno"; then if test "$have_tcmalloc" = "yes"; then TCMALLOC_LIBS="$tclibs" - # These are recommended for GCC. - if test "$ax_cv_c_compiler_vendor" = "gnu"; then - CFLAGS="$CFLAGS -fno-builtin-malloc -fno-builtin-calloc -fno-builtin-realloc -fno-builtin-free" - fi + AC_DEFINE([HAVE_TCMALLOC],1,[The tcmalloc library appears to be present.]) + + have_special_allocator="tcmalloc" + + # Prevent compilers that replace the calls with built-ins (GNU 99) from doing so. + case "$ax_cv_c_compiler_vendor" in + intel | gnu | clang) + CFLAGS="$CFLAGS -fno-builtin-malloc -fno-builtin-calloc -fno-builtin-realloc -fno-builtin-free" + ;; + esac + else TCMALLOC_LIBS="" fi @@ -671,44 +711,19 @@ fi AC_SUBST([TCMALLOC_LIBS]) AM_CONDITIONAL([HAVETCMALLOC],[test -n "$TCMALLOC_LIBS"]) -# Check for -lprofiler usually part of the gperftools along with tcmalloc. 
-have_profiler="no" -AC_ARG_WITH([profiler], - [AS_HELP_STRING([--with-profiler], - [use cpu profiler library or specify the directory with lib @<:@yes/no@:>@] - )], - [with_profiler="$withval"], - [with_profiler="no"] -) -if test "x$with_profiler" != "xno"; then - if test "x$with_profiler" != "xyes" -a "x$with_profiler" != "x"; then - proflibs="-L$with_profiler -lprofiler" - else - proflibs="-lprofiler" - fi - AC_CHECK_LIB([profiler],[ProfilerFlush], - [have_profiler="yes" - AC_DEFINE([WITH_PROFILER],1,[Link against the gperftools profiling library.])], - [have_profiler="no"], $proflibs) - - if test "$have_profiler" = "yes"; then - PROFILER_LIBS="$proflibs" - else - PROFILER_LIBS="" - fi -fi -AC_SUBST([PROFILER_LIBS]) -AM_CONDITIONAL([HAVEPROFILER],[test -n "$PROFILER_LIBS"]) - # Check for jemalloc another fast malloc that is good with contention. have_jemalloc="no" AC_ARG_WITH([jemalloc], - [AS_HELP_STRING([--with-jemalloc], + [AS_HELP_STRING([--with-jemalloc=PATH], [use jemalloc library or specify the directory with lib @<:@yes/no@:>@] )], [with_jemalloc="$withval"], [with_jemalloc="no"] ) +if test "x$with_jemalloc" != "xno" -a "x$have_special_allocator" != "xno"; then + AC_MSG_ERROR("Cannot activate more than one alternative malloc library") +fi + if test "x$with_jemalloc" != "xno"; then if test "x$with_jemalloc" != "xyes" -a "x$with_jemalloc" != "x"; then jelibs="-L$with_jemalloc -ljemalloc" @@ -720,6 +735,18 @@ if test "x$with_jemalloc" != "xno"; then if test "$have_jemalloc" = "yes"; then JEMALLOC_LIBS="$jelibs" + + AC_DEFINE([HAVE_JEMALLOC],1,[The jemalloc library appears to be present.]) + + have_special_allocator="jemalloc" + + # Prevent compilers that replace the regular calls with built-ins (GNU 99) from doing so. + case "$ax_cv_c_compiler_vendor" in + intel | gnu | clang) + CFLAGS="$CFLAGS -fno-builtin-malloc -fno-builtin-calloc -fno-builtin-realloc -fno-builtin-free" + ;; + esac + else JEMALLOC_LIBS="" fi @@ -727,11 +754,49 @@ fi AC_SUBST([JEMALLOC_LIBS]) AM_CONDITIONAL([HAVEJEMALLOC],[test -n "$JEMALLOC_LIBS"]) -# Don't allow both tcmalloc and jemalloc. -if test "x$have_tcmalloc" != "xno" -a "x$have_jemalloc" != "xno"; then - AC_MSG_ERROR([Cannot use tcmalloc at same time as jemalloc]) +# Check for tbbmalloc, Intel's fast and parallel allocator +have_tbbmalloc="no" +AC_ARG_WITH([tbbmalloc], + [AS_HELP_STRING([--with-tbbmalloc=PATH], + [use tbbmalloc library or specify the directory with lib @<:@yes/no@:>@] + )], + [with_tbbmalloc="$withval"], + [with_tbbmalloc="no"] +) +if test "x$with_tbbmalloc" != "xno" -a "x$have_special_allocator" != "xno"; then + AC_MSG_ERROR("Cannot activate more than one alternative malloc library") fi +if test "x$with_tbbmalloc" != "xno"; then + if test "x$with_tbbmalloc" != "xyes" -a "x$with_tbbmalloc" != "x"; then + tbblibs="-L$with_tbbmalloc -ltbbmalloc_proxy -ltbbmalloc" + else + tbblibs="-ltbbmalloc_proxy -ltbbmalloc" + fi + AC_CHECK_LIB([tbbmalloc],[scalable_malloc],[have_tbbmalloc="yes"],[have_tbbmalloc="no"], + $tbblibs) + + if test "$have_tbbmalloc" = "yes"; then + TBBMALLOC_LIBS="$tbblibs" + + AC_DEFINE([HAVE_TBBMALLOC],1,[The TBBmalloc library appears to be present.]) + + have_special_allocator="TBBmalloc" + + # Prevent compilers that replace the calls with built-ins (GNU 99) from doing so. 
+ case "$ax_cv_c_compiler_vendor" in + intel | gnu | clang) + CFLAGS="$CFLAGS -fno-builtin-malloc -fno-builtin-calloc -fno-builtin-realloc -fno-builtin-free" + ;; + esac + + else + TBBMALLOC_LIBS="" + fi +fi +AC_SUBST([TBBMALLOC_LIBS]) +AM_CONDITIONAL([HAVETBBMALLOC],[test -n "$TBBMALLOC_LIBS"]) + # Check for HDF5. This is required. AX_LIB_HDF5 if test "$with_hdf5" != "yes"; then @@ -753,7 +818,8 @@ if test "$with_hdf5" = "yes"; then if test "$enable_parallel_hdf5" = "yes"; then AC_MSG_CHECKING([for HDF5 parallel support]) -# Check if the library is capable, the header should define H5_HAVE_PARALLEL. + + # Check if the library is capable, the header should define H5_HAVE_PARALLEL. AC_COMPILE_IFELSE([AC_LANG_SOURCE([[ #include "hdf5.h" @@ -814,6 +880,27 @@ AC_LINK_IFELSE([AC_LANG_PROGRAM( [AC_DEFINE(HAVE__RTC,1,[Define if you have the UNICOS _rtc() intrinsic.])],[rtc_ok=no]) AC_MSG_RESULT($rtc_ok) +# Special timers for the ARM v7 and ARM v8 platforms (taken from FFTW-3 to match their cycle.h) +AC_ARG_ENABLE(armv8-pmccntr-el0, [AC_HELP_STRING([--enable-armv8-pmccntr-el0],[enable the cycle counter on ARMv8 via the PMCCNTR_EL0 register])], have_armv8pmccntrel0=$enableval) +if test "$have_armv8pmccntrel0"x = "yes"x; then + AC_DEFINE(HAVE_ARMV8_PMCCNTR_EL0,1,[Define if you have enabled the PMCCNTR_EL0 cycle counter on ARMv8]) +fi + +AC_ARG_ENABLE(armv8-cntvct-el0, [AC_HELP_STRING([--enable-armv8-cntvct-el0],[enable the cycle counter on ARMv8 via the CNTVCT_EL0 register])], have_armv8cntvctel0=$enableval) +if test "$have_armv8cntvctel0"x = "yes"x; then + AC_DEFINE(HAVE_ARMV8_CNTVCT_EL0,1,[Define if you have enabled the CNTVCT_EL0 cycle counter on ARMv8]) +fi + +AC_ARG_ENABLE(armv7a-cntvct, [AC_HELP_STRING([--enable-armv7a-cntvct],[enable the cycle counter on Armv7a via the CNTVCT register])], have_armv7acntvct=$enableval) +if test "$have_armv7acntvct"x = "yes"x; then + AC_DEFINE(HAVE_ARMV7A_CNTVCT,1,[Define if you have enabled the CNTVCT cycle counter on ARMv7a]) +fi + +AC_ARG_ENABLE(armv7a-pmccntr, [AC_HELP_STRING([--enable-armv7a-pmccntr],[enable the cycle counter on Armv7a via the PMCCNTR register])], have_armv7apmccntr=$enableval) +if test "$have_armv7apmccntr"x = "yes"x; then + AC_DEFINE(HAVE_ARMV7A_PMCCNTR,1,[Define if you have enabled the PMCCNTR cycle counter on ARMv7a]) +fi + # Add warning flags by default, if these can be used. Option =error adds # -Werror to GCC, clang and Intel. Note do this last as compiler tests may # become errors, if that's an issue don't use CFLAGS for these, use an AC_SUBST(). @@ -848,6 +935,11 @@ if test "$enable_warn" != "no"; then ;; esac fi + + # We want strict-prototypes, but this must still work even if warnings + # are an error. + AX_CHECK_COMPILE_FLAG([-Wstrict-prototypes],[CFLAGS="$CFLAGS -Wstrict-prototypes"], + [CFLAGS="$CFLAGS"],[$CFLAGS],[AC_LANG_SOURCE([int main(void){return 0;}])]) fi # Various package configuration options. 
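To illustrate the switches added in the hunks above (library directories are placeholders), a typical configure call selects at most one alternative allocator and may enable one of the new ARM cycle-counter options:

    ./configure --with-tbbmalloc=/opt/tbb/lib \
                --with-profiler=/opt/gperftools/lib \
                --enable-armv8-cntvct-el0

    # Requesting two allocators at once now aborts with:
    #   "Cannot activate more than one alternative malloc library"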
@@ -1204,6 +1296,7 @@ AC_CONFIG_FILES([tests/testPeriodicBC.sh], [chmod +x tests/testPeriodicBC.sh]) AC_CONFIG_FILES([tests/testPeriodicBCPerturbed.sh], [chmod +x tests/testPeriodicBCPerturbed.sh]) AC_CONFIG_FILES([tests/testInteractions.sh], [chmod +x tests/testInteractions.sh]) AC_CONFIG_FILES([tests/testParser.sh], [chmod +x tests/testParser.sh]) +AC_CONFIG_FILES([tests/testSelectOutput.sh], [chmod +x tests/testSelectOutput.sh]) # Save the compilation options AC_DEFINE_UNQUOTED([SWIFT_CONFIG_FLAGS],["$swift_config_flags"],[Flags passed to configure]) @@ -1211,6 +1304,11 @@ AC_DEFINE_UNQUOTED([SWIFT_CONFIG_FLAGS],["$swift_config_flags"],[Flags passed to # Make sure the latest git revision string gets included touch src/version.c +# Need to define this, instead of using fifth argument of AC_INIT, until +# 2.64. Defer until now as this redefines PACKAGE_URL, which can emit a +# compilation error when testing with -Werror. +AC_DEFINE([PACKAGE_URL],["www.swiftsim.com"], [Package web pages]) + # Generate output. AC_OUTPUT @@ -1220,22 +1318,21 @@ AC_MSG_RESULT([ $PACKAGE_NAME v.$PACKAGE_VERSION - Compiler : $CC - - vendor : $ax_cv_c_compiler_vendor - - version : $ax_cv_c_compiler_version - - flags : $CFLAGS - MPI enabled : $enable_mpi - HDF5 enabled : $with_hdf5 - - parallel : $have_parallel_hdf5 - Metis enabled : $have_metis - FFTW3 enabled : $have_fftw - GSL enabled : $have_gsl - libNUMA enabled : $have_numa - GRACKLE enabled : $have_grackle - Using tcmalloc : $have_tcmalloc - Using jemalloc : $have_jemalloc - CPU profiler : $have_profiler - Pthread barriers : $have_pthread_barrier + Compiler : $CC + - vendor : $ax_cv_c_compiler_vendor + - version : $ax_cv_c_compiler_version + - flags : $CFLAGS + MPI enabled : $enable_mpi + HDF5 enabled : $with_hdf5 + - parallel : $have_parallel_hdf5 + Metis enabled : $have_metis + FFTW3 enabled : $have_fftw + GSL enabled : $have_gsl + libNUMA enabled : $have_numa + GRACKLE enabled : $have_grackle + Special allocators : $have_special_allocator + CPU profiler : $have_profiler + Pthread barriers : $have_pthread_barrier Hydro scheme : $with_hydro Dimensionality : $with_dimension diff --git a/doc/RTD/DeveloperGuide/AddingTasks/addingtasks.rst b/doc/RTD-Legacy/DeveloperGuide/AddingTasks/addingtasks.rst similarity index 100% rename from doc/RTD/DeveloperGuide/AddingTasks/addingtasks.rst rename to doc/RTD-Legacy/DeveloperGuide/AddingTasks/addingtasks.rst diff --git a/doc/RTD/DeveloperGuide/Examples/Cooling/cooling.rst b/doc/RTD-Legacy/DeveloperGuide/Examples/Cooling/cooling.rst similarity index 100% rename from doc/RTD/DeveloperGuide/Examples/Cooling/cooling.rst rename to doc/RTD-Legacy/DeveloperGuide/Examples/Cooling/cooling.rst diff --git a/doc/RTD/DeveloperGuide/Examples/ExternalGravity/externalgravity.rst b/doc/RTD-Legacy/DeveloperGuide/Examples/ExternalGravity/externalgravity.rst similarity index 100% rename from doc/RTD/DeveloperGuide/Examples/ExternalGravity/externalgravity.rst rename to doc/RTD-Legacy/DeveloperGuide/Examples/ExternalGravity/externalgravity.rst diff --git a/doc/RTD/DeveloperGuide/developerguide.rst b/doc/RTD-Legacy/DeveloperGuide/developerguide.rst similarity index 100% rename from doc/RTD/DeveloperGuide/developerguide.rst rename to doc/RTD-Legacy/DeveloperGuide/developerguide.rst diff --git a/doc/RTD/FAQ/index.rst b/doc/RTD-Legacy/FAQ/index.rst similarity index 100% rename from doc/RTD/FAQ/index.rst rename to doc/RTD-Legacy/FAQ/index.rst diff --git a/doc/RTD/Innovation/AsynchronousComms/index.rst 
b/doc/RTD-Legacy/Innovation/AsynchronousComms/index.rst similarity index 100% rename from doc/RTD/Innovation/AsynchronousComms/index.rst rename to doc/RTD-Legacy/Innovation/AsynchronousComms/index.rst diff --git a/doc/RTD/Innovation/Caching/index.rst b/doc/RTD-Legacy/Innovation/Caching/index.rst similarity index 100% rename from doc/RTD/Innovation/Caching/index.rst rename to doc/RTD-Legacy/Innovation/Caching/index.rst diff --git a/doc/RTD/Innovation/HeirarchicalCellDecomposition/InitialDecomp.png b/doc/RTD-Legacy/Innovation/HeirarchicalCellDecomposition/InitialDecomp.png similarity index 100% rename from doc/RTD/Innovation/HeirarchicalCellDecomposition/InitialDecomp.png rename to doc/RTD-Legacy/Innovation/HeirarchicalCellDecomposition/InitialDecomp.png diff --git a/doc/RTD/Innovation/HeirarchicalCellDecomposition/SplitCell.png b/doc/RTD-Legacy/Innovation/HeirarchicalCellDecomposition/SplitCell.png similarity index 100% rename from doc/RTD/Innovation/HeirarchicalCellDecomposition/SplitCell.png rename to doc/RTD-Legacy/Innovation/HeirarchicalCellDecomposition/SplitCell.png diff --git a/doc/RTD/Innovation/HeirarchicalCellDecomposition/SplitPair.png b/doc/RTD-Legacy/Innovation/HeirarchicalCellDecomposition/SplitPair.png similarity index 100% rename from doc/RTD/Innovation/HeirarchicalCellDecomposition/SplitPair.png rename to doc/RTD-Legacy/Innovation/HeirarchicalCellDecomposition/SplitPair.png diff --git a/doc/RTD/Innovation/HeirarchicalCellDecomposition/index.rst b/doc/RTD-Legacy/Innovation/HeirarchicalCellDecomposition/index.rst similarity index 100% rename from doc/RTD/Innovation/HeirarchicalCellDecomposition/index.rst rename to doc/RTD-Legacy/Innovation/HeirarchicalCellDecomposition/index.rst diff --git a/doc/RTD/Innovation/HybridParallelism/index.rst b/doc/RTD-Legacy/Innovation/HybridParallelism/index.rst similarity index 100% rename from doc/RTD/Innovation/HybridParallelism/index.rst rename to doc/RTD-Legacy/Innovation/HybridParallelism/index.rst diff --git a/doc/RTD/Innovation/TaskBasedParallelism/OMPScaling.png b/doc/RTD-Legacy/Innovation/TaskBasedParallelism/OMPScaling.png similarity index 100% rename from doc/RTD/Innovation/TaskBasedParallelism/OMPScaling.png rename to doc/RTD-Legacy/Innovation/TaskBasedParallelism/OMPScaling.png diff --git a/doc/RTD/Innovation/TaskBasedParallelism/TasksExample.png b/doc/RTD-Legacy/Innovation/TaskBasedParallelism/TasksExample.png similarity index 100% rename from doc/RTD/Innovation/TaskBasedParallelism/TasksExample.png rename to doc/RTD-Legacy/Innovation/TaskBasedParallelism/TasksExample.png diff --git a/doc/RTD/Innovation/TaskBasedParallelism/TasksExampleConflicts.png b/doc/RTD-Legacy/Innovation/TaskBasedParallelism/TasksExampleConflicts.png similarity index 100% rename from doc/RTD/Innovation/TaskBasedParallelism/TasksExampleConflicts.png rename to doc/RTD-Legacy/Innovation/TaskBasedParallelism/TasksExampleConflicts.png diff --git a/doc/RTD/Innovation/TaskBasedParallelism/index.rst b/doc/RTD-Legacy/Innovation/TaskBasedParallelism/index.rst similarity index 100% rename from doc/RTD/Innovation/TaskBasedParallelism/index.rst rename to doc/RTD-Legacy/Innovation/TaskBasedParallelism/index.rst diff --git a/doc/RTD/Innovation/TaskGraphPartition/index.rst b/doc/RTD-Legacy/Innovation/TaskGraphPartition/index.rst similarity index 100% rename from doc/RTD/Innovation/TaskGraphPartition/index.rst rename to doc/RTD-Legacy/Innovation/TaskGraphPartition/index.rst diff --git a/doc/RTD/Innovation/Vectorisation/index.rst 
b/doc/RTD-Legacy/Innovation/Vectorisation/index.rst similarity index 100% rename from doc/RTD/Innovation/Vectorisation/index.rst rename to doc/RTD-Legacy/Innovation/Vectorisation/index.rst diff --git a/doc/RTD/Innovation/index.rst b/doc/RTD-Legacy/Innovation/index.rst similarity index 96% rename from doc/RTD/Innovation/index.rst rename to doc/RTD-Legacy/Innovation/index.rst index da3f2474b4c71f8030634b2f669d3d8093757171..65a9cb82230b5661f332e25ea05644ec8e2cdbf8 100644 --- a/doc/RTD/Innovation/index.rst +++ b/doc/RTD-Legacy/Innovation/index.rst @@ -1,5 +1,3 @@ -.. _GettingStarted: - What makes SWIFT different? =========================== diff --git a/doc/RTD-Legacy/Makefile b/doc/RTD-Legacy/Makefile new file mode 100644 index 0000000000000000000000000000000000000000..b1dfebb01c2f55530f7a8efad3c5e5bf96484c18 --- /dev/null +++ b/doc/RTD-Legacy/Makefile @@ -0,0 +1,177 @@ +# Makefile for Sphinx documentation +# + +# You can set these variables from the command line. +SPHINXOPTS = +SPHINXBUILD = sphinx-build +PAPER = +BUILDDIR = _build + +# User-friendly check for sphinx-build +ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) +$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) +endif + +# Internal variables. +PAPEROPT_a4 = -D latex_paper_size=a4 +PAPEROPT_letter = -D latex_paper_size=letter +ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . +# the i18n builder cannot share the environment and doctrees with the others +I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . + +.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext + +help: + @echo "Please use \`make <target>' where <target> is one of" + @echo " html to make standalone HTML files" + @echo " dirhtml to make HTML files named index.html in directories" + @echo " singlehtml to make a single large HTML file" + @echo " pickle to make pickle files" + @echo " json to make JSON files" + @echo " htmlhelp to make HTML files and a HTML help project" + @echo " qthelp to make HTML files and a qthelp project" + @echo " devhelp to make HTML files and a Devhelp project" + @echo " epub to make an epub" + @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" + @echo " latexpdf to make LaTeX files and run them through pdflatex" + @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" + @echo " text to make text files" + @echo " man to make manual pages" + @echo " texinfo to make Texinfo files" + @echo " info to make Texinfo files and run them through makeinfo" + @echo " gettext to make PO message catalogs" + @echo " changes to make an overview of all changed/added/deprecated items" + @echo " xml to make Docutils-native XML files" + @echo " pseudoxml to make pseudoxml-XML files for display purposes" + @echo " linkcheck to check all external links for integrity" + @echo " doctest to run all doctests embedded in the documentation (if enabled)" + +clean: + rm -rf $(BUILDDIR)/* + +html: + $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html + @echo + @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." 
+ +dirhtml: + $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml + @echo + @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." + +singlehtml: + $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml + @echo + @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." + +pickle: + $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle + @echo + @echo "Build finished; now you can process the pickle files." + +json: + $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json + @echo + @echo "Build finished; now you can process the JSON files." + +htmlhelp: + $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp + @echo + @echo "Build finished; now you can run HTML Help Workshop with the" \ + ".hhp project file in $(BUILDDIR)/htmlhelp." + +qthelp: + $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp + @echo + @echo "Build finished; now you can run "qcollectiongenerator" with the" \ + ".qhcp project file in $(BUILDDIR)/qthelp, like this:" + @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/SWIFT.qhcp" + @echo "To view the help file:" + @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/SWIFT.qhc" + +devhelp: + $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp + @echo + @echo "Build finished." + @echo "To view the help file:" + @echo "# mkdir -p $$HOME/.local/share/devhelp/SWIFT" + @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/SWIFT" + @echo "# devhelp" + +epub: + $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub + @echo + @echo "Build finished. The epub file is in $(BUILDDIR)/epub." + +latex: + $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex + @echo + @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." + @echo "Run \`make' in that directory to run these through (pdf)latex" \ + "(use \`make latexpdf' here to do that automatically)." + +latexpdf: + $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex + @echo "Running LaTeX files through pdflatex..." + $(MAKE) -C $(BUILDDIR)/latex all-pdf + @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." + +latexpdfja: + $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex + @echo "Running LaTeX files through platex and dvipdfmx..." + $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja + @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." + +text: + $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text + @echo + @echo "Build finished. The text files are in $(BUILDDIR)/text." + +man: + $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man + @echo + @echo "Build finished. The manual pages are in $(BUILDDIR)/man." + +texinfo: + $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo + @echo + @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." + @echo "Run \`make' in that directory to run these through makeinfo" \ + "(use \`make info' here to do that automatically)." + +info: + $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo + @echo "Running Texinfo files through makeinfo..." + make -C $(BUILDDIR)/texinfo info + @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." + +gettext: + $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale + @echo + @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." + +changes: + $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes + @echo + @echo "The overview file is in $(BUILDDIR)/changes." 
+ +linkcheck: + $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck + @echo + @echo "Link check complete; look for any errors in the above output " \ + "or in $(BUILDDIR)/linkcheck/output.txt." + +doctest: + $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest + @echo "Testing of doctests in the sources finished, look at the " \ + "results in $(BUILDDIR)/doctest/output.txt." + +xml: + $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml + @echo + @echo "Build finished. The XML files are in $(BUILDDIR)/xml." + +pseudoxml: + $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml + @echo + @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." diff --git a/doc/RTD/Motivation/index.rst b/doc/RTD-Legacy/Motivation/index.rst similarity index 100% rename from doc/RTD/Motivation/index.rst rename to doc/RTD-Legacy/Motivation/index.rst diff --git a/doc/RTD/Physics/Gravity/gravity.rst b/doc/RTD-Legacy/Physics/Gravity/gravity.rst similarity index 100% rename from doc/RTD/Physics/Gravity/gravity.rst rename to doc/RTD-Legacy/Physics/Gravity/gravity.rst diff --git a/doc/RTD/Physics/SPH/sph.rst b/doc/RTD-Legacy/Physics/SPH/sph.rst similarity index 100% rename from doc/RTD/Physics/SPH/sph.rst rename to doc/RTD-Legacy/Physics/SPH/sph.rst diff --git a/doc/RTD/Physics/index.rst b/doc/RTD-Legacy/Physics/index.rst similarity index 100% rename from doc/RTD/Physics/index.rst rename to doc/RTD-Legacy/Physics/index.rst diff --git a/doc/RTD/conf.py b/doc/RTD-Legacy/conf.py similarity index 98% rename from doc/RTD/conf.py rename to doc/RTD-Legacy/conf.py index b4eab3d354322f8ff5e060b1795c2654eb879f90..6b65aabe54a480a29a7930f6e6f24a35b0204dcf 100644 --- a/doc/RTD/conf.py +++ b/doc/RTD-Legacy/conf.py @@ -25,7 +25,7 @@ import sys, os # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. -extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo', 'sphinx.ext.pngmath', 'sphinx.ext.mathjax'] +extensions = ['sphinx.ext.todo', 'sphinx.ext.mathjax'] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] diff --git a/doc/RTD/index.rst b/doc/RTD-Legacy/index.rst similarity index 100% rename from doc/RTD/index.rst rename to doc/RTD-Legacy/index.rst diff --git a/doc/RTD/.gitignore b/doc/RTD/.gitignore new file mode 100644 index 0000000000000000000000000000000000000000..50239858cbc1b86a270ae88903139d91b943acd3 --- /dev/null +++ b/doc/RTD/.gitignore @@ -0,0 +1,2 @@ +build/* +make.bat diff --git a/doc/RTD/Makefile b/doc/RTD/Makefile index b1dfebb01c2f55530f7a8efad3c5e5bf96484c18..22ff80ae32739df7d7ca97d8d2a6c6c8159b8248 100644 --- a/doc/RTD/Makefile +++ b/doc/RTD/Makefile @@ -1,177 +1,20 @@ -# Makefile for Sphinx documentation +# Minimal makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build -PAPER = -BUILDDIR = _build - -# User-friendly check for sphinx-build -ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) -$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) -endif - -# Internal variables. 
-PAPEROPT_a4 = -D latex_paper_size=a4 -PAPEROPT_letter = -D latex_paper_size=letter -ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . -# the i18n builder cannot share the environment and doctrees with the others -I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . - -.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext +SPHINXPROJ = SWIFTSPHWIthFine-grainedinter-dependentTasking +SOURCEDIR = source +BUILDDIR = build +# Put it first so that "make" without argument is like "make help". help: - @echo "Please use \`make <target>' where <target> is one of" - @echo " html to make standalone HTML files" - @echo " dirhtml to make HTML files named index.html in directories" - @echo " singlehtml to make a single large HTML file" - @echo " pickle to make pickle files" - @echo " json to make JSON files" - @echo " htmlhelp to make HTML files and a HTML help project" - @echo " qthelp to make HTML files and a qthelp project" - @echo " devhelp to make HTML files and a Devhelp project" - @echo " epub to make an epub" - @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" - @echo " latexpdf to make LaTeX files and run them through pdflatex" - @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" - @echo " text to make text files" - @echo " man to make manual pages" - @echo " texinfo to make Texinfo files" - @echo " info to make Texinfo files and run them through makeinfo" - @echo " gettext to make PO message catalogs" - @echo " changes to make an overview of all changed/added/deprecated items" - @echo " xml to make Docutils-native XML files" - @echo " pseudoxml to make pseudoxml-XML files for display purposes" - @echo " linkcheck to check all external links for integrity" - @echo " doctest to run all doctests embedded in the documentation (if enabled)" - -clean: - rm -rf $(BUILDDIR)/* - -html: - $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html - @echo - @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." - -dirhtml: - $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml - @echo - @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." - -singlehtml: - $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml - @echo - @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." - -pickle: - $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle - @echo - @echo "Build finished; now you can process the pickle files." - -json: - $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json - @echo - @echo "Build finished; now you can process the JSON files." - -htmlhelp: - $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp - @echo - @echo "Build finished; now you can run HTML Help Workshop with the" \ - ".hhp project file in $(BUILDDIR)/htmlhelp." - -qthelp: - $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp - @echo - @echo "Build finished; now you can run "qcollectiongenerator" with the" \ - ".qhcp project file in $(BUILDDIR)/qthelp, like this:" - @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/SWIFT.qhcp" - @echo "To view the help file:" - @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/SWIFT.qhc" - -devhelp: - $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp - @echo - @echo "Build finished." 
- @echo "To view the help file:" - @echo "# mkdir -p $$HOME/.local/share/devhelp/SWIFT" - @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/SWIFT" - @echo "# devhelp" - -epub: - $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub - @echo - @echo "Build finished. The epub file is in $(BUILDDIR)/epub." - -latex: - $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex - @echo - @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." - @echo "Run \`make' in that directory to run these through (pdf)latex" \ - "(use \`make latexpdf' here to do that automatically)." - -latexpdf: - $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex - @echo "Running LaTeX files through pdflatex..." - $(MAKE) -C $(BUILDDIR)/latex all-pdf - @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." - -latexpdfja: - $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex - @echo "Running LaTeX files through platex and dvipdfmx..." - $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja - @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." - -text: - $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text - @echo - @echo "Build finished. The text files are in $(BUILDDIR)/text." - -man: - $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man - @echo - @echo "Build finished. The manual pages are in $(BUILDDIR)/man." - -texinfo: - $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo - @echo - @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." - @echo "Run \`make' in that directory to run these through makeinfo" \ - "(use \`make info' here to do that automatically)." - -info: - $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo - @echo "Running Texinfo files through makeinfo..." - make -C $(BUILDDIR)/texinfo info - @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." - -gettext: - $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale - @echo - @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." - -changes: - $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes - @echo - @echo "The overview file is in $(BUILDDIR)/changes." - -linkcheck: - $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck - @echo - @echo "Link check complete; look for any errors in the above output " \ - "or in $(BUILDDIR)/linkcheck/output.txt." - -doctest: - $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest - @echo "Testing of doctests in the sources finished, look at the " \ - "results in $(BUILDDIR)/doctest/output.txt." + @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) -xml: - $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml - @echo - @echo "Build finished. The XML files are in $(BUILDDIR)/xml." +.PHONY: help Makefile -pseudoxml: - $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml - @echo - @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." +# Catch-all target: route all unknown targets to Sphinx using the new +# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 
+%: Makefile + @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) \ No newline at end of file diff --git a/doc/RTD/README.md b/doc/RTD/README.md new file mode 100644 index 0000000000000000000000000000000000000000..3394ce7b8b97a71a7f14fb235e49e2efb51b9f9f --- /dev/null +++ b/doc/RTD/README.md @@ -0,0 +1,15 @@ +SWIFT Documentation +=================== + +This is the main documentation for SWIFT that can be found on ReadTheDocs. + +You will need the `sphinx` and `sphinx-autobuild` python packages (pip install +them!) to build the documentation to html, as well as the `sphinx_rtd_theme` +package which is used as the theme. + +To build the documentation, `make html` and then it is available in +`build/html`. + +Please consider adding documentation when you add code! + + diff --git a/doc/RTD/source/Cooling/index.rst b/doc/RTD/source/Cooling/index.rst new file mode 100644 index 0000000000000000000000000000000000000000..46a01b2a054629b7fc13f0ea190c2a5a0fdd6d9c --- /dev/null +++ b/doc/RTD/source/Cooling/index.rst @@ -0,0 +1,67 @@ +.. Equation of State + Loic Hausammann, 7th April 2018 + +.. _cooling: + +Cooling +======= + +Currently, we have 5 different cooling (EAGLE, Grackle, const-lambda, const-du +and none). Three of them are easily solved analytically (const-lambda, +const-du and none) while the two last requires complex chemical networks. + + +Equations +--------- + +The first table compares the different analytical cooling while the next ones +are specific to a given cooling. The quantities are the internal energy (\\( u +\\)), the density \\( rho \\), the element mass fraction (\\( X_i \\)), the +cooling function (\\(\\Lambda\\), the proton mass (\\( m_H \\)) and the time +step condition (\\( t\_\\text{step}\\)). If not specified otherwise, all +cooling contains a temperature floor avoiding negative temperature. + +.. csv-table:: Analytical Cooling + :header: "Variable", "Const-Lambda", "Const-du", "None" + + "\\( \\frac{ \\mathrm{d}u }{ \\mathrm{d}t } \\)", "\\( -\\Lambda \\frac{\\rho^2 X_H^2}{\\rho m_H^2} \\)", "const", "0" + "\\( \\Delta t\_\\text{max} \\)", "\\( t\_\\text{step} \\frac{u}{\\left|\\frac{ \\mathrm{d}u }{ \\mathrm{d}t }\\right|} \\)", "\\( t\_\\text{step} \\frac{u}{\\ \\left| \\frac{ \\mathrm{d}u }{ \\mathrm{d}t }\\right|} \\)", "None" + + +Grackle +~~~~~~~ + +Grackle is a chemistry and cooling library presented in B. Smith et al. 2016 +(do not forget to cite if used). Four different modes are available: +equilibrium, 6 species network (H, H\\( ^+ \\), e\\( ^- \\), He, He\\( ^+ \\) +and He\\( ^{++} \\)), 9 species network (adds H\\(^-\\), H\\(_2\\) and +H\\(_2^+\\)) and 12 species (adds D, D\\(^+\\) and HD). Following the same +order, the swift cooling options are ``grackle``, ``grackle1``, ``grackle2`` +and ``grackle3`` (the numbers correspond to the value of +``primordial_chemistry`` in Grackle). It also includes some self-shielding +methods and UV background. In order to use the Grackle cooling, you will need +to provide an HDF5 table computed by Cloudy. + +When starting a simulation without providing the different fractions, the code +supposes an equilibrium and computes the fractions automatically. 
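As a build-time sketch, assuming the mode names listed above are passed directly to ``--with-cooling`` and that the Grackle library location follows the generic ``--with-<LIBRARY>=<PATH>`` convention used for the other libraries:

.. code-block:: bash

   # 6-species network, i.e. primordial_chemistry = 1 in Grackle
   ./configure --with-cooling=grackle1 --with-grackle=/path/to/grackle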
+ +Eagle +~~~~~ + +TODO + +How to Implement a New Cooling +------------------------------ + +The developper should provide at least one function for: + * writing the cooling name in HDF5 + * cooling a particle + * the maximal time step possible + * initializing a particle + * computing the total energy radiated by a particle + * initializing the cooling parameters + * printing the cooling type + +For implementation details, see ``src/cooling/none/cooling.h`` + +See :ref:`new_option` for the full list of changes required. diff --git a/doc/RTD/source/EquationOfState/index.rst b/doc/RTD/source/EquationOfState/index.rst new file mode 100644 index 0000000000000000000000000000000000000000..3558041e9513b967a2530165acec5e5f4f11a364 --- /dev/null +++ b/doc/RTD/source/EquationOfState/index.rst @@ -0,0 +1,50 @@ +.. Equation of State + Loic Hausammann, 6th April 2018 + +.. _equation_of_state: + +Equation of State +================= + +Currently (if the documentation was well updated), we have two different +equation of states implemented: ideal gas and isothermal. They describe the +relations between our main thermodynamical variables: the internal energy +(\\(u\\)), the density (\\(\\rho\\)), the entropy (\\(A\\)) and the pressure +(\\(P\\)). + +Equations +--------- + +In the following section, the variables not yet defined are: \\(\\gamma\\) for +the adiabatic index and \\( c_s \\) for the speed of sound. + +.. csv-table:: Ideal Gas + :header: "Variable", "A", "u", "P" + + "A", "", "\\( \\left( \\gamma - 1 \\right) u \\rho^{1-\\gamma} \\)", "\\(P \\rho^{-\\gamma} \\)" + "u", "\\( A \\frac{ \\rho^{ \\gamma - 1 } }{\\gamma - 1 } \\)", "", "\\(\\frac{1}{\\gamma - 1} \\frac{P}{\\rho}\\)" + "P", "\\( A \\rho^\\gamma \\)", "\\( \\left( \\gamma - 1\\right) u \\rho \\)", "" + "\\(c_s\\)", "\\(\\sqrt{ \\gamma \\rho^{\\gamma - 1} A}\\)", "\\(\\sqrt{ u \\gamma \\left( \\gamma - 1 \\right) } \\)", "\\(\\sqrt{ \\frac{\\gamma P}{\\rho} }\\)" + + +.. csv-table:: Isothermal Gas + :header: "Variable", "A", "u", "P" + + + "A", "", "\\(\\left( \\gamma - 1 \\right) u \\rho^{1-\\gamma}\\)", "" + "u", "", "const", "" + "P", "", "\\(\\left( \\gamma - 1\\right) u \\rho \\)", "" + "\\( c_s\\)", "", "\\(\\sqrt{ u \\gamma \\left( \\gamma - 1 \\right) } \\)", "" + + +How to Implement a New Equation of State +---------------------------------------- + +See :ref:`new_option` for a full list of required changes. + +You will need to provide a ``equation_of_state.h`` file containing: the +definition of ``eos_parameters``, IO functions and transformations between the +different variables: \\(u(\\rho, A)\\), \\(u(\\rho, P)\\), \\(P(\\rho,A)\\), +\\(P(\\rho, u)\\), \\(A(\\rho, P)\\), \\(A(\\rho, u)\\), \\(c_s(\\rho, A)\\), +\\(c_s(\\rho, u)\\) and \\(c_s(\\rho, P)\\). See other equation of state files +to have implementation details. diff --git a/doc/RTD/source/GettingStarted/compiling_code.rst b/doc/RTD/source/GettingStarted/compiling_code.rst new file mode 100644 index 0000000000000000000000000000000000000000..c40f06965e15146c41bf210aec3b195032cef0e7 --- /dev/null +++ b/doc/RTD/source/GettingStarted/compiling_code.rst @@ -0,0 +1,108 @@ +.. Compiling the Code + Josh Borrow, 5th April 2018 + + +Compiling SWIFT +=============== + +Dependencies +------------ + +To compile SWIFT, you will need the following libraries: + +HDF5 +~~~~ + +Version 1.8.x or higher is required. Input and output files are stored as HDF5 +and are compatible with the existing GADGET-2 specification. 
Please consider +using a build of parallel-HDF5, as SWIFT can leverage this when writing and +reading snapshots. We recommend using HDF5 > 1.10.x as this is `vastly superior` +in parallel. + +MPI +~~~ +A recent implementation of MPI, such as Open MPI (v2.x or higher), is required, +or any library that implements at least the MPI 3 standard. + +Libtool +~~~~~~~ +The build system depends on libtool. + +FFTW +~~~~ +Version 3.3.x or higher is required for periodic gravity. + +METIS +~~~~~ +METIS is used for domain decomposition and load balancing. + +libNUMA +~~~~~~~ +libNUMA is used to pin threads. + +GSL +~~~ +The GSL is required for cosmological integration. + + +Optional Dependencies +--------------------- + +There are also the following _optional_ dependencies. + +TCmalloc/Jemalloc +~~~~~~~~~~~~~~~~~ +TCmalloc/Jemalloc are used for faster memory allocations when available. + +DOXYGEN +~~~~~~~ +You can build documentation for SWIFT with DOXYGEN. + +Python +~~~~~~ +To run the examples, you will need python and some of the standard scientific libraries (numpy, matplotlib). Some examples use Python 2 scripts, but the more recent ones use Python 3 (this is specified in individual READMEs). + +GRACKLE +~~~~~~~ +GRACKLE cooling is implemented in SWIFT. If you wish to take advantage of it, you will need it installed. + + +Initial Setup +------------- + +We use autotools for setup. To get a basic running version of the code +(the binary is created in swiftsim/examples) on most platforms, run + +.. code-block:: bash + + ./autogen.sh + ./configure + make + + +MacOS Specific Oddities +~~~~~~~~~~~~~~~~~~~~~~~ + +To build on MacOS you will need to disable compiler warnings due to an +incomplete implementation of pthread barriers. DOXYGEN also has some issues on +MacOS, so it is best to leave it out. To configure: + +.. code-block:: bash + + ./configure --disable-compiler-warnings --disable-doxygen-doc + + +Trouble Finding Libraries +~~~~~~~~~~~~~~~~~~~~~~~~~ + +If the configure script is having trouble finding your libraries for you, it +may be that they are in nonstandard locations. You can link the specific +library locations by using ``--with-<LIBRARY>=<PATH>``. For example for the +HDF5 library, + +.. code-block:: bash + + ./configure --with-hdf5=/path/to/h5cc + +More information about what needs to be provided to these flags is given in +``./configure --help``. diff --git a/doc/RTD/source/GettingStarted/configuration_options.rst b/doc/RTD/source/GettingStarted/configuration_options.rst new file mode 100644 index 0000000000000000000000000000000000000000..e37384cfd1c29cb1df82cc180a763f4859650b2e --- /dev/null +++ b/doc/RTD/source/GettingStarted/configuration_options.rst @@ -0,0 +1,50 @@ +.. Configuration Options + Josh Borrow, 5th April 2018 + +Configuration Options +===================== + +There are many configuration options that SWIFT makes available; a few key +ones are summarised here. + +Note that these need to be ran with ``./configure x`` where ``x`` is the +configuration flag. + +A description of the available options of the below flags can be found by using +``./configure --help``. + +``--with-hydro=gadget2`` +~~~~~~~~~~~~~~~~~~~~~~~~ +There are several hydrodynamical schemes available in SWIFT. You can choose +between them at compile-time with this option. + +``--with-riemann-solver=none`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Some hydrodynamical schemes, for example GIZMO, require a Riemann solver. 
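For instance, mirroring the GIZMO example used in the running-an-example page of this documentation, these two options are typically combined as:

.. code-block:: bash

   ./configure --with-hydro=gizmo --with-riemann-solver=hllc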
+ +``--with-kernel=cubic-spline`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Several kernels are made available for use with the hydrodynamical schemes. +Choose between them with this compile-time flag. + +``--with-hydro-dimension=3`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Run problems in 1, 2, and 3 (default) dimensions. + +``--with-equation-of-state=ideal-gas`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Several equations of state are made available with this flag. Also consider +``--with-adiabatic-index``. + +``--with-cooling=none`` +~~~~~~~~~~~~~~~~~~~~~~~ +Several cooling implementations (including GRACKLE) are available. + +``--with-ext-potential=none`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Many external potentials are available for use with SWIFT. You can choose +between them at compile time. Some examples include a central potential, a +softened central potential, and a sinusoidal potential. You will need to +configure, for example, the mass in your parameterfile at runtime. + + diff --git a/doc/RTD/source/GettingStarted/index.rst b/doc/RTD/source/GettingStarted/index.rst new file mode 100644 index 0000000000000000000000000000000000000000..d15a8eee2f3b9089a1c8ee033f9aa3ee7ad92a5f --- /dev/null +++ b/doc/RTD/source/GettingStarted/index.rst @@ -0,0 +1,25 @@ +.. Getting Started + Josh Borrow, 4th April 2018 + +Getting Started +=============== + +So, you want to use SWIFT? Below you should find all of the information you need +to get up and running with some examples, and then build your own initial conditions +for running. + +Also, you might want to consult our onboarding guide (available at +http://www.swiftsim.com/onboarding.pdf) if you would like something to print out +and keep on your desk. + +.. toctree:: + :maxdepth: 2 + :caption: Contents: + + compiling_code + running_example + runtime_options + configuration_options + parameter_file + what_about_mpi + running_on_large_systems diff --git a/doc/RTD/source/GettingStarted/parameter_file.rst b/doc/RTD/source/GettingStarted/parameter_file.rst new file mode 100644 index 0000000000000000000000000000000000000000..32a4f1220d0dc1c65074cc14b53d2c4edd666a67 --- /dev/null +++ b/doc/RTD/source/GettingStarted/parameter_file.rst @@ -0,0 +1,25 @@ +.. Parameter File + Loic Hausammann, 1 june 2018 + +Parameter File +============== + +To run SWIFT, you will need to provide a ``yaml`` parameter file. An example is +given in ``examples/parameter_file.yml`` which should contain all possible +parameters. Each section in this file corresponds to a different option in +SWIFT and are not always required depending on the configuration options and +the run time parameters. + + +Output Selection +~~~~~~~~~~~~~~~~ + +With SWIFT, you can select the particle fields to output in snapshot using the parameter file. +In section ``SelectOutput``, you can remove a field by adding a parameter formatted in the +following way ``field_parttype`` where ``field`` is the name of the field that you +want to remove (e.g. ``Masses``) and ``parttype`` is the type of particles that +contains this field (e.g. ``Gas``, ``DM`` or ``Star``). For a parameter, the only +values accepted are 0 (skip this field when writing) or 1 (default, do not skip +this field when writing). + +You can generate a ``yaml`` file containing all the possible fields with ``./swift -o output.yml``. By default, all the fields are written. 
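A minimal sketch of this workflow, reusing the ``Masses``/``Gas`` example above (the resulting section lives in the parameter file used for the run):

.. code-block:: bash

   # write a file listing every possible output field (all enabled by default)
   ./swift -o output.yml
   # then, in your parameter file:
   #   SelectOutput:
   #     Masses_Gas: 0   # skip gas masses when writing snapshots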
diff --git a/doc/RTD/source/GettingStarted/running_example.rst b/doc/RTD/source/GettingStarted/running_example.rst new file mode 100644 index 0000000000000000000000000000000000000000..854e74cf830d58e51cf866d59a93ede6dceb57b6 --- /dev/null +++ b/doc/RTD/source/GettingStarted/running_example.rst @@ -0,0 +1,37 @@ +.. Running an Example + Josh Borrow, 5th April 2018 + +Running an Example +================== + +Now that you have built the code, you will want to run an example! To do that, +you need to follow the following instructions (requires ``python2`` or +``python3`` with the ``h5py`` and other standard scientific packages, as well +as ``wget`` for grabbing the glass). + +.. code-block:: bash + + cd examples/SodShock_3D + ./getGlass.sh + python makeIC.py + ../swift -s -t 4 sodShock.yml + python plotSolution.py 1 + + +This will run the 'SodShock' in 3D and produce a nice plot that shows you +how the density has varied. Try running with GIZMO (this will take +_significantly_ longer than with SPH) to see the difference. For that, you +will need to reconfigure with the following options: + +.. code-block:: bash + + ./configure \ + --with-hydro=gizmo \ + --with-riemann-solver=hllc + + +To see the results that you should get, you should check out our developer +wiki at https://gitlab.cosma.dur.ac.uk/swift/swiftsim/wikis/Sod_3D. + +If you don't get these results, please contact us on our GitHub page at +https://github.com/SWIFTSIM/swiftsim/issues. diff --git a/doc/RTD/source/GettingStarted/running_on_large_systems.rst b/doc/RTD/source/GettingStarted/running_on_large_systems.rst new file mode 100644 index 0000000000000000000000000000000000000000..55eb812cef21474045931490591b3978841a4085 --- /dev/null +++ b/doc/RTD/source/GettingStarted/running_on_large_systems.rst @@ -0,0 +1,42 @@ +.. Running on Large Systems + Josh Borrow, 5th April 2018 + +Running on Large Systems +======================== + +There are a few extra things to keep in mind when running SWIFT on a large +system (i.e. over MPI on several nodes). Here are some recommendations: + ++ Compile and run with + `tbbmalloc <https://www.threadingbuildingblocks.org>`_. You can add this + to the configuration of SWIFT by running configure with the + ``--with-tbbmalloc`` flag. Using this allocator, over the one included in the + standard library, is particularly important on systems with large core counts + per node. Alternatives include + `jemalloc <https://github.com/jemalloc/jemalloc>`_ and + `tcmalloc <https://github.com/gperftools/gperftools>`_, and using these + other allocation tools also improves performance on single-node jobs. ++ Run with one MPI rank per NUMA region, usually a socket, rather than per node. + Typical HPC clusters now use two chips per node. Consult with your local system + manager if you are unsure about your system configuration. This can be done + by invoking ``mpirun -np <NUMBER OF CHIPS> swift_mpi -t <NUMBER OF CORES PER CHIP>``. + You should also be careful to include this in your batch script, for example + with the `SLURM <https://slurm.schedmd.com>`_ batch system you will need to + include ``#SBATCH --tasks-per-node=2``. ++ Run with threads pinned. You can do this by passing the ``-a`` flag to the + SWIFT binary. This ensures that processes stay on the same core that spawned + them, ensuring that cache is accessed more efficiently. ++ Ensure that you compile with METIS. 
More information is available in an + upcoming paper, but using METIS allows for work to be distributed in a + more efficient way between your nodes. + +Your batch script should look something like the following (to run on 16 nodes +each with 2x16 core processors for a total of 512 cores): + +.. code-block:: bash + + #SBATCH -N 16 # Number of nodes to run on + #SBATCH --tasks-per-node=2 # This system has 2 chips per node + + mpirun -np 32 swift_mpi -t 16 -a parameter.yml + diff --git a/doc/RTD/source/GettingStarted/runtime_options.rst b/doc/RTD/source/GettingStarted/runtime_options.rst new file mode 100644 index 0000000000000000000000000000000000000000..b2ca10640d8830b9b5ecb8e117bf047af738889c --- /dev/null +++ b/doc/RTD/source/GettingStarted/runtime_options.rst @@ -0,0 +1,41 @@ +.. Runtime Options + Josh Borrow, 5th April 2018 + +Runtime Options +=============== + +SWIFT requires a number of runtime options to run and get any sensible output. +For instance, just running the ``swift`` binary will not use any SPH or gravity; +the particles will just sit still! + +Below is a list of the runtime options and when they should be used. The same list +can be found by typing ``./swift -h``. + ++ ``-a``: Pin runners using processor affinity. ++ ``-c``: Run with cosmological time integration. ++ ``-C``: Run with cooling. ++ ``-d``: Dry run. Read the parameter file, allocate memory but does not read + the particles from ICs and exit before the start of time integration. Allows + user to check validity of parameter and IC files as well as memory limits. ++ ``-D``: Always drift all particles even the ones far from active particles. + This emulates Gadget-[23] and GIZMO's default behaviours. ++ ``-e``: Enable floating-point exceptions (debugging mode). ++ ``-f``: {int} Overwrite the CPU frequency (Hz) to be used for time measurements. ++ ``-g``: Run with an external gravitational potential. ++ ``-G``: Run with self-gravity. ++ ``-M``: Reconstruct the multipoles every time-step. ++ ``-n``: {int} Execute a fixed number of time steps. When unset use the + time_end parameter to stop. ++ ``-o``: {str} Generate a default output parameter file. ++ ``-P``: {sec:par:val} Set parameter value and overwrites values read from the + parameters file. Can be used more than once. ++ ``-s``: Run with hydrodynamics. ++ ``-S``: Run with stars. ++ ``-t``: {int} The number of threads to use on each MPI rank. Defaults to 1 if + not specified. ++ ``-T``: Print timers every time-step. ++ ``-v``: [12] Increase the level of verbosity: 1, MPI-rank 0 writes, 2, All + MPI-ranks write. ++ ``-y``: {int} Time-step frequency at which task graphs are dumped. ++ ``-Y``: {int} Time-step frequency at which threadpool tasks are dumped. ++ ``-h``: Print a help message and exit. diff --git a/doc/RTD/source/GettingStarted/what_about_mpi.rst b/doc/RTD/source/GettingStarted/what_about_mpi.rst new file mode 100644 index 0000000000000000000000000000000000000000..098fd35d80d71866cb86d2342d5d54710cd73a82 --- /dev/null +++ b/doc/RTD/source/GettingStarted/what_about_mpi.rst @@ -0,0 +1,12 @@ +.. What about MPI? Running SWIFT on more than one node + Josh Borrow, 5th April 2018 + +What about MPI? Running SWIFT on more than one node +=================================================== + +After compilation, you will be left with two binaries. One is called ``swift``, +and the other ``swift_mpi``. Current wisdom is to run ``swift`` if you are only +using one node (i.e. 
without any interconnect), and one MPI rank per NUMA +region using ``swift_mpi`` for anything larger. You will need some GADGET-2 +HDF5 initial conditions to run SWIFT, as well as a compatible yaml +parameterfile. diff --git a/doc/RTD/source/HydroSchemes/adding_your_own.rst b/doc/RTD/source/HydroSchemes/adding_your_own.rst new file mode 100644 index 0000000000000000000000000000000000000000..2d7e640f66153a17e19f4e4c456cd37eed19a95a --- /dev/null +++ b/doc/RTD/source/HydroSchemes/adding_your_own.rst @@ -0,0 +1,119 @@ +.. Adding Hydro Schemes + Josh Borrow, 5th April 2018 + + +Adding Hydro Schemes +==================== + +.. toctree:: + :maxdepth: 2 + :hidden: + :caption: Contents: + +SWIFT is engineered to enable you to add your own hydrodynamics schemes easily. +We enable this through the use of header files to encapsulate each scheme. + +Note that it's unlikely you will ever have to consider paralellism or 'loops over +neighbours' for SWIFT; all of this is handled by the tasking system. All we ask +for is the interaction functions that tell us how to a) compute the density +and b) compute forces. + + +Getting Started +--------------- + +The hydro schemes are stored in ``src/hydro``. You will need to create a folder +with a sensible name that you are going to store your scheme-specific information +in. Then, you will need to create the following files: + ++ ``hydro.h``, which includes functions that are applied to particles at the end + of the density loop and beginning of the force loop, along with helper functions ++ ``hydro_debug.h``, which includes a quick function that prints out your particle + properties for debugging purposes ++ ``hydro_iact.h`` that includes the interaction functions ++ ``hydro_io.h`` which includes the information on what should be read from the + initial conditions file, as well as written to the output files ++ ``hydro_part.h`` which includes your particle definition. SWIFT uses an array-of + -structures scheme. + + +``hydro.h`` +----------- + +As previously noted, ``hydro.h`` includes the helper functions for your scheme. You +will need to 'fill out' the following: + ++ ``hydro_get_comoving_internal_energy(p)`` which returns the comoving internal energy + of your particles (typically this will just be ``p->u``). ++ ``hydro_get_physical_internal_energy(p, cosmo)`` which returns the physical internal + energy. You can use the ``a_factor_internal_energy`` from the ``cosmology`` struct. ++ ``hydro_get_comoving_pressure(p)`` which returns the comoving pressure. ++ ``hydro_get_comoving_entropy(p)`` which returns the comoving entropy. ++ ``hydro_get_physical_entropy(p, cosmo)`` which returns the physical entropy. In our + formalism, usually there is no conversion factor here so it is the same as the + comoving version. ++ ``hydro_get_comoving_soundspeed(p)`` which returns the comoving sound speed. ++ ``hydro_get_physical_soundspeed(p, cosmo)`` which returns the physical sound + speed. You can use the ``a_factor_sound_speed``. ++ ``hydro_get_comoving_density(p)`` which returns the comoving density. ++ ``hydro_get_physical_density(p, cosmo)`` which returns the physical density. + You can use the ``a3_inv`` member of the ``cosmology`` struct. ++ ``hydro_get_mass(p)`` returns the mass of particle ``p``. ++ ``hydro_get_drifted_velocities(p, xp, dt_kick_hydro, dt_kick_grav, v[3])`` gets + the drifted velocities; this is just ``a_hydro * dt_kick_hydro`` + ``a_grav * + dt_kick_grav`` in most implementations. 
++ ``hydro_get_energy_dt(p)`` returns the time derivative of the (comoving) internal + energy of the particle. ++ ``hydro_set_energy_dt(p)`` sets the time derivative of the (comoving) internal + energy of the particle. ++ ``hydro_compute_timestep(p, xp, hydro_props, cosmo)`` returns the timestep for + the hydrodynamics particles. ++ ``hydro_timestep_extra(p, dt)`` does some extra hydro operations once the + physical timestel for the particle is known. ++ ``hydro_init_part(p, hydro_space)`` initialises the particle in preparation for + the density calculation. This essentially sets properties, such as the density, + to zero. ++ ``hydro_end_density(p, cosmo)`` performs operations directly after the density + loop on each particle. Note that you will have to add a particle's self-contribution + at this stage as particles are never 'interacted' with themselves. ++ ``hydro_part_has_no_neighbours(p, xp, cosmo)`` resets properties to a sensible + value if a particle is found to have no neighbours. ++ ``hydro_prepare_force(p, xp, cosmo)`` is computed for each particle before the + force loop. You can use this to pre-compute particle properties that are used + in the force loop, but only depend on the particle itself. ++ ``hydro_reset_acceleration(p)`` resets the acceleration variables of the particles + to zero in preparation for the force loop. ++ ``hydro_predict_extra(p, xp, dt_drift, dt_therm)`` predicts extra particle properties + when drifting, such as the smoothing length. ++ ``hydro_end_force(p, cosmo)`` is called after the force loop for each particle and + can be used to e.g. include overall factors of the smoothing length. ++ ``hydro_kick_extra(p, xp, dt_therm)`` kicks extra variables. ++ ``hydro_convert_quantities(p, xp)`` converts quantities at the start of a run (e.g. + internal energy to entropy). ++ ``hydro_first_init_part(p, xp)`` is called for every particle at the start of a run + and is used to initialise variables. + + +``hydro_debug.h`` +----------------- + +TBD + + +``hydro_iact.h`` +---------------- + +TBD + + +``hydro_io.h`` +-------------- + +TBD + + +``hydro_part.h`` +---------------- + +TBD + diff --git a/doc/RTD/source/HydroSchemes/gizmo.rst b/doc/RTD/source/HydroSchemes/gizmo.rst new file mode 100644 index 0000000000000000000000000000000000000000..365e1dc41c27f7c92bfb33859bedad2d96f35248 --- /dev/null +++ b/doc/RTD/source/HydroSchemes/gizmo.rst @@ -0,0 +1,27 @@ +.. GIZMO (MFV) + Josh Borrow, 5th April 2018 + +GIZMO-Like Scheme +================= + +.. toctree:: + :maxdepth: 2 + :hidden: + :caption: Contents: + + +There is a meshless finite volume (MFV) GIZMO-like scheme implemented in SWIFT +(see Hopkins 2015 for more information). You will need a Riemann solver to run +this, and configure as follows: + +.. code-block:: bash + + ./configure --with-hydro="gizmo-mfv" --with-riemann-solver="hllc" + + +We also have the meshless finite mass (MFM) GIZMO-like scheme. You can select +this at compile-time with the following configuration flags: + +.. code-block:: bash + + ./configure --with-hydro="gizmo-mfm" --with-riemann-solver="hllc" diff --git a/doc/RTD/source/HydroSchemes/hopkins_sph.rst b/doc/RTD/source/HydroSchemes/hopkins_sph.rst new file mode 100644 index 0000000000000000000000000000000000000000..bcc51e0ad96b18956f1c8e54f7bf2bf3b352c138 --- /dev/null +++ b/doc/RTD/source/HydroSchemes/hopkins_sph.rst @@ -0,0 +1,30 @@ +.. 'Hopkins'-SPH + Josh Borrow 5th April 2018 + +Pressure-Entropy SPH +==================== + +.. 
toctree:: + :maxdepth: 2 + :hidden: + :caption: Contents: + +A pressure-entropy SPH scheme is available in SWIFT, inspired by Hopkins 2013. +This includes a Monaghan AV scheme and a Balsara switch. + + +.. code-block:: bash + + ./configure --with-hydro="pressure-entropy" + + +Pressure-Energy SPH +=================== + +Pressure-energy SPH is now implemented in SWIFT, and like the pressure-entropy +scheme it includes a Monaghan AV scheme and a Balsara switch. + + +.. code-block:: bash + + ./configure --with-hydro="pressure-energy" diff --git a/doc/RTD/source/HydroSchemes/index.rst b/doc/RTD/source/HydroSchemes/index.rst new file mode 100644 index 0000000000000000000000000000000000000000..cd6c169245e83440a1258d216991763488586c0c --- /dev/null +++ b/doc/RTD/source/HydroSchemes/index.rst @@ -0,0 +1,21 @@ +.. Hydrodynamics Schemes + Josh Borrow 4th April 2018 + +.. _hydro: + +Hydrodynamics Schemes +===================== + +This section of the documentation includes information on the hydrodynamics +schemes available in SWIFT, as well as how to implement your own. + +.. toctree:: + :maxdepth: 2 + :caption: Contents: + + traditional_sph + minimal_sph + hopkins_sph + gizmo + adding_your_own + diff --git a/doc/RTD/source/HydroSchemes/minimal_sph.rst b/doc/RTD/source/HydroSchemes/minimal_sph.rst new file mode 100644 index 0000000000000000000000000000000000000000..1a16a23360aaba8b28920150af0d4f4b05c74c2f --- /dev/null +++ b/doc/RTD/source/HydroSchemes/minimal_sph.rst @@ -0,0 +1,20 @@ +.. Minimal SPH + Josh Borrow 4th April 2018 + +Minimal (Density-Energy) SPH +============================ + +.. toctree:: + :maxdepth: 2 + :hidden: + :caption: Contents: + +This scheme is a textbook implementation of Density-Energy SPH, and can be used +as a pedagogical example. It also implements a Monaghan AV scheme, like the +GADGET-2 scheme. It uses very similar equations, but differs in implementation +details; namely it tracks the internal energy \(u\) as the thermodynamic +variable, rather than entropy \(A\). To use the minimal scheme, use + +.. code-block:: bash + + ./configure --with-hydro="minimal" diff --git a/doc/RTD/source/HydroSchemes/traditional_sph.rst b/doc/RTD/source/HydroSchemes/traditional_sph.rst new file mode 100644 index 0000000000000000000000000000000000000000..c69ea5f60644119b8590414ffe00a75246de49a6 --- /dev/null +++ b/doc/RTD/source/HydroSchemes/traditional_sph.rst @@ -0,0 +1,17 @@ +.. Traditional SPH (GADGET-2) + Josh Borrow 4th April 2018 + +Traditional (Density-Entropy) SPH +================================= + +.. toctree:: + :maxdepth: 2 + :hidden: + :caption: Contents: + +Traditional, GADGET-2-like, Density-Entropy SPH is available in SWIFT with +a Monaghan artificial viscosity scheme and Balsara switch. + +To use this hydro scheme, you need no extra configuration options -- it is the +default! + diff --git a/doc/RTD/source/InitialConditions/index.rst b/doc/RTD/source/InitialConditions/index.rst new file mode 100644 index 0000000000000000000000000000000000000000..e9684ac4ffde5886f7a110de2cd7fb0fbb572a5e --- /dev/null +++ b/doc/RTD/source/InitialConditions/index.rst @@ -0,0 +1,176 @@ +.. Initial Conditions + Josh Borrow, 5th April 2018 + +Initial Conditions +================== + +To run anything more than examples from our suite, you will need to be able to +produce your own initial conditions for SWIFT. We use the same initial conditions +as the popular GADGET-2 code, which uses the HDF5 file format. 
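+
+Since the format is plain HDF5, you can inspect any existing GADGET-2 or SWIFT
+initial conditions file directly from python. A minimal sketch using ``h5py``
+(the file name is a placeholder):
+
+.. code-block:: python
+
+   import h5py
+
+   with h5py.File("my_ics.hdf5", "r") as handle:
+       # Print the name of every group and dataset in the file
+       handle.visit(print)
+       # Print the attributes attached to the Header group
+       print(dict(handle["Header"].attrs))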
+
+As the original documentation for the GADGET-2 initial conditions format is
+quite sparse, we lay out here all of the necessary components. If you are
+generating your initial conditions from python, we recommend you use the h5py
+package. We provide a writing wrapper for our initial conditions in
+``examples/KeplerianRing/write_gadget.py``.
+
+You can find out more about the HDF5 format on the HDF5 Group's webpages:
+https://support.hdfgroup.org/HDF5/doc/H5.intro.html
+
+
+Structure of the File
+---------------------
+
+There are several groups that contain 'auxiliary' information, such as
+``Header``. Particle data is placed in groups that signify particle type.
+
++---------------------+------------------------+
+| Group Name          | Physical Particle Type |
++=====================+========================+
+| ``PartType0``       | Gas                    |
++---------------------+------------------------+
+| ``PartType1``       | Dark Matter            |
++---------------------+------------------------+
+| ``PartType2``       | Ignored                |
++---------------------+------------------------+
+| ``PartType3``       | Ignored                |
++---------------------+------------------------+
+| ``PartType4``       | Stars                  |
++---------------------+------------------------+
+| ``PartType5``       | Black Holes            |
++---------------------+------------------------+
+
+Currently, not all of these particle types are included in SWIFT. Note that the
+only particles that have hydrodynamical forces calculated between them are those
+in ``PartType0``.
+
+
+Necessary Components
+--------------------
+
+There are several necessary components (in particular header information) in a
+SWIFT initial conditions file. Again, we recommend that you use the
+``write_gadget`` script.
+
+Header
+~~~~~~
+
+In ``Header``, the following attributes are required:
+
++ ``BoxSize``, a floating point number or N-dimensional (usually 3) array
+  that describes the size of the box.
++ ``Flag_Entropy_ICs``, a historical value that tells the code if you have
+  included entropy or internal energy values in your initial conditions file.
+  Acceptable values are 0 or 1.
++ ``NumPart_Total``, a length 6 array of integers that tells the code how many
+  particles of each type are in the initial conditions file.
++ ``NumPart_Total_HighWord``, a historical length-6 array that gives the number
+  of 'high word' particles in the initial conditions. If you are unsure, just
+  set this to ``[0, 0, 0, 0, 0, 0]``. It does have to be present but, unlike in
+  GADGET-2, it can be all zeros unless you have more than 2^31 particles.
++ ``NumFilesPerSnapshot``, again a historical integer value that tells the code
+  how many files there are per snapshot. You will probably want to set this to 1
+  and simply have a single HDF5 file for your initial conditions; SWIFT can
+  leverage parallel-HDF5 to read from this single file in parallel.
++ ``NumPart_ThisFile``, a length 6 array of integers describing the number of
+  particles in this file. If you have followed the above advice, this will be
+  exactly the same as the ``NumPart_Total`` array.
+
+You may want to include the following for backwards-compatibility with many
+GADGET-2 based analysis programs:
+
++ ``MassTable``, an array of length 6 which gives the masses of each particle
+  type. SWIFT ignores this and uses the individual particle masses, but some
+  programs will crash if it is not included.
++ ``Time``, the internal code time of the start (set this to 0).
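+
+As a sketch of how the required attributes can be written with ``h5py`` (the
+particle numbers, box size and dtypes here are illustrative; the
+``write_gadget.py`` wrapper mentioned above can handle this for you):
+
+.. code-block:: python
+
+   import h5py
+   import numpy as np
+
+   n_gas = 32**3  # illustrative number of gas particles
+
+   with h5py.File("my_ics.hdf5", "w") as handle:
+       header = handle.create_group("Header")
+       header.attrs["BoxSize"] = 1.0
+       header.attrs["Flag_Entropy_ICs"] = 0  # 0 or 1, see above
+       header.attrs["NumPart_Total"] = np.array([n_gas, 0, 0, 0, 0, 0], dtype=np.uint32)
+       header.attrs["NumPart_Total_HighWord"] = np.zeros(6, dtype=np.uint32)
+       header.attrs["NumFilesPerSnapshot"] = 1
+       header.attrs["NumPart_ThisFile"] = np.array([n_gas, 0, 0, 0, 0, 0], dtype=np.uint32)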
+ +RuntimePars +~~~~~~~~~~~ + +In ``RuntimePars``, the following attributes are required: + ++ ``PeriodicBoundaryConditionsOn``, a flag to tell the code whether or not you + have periodic boundaries switched on. Again, this is historical; it should be + set to 1 (default) if you have the code running in periodic mode, or 0 otherwise. + + +Units +~~~~~ + +In ``Units``, you will need to specify what units your initial conditions are +in. If these are not present, the code assumes that you are using the same +units for your initial conditions as are in your parameterfile, but it is best +to include them to be on the safe side. You will need: + ++ ``Unit current in cgs (U_I)`` ++ ``Unit length in cgs (U_L)`` ++ ``Unit mass in cgs (U_M)`` ++ ``Unit temperature in cgs (U_T)`` ++ ``Unit time in cgs (U_t)`` + +These are all floating point numbers. + + +Particle Data +~~~~~~~~~~~~~ + +Now for the interesting part! You can include particle data groups for each +individual particle type (e.g. ``PartType0``) that have the following _datasets_: + ++ ``Coordinates``, an array of shape (N, 3) where N is the number of particles + of that type, that are the cartesian co-ordinates of the particles. Co-ordinates + must be positive, but will be wrapped on reading to be within the periodic box. ++ ``Velocities``, an array of shape (N, 3) that is the cartesian velocities + of the particles. ++ ``ParticleIDs``, an array of length N that are unique identifying numbers for + each particle. Note that these have to be unique to a particle, and cannot be + the same even between particle types. Please ensure that your IDs are positive + integer numbers. ++ ``Masses``, an array of length N that gives the masses of the particles. + +For ``PartType0`` (i.e. particles that interact through hydrodynamics), you will +need the following auxilliary items: + ++ ``InternalEnergy``, an array of length N that gives the internal energies of + the particles. For PressureEntropy, you can specify ``Entropy`` instead. ++ ``SmoothingLength``, the smoothing lenghts of the particles. These will be + tidied up a bit, but it is best if you provide accurate numbers. + + +Summary +~~~~~~~ + +You should have an HDF5 file with the following structure: + +.. code-block:: bash + + Header/ + BoxSize=[x, y, z] + Flag_Entropy_ICs=1 + NumPart_Total=[0, 1, 2, 3, 4, 5] + NumPart_Total_HighWord=[0, 0, 0, 0, 0, 0] + NumFilesPerSnapshot=1 + NumPart_ThisFile=[0, 1, 2, 3, 4, 5] + RuntimePars/ + PeriodicBoundariesOn=1 + Units/ + Unit current in cgs (U_I)=1.0 + Unit length in cgs (U_L)=1.0 + Unit mass in cgs (U_M)=1.0 + Unit temperature in cgs (U_T)=1.0 + Unit time in cgs (U_t)=1.0 + PartType0/ + Coordinates=[[x, y, z]] + Velocities=[[vx, vy, vz]] + ParticleIDs=[...] + Masses=[...] + InternalEnergy=[...] + SmoothingLength=[...] + PartType1/ + Coordinates=[[x, y, z]] + Velocities=[[vx, vy, vz]] + ParticleIDs=[...] + Masses=[...] + + diff --git a/doc/RTD/source/NewOption/index.rst b/doc/RTD/source/NewOption/index.rst new file mode 100644 index 0000000000000000000000000000000000000000..a7445524017fefd99d76c80a4a1ecc646874bd7a --- /dev/null +++ b/doc/RTD/source/NewOption/index.rst @@ -0,0 +1,36 @@ +.. Equation of State + Loic Hausammann, 7th April 2018 + +.. 
_new_option: + +General information for adding new schemes +========================================== + +The following steps are required for any new options (such as new +:ref:`hydro`, :ref:`chemistry`, :ref:`cooling`, +:ref:`equation_of_state`, :ref:`stars` or :ref:`gravity`) + +In order to add a new scheme, you will need to: + +1. Create a new subdirectory inside the option directory (e.g. + ``src/equation_of_state`` or ``src/hydro``) with an explicit name. + +2. Create the required new files (depending on your option, you will need + different files). Copy the structure of the most simple option (e.g. + ``src/hydro/Gadget2``, ``src/gravity/Default``, ``src/stars/Default``, + ``src/cooling/none``, ``src/chemistry/none`` or + ``src/equation_of_state/ideal_gas``) + +3. Add the right includes in the option file (e.g. ``src/hydro.h``, + ``src/gravity.h``, ``src/stars.h``, ``src/cooling.h``, ``src/chemistry.h`` + or ``src/equation_of_state.h``) and the corresponding io file if present. + +4. Add the new option in ``configure.ac``. This file generates the + ``configure`` script and you just need to add a new option under the right + ``case``. + +5. Add your files in ``src/Makefile.am``. In order to generate the Makefiles + during the configuration step, a list of files is required. In + ``nobase_noinst_HEADERS``, add your new header files. + +6. Update the documentation. Add your equations/documentation to ``doc/RTD``. diff --git a/doc/RTD/source/conf.py b/doc/RTD/source/conf.py new file mode 100644 index 0000000000000000000000000000000000000000..031687ea5228252e2d2e44ec0bd6f53b1b64d732 --- /dev/null +++ b/doc/RTD/source/conf.py @@ -0,0 +1,165 @@ +# -*- coding: utf-8 -*- +# +# Configuration file for the Sphinx documentation builder. +# +# This file does only contain a selection of the most common options. For a +# full list see the documentation: +# http://www.sphinx-doc.org/en/stable/config + +# -- Path setup -------------------------------------------------------------- + +# If extensions (or modules to document with autodoc) are in another directory, +# add these directories to sys.path here. If the directory is relative to the +# documentation root, use os.path.abspath to make it absolute, like shown here. +# +# import os +# import sys +# sys.path.insert(0, os.path.abspath('.')) + +# -- Project information ----------------------------------------------------- + +project = 'SWIFT: SPH WIth Fine-grained inter-dependent Tasking' +copyright = '2018, SWIFT Collaboration' +author = 'SWIFT Team' + +# The short X.Y version +version = '0.7' +# The full version, including alpha/beta/rc tags +release = '0.7.0' + + +# -- General configuration --------------------------------------------------- + +# If your documentation needs a minimal Sphinx version, state it here. +# +# needs_sphinx = '1.0' + +# Add any Sphinx extension module names here, as strings. They can be +# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom +# ones. +extensions = [ + 'sphinx.ext.todo', + 'sphinx.ext.mathjax', + 'sphinx.ext.githubpages', +] + +# Add any paths that contain templates here, relative to this directory. +templates_path = ['.templates'] + +# The suffix(es) of source filenames. +# You can specify multiple suffix as a list of string: +# +# source_suffix = ['.rst', '.md'] +source_suffix = '.rst' + +# The master toctree document. +master_doc = 'index' + +# The language for content autogenerated by Sphinx. Refer to documentation +# for a list of supported languages. 
+# +# This is also used if you do content translation via gettext catalogs. +# Usually you set "language" from the command line for these cases. +language = None + +# List of patterns, relative to source directory, that match files and +# directories to ignore when looking for source files. +# This pattern also affects html_static_path and html_extra_path . +exclude_patterns = [] + +# The name of the Pygments (syntax highlighting) style to use. +pygments_style = 'sphinx' + + +# -- Options for HTML output ------------------------------------------------- + +# The theme to use for HTML and HTML Help pages. See the documentation for +# a list of builtin themes. +# +html_theme = 'sphinx_rtd_theme' + +# Theme options are theme-specific and customize the look and feel of a theme +# further. For a list of options available for each theme, see the +# documentation. +# +# html_theme_options = {} + +# Add any paths that contain custom static files (such as style sheets) here, +# relative to this directory. They are copied after the builtin static files, +# so a file named "default.css" will overwrite the builtin "default.css". +html_static_path = ['.static'] + +# Custom sidebar templates, must be a dictionary that maps document names +# to template names. +# +# The default sidebars (for documents that don't match any pattern) are +# defined by theme itself. Builtin themes are using these templates by +# default: ``['localtoc.html', 'relations.html', 'sourcelink.html', +# 'searchbox.html']``. +# +# html_sidebars = {} + + +# -- Options for HTMLHelp output --------------------------------------------- + +# Output file base name for HTML help builder. +htmlhelp_basename = 'SWIFTSPHWIthFine-grainedinter-dependentTaskingdoc' + + +# -- Options for LaTeX output ------------------------------------------------ + +latex_elements = { + # The paper size ('letterpaper' or 'a4paper'). + # + # 'papersize': 'letterpaper', + + # The font size ('10pt', '11pt' or '12pt'). + # + # 'pointsize': '10pt', + + # Additional stuff for the LaTeX preamble. + # + # 'preamble': '', + + # Latex figure (float) alignment + # + # 'figure_align': 'htbp', +} + +# Grouping the document tree into LaTeX files. List of tuples +# (source start file, target name, title, +# author, documentclass [howto, manual, or own class]). +latex_documents = [ + (master_doc, 'SWIFTSPHWIthFine-grainedinter-dependentTasking.tex', 'SWIFT: SPH WIth Fine-grained inter-dependent Tasking Documentation', + 'Josh Borrow', 'manual'), +] + + +# -- Options for manual page output ------------------------------------------ + +# One entry per manual page. List of tuples +# (source start file, name, description, authors, manual section). +man_pages = [ + (master_doc, 'swiftsphwithfine-grainedinter-dependenttasking', 'SWIFT: SPH WIth Fine-grained inter-dependent Tasking Documentation', + [author], 1) +] + + +# -- Options for Texinfo output ---------------------------------------------- + +# Grouping the document tree into Texinfo files. 
List of tuples +# (source start file, target name, title, author, +# dir menu entry, description, category) +texinfo_documents = [ + (master_doc, 'SWIFTSPHWIthFine-grainedinter-dependentTasking', 'SWIFT: SPH WIth Fine-grained inter-dependent Tasking Documentation', + author, 'SWIFTSPHWIthFine-grainedinter-dependentTasking', 'One line description of project.', + 'Miscellaneous'), +] + + +# -- Extension configuration ------------------------------------------------- + +# -- Options for todo extension ---------------------------------------------- + +# If true, `todo` and `todoList` produce output, else they produce nothing. +todo_include_todos = True diff --git a/doc/RTD/source/index.rst b/doc/RTD/source/index.rst new file mode 100644 index 0000000000000000000000000000000000000000..888945a5c0101bb6f59b574a30f1f736ad134079 --- /dev/null +++ b/doc/RTD/source/index.rst @@ -0,0 +1,22 @@ +.. Welcome! + sphinx-quickstart on Wed Apr 4 15:03:50 2018. + You can adapt this file completely to your liking, but it should at least + contain the root `toctree` directive. + +Welcome to SWIFT: SPH WIth Fine-grained inter-dependent Tasking's documentation! +================================================================================ + +Want to get started using SWIFT? Check out the on-boarding guide available +here. SWIFT can be used as a drop-in replacement for Gadget-2 and initial +conditions in hdf5 format for Gadget can directly be read by SWIFT. The only +difference is the parameter file that will need to be adapted for SWIFT. + +.. toctree:: + :maxdepth: 2 + + GettingStarted/index + InitialConditions/index + HydroSchemes/index + Cooling/index + EquationOfState/index + NewOption/index diff --git a/examples/EAGLE_100/eagle_100.yml b/examples/EAGLE_100/eagle_100.yml index a570d81f403b303a41f286cc2407ce39e10735b9..223da63701817260b32ff6718273c2cf2f08e6bf 100644 --- a/examples/EAGLE_100/eagle_100.yml +++ b/examples/EAGLE_100/eagle_100.yml @@ -31,7 +31,7 @@ Snapshots: scale_factor_first: 0.92 # Scale-factor of the first snaphot (cosmological run) time_first: 0.01 # Time of the first output (non-cosmological run) (in internal units) delta_time: 1.10 # Time difference between consecutive outputs (in internal units) - compression: 4 + compression: 1 # Parameters governing the conserved quantities statistics Statistics: diff --git a/examples/EAGLE_12/eagle_12.yml b/examples/EAGLE_12/eagle_12.yml index e400610d967e6a821cc46495c827a52bc42ea1aa..14035500366e9b260e77d94bcefa9c529874f35f 100644 --- a/examples/EAGLE_12/eagle_12.yml +++ b/examples/EAGLE_12/eagle_12.yml @@ -31,7 +31,7 @@ Snapshots: scale_factor_first: 0.92 # Scale-factor of the first snaphot (cosmological run) time_first: 0.01 # Time of the first output (non-cosmological run) (in internal units) delta_time: 1.10 # Time difference between consecutive outputs (in internal units) - compression: 4 + compression: 1 # Parameters governing the conserved quantities statistics Statistics: diff --git a/examples/EAGLE_25/eagle_25.yml b/examples/EAGLE_25/eagle_25.yml index 2a478d822cf911ca543b61900d491c05946c70ce..462ce3791204859d7d213234cac8a87e5a2c92e0 100644 --- a/examples/EAGLE_25/eagle_25.yml +++ b/examples/EAGLE_25/eagle_25.yml @@ -31,7 +31,6 @@ Snapshots: scale_factor_first: 0.92 # Scale-factor of the first snaphot (cosmological run) time_first: 0.01 # Time of the first output (non-cosmological run) (in internal units) delta_time: 1.10 # Time difference between consecutive outputs (in internal units) - compression: 4 # Parameters governing the conserved 
quantities statistics Statistics: diff --git a/examples/EAGLE_50/eagle_50.yml b/examples/EAGLE_50/eagle_50.yml index 5e4193e5c1c6295ded19e12bf50f6e156d797656..1eca096132cff59b2db41ee6a0166456fa21549e 100644 --- a/examples/EAGLE_50/eagle_50.yml +++ b/examples/EAGLE_50/eagle_50.yml @@ -31,7 +31,7 @@ Snapshots: scale_factor_first: 0.92 # Scale-factor of the first snaphot (cosmological run) time_first: 0.01 # Time of the first output (non-cosmological run) (in internal units) delta_time: 1.10 # Time difference between consecutive outputs (in internal units) - compression: 4 + compression: 1 # Parameters governing the conserved quantities statistics Statistics: diff --git a/examples/EAGLE_6/eagle_6.yml b/examples/EAGLE_6/eagle_6.yml index 0be082dd758a974ff9c9b1c0284e3a2a3efa6d00..bac3561b08dcd018c3a8189df3a1228350658396 100644 --- a/examples/EAGLE_6/eagle_6.yml +++ b/examples/EAGLE_6/eagle_6.yml @@ -32,7 +32,7 @@ Snapshots: scale_factor_first: 0.92 # Scale-factor of the first snaphot (cosmological run) time_first: 0.01 # Time of the first output (non-cosmological run) (in internal units) delta_time: 1.10 # Time difference between consecutive outputs (in internal units) - compression: 4 + compression: 1 # Parameters governing the conserved quantities statistics Statistics: diff --git a/examples/EAGLE_DMO_100/eagle_100.yml b/examples/EAGLE_DMO_100/eagle_100.yml index 0548980cc6b2a785eb55781f213da4d3980cd9d5..9295aff6e21f5321cd63e7a544cce9912e3929a8 100644 --- a/examples/EAGLE_DMO_100/eagle_100.yml +++ b/examples/EAGLE_DMO_100/eagle_100.yml @@ -31,7 +31,7 @@ Snapshots: scale_factor_first: 0.92 # Scale-factor of the first snaphot (cosmological run) time_first: 0.01 # Time of the first output (non-cosmological run) (in internal units) delta_time: 1.10 # Time difference between consecutive outputs (in internal units) - compression: 4 + compression: 1 # Parameters governing the conserved quantities statistics Statistics: diff --git a/examples/EAGLE_DMO_12/eagle_12.yml b/examples/EAGLE_DMO_12/eagle_12.yml index f76ca2f833cfe2722ef6b68bf4105c0344a569bd..1e5d99b0abcf4cdb67c6b41c417f1e95aad5c187 100644 --- a/examples/EAGLE_DMO_12/eagle_12.yml +++ b/examples/EAGLE_DMO_12/eagle_12.yml @@ -31,7 +31,7 @@ Snapshots: scale_factor_first: 0.92 # Scale-factor of the first snaphot (cosmological run) time_first: 0.01 # Time of the first output (non-cosmological run) (in internal units) delta_time: 1.10 # Time difference between consecutive outputs (in internal units) - compression: 4 + compression: 1 # Parameters governing the conserved quantities statistics Statistics: diff --git a/examples/EAGLE_DMO_25/eagle_25.yml b/examples/EAGLE_DMO_25/eagle_25.yml index 4b8dcf7bf30b62ff65bbe44f4a80b31b320d167e..f31d167cc0a011121bbac85b34b8e16ce6b61bf8 100644 --- a/examples/EAGLE_DMO_25/eagle_25.yml +++ b/examples/EAGLE_DMO_25/eagle_25.yml @@ -31,7 +31,7 @@ Snapshots: scale_factor_first: 0.92 # Scale-factor of the first snaphot (cosmological run) time_first: 0.01 # Time of the first output (non-cosmological run) (in internal units) delta_time: 1.10 # Time difference between consecutive outputs (in internal units) - compression: 4 + compression: 1 # Parameters governing the conserved quantities statistics Statistics: diff --git a/examples/EAGLE_DMO_50/eagle_50.yml b/examples/EAGLE_DMO_50/eagle_50.yml index 181b633c498fd4b4b4b17a7bffbd32aabc1d4726..2526fa1509ba3a21870ccfcd186bf7b70f6e0ced 100644 --- a/examples/EAGLE_DMO_50/eagle_50.yml +++ b/examples/EAGLE_DMO_50/eagle_50.yml @@ -31,7 +31,7 @@ Snapshots: scale_factor_first: 0.92 
# Scale-factor of the first snaphot (cosmological run) time_first: 0.01 # Time of the first output (non-cosmological run) (in internal units) delta_time: 1.10 # Time difference between consecutive outputs (in internal units) - compression: 4 + compression: 1 # Parameters governing the conserved quantities statistics Statistics: diff --git a/examples/EvrardCollapse_3D/evrard.yml b/examples/EvrardCollapse_3D/evrard.yml index a2b39fb1e7fa36098abe720945ece4b611eb4fe8..f9a4e69f72e6bb19b818cb985ef92122b1a10b2a 100644 --- a/examples/EvrardCollapse_3D/evrard.yml +++ b/examples/EvrardCollapse_3D/evrard.yml @@ -18,7 +18,7 @@ Snapshots: basename: evrard # Common part of the name of output files time_first: 0. # Time of the first output (in internal units) delta_time: 0.1 # Time difference between consecutive outputs (in internal units) - compression: 4 + compression: 1 # Parameters governing the conserved quantities statistics Statistics: diff --git a/examples/GreshoVortex_3D/gresho.yml b/examples/GreshoVortex_3D/gresho.yml index f42fcbfd00941e7b9c5c09c0d2e3118f5cc1f57d..113c03b9bd0e411bf04f29c70937ac7fab3708f3 100644 --- a/examples/GreshoVortex_3D/gresho.yml +++ b/examples/GreshoVortex_3D/gresho.yml @@ -21,7 +21,7 @@ Snapshots: basename: gresho # Common part of the name of output files time_first: 0. # Time of the first output (in internal units) delta_time: 1e-1 # Time difference between consecutive outputs (in internal units) - compression: 4 + compression: 1 # Parameters governing the conserved quantities statistics Statistics: diff --git a/examples/KelvinHelmholtzGrowthRate_3D/kelvinHelmholtzGrowthRate.yml b/examples/KelvinHelmholtzGrowthRate_3D/kelvinHelmholtzGrowthRate.yml index 3133e2769e81b80c18760e8258665fc1a6eee6ca..e39c01645b766ae585558452683dc8e1bdf425a8 100644 --- a/examples/KelvinHelmholtzGrowthRate_3D/kelvinHelmholtzGrowthRate.yml +++ b/examples/KelvinHelmholtzGrowthRate_3D/kelvinHelmholtzGrowthRate.yml @@ -18,7 +18,7 @@ Snapshots: basename: kelvinHelmholtzGrowthRate # Common part of the name of output files time_first: 0. # Time of the first output (in internal units) delta_time: 0.04 # Time difference between consecutive outputs (in internal units) - compression: 4 + compression: 1 # Parameters governing the conserved quantities statistics Statistics: diff --git a/examples/Makefile.am b/examples/Makefile.am index 95057c9bee7d3b4cc001d6c19ca39cab5d8544c4..9de393ba6c77f9bd50e1d8aff12c0d83ba8e7ddc 100644 --- a/examples/Makefile.am +++ b/examples/Makefile.am @@ -24,7 +24,7 @@ AM_CFLAGS = -I$(top_srcdir)/src $(HDF5_CPPFLAGS) $(GSL_INCS) $(FFTW_INCS) AM_LDFLAGS = $(HDF5_LDFLAGS) # Extra libraries. -EXTRA_LIBS = $(HDF5_LIBS) $(FFTW_LIBS) $(PROFILER_LIBS) $(TCMALLOC_LIBS) $(JEMALLOC_LIBS) $(GRACKLE_LIBS) $(GSL_LIBS) +EXTRA_LIBS = $(HDF5_LIBS) $(FFTW_LIBS) $(PROFILER_LIBS) $(TCMALLOC_LIBS) $(JEMALLOC_LIBS) $(TBBMALLOC_LIBS) $(GRACKLE_LIBS) $(GSL_LIBS) # MPI libraries. MPI_LIBS = $(METIS_LIBS) $(MPI_THREAD_LIBS) diff --git a/examples/Noh_3D/noh.yml b/examples/Noh_3D/noh.yml index 88119827501e17cc26742e8cad92a3611e83faa7..cc15af7ec190cd2c10cdff3a3ccb3f0beaf7e177 100644 --- a/examples/Noh_3D/noh.yml +++ b/examples/Noh_3D/noh.yml @@ -18,7 +18,7 @@ Snapshots: basename: noh # Common part of the name of output files time_first: 0. 
# Time of the first output (in internal units) delta_time: 5e-2 # Time difference between consecutive outputs (in internal units) - compression: 4 + compression: 1 # Parameters governing the conserved quantities statistics Statistics: diff --git a/examples/SedovBlast_3D/sedov.yml b/examples/SedovBlast_3D/sedov.yml index 2df2c432cef8afec49c687b643a872b06f9abb60..75849e33c0c644a18cd7357f901699d0d682c160 100644 --- a/examples/SedovBlast_3D/sedov.yml +++ b/examples/SedovBlast_3D/sedov.yml @@ -18,8 +18,8 @@ Snapshots: basename: sedov # Common part of the name of output files time_first: 0. # Time of the first output (in internal units) delta_time: 1e-2 # Time difference between consecutive outputs (in internal units) - compression: 4 - + compression: 1 + # Parameters governing the conserved quantities statistics Statistics: delta_time: 1e-3 # Time between statistics output @@ -33,4 +33,4 @@ SPH: InitialConditions: file_name: ./sedov.hdf5 smoothing_length_scaling: 3.33 - + diff --git a/examples/SodShockSpherical_3D/sodShock.yml b/examples/SodShockSpherical_3D/sodShock.yml index 52c06dd65ec0b37a0ec0707315ccd15356a7b2d6..3fc4a1fb2b8cc5f6a603abf4c87ac99c7647b9bd 100644 --- a/examples/SodShockSpherical_3D/sodShock.yml +++ b/examples/SodShockSpherical_3D/sodShock.yml @@ -18,7 +18,7 @@ Snapshots: basename: sodShock # Common part of the name of output files time_first: 0. # Time of the first output (in internal units) delta_time: 0.1 # Time difference between consecutive outputs (in internal units) - compression: 4 + compression: 1 # Parameters governing the conserved quantities statistics Statistics: diff --git a/examples/SodShock_3D/sodShock.yml b/examples/SodShock_3D/sodShock.yml index 26fbfe4faacf2bbbfd9b077bf9f9c075ce93ef6d..6042c8090d00fef5467a7fed3d6f5a104c626f43 100644 --- a/examples/SodShock_3D/sodShock.yml +++ b/examples/SodShock_3D/sodShock.yml @@ -18,7 +18,7 @@ Snapshots: basename: sodShock # Common part of the name of output files time_first: 0. # Time of the first output (in internal units) delta_time: 0.2 # Time difference between consecutive outputs (in internal units) - compression: 4 + compression: 1 # Parameters governing the conserved quantities statistics Statistics: diff --git a/examples/VacuumSpherical_3D/vacuum.yml b/examples/VacuumSpherical_3D/vacuum.yml index 92164a18a46404ad7730f3411dc14953139501bb..8792f029d97f413882ae0ea6c8603d64efaddbfa 100644 --- a/examples/VacuumSpherical_3D/vacuum.yml +++ b/examples/VacuumSpherical_3D/vacuum.yml @@ -18,7 +18,7 @@ Snapshots: basename: vacuum # Common part of the name of output files time_first: 0. # Time of the first output (in internal units) delta_time: 0.05 # Time difference between consecutive outputs (in internal units) - compression: 4 + compression: 1 # Parameters governing the conserved quantities statistics Statistics: diff --git a/examples/Vacuum_3D/vacuum.yml b/examples/Vacuum_3D/vacuum.yml index 45a6b73d54de69d71e194ec074ebdd00cd6a57c0..cf44d2441f5009d2fc75084a2c872e3618e40912 100644 --- a/examples/Vacuum_3D/vacuum.yml +++ b/examples/Vacuum_3D/vacuum.yml @@ -18,7 +18,7 @@ Snapshots: basename: vacuum # Common part of the name of output files time_first: 0. 
# Time of the first output (in internal units) delta_time: 0.1 # Time difference between consecutive outputs (in internal units) - compression: 4 + compression: 1 # Parameters governing the conserved quantities statistics Statistics: diff --git a/examples/main.c b/examples/main.c index 4d59e97445af2f59cfcd947a805c2ed2a1218ad2..61b32c69dcce8903fbc8e79d382a9dc2bef62e94 100644 --- a/examples/main.c +++ b/examples/main.c @@ -54,7 +54,7 @@ struct profiler prof; /** * @brief Help messages for the command line parameters. */ -void print_help_message() { +void print_help_message(void) { printf("\nUsage: swift [OPTION]... PARAMFILE\n"); printf(" swift_mpi [OPTION]... PARAMFILE\n\n"); @@ -90,6 +90,8 @@ void print_help_message() { printf(" %2s %14s %s\n", "-n", "{int}", "Execute a fixed number of time steps. When unset use the time_end " "parameter to stop."); + printf(" %2s %14s %s\n", "-o", "{str}", + "Generate a default output parameter file."); printf(" %2s %14s %s\n", "-P", "{sec:par:val}", "Set parameter value and overwrites values read from the parameters " "file. Can be used more than once."); @@ -195,6 +197,7 @@ int main(int argc, char *argv[]) { int nr_threads = 1; int with_verbose_timers = 0; int nparams = 0; + char output_parameters_filename[200] = ""; char *cmdparams[PARSER_MAX_NO_OF_PARAMS]; char paramFileName[200] = ""; char restart_file[200] = ""; @@ -202,7 +205,7 @@ int main(int argc, char *argv[]) { /* Parse the parameters */ int c; - while ((c = getopt(argc, argv, "acCdDef:FgGhMn:P:rsSt:Tv:y:Y:")) != -1) + while ((c = getopt(argc, argv, "acCdDef:FgGhMn:o:P:rsSt:Tv:y:Y:")) != -1) switch (c) { case 'a': #if defined(HAVE_SETAFFINITY) && defined(HAVE_LIBNUMA) @@ -259,6 +262,15 @@ int main(int argc, char *argv[]) { return 1; } break; + case 'o': + if (sscanf(optarg, "%s", output_parameters_filename) != 1) { + if (myrank == 0) { + printf("Error parsing output fields filename"); + print_help_message(); + } + return 1; + } + break; case 'P': cmdparams[nparams] = optarg; nparams++; @@ -324,6 +336,15 @@ int main(int argc, char *argv[]) { return 1; break; } + + /* Write output parameter file */ + if (myrank == 0 && strcmp(output_parameters_filename, "") != 0) { + io_write_output_field_parameter(output_parameters_filename); + printf("End of run.\n"); + return 0; + } + + /* check inputs */ if (optind == argc - 1) { if (!strcpy(paramFileName, argv[optind++])) error("Error reading parameter file name."); @@ -447,10 +468,6 @@ int main(int argc, char *argv[]) { "values."); for (int k = 0; k < nparams; k++) parser_set_param(params, cmdparams[k]); } - - /* And dump the parameters as used. */ - // parser_print_params(¶ms); - parser_write_params_to_file(params, "used_parameters.yml"); } #ifdef WITH_MPI /* Broadcast the parameter file */ @@ -713,6 +730,9 @@ int main(int argc, char *argv[]) { "ICs.", N_total[0], N_total[2], N_total[1]); + /* Verify that the fields to dump actually exist */ + if (myrank == 0) io_check_output_fields(params, N_total); + /* Initialize the space with these data. 
*/ if (myrank == 0) clocks_gettime(&tic); space_init(&s, params, &cosmo, dim, parts, gparts, sparts, Ngas, Ngpart, @@ -811,6 +831,7 @@ int main(int argc, char *argv[]) { &cooling_func, &chemistry, &sourceterms); engine_config(0, &e, params, nr_nodes, myrank, nr_threads, with_aff, talking, restart_file); + if (myrank == 0) { clocks_gettime(&toc); message("engine_init took %.3f %s.", clocks_diff(&tic, &toc), @@ -889,6 +910,13 @@ int main(int argc, char *argv[]) { 0) error("Failed to generate restart filename"); + /* dump the parameters as used. */ + + /* used parameters */ + parser_write_params_to_file(params, "used_parameters.yml", 1); + /* unused parameters */ + parser_write_params_to_file(params, "unused_parameters.yml", 0); + /* Main simulation loop */ /* ==================== */ int force_stop = 0; @@ -943,9 +971,10 @@ int main(int argc, char *argv[]) { /* Open file and position at end. */ file_thread = fopen(dumpfile, "a"); - fprintf(file_thread, " %03i 0 0 0 0 %lli %lli %zi %zi %zi 0 0 %lli\n", - myrank, e.tic_step, e.toc_step, e.updates, e.g_updates, - e.s_updates, cpufreq); + fprintf(file_thread, + " %03d 0 0 0 0 %lld %lld %lld %lld %lld 0 0 %lld\n", myrank, + e.tic_step, e.toc_step, e.updates, e.g_updates, e.s_updates, + cpufreq); int count = 0; for (int l = 0; l < e.sched.nr_tasks; l++) { if (!e.sched.tasks[l].implicit && e.sched.tasks[l].toc != 0) { @@ -981,8 +1010,8 @@ int main(int argc, char *argv[]) { FILE *file_thread; file_thread = fopen(dumpfile, "w"); /* Add some information to help with the plots */ - fprintf(file_thread, " %i %i %i %i %lli %lli %zi %zi %zi %i %lli\n", -2, - -1, -1, 1, e.tic_step, e.toc_step, e.updates, e.g_updates, + fprintf(file_thread, " %d %d %d %d %lld %lld %lld %lld %lld %d %lld\n", + -2, -1, -1, 1, e.tic_step, e.toc_step, e.updates, e.g_updates, e.s_updates, 0, cpufreq); for (int l = 0; l < e.sched.nr_tasks; l++) { if (!e.sched.tasks[l].implicit && e.sched.tasks[l].toc != 0) { @@ -1035,14 +1064,15 @@ int main(int argc, char *argv[]) { if (myrank == 0) { /* Print some information to the screen */ - printf(" %6d %14e %14e %10.5f %14e %4d %4d %12zu %12zu %12zu %21.3f %6d\n", - e.step, e.time, e.cosmology->a, e.cosmology->z, e.time_step, - e.min_active_bin, e.max_active_bin, e.updates, e.g_updates, - e.s_updates, e.wallclock_time, e.step_props); + printf( + " %6d %14e %14e %10.5f %14e %4d %4d %12lld %12lld %12lld %21.3f %6d\n", + e.step, e.time, e.cosmology->a, e.cosmology->z, e.time_step, + e.min_active_bin, e.max_active_bin, e.updates, e.g_updates, e.s_updates, + e.wallclock_time, e.step_props); fflush(stdout); fprintf(e.file_timesteps, - " %6d %14e %14e %14e %4d %4d %12zu %12zu %12zu %21.3f %6d\n", + " %6d %14e %14e %14e %4d %4d %12lld %12lld %12lld %21.3f %6d\n", e.step, e.time, e.cosmology->a, e.time_step, e.min_active_bin, e.max_active_bin, e.updates, e.g_updates, e.s_updates, e.wallclock_time, e.step_props); diff --git a/examples/parameter_example.yml b/examples/parameter_example.yml index 791db2758290e400d4ed9ffe5b8e0d4303057874..6eb277b303f440de5f92b31caecc432c54069149 100644 --- a/examples/parameter_example.yml +++ b/examples/parameter_example.yml @@ -37,9 +37,10 @@ SPH: # Parameters for the self-gravity scheme Gravity: eta: 0.025 # Constant dimensionless multiplier for time integration. - theta: 0.7 # Opening angle (Multipole acceptance criterion) + theta: 0.7 # Opening angle (Multipole acceptance criterion). comoving_softening: 0.0026994 # Comoving softening length (in internal units). 
max_physical_softening: 0.0007 # Physical softening length (in internal units). + rebuild_frequency: 0.01 # (Optional) Frequency of the gravity-tree rebuild in units of the number of g-particles (this is the default value). a_smooth: 1.25 # (Optional) Smoothing scale in top-level cell sizes to smooth the long-range forces over (this is the default value). r_cut_max: 4.5 # (Optional) Cut-off in number of top-level cells beyond which no FMM forces are computed (this is the default value). r_cut_min: 0.1 # (Optional) Cut-off in number of top-level cells below which no truncation of FMM forces are performed (this is the default value). diff --git a/src/Makefile.am b/src/Makefile.am index e6e4408e5c7fdfad7a5f1b9abd199c02aea644aa..a28f1c3beb6c6da707845dabea0a7098cf34bdde 100644 --- a/src/Makefile.am +++ b/src/Makefile.am @@ -25,7 +25,7 @@ AM_LDFLAGS = $(HDF5_LDFLAGS) $(FFTW_LIBS) -version-info 0:0:0 GIT_CMD = @GIT_CMD@ # Additional dependencies for shared libraries. -EXTRA_LIBS = $(HDF5_LIBS) $(PROFILER_LIBS) $(TCMALLOC_LIBS) $(JEMALLOC_LIBS) $(GRACKLE_LIB) $(GSL_LIBS) +EXTRA_LIBS = $(HDF5_LIBS) $(FFTW_LIBS) $(PROFILER_LIBS) $(TCMALLOC_LIBS) $(JEMALLOC_LIBS) $(TBBMALLOC_LIBS) $(GRACKLE_LIB) $(GSL_LIBS) # MPI libraries. MPI_LIBS = $(METIS_LIBS) $(MPI_THREAD_LIBS) diff --git a/src/atomic.h b/src/atomic.h index bcd79d2bd7c084abf91810149c41b189f3ef1b7b..8232f93b75c95112ca8bc061c71e6b41ac1e16bb 100644 --- a/src/atomic.h +++ b/src/atomic.h @@ -39,13 +39,16 @@ * * This is a text-book implementation based on an atomic CAS. * + * We create a temporary union to cope with the int-only atomic CAS + * and the floating-point min that we want. + * * @param address The address to update. * @param y The value to update the address with. */ __attribute__((always_inline)) INLINE static void atomic_min_f( - volatile float* address, float y) { + volatile float *const address, const float y) { - int* int_ptr = (int*)address; + int *const int_ptr = (int *)address; typedef union { float as_float; @@ -90,13 +93,16 @@ __attribute__((always_inline)) INLINE static void atomic_min( * * This is a text-book implementation based on an atomic CAS. * + * We create a temporary union to cope with the int-only atomic CAS + * and the floating-point max that we want. + * * @param address The address to update. * @param y The value to update the address with. */ __attribute__((always_inline)) INLINE static void atomic_max_f( - volatile float* address, float y) { + volatile float *const address, const float y) { - int* int_ptr = (int*)address; + int *const int_ptr = (int *)address; typedef union { float as_float; @@ -118,13 +124,16 @@ __attribute__((always_inline)) INLINE static void atomic_max_f( * * This is a text-book implementation based on an atomic CAS. * + * We create a temporary union to cope with the int-only atomic CAS + * and the floating-point add that we want. + * * @param address The address to update. * @param y The value to update the address with. */ __attribute__((always_inline)) INLINE static void atomic_add_f( - volatile float* address, float y) { + volatile float *const address, const float y) { - int* int_ptr = (int*)address; + int *const int_ptr = (int *)address; typedef union { float as_float; diff --git a/src/chemistry.c b/src/chemistry.c index 44cbea1361d96c4cf1d4d3d21c3c91e5225640a5..4afa199258f56d4fc01d67c9335e87a86ead09bc 100644 --- a/src/chemistry.c +++ b/src/chemistry.c @@ -33,7 +33,7 @@ * @param phys_const The physical constants in internal units. * @param data The properties to initialise. 
*/ -void chemistry_init(const struct swift_params* parameter_file, +void chemistry_init(struct swift_params* parameter_file, const struct unit_system* us, const struct phys_const* phys_const, struct chemistry_global_data* data) { diff --git a/src/chemistry.h b/src/chemistry.h index bacc15c483c168dbf86bd34dc2af92a3eefb9e02..f9daa41db22a69a09f06be3fb560a68edac2f078 100644 --- a/src/chemistry.h +++ b/src/chemistry.h @@ -43,7 +43,7 @@ #endif /* Common functions */ -void chemistry_init(const struct swift_params* parameter_file, +void chemistry_init(struct swift_params* parameter_file, const struct unit_system* us, const struct phys_const* phys_const, struct chemistry_global_data* data); diff --git a/src/chemistry/EAGLE/chemistry.h b/src/chemistry/EAGLE/chemistry.h index 459de24ef3c5e9140fd136155ba55d2364795fb8..96a645806a495801f6165353cad9e1c87087f8e3 100644 --- a/src/chemistry/EAGLE/chemistry.h +++ b/src/chemistry/EAGLE/chemistry.h @@ -78,6 +78,22 @@ __attribute__((always_inline)) INLINE static void chemistry_end_density( struct part* restrict p, const struct chemistry_global_data* cd, const struct cosmology* cosmo) {} +/** + * @brief Sets all particle fields to sensible values when the #part has 0 ngbs. + * + * @param p The particle to act upon + * @param xp The extended particle data to act upon + * @param cd #chemistry_global_data containing chemistry informations. + * @param cosmo The current cosmological model. + */ +__attribute__((always_inline)) INLINE static void +chemistry_part_has_no_neighbours(struct part* restrict p, + struct xpart* restrict xp, + const struct chemistry_global_data* cd, + const struct cosmology* cosmo) { + error("Needs implementing!"); +} + /** * @brief Sets the chemistry properties of the (x-)particles to a valid start * state. @@ -108,9 +124,10 @@ __attribute__((always_inline)) INLINE static void chemistry_first_init_part( * @param phys_const The physical constants in internal units. * @param data The properties to initialise. */ -static INLINE void chemistry_init_backend( - const struct swift_params* parameter_file, const struct unit_system* us, - const struct phys_const* phys_const, struct chemistry_global_data* data) { +static INLINE void chemistry_init_backend(struct swift_params* parameter_file, + const struct unit_system* us, + const struct phys_const* phys_const, + struct chemistry_global_data* data) { /* Read the total metallicity */ data->initial_metal_mass_fraction_total = diff --git a/src/chemistry/EAGLE/chemistry_io.h b/src/chemistry/EAGLE/chemistry_io.h index aab8ec240207a47289e35a711af8b245bf2b40fa..f87807579c46fe336f45e934d828318aed0377c7 100644 --- a/src/chemistry/EAGLE/chemistry_io.h +++ b/src/chemistry/EAGLE/chemistry_io.h @@ -30,7 +30,8 @@ * * @return Returns the number of fields to read. */ -int chemistry_read_particles(struct part* parts, struct io_props* list) { +INLINE static int chemistry_read_particles(struct part* parts, + struct io_props* list) { /* Nothing to read */ return 0; @@ -44,7 +45,8 @@ int chemistry_read_particles(struct part* parts, struct io_props* list) { * * @return Returns the number of fields to write. 
*/ -int chemistry_write_particles(const struct part* parts, struct io_props* list) { +INLINE static int chemistry_write_particles(const struct part* parts, + struct io_props* list) { /* List what we want to write */ list[0] = io_make_output_field("ElementAbundance", FLOAT, @@ -101,7 +103,7 @@ int chemistry_write_particles(const struct part* parts, struct io_props* list) { * @brief Writes the current model of SPH to the file * @param h_grpsph The HDF5 group in which to write */ -void chemistry_write_flavour(hid_t h_grp) { +INLINE static void chemistry_write_flavour(hid_t h_grp) { io_write_attribute_s(h_grp, "Chemistry Model", "EAGLE"); for (int elem = 0; elem < chemistry_element_count; ++elem) { diff --git a/src/chemistry/GEAR/chemistry.h b/src/chemistry/GEAR/chemistry.h index a51051ca3ae45476986d39c868a9fc71bf7f9ae5..6212ed1efb423717b800d431a83f0e8bec7c6c6f 100644 --- a/src/chemistry/GEAR/chemistry.h +++ b/src/chemistry/GEAR/chemistry.h @@ -72,9 +72,10 @@ static INLINE void chemistry_print_backend( * @param phys_const The physical constants in internal units. * @param data The properties to initialise. */ -static INLINE void chemistry_init_backend( - const struct swift_params* parameter_file, const struct unit_system* us, - const struct phys_const* phys_const, struct chemistry_global_data* data) { +static INLINE void chemistry_init_backend(struct swift_params* parameter_file, + const struct unit_system* us, + const struct phys_const* phys_const, + struct chemistry_global_data* data) { /* read parameters */ data->initial_metallicity = parser_get_opt_param_float( @@ -134,6 +135,22 @@ __attribute__((always_inline)) INLINE static void chemistry_end_density( } } +/** + * @brief Sets all particle fields to sensible values when the #part has 0 ngbs. + * + * @param p The particle to act upon + * @param xp The extended particle data to act upon + * @param cd #chemistry_global_data containing chemistry informations. + * @param cosmo The current cosmological model. + */ +__attribute__((always_inline)) INLINE static void +chemistry_part_has_no_neighbours(struct part* restrict p, + struct xpart* restrict xp, + const struct chemistry_global_data* cd, + const struct cosmology* cosmo) { + error("Needs implementing!"); +} + /** * @brief Sets the chemistry properties of the (x-)particles to a valid start * state. diff --git a/src/chemistry/GEAR/chemistry_io.h b/src/chemistry/GEAR/chemistry_io.h index 0557d5c520dfc7ad5eaff2b92e6588751c072df5..2a0847bebfb8c1734f21bda2f6ad55b354a7aec9 100644 --- a/src/chemistry/GEAR/chemistry_io.h +++ b/src/chemistry/GEAR/chemistry_io.h @@ -48,8 +48,8 @@ chemistry_get_element_name(enum chemistry_element elem) { * * @return Returns the number of fields to read. */ -__attribute__((always_inline)) INLINE static int chemistry_read_particles( - struct part* parts, struct io_props* list) { +INLINE static int chemistry_read_particles(struct part* parts, + struct io_props* list) { /* List what we want to read */ list[0] = io_make_input_field( @@ -69,13 +69,14 @@ __attribute__((always_inline)) INLINE static int chemistry_read_particles( * * @return Returns the number of fields to write. 
*/ -__attribute__((always_inline)) INLINE static int chemistry_write_particles( - const struct part* parts, struct io_props* list) { +INLINE static int chemistry_write_particles(const struct part* parts, + struct io_props* list) { /* List what we want to write */ list[0] = io_make_output_field( "SmoothedElementAbundance", FLOAT, chemistry_element_count, UNIT_CONV_NO_UNITS, parts, chemistry_data.smoothed_metal_mass_fraction); + list[1] = io_make_output_field("Z", FLOAT, 1, UNIT_CONV_NO_UNITS, parts, chemistry_data.Z); @@ -92,8 +93,7 @@ __attribute__((always_inline)) INLINE static int chemistry_write_particles( * @brief Writes the current model of SPH to the file * @param h_grp The HDF5 group in which to write */ -__attribute__((always_inline)) INLINE static void chemistry_write_flavour( - hid_t h_grp) { +INLINE static void chemistry_write_flavour(hid_t h_grp) { io_write_attribute_s(h_grp, "Chemistry Model", "GEAR"); for (enum chemistry_element i = chemistry_element_O; diff --git a/src/chemistry/none/chemistry.h b/src/chemistry/none/chemistry.h index 3ca51660ddfeead2b7ad0010979b719e59c4934e..dce06ffda339e8a6c4925c7b7c430485a208adb7 100644 --- a/src/chemistry/none/chemistry.h +++ b/src/chemistry/none/chemistry.h @@ -59,9 +59,10 @@ chemistry_get_element_name(enum chemistry_element elem) { * @param phys_const The physical constants in internal units. * @param data The global chemistry information (to be filled). */ -static INLINE void chemistry_init_backend( - const struct swift_params* parameter_file, const struct unit_system* us, - const struct phys_const* phys_const, struct chemistry_global_data* data) {} +static INLINE void chemistry_init_backend(struct swift_params* parameter_file, + const struct unit_system* us, + const struct phys_const* phys_const, + struct chemistry_global_data* data) {} /** * @brief Prints the properties of the chemistry model to stdout. @@ -86,6 +87,20 @@ __attribute__((always_inline)) INLINE static void chemistry_end_density( struct part* restrict p, const struct chemistry_global_data* cd, const struct cosmology* cosmo) {} +/** + * @brief Sets all particle fields to sensible values when the #part has 0 ngbs. + * + * @param p The particle to act upon + * @param xp The extended particle data to act upon + * @param cd #chemistry_global_data containing chemistry informations. + * @param cosmo The current cosmological model. + */ +__attribute__((always_inline)) INLINE static void +chemistry_part_has_no_neighbours(struct part* restrict p, + struct xpart* restrict xp, + const struct chemistry_global_data* cd, + const struct cosmology* cosmo) {} + /** * @brief Sets the chemistry properties of the (x-)particles to a valid start * state. diff --git a/src/chemistry/none/chemistry_io.h b/src/chemistry/none/chemistry_io.h index 142d2f75ce1487393e8689edbb9a6fdb1b1e85cd..ef7e0d8d87dfeab5978f0e86bbf6279f7901d10a 100644 --- a/src/chemistry/none/chemistry_io.h +++ b/src/chemistry/none/chemistry_io.h @@ -29,7 +29,8 @@ * * @return Returns the number of fields to write. */ -int chemistry_read_particles(struct part* parts, struct io_props* list) { +INLINE static int chemistry_read_particles(struct part* parts, + struct io_props* list) { /* update list according to hydro_io */ @@ -45,7 +46,8 @@ int chemistry_read_particles(struct part* parts, struct io_props* list) { * * @return Returns the number of fields to write. 
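Note that the EAGLE and GEAR hunks above only stub out chemistry_part_has_no_neighbours with error("Needs implementing!"). For orientation, a fallback of this kind usually mirrors the hydro version further down in this patch, which resets smoothed quantities to the particle's self-contribution. A hedged sketch for the GEAR backend follows; the unsmoothed metal_mass_fraction field is an assumption, not taken from this patch:

/* Sketch only: the metal_mass_fraction field name is assumed, not from this patch. */
__attribute__((always_inline)) INLINE static void
chemistry_part_has_no_neighbours(struct part* restrict p,
                                 struct xpart* restrict xp,
                                 const struct chemistry_global_data* cd,
                                 const struct cosmology* cosmo) {

  /* With no neighbours, fall back on the particle's own (unsmoothed) values. */
  for (int i = 0; i < chemistry_element_count; i++)
    p->chemistry_data.smoothed_metal_mass_fraction[i] =
        p->chemistry_data.metal_mass_fraction[i];
}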
*/ -int chemistry_write_particles(const struct part* parts, struct io_props* list) { +INLINE static int chemistry_write_particles(const struct part* parts, + struct io_props* list) { /* update list according to hydro_io */ @@ -59,7 +61,7 @@ int chemistry_write_particles(const struct part* parts, struct io_props* list) { * @brief Writes the current model of SPH to the file * @param h_grp The HDF5 group in which to write */ -void chemistry_write_flavour(hid_t h_grp) { +INLINE static void chemistry_write_flavour(hid_t h_grp) { io_write_attribute_s(h_grp, "Chemistry Model", "None"); } diff --git a/src/clocks.c b/src/clocks.c index fbaa83f15fafda23751d5d6c34d40750132287b5..cac0131acade08e41ee7ed4a22fabde49e197060 100644 --- a/src/clocks.c +++ b/src/clocks.c @@ -50,7 +50,7 @@ static int clocks_units_index = 0; static double clocks_units_scale = 1000.0; /* Local prototypes. */ -static void clocks_estimate_cpufreq(); +static void clocks_estimate_cpufreq(void); /** * @brief Get the current time. @@ -113,7 +113,7 @@ void clocks_set_cpufreq(unsigned long long freq) { * * @result the CPU frequency. */ -unsigned long long clocks_get_cpufreq() { +unsigned long long clocks_get_cpufreq(void) { if (clocks_cpufreq > 0) return clocks_cpufreq; @@ -132,7 +132,7 @@ unsigned long long clocks_get_cpufreq() { * file (probably a overestimate) or finally just use a value of 1 with * time units of ticks. */ -static void clocks_estimate_cpufreq() { +static void clocks_estimate_cpufreq(void) { #ifdef HAVE_CLOCK_GETTIME /* Try to time a nanosleep() in ticks. */ @@ -241,7 +241,7 @@ ticks clocks_to_ticks(double ms) { * * @result the current time units. */ -const char *clocks_getunit() { return clocks_units[clocks_units_index]; } +const char *clocks_getunit(void) { return clocks_units[clocks_units_index]; } /** * @brief returns the time since the start of the execution in seconds @@ -252,7 +252,7 @@ const char *clocks_getunit() { return clocks_units[clocks_units_index]; } * * @result the time since the start of the execution */ -const char *clocks_get_timesincestart() { +const char *clocks_get_timesincestart(void) { static char buffer[40]; @@ -274,7 +274,7 @@ const char *clocks_get_timesincestart() { * @result cpu time used in sysconf(_SC_CLK_TCK) ticks, usually 100/s not our * usual ticks. */ -double clocks_get_cputime_used() { +double clocks_get_cputime_used(void) { struct tms tmstic; times(&tmstic); diff --git a/src/clocks.h b/src/clocks.h index bdb3a6651e52f5b165e644015b91f96aa5812d57..f3901584774c7586d6a68b4415d6b443cb53c466 100644 --- a/src/clocks.h +++ b/src/clocks.h @@ -34,15 +34,15 @@ struct clocks_time { void clocks_gettime(struct clocks_time *time); double clocks_diff(struct clocks_time *start, struct clocks_time *end); -const char *clocks_getunit(); +const char *clocks_getunit(void); void clocks_set_cpufreq(unsigned long long freq); -unsigned long long clocks_get_cpufreq(); +unsigned long long clocks_get_cpufreq(void); double clocks_from_ticks(ticks tics); ticks clocks_to_ticks(double interval); double clocks_diff_ticks(ticks tic, ticks toc); -const char *clocks_get_timesincestart(); +const char *clocks_get_timesincestart(void); -double clocks_get_cputime_used(); +double clocks_get_cputime_used(void); #endif /* SWIFT_CLOCKS_H */ diff --git a/src/collectgroup.c b/src/collectgroup.c index b704a0a5ea33cc1c5332fc0575061ad8e38f4d21..0a7780aba1d5d41cef756d2132c75f9357796c73 100644 --- a/src/collectgroup.c +++ b/src/collectgroup.c @@ -36,14 +36,14 @@ /* Local collections for MPI reduces. 
*/ struct mpicollectgroup1 { - size_t updates, g_updates, s_updates; + long long updates, g_updates, s_updates; integertime_t ti_hydro_end_min; integertime_t ti_gravity_end_min; int forcerebuild; }; /* Forward declarations. */ -static void mpicollect_create_MPI_type(); +static void mpicollect_create_MPI_type(void); /** * @brief MPI datatype for the #mpicollectgroup1 structure. @@ -60,7 +60,7 @@ static MPI_Op mpicollectgroup1_reduce_op; /** * @brief Perform any once only initialisations. Must be called once. */ -void collectgroup_init() { +void collectgroup_init(void) { #ifdef WITH_MPI /* Initialise the MPI types. */ @@ -88,7 +88,6 @@ void collectgroup1_apply(struct collectgroup1 *grp1, struct engine *e) { e->updates = grp1->updates; e->g_updates = grp1->g_updates; e->s_updates = grp1->s_updates; - e->forcerebuild = grp1->forcerebuild; } /** @@ -211,7 +210,7 @@ static void mpicollectgroup1_reduce(void *in, void *inout, int *len, /** * @brief Registers any MPI collection types and reduction functions. */ -static void mpicollect_create_MPI_type() { +static void mpicollect_create_MPI_type(void) { if (MPI_Type_contiguous(sizeof(struct mpicollectgroup1), MPI_BYTE, &mpicollectgroup1_type) != MPI_SUCCESS || diff --git a/src/collectgroup.h b/src/collectgroup.h index f2014ed254fdde7ea293224751061d824782b4a7..8bf8a9d1b75f9a5ddb3f19fa9cdb4103e044ea59 100644 --- a/src/collectgroup.h +++ b/src/collectgroup.h @@ -35,7 +35,7 @@ struct engine; struct collectgroup1 { /* Number of particles updated */ - size_t updates, g_updates, s_updates; + long long updates, g_updates, s_updates; /* Times for the time-step */ integertime_t ti_hydro_end_min, ti_hydro_end_max, ti_hydro_beg_max; @@ -45,7 +45,7 @@ struct collectgroup1 { int forcerebuild; }; -void collectgroup_init(); +void collectgroup_init(void); void collectgroup1_apply(struct collectgroup1 *grp1, struct engine *e); void collectgroup1_init(struct collectgroup1 *grp1, size_t updates, size_t g_updates, size_t s_updates, diff --git a/src/common_io.c b/src/common_io.c index 8b173adb7b5e5a014b0967b4fd04aef5ee6606e9..494a702125cf873946d06855b5683216cb2aceaf 100644 --- a/src/common_io.c +++ b/src/common_io.c @@ -25,12 +25,17 @@ #include "common_io.h" /* Local includes. */ +#include "chemistry_io.h" #include "engine.h" #include "error.h" +#include "gravity_io.h" #include "hydro.h" +#include "hydro_io.h" #include "io_properties.h" #include "kernel_hydro.h" #include "part.h" +#include "part_type.h" +#include "stars_io.h" #include "threadpool.h" #include "units.h" #include "version.h" @@ -340,6 +345,7 @@ void io_write_code_description(hid_t h_file) { io_write_attribute_s(h_grpcode, "CFLAGS", compilation_cflags()); io_write_attribute_s(h_grpcode, "HDF5 library version", hdf5_version()); io_write_attribute_s(h_grpcode, "Thread barriers", thread_barrier_version()); + io_write_attribute_s(h_grpcode, "Allocators", allocator_version()); #ifdef HAVE_FFTW io_write_attribute_s(h_grpcode, "FFTW library version", fftw3_version()); #endif @@ -805,3 +811,157 @@ void io_collect_dm_gparts(const struct gpart* const gparts, size_t Ntot, error("Collected the wrong number of dm particles (%zu vs. %zu expected)", count, Ndm); } + +/** + * @brief Verify the io parameter file + * + * @param params The #swift_params + * @param N_total The total number of each particle type. 
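The collectgroup1 and mpicollectgroup1 structures above switch their update counters from size_t to long long; this is what forces the later format-string changes in engine.c from %zu/%ld to %lld, since printing an integer with a mismatched conversion specifier is undefined behaviour. A minimal stand-alone illustration (not SWIFT code):

#include <stdio.h>

int main(void) {
  long long updates = 123456789LL; /* matches %lld */
  size_t n_cells = 42;             /* matches %zu  */
  printf("%12lld updates in %zu cells\n", updates, n_cells);
  return 0;
}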
+ */ +void io_check_output_fields(const struct swift_params* params, + const long long N_total[3]) { + + /* Create some fake particles as arguments for the writing routines */ + struct part p; + struct xpart xp; + struct spart sp; + struct gpart gp; + + /* Copy N_total to array with length == 6 */ + const long long nr_total[swift_type_count] = {N_total[0], N_total[1], 0, + 0, N_total[2], 0}; + + /* Loop over all particle types to check the fields */ + for (int ptype = 0; ptype < swift_type_count; ptype++) { + + int num_fields = 0; + struct io_props list[100]; + + /* Don't do anything if no particle of this kind */ + if (nr_total[ptype] == 0) continue; + + /* Gather particle fields from the particle structures */ + switch (ptype) { + + case swift_type_gas: + hydro_write_particles(&p, &xp, list, &num_fields); + num_fields += chemistry_write_particles(&p, list + num_fields); + break; + + case swift_type_dark_matter: + darkmatter_write_particles(&gp, list, &num_fields); + break; + + case swift_type_star: + star_write_particles(&sp, list, &num_fields); + break; + + default: + error("Particle Type %d not yet supported. Aborting", ptype); + } + + /* loop over each parameter */ + for (int param_id = 0; param_id < params->paramCount; param_id++) { + const char* param_name = params->data[param_id].name; + + char section_name[PARSER_MAX_LINE_SIZE]; + + /* Skip if wrong section */ + sprintf(section_name, "SelectOutput:"); + if (strstr(param_name, section_name) == NULL) continue; + + /* Skip if wrong particle type */ + sprintf(section_name, "_%s", part_type_names[ptype]); + if (strstr(param_name, section_name) == NULL) continue; + + int found = 0; + + /* loop over each possible output field */ + for (int field_id = 0; field_id < num_fields; field_id++) { + char field_name[PARSER_MAX_LINE_SIZE]; + sprintf(field_name, "SelectOutput:%s_%s", list[field_id].name, + part_type_names[ptype]); + + if (strcmp(param_name, field_name) == 0) { + found = 1; + /* check if correct input */ + int retParam = 0; + char str[PARSER_MAX_LINE_SIZE]; + sscanf(params->data[param_id].value, "%d%s", &retParam, str); + + /* Check that we have a 0 or 1 */ + if (retParam != 0 && retParam != 1) + message( + "WARNING: Unexpected input for %s. Received %i but expected 0 or " + "1. ", + field_name, retParam); + + /* Found it, so move to the next one.
*/ + break; + } + } + if (!found) + message( + "WARNING: Trying to dump particle field '%s' (read from '%s') that " + "does not exist.", + param_name, params->fileName); + } + } +} + +/** + * @brief Write the output field parameter file + * + * @param filename The file to write + */ +void io_write_output_field_parameter(const char* filename) { + + FILE* file = fopen(filename, "w"); + if (file == NULL) error("Error opening file '%s'", filename); + + /* Loop over all particle types */ + fprintf(file, "SelectOutput:\n"); + for (int ptype = 0; ptype < swift_type_count; ptype++) { + + int num_fields = 0; + struct io_props list[100]; + + /* Write particle fields from the particle structure */ + switch (ptype) { + + case swift_type_gas: + hydro_write_particles(NULL, NULL, list, &num_fields); + num_fields += chemistry_write_particles(NULL, list + num_fields); + break; + + case swift_type_dark_matter: + darkmatter_write_particles(NULL, list, &num_fields); + break; + + case swift_type_star: + star_write_particles(NULL, list, &num_fields); + break; + + default: + break; + } + + if (num_fields == 0) continue; + + /* Output a header for that particle type */ + fprintf(file, " # Particle Type %s\n", part_type_names[ptype]); + + /* Write all the fields of this particle type */ + for (int i = 0; i < num_fields; ++i) + fprintf(file, " %s_%s: 1\n", list[i].name, part_type_names[ptype]); + + fprintf(file, "\n"); + } + + fclose(file); + + printf( + "List of valid output fields for the particles in snapshots dumped in " + "'%s'.\n", + filename); +} diff --git a/src/common_io.h b/src/common_io.h index d9e676db934b58ee476f18894acf55c4d38344f9..f26a635a66f40424984238e586fcdf5bc752fc99 100644 --- a/src/common_io.h +++ b/src/common_io.h @@ -100,4 +100,9 @@ void io_duplicate_star_gparts(struct threadpool* tp, struct spart* const sparts, struct gpart* const gparts, size_t Nstars, size_t Ndm); +void io_check_output_fields(const struct swift_params* params, + const long long N_total[3]); + +void io_write_output_field_parameter(const char* filename); + #endif /* SWIFT_COMMON_IO_H */ diff --git a/src/cooling.c b/src/cooling.c index 57d1928a5d59ac2ff46c6cd20a45d69dec25ec60..154b859f74402d9e9a8adf1fb6c796b5195b8cd1 100644 --- a/src/cooling.c +++ b/src/cooling.c @@ -34,7 +34,7 @@ * @param phys_const The physical constants in internal units. * @param cooling The cooling properties to initialize */ -void cooling_init(const struct swift_params* parameter_file, +void cooling_init(struct swift_params* parameter_file, const struct unit_system* us, const struct phys_const* phys_const, struct cooling_function_data* cooling) { diff --git a/src/cooling.h b/src/cooling.h index 9d1001d360a1816837381e9aa52b17ba47f50fce..0fb04b9e484d989e746a254fc1934dc20033fb09 100644 --- a/src/cooling.h +++ b/src/cooling.h @@ -43,7 +43,7 @@ #endif /* Common functions */ -void cooling_init(const struct swift_params* parameter_file, +void cooling_init(struct swift_params* parameter_file, const struct unit_system* us, const struct phys_const* phys_const, struct cooling_function_data* cooling); diff --git a/src/cooling/EAGLE/cooling.h b/src/cooling/EAGLE/cooling.h index bdf3801887256cb97ae1d5b6a3095250764aa822..f059d995c65cf791f0692ab5d8505f92c1a206ca 100644 --- a/src/cooling/EAGLE/cooling.h +++ b/src/cooling/EAGLE/cooling.h @@ -109,10 +109,11 @@ __attribute__((always_inline)) INLINE static float cooling_get_radiated_energy( * @param phys_const The physical constants in internal units.
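For reference, io_check_output_fields above matches parameter names of the form SelectOutput:<FieldName>_<ParticleTypeName> against the fields registered by the various *_write_particles routines, and io_write_output_field_parameter dumps every valid combination with the value 1. A parameter-file fragment in that format might look like the sketch below; the field and particle-type names are illustrative rather than an exhaustive list, and only the 0/1 validation is part of this patch:

SelectOutput:
  # Particle Type Gas
  Coordinates_Gas:    1
  Velocities_Gas:     1
  InternalEnergy_Gas: 0   # the checker above accepts 0 or 1 and warns otherwise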
* @param cooling The cooling properties to initialize */ -static INLINE void cooling_init_backend( - const struct swift_params* parameter_file, const struct unit_system* us, - const struct phys_const* phys_const, - struct cooling_function_data* cooling) {} +static INLINE void cooling_init_backend(struct swift_params* parameter_file, + const struct unit_system* us, + const struct phys_const* phys_const, + struct cooling_function_data* cooling) { +} /** * @brief Prints the properties of the cooling model to stdout. diff --git a/src/cooling/const_du/cooling.h b/src/cooling/const_du/cooling.h index ba8211174919419c37856dc1fcbdaa73b23e319e..b6fea7eea7b0fb208c4bffece425ec836d5df0c0 100644 --- a/src/cooling/const_du/cooling.h +++ b/src/cooling/const_du/cooling.h @@ -163,10 +163,10 @@ __attribute__((always_inline)) INLINE static float cooling_get_radiated_energy( * @param phys_const The physical constants in internal units. * @param cooling The cooling properties to initialize */ -static INLINE void cooling_init_backend( - const struct swift_params* parameter_file, const struct unit_system* us, - const struct phys_const* phys_const, - struct cooling_function_data* cooling) { +static INLINE void cooling_init_backend(struct swift_params* parameter_file, + const struct unit_system* us, + const struct phys_const* phys_const, + struct cooling_function_data* cooling) { cooling->cooling_rate = parser_get_param_double(parameter_file, "ConstCooling:cooling_rate"); diff --git a/src/cooling/const_lambda/cooling.h b/src/cooling/const_lambda/cooling.h index 43ca7ab75b0bce370d7405e52cea9b54335ae73c..f1a7abdbe14a39d98bbd01eb36ba870c8af0ee1a 100644 --- a/src/cooling/const_lambda/cooling.h +++ b/src/cooling/const_lambda/cooling.h @@ -171,10 +171,10 @@ __attribute__((always_inline)) INLINE static float cooling_get_radiated_energy( * @param phys_const The physical constants in internal units. 
* @param cooling The cooling properties to initialize */ -static INLINE void cooling_init_backend( - const struct swift_params* parameter_file, const struct unit_system* us, - const struct phys_const* phys_const, - struct cooling_function_data* cooling) { +static INLINE void cooling_init_backend(struct swift_params* parameter_file, + const struct unit_system* us, + const struct phys_const* phys_const, + struct cooling_function_data* cooling) { const double lambda_cgs = parser_get_param_double(parameter_file, "LambdaCooling:lambda_cgs"); diff --git a/src/cooling/grackle/cooling.h b/src/cooling/grackle/cooling.h index dd59e9af1431681a8c5bdc1e5cb0c22053063651..cb77b63294aacee425b917c1900eefd7ebfa5f34 100644 --- a/src/cooling/grackle/cooling.h +++ b/src/cooling/grackle/cooling.h @@ -771,7 +771,7 @@ __attribute__((always_inline)) INLINE static void cooling_init_grackle( * @param cooling The cooling properties to initialize */ __attribute__((always_inline)) INLINE static void cooling_init_backend( - const struct swift_params* parameter_file, const struct unit_system* us, + struct swift_params* parameter_file, const struct unit_system* us, const struct phys_const* phys_const, struct cooling_function_data* cooling) { diff --git a/src/cooling/grackle/cooling_io.h b/src/cooling/grackle/cooling_io.h index 5a6edb8f1c559a7b495351e256559f251b97c1cf..faf84cf97d8449d54f2727ec26b16a9d81d117c6 100644 --- a/src/cooling/grackle/cooling_io.h +++ b/src/cooling/grackle/cooling_io.h @@ -133,7 +133,7 @@ __attribute__((always_inline)) INLINE static int cooling_write_particles( * @param cooling The cooling properties to initialize */ __attribute__((always_inline)) INLINE static void cooling_read_parameters( - const struct swift_params* parameter_file, + struct swift_params* parameter_file, struct cooling_function_data* cooling) { parser_get_param_string(parameter_file, "GrackleCooling:CloudyTable", diff --git a/src/cooling/none/cooling.h b/src/cooling/none/cooling.h index 5081c7cbe6c4b5168da082ead80687226f9d0c16..0cc465adcdad8fe19afe4a9867e5d68a22ed9119 100644 --- a/src/cooling/none/cooling.h +++ b/src/cooling/none/cooling.h @@ -119,10 +119,11 @@ __attribute__((always_inline)) INLINE static float cooling_get_radiated_energy( * @param phys_const The physical constants in internal units. * @param cooling The cooling properties to initialize */ -static INLINE void cooling_init_backend( - const struct swift_params* parameter_file, const struct unit_system* us, - const struct phys_const* phys_const, - struct cooling_function_data* cooling) {} +static INLINE void cooling_init_backend(struct swift_params* parameter_file, + const struct unit_system* us, + const struct phys_const* phys_const, + struct cooling_function_data* cooling) { +} /** * @brief Prints the properties of the cooling model to stdout. diff --git a/src/cosmology.c b/src/cosmology.c index 4da9528784b1fb7fdb04761b77a3c0056a32f41a..09472fd77cd98185ff8799e79f687b6552bcd901 100644 --- a/src/cosmology.c +++ b/src/cosmology.c @@ -387,8 +387,7 @@ void cosmology_init_tables(struct cosmology *c) { * @param phys_const The physical constants in the current system of units. * @param c The #cosmology to initialise. 
*/ -void cosmology_init(const struct swift_params *params, - const struct unit_system *us, +void cosmology_init(struct swift_params *params, const struct unit_system *us, const struct phys_const *phys_const, struct cosmology *c) { /* Read in the cosmological parameters */ diff --git a/src/cosmology.h b/src/cosmology.h index 109b80a57d8dbc4eb942dd4ecbbc0db84198100b..ea992d12deffbe60154ea56ca5fff69a1b06587c 100644 --- a/src/cosmology.h +++ b/src/cosmology.h @@ -181,8 +181,7 @@ double cosmology_get_therm_kick_factor(const struct cosmology *cosmo, double cosmology_get_delta_time(const struct cosmology *c, double a1, double a2); -void cosmology_init(const struct swift_params *params, - const struct unit_system *us, +void cosmology_init(struct swift_params *params, const struct unit_system *us, const struct phys_const *phys_const, struct cosmology *c); void cosmology_init_no_cosmo(struct cosmology *c); diff --git a/src/cycle.h b/src/cycle.h index f220ecd120b14db0a8cdaf5d1105be4bd0e70831..842510e066e2f6f94e736851bf636c9a73e4f25f 100644 --- a/src/cycle.h +++ b/src/cycle.h @@ -519,8 +519,19 @@ INLINE_ELAPSED(inline) #define HAVE_TICK_COUNTER #endif +#if defined(HAVE_ARMV7A_PMCCNTR) +typedef uint64_t ticks; +static inline ticks getticks(void) { + uint32_t r; + asm volatile("mrc p15, 0, %0, c9, c13, 0" : "=r"(r)); + return r; +} +INLINE_ELAPSED(inline) +#define HAVE_TICK_COUNTER +#endif + #if defined(__aarch64__) && defined(HAVE_ARMV8_CNTVCT_EL0) && \ - !defined(HAVE_ARMV8CC) + !defined(HAVE_ARMV8_PMCCNTR_EL0) typedef uint64_t ticks; static inline ticks getticks(void) { uint64_t Rt; @@ -531,7 +542,7 @@ INLINE_ELAPSED(inline) #define HAVE_TICK_COUNTER #endif -#if defined(__aarch64__) && defined(HAVE_ARMV8CC) +#if defined(__aarch64__) && defined(HAVE_ARMV8_PMCCNTR_EL0) typedef uint64_t ticks; static inline ticks getticks(void) { uint64_t cc = 0; diff --git a/src/debug.c b/src/debug.c index 93d14952f523be5f1d1fa90484e9e7951f8e3f6e..05c21de0a73bba3a5e867a4265de0a5c14736a14 100644 --- a/src/debug.c +++ b/src/debug.c @@ -642,7 +642,7 @@ void getProcMemUse(long *size, long *resident, long *share, long *trs, /** * @brief Print the current memory use of the process. A la "top". 
*/ -void printProcMemUse() { +void printProcMemUse(void) { long size; long resident; long share; diff --git a/src/debug.h b/src/debug.h index 1e482c05c5af2dfebff1a254018fb1802df6cc5d..c9d65ad06cf5307a5fd8596c9c5b6c8b83cb6d9e 100644 --- a/src/debug.h +++ b/src/debug.h @@ -51,5 +51,5 @@ void dumpCellRanks(const char *prefix, struct cell *cells_top, int nr_cells); void getProcMemUse(long *size, long *resident, long *share, long *trs, long *lrs, long *drs, long *dt); -void printProcMemUse(); +void printProcMemUse(void); #endif /* SWIFT_DEBUG_H */ diff --git a/src/engine.c b/src/engine.c index ffa24e306024f2886c7766352e2212a55896dc75..5f05225522e69a5d754f3d056186aa4be70bb4a8 100644 --- a/src/engine.c +++ b/src/engine.c @@ -3854,6 +3854,11 @@ void engine_rebuild(struct engine *e, int clean_smoothing_length_values) { /* Print the status of the system */ if (e->verbose) engine_print_task_counts(e); + /* Clear the counters of updates since the last rebuild */ + e->updates_since_rebuild = 0; + e->g_updates_since_rebuild = 0; + e->s_updates_since_rebuild = 0; + /* Flag that a rebuild has taken place */ e->step_props |= engine_step_prop_rebuild; @@ -4108,13 +4113,13 @@ void engine_collect_end_of_step(struct engine *e, int apply) { MPI_COMM_WORLD) != MPI_SUCCESS) error("Failed to aggregate particle counts."); if (in_ll[0] != (long long)e->collect_group1.updates) - error("Failed to get same updates, is %lld, should be %ld", in_ll[0], + error("Failed to get same updates, is %lld, should be %lld", in_ll[0], e->collect_group1.updates); if (in_ll[1] != (long long)e->collect_group1.g_updates) - error("Failed to get same g_updates, is %lld, should be %ld", in_ll[1], + error("Failed to get same g_updates, is %lld, should be %lld", in_ll[1], e->collect_group1.g_updates); if (in_ll[2] != (long long)e->collect_group1.s_updates) - error("Failed to get same s_updates, is %lld, should be %ld", in_ll[2], + error("Failed to get same s_updates, is %lld, should be %lld", in_ll[2], e->collect_group1.s_updates); int buff = 0; @@ -4550,14 +4555,15 @@ void engine_step(struct engine *e) { if (e->nodeID == 0) { /* Print some information to the screen */ - printf(" %6d %14e %14e %10.5f %14e %4d %4d %12zu %12zu %12zu %21.3f %6d\n", - e->step, e->time, e->cosmology->a, e->cosmology->z, e->time_step, - e->min_active_bin, e->max_active_bin, e->updates, e->g_updates, - e->s_updates, e->wallclock_time, e->step_props); + printf( + " %6d %14e %14e %10.5f %14e %4d %4d %12lld %12lld %12lld %21.3f %6d\n", + e->step, e->time, e->cosmology->a, e->cosmology->z, e->time_step, + e->min_active_bin, e->max_active_bin, e->updates, e->g_updates, + e->s_updates, e->wallclock_time, e->step_props); fflush(stdout); fprintf(e->file_timesteps, - " %6d %14e %14e %14e %4d %4d %12zu %12zu %12zu %21.3f %6d\n", + " %6d %14e %14e %14e %4d %4d %12lld %12lld %12lld %21.3f %6d\n", e->step, e->time, e->cosmology->a, e->time_step, e->min_active_bin, e->max_active_bin, e->updates, e->g_updates, e->s_updates, e->wallclock_time, e->step_props); @@ -4644,11 +4650,6 @@ void engine_step(struct engine *e) { gravity_exact_force_check(e->s, e, 1e-1); #endif - /* Let's trigger a non-SPH rebuild every-so-often for good measure */ - if (!(e->policy & engine_policy_hydro) && // MATTHIEU improve this - (e->policy & engine_policy_self_gravity) && e->step % 20 == 0) - e->forcerebuild = 1; - /* Collect the values of rebuild from all nodes and recover the (integer) * end of the next time-step. 
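The recurring change of empty parameter lists to (void) in clocks.c/h, collectgroup.c/h, debug.c/h and engine.c is not cosmetic: in C, a declaration such as const char *clocks_getunit(); leaves the parameter list unspecified, so the compiler cannot diagnose a call made with stray arguments, whereas (void) declares a genuine zero-argument function. A two-line illustration (not SWIFT code):

const char *getunit_old();     /* unspecified parameters: getunit_old(42) still compiles       */
const char *getunit_new(void); /* explicitly no parameters: getunit_new(42) is a compile error */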
Do these together to reduce the collective MPI * calls per step, but some of the gathered information is not applied just @@ -4656,6 +4657,17 @@ void engine_step(struct engine *e) { engine_collect_end_of_step(e, 0); e->forcerebuild = e->collect_group1.forcerebuild; + /* Update the counters */ + e->updates_since_rebuild += e->collect_group1.updates; + e->g_updates_since_rebuild += e->collect_group1.g_updates; + e->s_updates_since_rebuild += e->collect_group1.s_updates; + + /* Trigger a tree-rebuild if we passed the frequency threshold */ + if ((e->policy & engine_policy_self_gravity) && + ((double)e->g_updates_since_rebuild > + ((double)e->total_nr_gparts) * e->gravity_properties->rebuild_frequency)) + e->forcerebuild = 1; + /* Save some statistics ? */ if (e->ti_end_min >= e->ti_next_stats && e->ti_next_stats > 0) e->save_stats = 1; @@ -5353,7 +5365,7 @@ void engine_dump_snapshot(struct engine *e) { /** * @brief Returns the initial affinity the main thread is using. */ -static cpu_set_t *engine_entry_affinity() { +static cpu_set_t *engine_entry_affinity(void) { static int use_entry_affinity = 0; static cpu_set_t entry_affinity; @@ -5372,7 +5384,7 @@ static cpu_set_t *engine_entry_affinity() { * @brief Ensure the NUMA node on which we initialise (first touch) everything * doesn't change before engine_init allocates NUMA-local workers. */ -void engine_pin() { +void engine_pin(void) { #ifdef HAVE_SETAFFINITY cpu_set_t *entry_affinity = engine_entry_affinity(); @@ -5394,7 +5406,7 @@ void engine_pin() { /** * @brief Unpins the main thread. */ -void engine_unpin() { +void engine_unpin(void) { #ifdef HAVE_SETAFFINITY pthread_t main_thread = pthread_self(); cpu_set_t *entry_affinity = engine_entry_affinity(); @@ -5430,10 +5442,9 @@ void engine_unpin() { * @param chemistry The chemistry information. * @param sourceterms The properties of the source terms function. */ -void engine_init(struct engine *e, struct space *s, - const struct swift_params *params, long long Ngas, - long long Ngparts, long long Nstars, int policy, int verbose, - struct repartition *reparttype, +void engine_init(struct engine *e, struct space *s, struct swift_params *params, + long long Ngas, long long Ngparts, long long Nstars, + int policy, int verbose, struct repartition *reparttype, const struct unit_system *internal_units, const struct phys_const *physical_constants, struct cosmology *cosmo, const struct hydro_props *hydro, @@ -5561,10 +5572,9 @@ void engine_init(struct engine *e, struct space *s, * @param verbose Is this #engine talkative ? * @param restart_file The name of our restart file. */ -void engine_config(int restart, struct engine *e, - const struct swift_params *params, int nr_nodes, int nodeID, - int nr_threads, int with_aff, int verbose, - const char *restart_file) { +void engine_config(int restart, struct engine *e, struct swift_params *params, + int nr_nodes, int nodeID, int nr_threads, int with_aff, + int verbose, const char *restart_file) { /* Store the values and initialise global fields. 
*/ e->nodeID = nodeID; @@ -6238,8 +6248,12 @@ void engine_recompute_displacement_constraint(struct engine *e) { /* Get the counts of each particle types */ const long long total_nr_dm_gparts = e->total_nr_gparts - e->total_nr_parts - e->total_nr_sparts; - float count_parts[swift_type_count] = { - e->total_nr_parts, total_nr_dm_gparts, 0.f, 0.f, e->total_nr_sparts, 0.f}; + float count_parts[swift_type_count] = {(float)e->total_nr_parts, + (float)total_nr_dm_gparts, + 0.f, + 0.f, + (float)e->total_nr_sparts, + 0.f}; /* Count of particles for the two species */ const float N_dm = count_parts[1]; diff --git a/src/engine.h b/src/engine.h index 18c95b3c39f87df863cc6e17f5a005346a335dae..044bf6c895b50855c005e95082ceb2bbfdcbf55b 100644 --- a/src/engine.h +++ b/src/engine.h @@ -187,7 +187,12 @@ struct engine { integertime_t ti_beg_max; /* Number of particles updated in the previous step */ - size_t updates, g_updates, s_updates; + long long updates, g_updates, s_updates; + + /* Number of updates since the last rebuild */ + long long updates_since_rebuild; + long long g_updates_since_rebuild; + long long s_updates_since_rebuild; /* Properties of the previous step */ int step_props; @@ -311,7 +316,7 @@ struct engine { struct sourceterms *sourceterms; /* The (parsed) parameter file */ - const struct swift_params *parameter_file; + struct swift_params *parameter_file; /* Temporary struct to hold a group of deferable properties (in MPI mode * these are reduced together, but may not be required just yet). */ @@ -350,10 +355,9 @@ void engine_drift_top_multipoles(struct engine *e); void engine_reconstruct_multipoles(struct engine *e); void engine_print_stats(struct engine *e); void engine_dump_snapshot(struct engine *e); -void engine_init(struct engine *e, struct space *s, - const struct swift_params *params, long long Ngas, - long long Ngparts, long long Nstars, int policy, int verbose, - struct repartition *reparttype, +void engine_init(struct engine *e, struct space *s, struct swift_params *params, + long long Ngas, long long Ngparts, long long Nstars, + int policy, int verbose, struct repartition *reparttype, const struct unit_system *internal_units, const struct phys_const *physical_constants, struct cosmology *cosmo, const struct hydro_props *hydro, @@ -362,10 +366,9 @@ void engine_init(struct engine *e, struct space *s, const struct cooling_function_data *cooling_func, const struct chemistry_global_data *chemistry, struct sourceterms *sourceterms); -void engine_config(int restart, struct engine *e, - const struct swift_params *params, int nr_nodes, int nodeID, - int nr_threads, int with_aff, int verbose, - const char *restart_file); +void engine_config(int restart, struct engine *e, struct swift_params *params, + int nr_nodes, int nodeID, int nr_threads, int with_aff, + int verbose, const char *restart_file); void engine_launch(struct engine *e); void engine_prepare(struct engine *e); void engine_init_particles(struct engine *e, int flag_entropy_ICs, @@ -385,8 +388,8 @@ void engine_makeproxies(struct engine *e); void engine_redistribute(struct engine *e); void engine_print_policy(struct engine *e); int engine_is_done(struct engine *e); -void engine_pin(); -void engine_unpin(); +void engine_pin(void); +void engine_unpin(void); void engine_clean(struct engine *e); int engine_estimate_nr_tasks(struct engine *e); diff --git a/src/equation_of_state/ideal_gas/equation_of_state.h b/src/equation_of_state/ideal_gas/equation_of_state.h index 
36b3511558e1c38c9135c689f45eea17220be053..0d57f6a5ce51091f82e79b009b0e85f1368d51cf 100644 --- a/src/equation_of_state/ideal_gas/equation_of_state.h +++ b/src/equation_of_state/ideal_gas/equation_of_state.h @@ -178,7 +178,7 @@ __attribute__((always_inline)) INLINE static float gas_soundspeed_from_pressure( */ __attribute__((always_inline)) INLINE static void eos_init( struct eos_parameters *e, const struct phys_const *phys_const, - const struct unit_system *us, const struct swift_params *params) {} + const struct unit_system *us, struct swift_params *params) {} /** * @brief Print the equation of state * diff --git a/src/equation_of_state/isothermal/equation_of_state.h b/src/equation_of_state/isothermal/equation_of_state.h index c7afac6caaacf354f8067218fbe2013b9287309a..540bf073cee106b91e2a9f4ecedb1b20238856fd 100644 --- a/src/equation_of_state/isothermal/equation_of_state.h +++ b/src/equation_of_state/isothermal/equation_of_state.h @@ -195,7 +195,7 @@ __attribute__((always_inline)) INLINE static float gas_soundspeed_from_pressure( */ __attribute__((always_inline)) INLINE static void eos_init( struct eos_parameters *e, const struct phys_const *phys_const, - const struct unit_system *us, const struct swift_params *params) { + const struct unit_system *us, struct swift_params *params) { e->isothermal_internal_energy = parser_get_param_float(params, "EoS:isothermal_internal_energy"); diff --git a/src/equation_of_state/planetary/equation_of_state.h b/src/equation_of_state/planetary/equation_of_state.h index d7a7f64f87bb126094163ada16dbd05e48d8cf8d..61e23dc0b4eb82e9ae5c0869f7a10dfff97fc45e 100644 --- a/src/equation_of_state/planetary/equation_of_state.h +++ b/src/equation_of_state/planetary/equation_of_state.h @@ -54,7 +54,7 @@ enum eos_planetary_type_id { eos_planetary_type_HM80 = 2, eos_planetary_type_ANEOS = 3, eos_planetary_type_SESAME = 4, -} __attribute__((packed)); +}; /** * @brief Minor type for the planetary equation of state. @@ -104,7 +104,7 @@ enum eos_planetary_material_id { /*! SESAME iron */ eos_planetary_id_SESAME_iron = eos_planetary_type_SESAME * eos_planetary_type_factor, -} __attribute__((packed)); +}; /* Individual EOS function headers. 
*/ #include "aneos.h" @@ -132,7 +132,8 @@ __attribute__((always_inline)) INLINE static float gas_internal_energy_from_entropy(float density, float entropy, enum eos_planetary_material_id mat_id) { - const enum eos_planetary_type_id type = mat_id / eos_planetary_type_factor; + const enum eos_planetary_type_id type = + (enum eos_planetary_type_id)(mat_id / eos_planetary_type_factor); /* Select the material base type */ switch (type) { @@ -241,7 +242,8 @@ gas_internal_energy_from_entropy(float density, float entropy, __attribute__((always_inline)) INLINE static float gas_pressure_from_entropy( float density, float entropy, enum eos_planetary_material_id mat_id) { - const enum eos_planetary_type_id type = mat_id / eos_planetary_type_factor; + const enum eos_planetary_type_id type = + (enum eos_planetary_type_id)(mat_id / eos_planetary_type_factor); /* Select the material base type */ switch (type) { @@ -344,7 +346,8 @@ __attribute__((always_inline)) INLINE static float gas_pressure_from_entropy( __attribute__((always_inline)) INLINE static float gas_entropy_from_pressure( float density, float P, enum eos_planetary_material_id mat_id) { - const enum eos_planetary_type_id type = mat_id / eos_planetary_type_factor; + const enum eos_planetary_type_id type = + (enum eos_planetary_type_id)(mat_id / eos_planetary_type_factor); /* Select the material base type */ switch (type) { @@ -445,7 +448,8 @@ __attribute__((always_inline)) INLINE static float gas_entropy_from_pressure( __attribute__((always_inline)) INLINE static float gas_soundspeed_from_entropy( float density, float entropy, enum eos_planetary_material_id mat_id) { - const enum eos_planetary_type_id type = mat_id / eos_planetary_type_factor; + const enum eos_planetary_type_id type = + (enum eos_planetary_type_id)(mat_id / eos_planetary_type_factor); /* Select the material base type */ switch (type) { @@ -549,8 +553,8 @@ __attribute__((always_inline)) INLINE static float gas_soundspeed_from_entropy( __attribute__((always_inline)) INLINE static float gas_entropy_from_internal_energy(float density, float u, enum eos_planetary_material_id mat_id) { - - const enum eos_planetary_type_id type = mat_id / eos_planetary_type_factor; + const enum eos_planetary_type_id type = + (enum eos_planetary_type_id)(mat_id / eos_planetary_type_factor); /* Select the material base type */ switch (type) { @@ -654,7 +658,8 @@ __attribute__((always_inline)) INLINE static float gas_pressure_from_internal_energy(float density, float u, enum eos_planetary_material_id mat_id) { - const enum eos_planetary_type_id type = mat_id / eos_planetary_type_factor; + const enum eos_planetary_type_id type = + (enum eos_planetary_type_id)(mat_id / eos_planetary_type_factor); /* Select the material base type */ switch (type) { @@ -762,7 +767,8 @@ __attribute__((always_inline)) INLINE static float gas_internal_energy_from_pressure(float density, float P, enum eos_planetary_material_id mat_id) { - const enum eos_planetary_type_id type = mat_id / eos_planetary_type_factor; + const enum eos_planetary_type_id type = + (enum eos_planetary_type_id)(mat_id / eos_planetary_type_factor); /* Select the material base type */ switch (type) { @@ -867,7 +873,8 @@ __attribute__((always_inline)) INLINE static float gas_soundspeed_from_internal_energy(float density, float u, enum eos_planetary_material_id mat_id) { - const enum eos_planetary_type_id type = mat_id / eos_planetary_type_factor; + const enum eos_planetary_type_id type = + (enum eos_planetary_type_id)(mat_id / eos_planetary_type_factor); /* 
Select the material base type */ switch (type) { @@ -975,7 +982,8 @@ gas_soundspeed_from_internal_energy(float density, float u, __attribute__((always_inline)) INLINE static float gas_soundspeed_from_pressure( float density, float P, enum eos_planetary_material_id mat_id) { - const enum eos_planetary_type_id type = mat_id / eos_planetary_type_factor; + const enum eos_planetary_type_id type = + (enum eos_planetary_type_id)(mat_id / eos_planetary_type_factor); /* Select the material base type */ switch (type) { @@ -1075,7 +1083,7 @@ __attribute__((always_inline)) INLINE static float gas_soundspeed_from_pressure( */ __attribute__((always_inline)) INLINE static void eos_init( struct eos_parameters *e, const struct phys_const *phys_const, - const struct unit_system *us, const struct swift_params *params) { + const struct unit_system *us, struct swift_params *params) { // Table file names char HM80_HHe_table_file[PARSER_MAX_LINE_SIZE]; diff --git a/src/gravity.h b/src/gravity.h index 073b0b053275491c555e28a7fe91e6ce4bf64a43..6497de8294dfa3f207332ff696ddb992c875eb28 100644 --- a/src/gravity.h +++ b/src/gravity.h @@ -36,7 +36,7 @@ struct engine; struct space; void gravity_exact_force_ewald_init(double boxSize); -void gravity_exact_force_ewald_free(); +void gravity_exact_force_ewald_free(void); void gravity_exact_force_ewald_evaluate(double rx, double ry, double rz, double corr_f[3], double *corr_p); void gravity_exact_force_compute(struct space *s, const struct engine *e); diff --git a/src/gravity/Default/gravity_io.h b/src/gravity/Default/gravity_io.h index 7b8ec2c8fef04cc4a4fc6836d8bb895b24d3c41f..7f453179641e2ba16b30e3172ddd7853245a1d2f 100644 --- a/src/gravity/Default/gravity_io.h +++ b/src/gravity/Default/gravity_io.h @@ -21,8 +21,8 @@ #include "io_properties.h" -void convert_gpart_pos(const struct engine* e, const struct gpart* gp, - double* ret) { +INLINE static void convert_gpart_pos(const struct engine* e, + const struct gpart* gp, double* ret) { if (e->s->periodic) { ret[0] = box_wrap(gp->x[0], 0.0, e->s->dim[0]); @@ -35,8 +35,8 @@ void convert_gpart_pos(const struct engine* e, const struct gpart* gp, } } -void convert_gpart_vel(const struct engine* e, const struct gpart* gp, - float* ret) { +INLINE static void convert_gpart_vel(const struct engine* e, + const struct gpart* gp, float* ret) { const int with_cosmology = (e->policy & engine_policy_cosmology); const struct cosmology* cosmo = e->cosmology; @@ -62,9 +62,9 @@ void convert_gpart_vel(const struct engine* e, const struct gpart* gp, ret[2] = gp->v_full[2] + gp->a_grav[2] * dt_kick_grav; /* Conversion from internal units to peculiar velocities */ - ret[0] *= cosmo->a2_inv; - ret[1] *= cosmo->a2_inv; - ret[2] *= cosmo->a2_inv; + ret[0] *= cosmo->a_inv; + ret[1] *= cosmo->a_inv; + ret[2] *= cosmo->a_inv; } /** @@ -74,8 +74,9 @@ void convert_gpart_vel(const struct engine* e, const struct gpart* gp, * @param list The list of i/o properties to read. * @param num_fields The number of i/o fields to read. */ -void darkmatter_read_particles(struct gpart* gparts, struct io_props* list, - int* num_fields) { +INLINE static void darkmatter_read_particles(struct gpart* gparts, + struct io_props* list, + int* num_fields) { /* Say how much we want to read */ *num_fields = 4; @@ -98,8 +99,9 @@ void darkmatter_read_particles(struct gpart* gparts, struct io_props* list, * @param list The list of i/o properties to write. * @param num_fields The number of i/o fields to write. 
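The planetary equation-of-state hunks drop __attribute__((packed)) from the enums and add explicit (enum eos_planetary_type_id) casts because expressions like mat_id / eos_planetary_type_factor are computed as int, and initialising an enum from an int can draw conversion warnings on some compilers and is ill-formed in C++ without a cast. A minimal illustration, not SWIFT code:

enum my_type { my_type_a = 0, my_type_b = 1, my_type_c = 2 };

int main(void) {
  const int raw = 5 / 2;                    /* plain integer arithmetic        */
  const enum my_type t = (enum my_type)raw; /* explicit, documented conversion */
  return (int)t;
}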
*/ -void darkmatter_write_particles(const struct gpart* gparts, - struct io_props* list, int* num_fields) { +INLINE static void darkmatter_write_particles(const struct gpart* gparts, + struct io_props* list, + int* num_fields) { /* Say how much we want to write */ *num_fields = 5; diff --git a/src/gravity_properties.c b/src/gravity_properties.c index 928765cc14f856929cb30e936c6044d87b70186a..481ed2cd5e35382cd85d9bec4f94c109738ffb70 100644 --- a/src/gravity_properties.c +++ b/src/gravity_properties.c @@ -35,11 +35,19 @@ #define gravity_props_default_a_smooth 1.25f #define gravity_props_default_r_cut_max 4.5f #define gravity_props_default_r_cut_min 0.1f +#define gravity_props_default_rebuild_frequency 0.01f -void gravity_props_init(struct gravity_props *p, - const struct swift_params *params, +void gravity_props_init(struct gravity_props *p, struct swift_params *params, const struct cosmology *cosmo) { + /* Tree updates */ + p->rebuild_frequency = + parser_get_opt_param_float(params, "Gravity:rebuild_frequency", + gravity_props_default_rebuild_frequency); + + if (p->rebuild_frequency < 0.f || p->rebuild_frequency > 1.f) + error("Invalid tree rebuild frequency. Must be in [0., 1.]"); + /* Tree-PM parameters */ p->a_smooth = parser_get_opt_param_float(params, "Gravity:a_smooth", gravity_props_default_a_smooth); @@ -116,12 +124,16 @@ void gravity_props_print(const struct gravity_props *p) { message("Self-gravity tree cut-off: r_cut_max=%f", p->r_cut_max); message("Self-gravity truncation cut-off: r_cut_min=%f", p->r_cut_min); + + message("Self-gravity tree update frequency: f=%f", p->rebuild_frequency); } #if defined(HAVE_HDF5) void gravity_props_print_snapshot(hid_t h_grpgrav, const struct gravity_props *p) { + io_write_attribute_f(h_grpgrav, "Tree update frequency", + p->rebuild_frequency); io_write_attribute_f(h_grpgrav, "Time integration eta", p->eta); io_write_attribute_f( h_grpgrav, "Comoving softening length", diff --git a/src/gravity_properties.h b/src/gravity_properties.h index f36a39a6e187b8b397a5f597692f5a37c982aa7a..7faaa88d5e82b7882404269009b2f0542896eee1 100644 --- a/src/gravity_properties.h +++ b/src/gravity_properties.h @@ -36,6 +36,9 @@ */ struct gravity_props { + /*! Frequency of tree-rebuild in units of #gpart updates. */ + float rebuild_frequency; + /*! Mesh smoothing scale in units of top-level cell size */ float a_smooth; @@ -79,8 +82,7 @@ struct gravity_props { }; void gravity_props_print(const struct gravity_props *p); -void gravity_props_init(struct gravity_props *p, - const struct swift_params *params, +void gravity_props_init(struct gravity_props *p, struct swift_params *params, const struct cosmology *cosmo); void gravity_update(struct gravity_props *p, const struct cosmology *cosmo); diff --git a/src/hydro/Default/hydro.h b/src/hydro/Default/hydro.h index b1a999b63143437cab8518cfdd96885533d7401e..2c3a9c46f0500fb20aa3cfa2e5feb682b3dcec63 100644 --- a/src/hydro/Default/hydro.h +++ b/src/hydro/Default/hydro.h @@ -339,7 +339,7 @@ __attribute__((always_inline)) INLINE static void hydro_part_has_no_neighbours( /* Re-set problematic values */ p->rho = p->mass * kernel_root * h_inv_dim; - p->density.wcount = kernel_root * kernel_norm * h_inv_dim; + p->density.wcount = kernel_root * h_inv_dim; p->rho_dh = 0.f; p->density.wcount_dh = 0.f; p->density.div_v = 0.f; @@ -423,7 +423,7 @@ __attribute__((always_inline)) INLINE static void hydro_reset_acceleration( /* Reset the time derivatives. 
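The new Gravity:rebuild_frequency parameter read in gravity_props_init above (optional, default 0.01, must lie in [0, 1]) feeds the engine_step trigger added earlier in this diff: with, say, 10^6 gparts and the default value, a tree rebuild is forced once more than 10^4 gpart updates have accumulated since the last rebuild. In the YAML parameter file this could be set as follows (value illustrative):

Gravity:
  rebuild_frequency: 0.01   # force a tree rebuild after ~1% of gparts have been updated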
*/ p->force.u_dt = 0.0f; p->force.h_dt = 0.0f; - p->force.v_sig = 0.0f; + p->force.v_sig = p->force.soundspeed; } /** diff --git a/src/hydro/Default/hydro_io.h b/src/hydro/Default/hydro_io.h index 542cc21d41741a203adacb4560c6bd701e3af758..d47c96fbf32e1ee00346888aaf2e8afabc22abc3 100644 --- a/src/hydro/Default/hydro_io.h +++ b/src/hydro/Default/hydro_io.h @@ -31,8 +31,9 @@ * @param list The list of i/o properties to read. * @param num_fields The number of i/o fields to read. */ -void hydro_read_particles(struct part* parts, struct io_props* list, - int* num_fields) { +INLINE static void hydro_read_particles(struct part* parts, + struct io_props* list, + int* num_fields) { *num_fields = 8; @@ -55,8 +56,9 @@ void hydro_read_particles(struct part* parts, struct io_props* list, UNIT_CONV_DENSITY, parts, rho); } -void convert_part_pos(const struct engine* e, const struct part* p, - const struct xpart* xp, double* ret) { +INLINE static void convert_part_pos(const struct engine* e, + const struct part* p, + const struct xpart* xp, double* ret) { if (e->s->periodic) { ret[0] = box_wrap(p->x[0], 0.0, e->s->dim[0]); @@ -69,8 +71,9 @@ void convert_part_pos(const struct engine* e, const struct part* p, } } -void convert_part_vel(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_part_vel(const struct engine* e, + const struct part* p, + const struct xpart* xp, float* ret) { const int with_cosmology = (e->policy & engine_policy_cosmology); const struct cosmology* cosmo = e->cosmology; @@ -98,13 +101,14 @@ void convert_part_vel(const struct engine* e, const struct part* p, hydro_get_drifted_velocities(p, xp, dt_kick_hydro, dt_kick_grav, ret); /* Conversion from internal units to peculiar velocities */ - ret[0] *= cosmo->a2_inv; - ret[1] *= cosmo->a2_inv; - ret[2] *= cosmo->a2_inv; + ret[0] *= cosmo->a_inv; + ret[1] *= cosmo->a_inv; + ret[2] *= cosmo->a_inv; } -void convert_part_potential(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_part_potential(const struct engine* e, + const struct part* p, + const struct xpart* xp, float* ret) { if (p->gpart != NULL) ret[0] = gravity_get_comoving_potential(p->gpart); @@ -119,8 +123,10 @@ void convert_part_potential(const struct engine* e, const struct part* p, * @param list The list of i/o properties to write. * @param num_fields The number of i/o fields to write. */ -void hydro_write_particles(const struct part* parts, const struct xpart* xparts, - struct io_props* list, int* num_fields) { +INLINE static void hydro_write_particles(const struct part* parts, + const struct xpart* xparts, + struct io_props* list, + int* num_fields) { *num_fields = 8; @@ -149,7 +155,7 @@ void hydro_write_particles(const struct part* parts, const struct xpart* xparts, * @brief Writes the current model of SPH to the file * @param h_grpsph The HDF5 group in which to write */ -void hydro_write_flavour(hid_t h_grpsph) { +INLINE static void hydro_write_flavour(hid_t h_grpsph) { /* Viscosity and thermal conduction */ io_write_attribute_s(h_grpsph, "Thermal Conductivity Model", @@ -178,6 +184,6 @@ void hydro_write_flavour(hid_t h_grpsph) { * * @return 1 if entropy is in 'internal energy', 0 otherwise. 
*/ -int writeEntropyFlag() { return 0; } +INLINE static int writeEntropyFlag(void) { return 0; } #endif /* SWIFT_DEFAULT_HYDRO_IO_H */ diff --git a/src/hydro/Gadget2/hydro.h b/src/hydro/Gadget2/hydro.h index bc06a24e2a8245556a1042f2459273b8d750489e..26e3bf97dd1924abbe7380d1eaadce75213344df 100644 --- a/src/hydro/Gadget2/hydro.h +++ b/src/hydro/Gadget2/hydro.h @@ -349,7 +349,7 @@ __attribute__((always_inline)) INLINE static void hydro_part_has_no_neighbours( /* Re-set problematic values */ p->rho = p->mass * kernel_root * h_inv_dim; - p->density.wcount = kernel_root * kernel_norm * h_inv_dim; + p->density.wcount = kernel_root * h_inv_dim; p->density.rho_dh = 0.f; p->density.wcount_dh = 0.f; p->density.div_v = 0.f; diff --git a/src/hydro/Gadget2/hydro_io.h b/src/hydro/Gadget2/hydro_io.h index 28c0eea4772f51fab35a08d43c0564472694eeeb..3f2af41dc7f0cc8f60992a15a0f09f3c90f764fe 100644 --- a/src/hydro/Gadget2/hydro_io.h +++ b/src/hydro/Gadget2/hydro_io.h @@ -31,8 +31,9 @@ * @param list The list of i/o properties to read. * @param num_fields The number of i/o fields to read. */ -void hydro_read_particles(struct part* parts, struct io_props* list, - int* num_fields) { +INLINE static void hydro_read_particles(struct part* parts, + struct io_props* list, + int* num_fields) { *num_fields = 8; @@ -55,20 +56,21 @@ void hydro_read_particles(struct part* parts, struct io_props* list, UNIT_CONV_DENSITY, parts, rho); } -void convert_part_u(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_part_u(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { ret[0] = hydro_get_comoving_internal_energy(p); } -void convert_part_P(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_part_P(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { ret[0] = hydro_get_comoving_pressure(p); } -void convert_part_pos(const struct engine* e, const struct part* p, - const struct xpart* xp, double* ret) { +INLINE static void convert_part_pos(const struct engine* e, + const struct part* p, + const struct xpart* xp, double* ret) { if (e->s->periodic) { ret[0] = box_wrap(p->x[0], 0.0, e->s->dim[0]); @@ -81,8 +83,9 @@ void convert_part_pos(const struct engine* e, const struct part* p, } } -void convert_part_vel(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_part_vel(const struct engine* e, + const struct part* p, + const struct xpart* xp, float* ret) { const int with_cosmology = (e->policy & engine_policy_cosmology); const struct cosmology* cosmo = e->cosmology; @@ -110,13 +113,14 @@ void convert_part_vel(const struct engine* e, const struct part* p, hydro_get_drifted_velocities(p, xp, dt_kick_hydro, dt_kick_grav, ret); /* Conversion from internal units to peculiar velocities */ - ret[0] *= cosmo->a2_inv; - ret[1] *= cosmo->a2_inv; - ret[2] *= cosmo->a2_inv; + ret[0] *= cosmo->a_inv; + ret[1] *= cosmo->a_inv; + ret[2] *= cosmo->a_inv; } -void convert_part_potential(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_part_potential(const struct engine* e, + const struct part* p, + const struct xpart* xp, float* ret) { if (p->gpart != NULL) ret[0] = gravity_get_comoving_potential(p->gpart); @@ -131,8 +135,10 @@ void convert_part_potential(const struct engine* e, const struct part* p, * @param list The list of i/o properties to 
write. * @param num_fields The number of i/o fields to write. */ -void hydro_write_particles(const struct part* parts, const struct xpart* xparts, - struct io_props* list, int* num_fields) { +INLINE static void hydro_write_particles(const struct part* parts, + const struct xpart* xparts, + struct io_props* list, + int* num_fields) { *num_fields = 10; @@ -185,7 +191,7 @@ void hydro_write_particles(const struct part* parts, const struct xpart* xparts, * @brief Writes the current model of SPH to the file * @param h_grpsph The HDF5 group in which to write */ -void hydro_write_flavour(hid_t h_grpsph) { +INLINE static void hydro_write_flavour(hid_t h_grpsph) { /* Viscosity and thermal conduction */ io_write_attribute_s(h_grpsph, "Thermal Conductivity Model", @@ -202,6 +208,6 @@ void hydro_write_flavour(hid_t h_grpsph) { * * @return 1 if entropy is in 'internal energy', 0 otherwise. */ -int writeEntropyFlag() { return 0; } +INLINE static int writeEntropyFlag(void) { return 0; } #endif /* SWIFT_GADGET2_HYDRO_IO_H */ diff --git a/src/hydro/GizmoMFM/hydro.h b/src/hydro/GizmoMFM/hydro.h index 9c4be6af359d7236e483b712065b357c6ed35402..1ab142740b641bdc9a0dff5a02b19479bae8257e 100644 --- a/src/hydro/GizmoMFM/hydro.h +++ b/src/hydro/GizmoMFM/hydro.h @@ -398,7 +398,7 @@ __attribute__((always_inline)) INLINE static void hydro_part_has_no_neighbours( const float h_inv_dim = pow_dimension(h_inv); /* 1/h^d */ /* Re-set problematic values */ - p->density.wcount = kernel_root * kernel_norm * h_inv_dim; + p->density.wcount = kernel_root * h_inv_dim; p->density.wcount_dh = 0.f; p->geometry.volume = 1.0f; p->geometry.matrix_E[0][0] = 1.0f; @@ -421,8 +421,7 @@ __attribute__((always_inline)) INLINE static void hydro_part_has_no_neighbours( /** * @brief Prepare a particle for the gradient calculation. * - * The name of this method is confusing, as this method is really called after - * the density loop and before the gradient loop. + * This function is called after the density loop and before the gradient loop. * * We use it to set the physical timestep for the particle and to copy the * actual velocities, which we need to boost our interfaces during the flux @@ -433,7 +432,7 @@ __attribute__((always_inline)) INLINE static void hydro_part_has_no_neighbours( * @param xp The extended particle data to act upon. * @param cosmo The cosmological model. */ -__attribute__((always_inline)) INLINE static void hydro_prepare_force( +__attribute__((always_inline)) INLINE static void hydro_prepare_gradient( struct part* restrict p, struct xpart* restrict xp, const struct cosmology* cosmo) { @@ -446,6 +445,18 @@ __attribute__((always_inline)) INLINE static void hydro_prepare_force( hydro_velocities_prepare_force(p, xp); } +/** + * @brief Resets the variables that are required for a gradient calculation. + * + * This function is called after hydro_prepare_gradient. + * + * @param p The particle to act upon. + * @param xp The extended particle data to act upon. + * @param cosmo The cosmological model. + */ +__attribute__((always_inline)) INLINE static void hydro_reset_gradient( + struct part* restrict p) {} + /** * @brief Finishes the gradient calculation. * @@ -461,6 +472,28 @@ __attribute__((always_inline)) INLINE static void hydro_end_gradient( hydro_gradients_finalize(p); +#ifdef GIZMO_LLOYD_ITERATION + /* reset the gradients to zero, as we don't want them */ + hydro_gradients_init(p); +#endif +} + +/** + * @brief Prepare a particle for the force calculation. 
+ * + * This function is called in the extra_ghost task to convert some quantities + * coming from the gradient loop over neighbours into quantities ready to be + * used in the force loop over neighbours. + * + * @param p The particle to act upon + * @param xp The extended particle data to act upon + * @param cosmo The current cosmological model. + */ +__attribute__((always_inline)) INLINE static void hydro_prepare_force( + struct part* restrict p, struct xpart* restrict xp, + const struct cosmology* cosmo) { + + /* Initialise values that are used in the force loop */ p->gravity.mflux[0] = 0.0f; p->gravity.mflux[1] = 0.0f; p->gravity.mflux[2] = 0.0f; @@ -470,11 +503,6 @@ __attribute__((always_inline)) INLINE static void hydro_end_gradient( p->conserved.flux.momentum[1] = 0.0f; p->conserved.flux.momentum[2] = 0.0f; p->conserved.flux.energy = 0.0f; - -#ifdef GIZMO_LLOYD_ITERATION - /* reset the gradients to zero, as we don't want them */ - hydro_gradients_init(p); -#endif } /** diff --git a/src/hydro/GizmoMFM/hydro_io.h b/src/hydro/GizmoMFM/hydro_io.h index 171132eacfcd43feeec57d3c16b5a458171b1d79..59d579f70cd4aedc728dbf42038eff78d4c507d5 100644 --- a/src/hydro/GizmoMFM/hydro_io.h +++ b/src/hydro/GizmoMFM/hydro_io.h @@ -40,8 +40,9 @@ * @param list The list of i/o properties to read. * @param num_fields The number of i/o fields to read. */ -void hydro_read_particles(struct part* parts, struct io_props* list, - int* num_fields) { +INLINE static void hydro_read_particles(struct part* parts, + struct io_props* list, + int* num_fields) { *num_fields = 8; @@ -72,8 +73,8 @@ void hydro_read_particles(struct part* parts, struct io_props* list, * @param p Particle. * @param ret (return) Internal energy of the particle */ -void convert_u(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_u(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { ret[0] = hydro_get_comoving_internal_energy(p); } @@ -85,8 +86,8 @@ void convert_u(const struct engine* e, const struct part* p, * @param p Particle. * @param ret (return) Entropic function of the particle */ -void convert_A(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_A(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { ret[0] = hydro_get_comoving_entropy(p); } @@ -97,8 +98,8 @@ void convert_A(const struct engine* e, const struct part* p, * @param p Particle. 
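Taken together, the GizmoMFM changes split the old hydro_prepare_force into separate gradient and force hooks. Reading the new doc comments, the per-particle ordering is now roughly as sketched below; this is inferred from the comments in this patch, not from the task scheduler itself:

/* Inferred per-particle call order (sketch, not SWIFT code):
 *   density loop  -> hydro_end_density -> hydro_prepare_gradient -> hydro_reset_gradient
 *   gradient loop -> hydro_end_gradient
 *   extra_ghost   -> hydro_prepare_force   (flux/mflux accumulators now reset here)
 *   force loop
 */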
* @return Total energy of the particle */ -void convert_Etot(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_Etot(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { #ifdef GIZMO_TOTAL_ENERGY ret[0] = p->conserved.energy; #else @@ -112,8 +113,9 @@ void convert_Etot(const struct engine* e, const struct part* p, #endif } -void convert_part_pos(const struct engine* e, const struct part* p, - const struct xpart* xp, double* ret) { +INLINE static void convert_part_pos(const struct engine* e, + const struct part* p, + const struct xpart* xp, double* ret) { if (e->s->periodic) { ret[0] = box_wrap(p->x[0], 0.0, e->s->dim[0]); @@ -126,8 +128,9 @@ void convert_part_pos(const struct engine* e, const struct part* p, } } -void convert_part_vel(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_part_vel(const struct engine* e, + const struct part* p, + const struct xpart* xp, float* ret) { const int with_cosmology = (e->policy & engine_policy_cosmology); const struct cosmology* cosmo = e->cosmology; @@ -155,13 +158,14 @@ void convert_part_vel(const struct engine* e, const struct part* p, hydro_get_drifted_velocities(p, xp, dt_kick_hydro, dt_kick_grav, ret); /* Conversion from internal units to peculiar velocities */ - ret[0] *= cosmo->a2_inv; - ret[1] *= cosmo->a2_inv; - ret[2] *= cosmo->a2_inv; + ret[0] *= cosmo->a_inv; + ret[1] *= cosmo->a_inv; + ret[2] *= cosmo->a_inv; } -void convert_part_potential(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_part_potential(const struct engine* e, + const struct part* p, + const struct xpart* xp, float* ret) { if (p->gpart != NULL) ret[0] = gravity_get_comoving_potential(p->gpart); @@ -176,8 +180,10 @@ void convert_part_potential(const struct engine* e, const struct part* p, * @param list The list of i/o properties to write. * @param num_fields The number of i/o fields to write. */ -void hydro_write_particles(const struct part* parts, const struct xpart* xparts, - struct io_props* list, int* num_fields) { +INLINE static void hydro_write_particles(const struct part* parts, + const struct xpart* xparts, + struct io_props* list, + int* num_fields) { *num_fields = 11; @@ -215,7 +221,7 @@ void hydro_write_particles(const struct part* parts, const struct xpart* xparts, * @brief Writes the current model of SPH to the file * @param h_grpsph The HDF5 group in which to write */ -void hydro_write_flavour(hid_t h_grpsph) { +INLINE static void hydro_write_flavour(hid_t h_grpsph) { /* Gradient information */ io_write_attribute_s(h_grpsph, "Gradient reconstruction model", HYDRO_GRADIENT_IMPLEMENTATION); @@ -239,6 +245,6 @@ void hydro_write_flavour(hid_t h_grpsph) { * * @return 1 if entropy is in 'internal energy', 0 otherwise. 
*/ -int writeEntropyFlag() { return 0; } +INLINE static int writeEntropyFlag(void) { return 0; } #endif /* SWIFT_GIZMO_MFM_HYDRO_IO_H */ diff --git a/src/hydro/GizmoMFV/hydro.h b/src/hydro/GizmoMFV/hydro.h index 1d5abeaaf63d88b02817a691f160c537d0b1915b..6916fe33272692316354385b723ce9969606b6a2 100644 --- a/src/hydro/GizmoMFV/hydro.h +++ b/src/hydro/GizmoMFV/hydro.h @@ -398,7 +398,7 @@ __attribute__((always_inline)) INLINE static void hydro_part_has_no_neighbours( const float h_inv_dim = pow_dimension(h_inv); /* 1/h^d */ /* Re-set problematic values */ - p->density.wcount = kernel_root * kernel_norm * h_inv_dim; + p->density.wcount = kernel_root * h_inv_dim; p->density.wcount_dh = 0.f; p->geometry.volume = 1.0f; p->geometry.matrix_E[0][0] = 1.0f; @@ -421,8 +421,7 @@ __attribute__((always_inline)) INLINE static void hydro_part_has_no_neighbours( /** * @brief Prepare a particle for the gradient calculation. * - * The name of this method is confusing, as this method is really called after - * the density loop and before the gradient loop. + * This function is called after the density loop and before the gradient loop. * * We use it to set the physical timestep for the particle and to copy the * actual velocities, which we need to boost our interfaces during the flux @@ -433,7 +432,7 @@ __attribute__((always_inline)) INLINE static void hydro_part_has_no_neighbours( * @param xp The extended particle data to act upon. * @param cosmo The cosmological model. */ -__attribute__((always_inline)) INLINE static void hydro_prepare_force( +__attribute__((always_inline)) INLINE static void hydro_prepare_gradient( struct part* restrict p, struct xpart* restrict xp, const struct cosmology* cosmo) { @@ -452,8 +451,6 @@ __attribute__((always_inline)) INLINE static void hydro_prepare_force( * Just a wrapper around hydro_gradients_finalize, which can be an empty method, * in which case no gradients are used. * - * This method also initializes the force loop variables. - * * @param p The particle to act upon. */ __attribute__((always_inline)) INLINE static void hydro_end_gradient( @@ -461,6 +458,28 @@ __attribute__((always_inline)) INLINE static void hydro_end_gradient( hydro_gradients_finalize(p); +#ifdef GIZMO_LLOYD_ITERATION + /* reset the gradients to zero, as we don't want them */ + hydro_gradients_init(p); +#endif +} + +/** + * @brief Prepare a particle for the force calculation. + * + * This function is called in the extra_ghost task to convert some quantities + * coming from the gradient loop over neighbours into quantities ready to be + * used in the force loop over neighbours. + * + * @param p The particle to act upon + * @param xp The extended particle data to act upon + * @param cosmo The current cosmological model. 
+ */ +__attribute__((always_inline)) INLINE static void hydro_prepare_force( + struct part* restrict p, struct xpart* restrict xp, + const struct cosmology* cosmo) { + + /* Initialise values that are used in the force loop */ p->gravity.mflux[0] = 0.0f; p->gravity.mflux[1] = 0.0f; p->gravity.mflux[2] = 0.0f; @@ -470,11 +489,6 @@ __attribute__((always_inline)) INLINE static void hydro_end_gradient( p->conserved.flux.momentum[1] = 0.0f; p->conserved.flux.momentum[2] = 0.0f; p->conserved.flux.energy = 0.0f; - -#ifdef GIZMO_LLOYD_ITERATION - /* reset the gradients to zero, as we don't want them */ - hydro_gradients_init(p); -#endif } /** @@ -497,6 +511,18 @@ __attribute__((always_inline)) INLINE static void hydro_reset_acceleration( p->force.h_dt = 0.0f; } +/** + * @brief Resets the variables that are required for a gradient calculation. + * + * This function is called after hydro_prepare_gradient. + * + * @param p The particle to act upon. + * @param xp The extended particle data to act upon. + * @param cosmo The cosmological model. + */ +__attribute__((always_inline)) INLINE static void hydro_reset_gradient( + struct part* restrict p) {} + /** * @brief Sets the values to be predicted in the drifts to their values at a * kick time diff --git a/src/hydro/GizmoMFV/hydro_io.h b/src/hydro/GizmoMFV/hydro_io.h index c1b151230f3198a30d6696e36a2704156804fdce..92e4378f071cb71678929716be86588a3405f40e 100644 --- a/src/hydro/GizmoMFV/hydro_io.h +++ b/src/hydro/GizmoMFV/hydro_io.h @@ -40,8 +40,9 @@ * @param list The list of i/o properties to read. * @param num_fields The number of i/o fields to read. */ -void hydro_read_particles(struct part* parts, struct io_props* list, - int* num_fields) { +INLINE static void hydro_read_particles(struct part* parts, + struct io_props* list, + int* num_fields) { *num_fields = 8; @@ -72,8 +73,8 @@ void hydro_read_particles(struct part* parts, struct io_props* list, * @param p Particle. * @param ret (return) Internal energy of the particle */ -void convert_u(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_u(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { ret[0] = hydro_get_comoving_internal_energy(p); } @@ -85,8 +86,8 @@ void convert_u(const struct engine* e, const struct part* p, * @param p Particle. * @param ret (return) Entropic function of the particle */ -void convert_A(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_A(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { ret[0] = hydro_get_comoving_entropy(p); } @@ -97,8 +98,8 @@ void convert_A(const struct engine* e, const struct part* p, * @param p Particle. 
* @return Total energy of the particle */ -void convert_Etot(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_Etot(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { #ifdef GIZMO_TOTAL_ENERGY ret[0] = p->conserved.energy; #else @@ -112,8 +113,9 @@ void convert_Etot(const struct engine* e, const struct part* p, #endif } -void convert_part_pos(const struct engine* e, const struct part* p, - const struct xpart* xp, double* ret) { +INLINE static void convert_part_pos(const struct engine* e, + const struct part* p, + const struct xpart* xp, double* ret) { if (e->s->periodic) { ret[0] = box_wrap(p->x[0], 0.0, e->s->dim[0]); @@ -126,8 +128,9 @@ void convert_part_pos(const struct engine* e, const struct part* p, } } -void convert_part_vel(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_part_vel(const struct engine* e, + const struct part* p, + const struct xpart* xp, float* ret) { const int with_cosmology = (e->policy & engine_policy_cosmology); const struct cosmology* cosmo = e->cosmology; @@ -155,13 +158,14 @@ void convert_part_vel(const struct engine* e, const struct part* p, hydro_get_drifted_velocities(p, xp, dt_kick_hydro, dt_kick_grav, ret); /* Conversion from internal units to peculiar velocities */ - ret[0] *= cosmo->a2_inv; - ret[1] *= cosmo->a2_inv; - ret[2] *= cosmo->a2_inv; + ret[0] *= cosmo->a_inv; + ret[1] *= cosmo->a_inv; + ret[2] *= cosmo->a_inv; } -void convert_part_potential(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_part_potential(const struct engine* e, + const struct part* p, + const struct xpart* xp, float* ret) { if (p->gpart != NULL) ret[0] = gravity_get_comoving_potential(p->gpart); @@ -176,8 +180,10 @@ void convert_part_potential(const struct engine* e, const struct part* p, * @param list The list of i/o properties to write. * @param num_fields The number of i/o fields to write. */ -void hydro_write_particles(const struct part* parts, const struct xpart* xparts, - struct io_props* list, int* num_fields) { +INLINE static void hydro_write_particles(const struct part* parts, + const struct xpart* xparts, + struct io_props* list, + int* num_fields) { *num_fields = 11; @@ -215,7 +221,7 @@ void hydro_write_particles(const struct part* parts, const struct xpart* xparts, * @brief Writes the current model of SPH to the file * @param h_grpsph The HDF5 group in which to write */ -void hydro_write_flavour(hid_t h_grpsph) { +INLINE static void hydro_write_flavour(hid_t h_grpsph) { /* Gradient information */ io_write_attribute_s(h_grpsph, "Gradient reconstruction model", HYDRO_GRADIENT_IMPLEMENTATION); @@ -239,6 +245,6 @@ void hydro_write_flavour(hid_t h_grpsph) { * * @return 1 if entropy is in 'internal energy', 0 otherwise. 
*/ -int writeEntropyFlag() { return 0; } +INLINE static int writeEntropyFlag(void) { return 0; } #endif /* SWIFT_GIZMO_MFV_HYDRO_IO_H */ diff --git a/src/hydro/Minimal/hydro.h b/src/hydro/Minimal/hydro.h index 3f9d99683bde4ed6db64d8aaa5b111e2f67f0969..812f8ad72de55ad7990ee6ef88223a401780bc4b 100644 --- a/src/hydro/Minimal/hydro.h +++ b/src/hydro/Minimal/hydro.h @@ -367,7 +367,7 @@ __attribute__((always_inline)) INLINE static void hydro_part_has_no_neighbours( /* Re-set problematic values */ p->rho = p->mass * kernel_root * h_inv_dim; - p->density.wcount = kernel_root * kernel_norm * h_inv_dim; + p->density.wcount = kernel_root * h_inv_dim; p->density.rho_dh = 0.f; p->density.wcount_dh = 0.f; } @@ -426,7 +426,7 @@ __attribute__((always_inline)) INLINE static void hydro_reset_acceleration( /* Reset the time derivatives. */ p->u_dt = 0.0f; p->force.h_dt = 0.0f; - p->force.v_sig = 0.0f; + p->force.v_sig = p->force.soundspeed; } /** diff --git a/src/hydro/Minimal/hydro_debug.h b/src/hydro/Minimal/hydro_debug.h index 541029ee06dd2799443fc89b688d7baca3fae0f8..73ffc26b8acf687a5445591ddccd72ea8e8fa8ae 100644 --- a/src/hydro/Minimal/hydro_debug.h +++ b/src/hydro/Minimal/hydro_debug.h @@ -36,16 +36,17 @@ __attribute__((always_inline)) INLINE static void hydro_debug_particle( const struct part* p, const struct xpart* xp) { printf( - "x=[%.3e,%.3e,%.3e], " - "v=[%.3e,%.3e,%.3e],v_full=[%.3e,%.3e,%.3e] \n a=[%.3e,%.3e,%.3e], " - "u=%.3e, du/dt=%.3e v_sig=%.3e, P=%.3e\n" - "h=%.3e, dh/dt=%.3e wcount=%d, m=%.3e, dh_drho=%.3e, rho=%.3e, " - "time_bin=%d\n", + "\n " + "x=[%.6g, %.6g, %.6g], v=[%.3g, %.3g, %.3g], \n " + "v_full=[%.3g, %.3g, %.3g], a=[%.3g, %.3g, %.3g], \n " + "m=%.3g, u=%.3g, du/dt=%.3g, P=%.3g, c_s=%.3g, \n " + "v_sig=%.3g, h=%.3g, dh/dt=%.3g, wcount=%.3g, rho=%.3g, \n " + "dh_drho=%.3g, time_bin=%d \n", p->x[0], p->x[1], p->x[2], p->v[0], p->v[1], p->v[2], xp->v_full[0], xp->v_full[1], xp->v_full[2], p->a_hydro[0], p->a_hydro[1], p->a_hydro[2], - p->u, p->u_dt, p->force.v_sig, hydro_get_comoving_pressure(p), p->h, - p->force.h_dt, (int)p->density.wcount, p->mass, p->density.rho_dh, p->rho, - p->time_bin); + p->mass, p->u, p->u_dt, hydro_get_comoving_pressure(p), + p->force.soundspeed, p->force.v_sig, p->h, p->force.h_dt, + p->density.wcount, p->rho, p->density.rho_dh, p->time_bin); } #endif /* SWIFT_MINIMAL_HYDRO_DEBUG_H */ diff --git a/src/hydro/Minimal/hydro_io.h b/src/hydro/Minimal/hydro_io.h index 380d6120e05acf2e015ece6f133df02fad3b761d..879255640fc1a1d6a06a666c80d3860c9c31ab64 100644 --- a/src/hydro/Minimal/hydro_io.h +++ b/src/hydro/Minimal/hydro_io.h @@ -45,8 +45,9 @@ * @param list The list of i/o properties to read. * @param num_fields The number of i/o fields to read. 
*/ -void hydro_read_particles(struct part* parts, struct io_props* list, - int* num_fields) { +INLINE static void hydro_read_particles(struct part* parts, + struct io_props* list, + int* num_fields) { *num_fields = 8; @@ -69,20 +70,21 @@ void hydro_read_particles(struct part* parts, struct io_props* list, UNIT_CONV_DENSITY, parts, rho); } -void convert_S(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_S(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { ret[0] = hydro_get_comoving_entropy(p); } -void convert_P(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_P(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { ret[0] = hydro_get_comoving_pressure(p); } -void convert_part_pos(const struct engine* e, const struct part* p, - const struct xpart* xp, double* ret) { +INLINE static void convert_part_pos(const struct engine* e, + const struct part* p, + const struct xpart* xp, double* ret) { if (e->s->periodic) { ret[0] = box_wrap(p->x[0], 0.0, e->s->dim[0]); @@ -95,8 +97,9 @@ void convert_part_pos(const struct engine* e, const struct part* p, } } -void convert_part_vel(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_part_vel(const struct engine* e, + const struct part* p, + const struct xpart* xp, float* ret) { const int with_cosmology = (e->policy & engine_policy_cosmology); const struct cosmology* cosmo = e->cosmology; @@ -124,13 +127,14 @@ void convert_part_vel(const struct engine* e, const struct part* p, hydro_get_drifted_velocities(p, xp, dt_kick_hydro, dt_kick_grav, ret); /* Conversion from internal units to peculiar velocities */ - ret[0] *= cosmo->a2_inv; - ret[1] *= cosmo->a2_inv; - ret[2] *= cosmo->a2_inv; + ret[0] *= cosmo->a_inv; + ret[1] *= cosmo->a_inv; + ret[2] *= cosmo->a_inv; } -void convert_part_potential(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_part_potential(const struct engine* e, + const struct part* p, + const struct xpart* xp, float* ret) { if (p->gpart != NULL) ret[0] = gravity_get_comoving_potential(p->gpart); @@ -146,8 +150,10 @@ void convert_part_potential(const struct engine* e, const struct part* p, * @param list The list of i/o properties to write. * @param num_fields The number of i/o fields to write. */ -void hydro_write_particles(const struct part* parts, const struct xpart* xparts, - struct io_props* list, int* num_fields) { +INLINE static void hydro_write_particles(const struct part* parts, + const struct xpart* xparts, + struct io_props* list, + int* num_fields) { *num_fields = 10; @@ -182,7 +188,7 @@ void hydro_write_particles(const struct part* parts, const struct xpart* xparts, * @brief Writes the current model of SPH to the file * @param h_grpsph The HDF5 group in which to write */ -void hydro_write_flavour(hid_t h_grpsph) { +INLINE static void hydro_write_flavour(hid_t h_grpsph) { /* Viscosity and thermal conduction */ /* Nothing in this minimal model... */ @@ -200,6 +206,6 @@ void hydro_write_flavour(hid_t h_grpsph) { * * @return 1 if entropy is in 'internal energy', 0 otherwise. 
*/ -int writeEntropyFlag() { return 0; } +INLINE static int writeEntropyFlag(void) { return 0; } #endif /* SWIFT_MINIMAL_HYDRO_IO_H */ diff --git a/src/hydro/MinimalMultiMat/hydro.h b/src/hydro/MinimalMultiMat/hydro.h index 5383ffda8fe67a591691766e4150e75a9dbd4cb0..cfad6b2b2b389da9f423540cb30f1df4cebc5416 100644 --- a/src/hydro/MinimalMultiMat/hydro.h +++ b/src/hydro/MinimalMultiMat/hydro.h @@ -368,7 +368,7 @@ __attribute__((always_inline)) INLINE static void hydro_part_has_no_neighbours( /* Re-set problematic values */ p->rho = p->mass * kernel_root * h_inv_dim; - p->density.wcount = kernel_root * kernel_norm * h_inv_dim; + p->density.wcount = kernel_root * h_inv_dim; p->density.rho_dh = 0.f; p->density.wcount_dh = 0.f; } @@ -429,7 +429,7 @@ __attribute__((always_inline)) INLINE static void hydro_reset_acceleration( /* Reset the time derivatives. */ p->u_dt = 0.0f; p->force.h_dt = 0.0f; - p->force.v_sig = 0.0f; + p->force.v_sig = p->force.soundspeed; } /** diff --git a/src/hydro/MinimalMultiMat/hydro_debug.h b/src/hydro/MinimalMultiMat/hydro_debug.h index d8fe73313fbb175faa9547970c6680346dad0a1b..17b624ad0f660152be4ba685905a3c855e1761f8 100644 --- a/src/hydro/MinimalMultiMat/hydro_debug.h +++ b/src/hydro/MinimalMultiMat/hydro_debug.h @@ -38,16 +38,17 @@ __attribute__((always_inline)) INLINE static void hydro_debug_particle( const struct part* p, const struct xpart* xp) { printf( - "x=[%.3e,%.3e,%.3e], " - "v=[%.3e,%.3e,%.3e],v_full=[%.3e,%.3e,%.3e] \n a=[%.3e,%.3e,%.3e], " - "u=%.3e, du/dt=%.3e v_sig=%.3e, P=%.3e\n" - "h=%.3e, dh/dt=%.3e wcount=%d, m=%.3e, dh_drho=%.3e, rho=%.3e, " - "time_bin=%d, mat_id=%d\n", + "\n " + "x=[%.6g, %.6g, %.6g], v=[%.3g, %.3g, %.3g], \n " + "v_full=[%.3g, %.3g, %.3g], a=[%.3g, %.3g, %.3g], \n " + "m=%.3g, u=%.3g, du/dt=%.3g, P=%.3g, c_s=%.3g, \n " + "v_sig=%.3g, h=%.3g, dh/dt=%.3g, wcount=%.3g, rho=%.3g, \n " + "dh_drho=%.3g, time_bin=%d, mat_id=%d \n", p->x[0], p->x[1], p->x[2], p->v[0], p->v[1], p->v[2], xp->v_full[0], xp->v_full[1], xp->v_full[2], p->a_hydro[0], p->a_hydro[1], p->a_hydro[2], - p->u, p->u_dt, p->force.v_sig, hydro_get_comoving_pressure(p), p->h, - p->force.h_dt, (int)p->density.wcount, p->mass, p->density.rho_dh, p->rho, - p->time_bin, p->mat_id); + p->mass, p->u, p->u_dt, hydro_get_comoving_pressure(p), + p->force.soundspeed, p->force.v_sig, p->h, p->force.h_dt, + p->density.wcount, p->rho, p->density.rho_dh, p->time_bin, p->mat_id); } #endif /* SWIFT_MINIMAL_MULTI_MAT_HYDRO_DEBUG_H */ diff --git a/src/hydro/MinimalMultiMat/hydro_io.h b/src/hydro/MinimalMultiMat/hydro_io.h index 2a5eeb6a54d079ae72e1591116a8984b0d7a6f38..7f41f5e227b6c8a8904b5546a2568b4700109abd 100644 --- a/src/hydro/MinimalMultiMat/hydro_io.h +++ b/src/hydro/MinimalMultiMat/hydro_io.h @@ -46,8 +46,9 @@ * @param list The list of i/o properties to read. * @param num_fields The number of i/o fields to read. 
*/ -void hydro_read_particles(struct part* parts, struct io_props* list, - int* num_fields) { +INLINE static void hydro_read_particles(struct part* parts, + struct io_props* list, + int* num_fields) { *num_fields = 9; @@ -68,24 +69,25 @@ void hydro_read_particles(struct part* parts, struct io_props* list, UNIT_CONV_ACCELERATION, parts, a_hydro); list[7] = io_make_input_field("Density", FLOAT, 1, OPTIONAL, UNIT_CONV_DENSITY, parts, rho); - list[8] = - io_make_input_field("MaterialID", INT, 1, OPTIONAL, 1, parts, mat_id); + list[8] = io_make_input_field("MaterialID", INT, 1, COMPULSORY, + UNIT_CONV_NO_UNITS, parts, mat_id); } -void convert_S(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_S(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { ret[0] = hydro_get_comoving_entropy(p); } -void convert_P(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_P(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { ret[0] = hydro_get_comoving_pressure(p); } -void convert_part_pos(const struct engine* e, const struct part* p, - const struct xpart* xp, double* ret) { +INLINE static void convert_part_pos(const struct engine* e, + const struct part* p, + const struct xpart* xp, double* ret) { if (e->s->periodic) { ret[0] = box_wrap(p->x[0], 0.0, e->s->dim[0]); @@ -98,8 +100,9 @@ void convert_part_pos(const struct engine* e, const struct part* p, } } -void convert_part_vel(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_part_vel(const struct engine* e, + const struct part* p, + const struct xpart* xp, float* ret) { const int with_cosmology = (e->policy & engine_policy_cosmology); const struct cosmology* cosmo = e->cosmology; @@ -127,13 +130,14 @@ void convert_part_vel(const struct engine* e, const struct part* p, hydro_get_drifted_velocities(p, xp, dt_kick_hydro, dt_kick_grav, ret); /* Conversion from internal units to peculiar velocities */ - ret[0] *= cosmo->a2_inv; - ret[1] *= cosmo->a2_inv; - ret[2] *= cosmo->a2_inv; + ret[0] *= cosmo->a_inv; + ret[1] *= cosmo->a_inv; + ret[2] *= cosmo->a_inv; } -void convert_part_potential(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_part_potential(const struct engine* e, + const struct part* p, + const struct xpart* xp, float* ret) { if (p->gpart != NULL) ret[0] = gravity_get_comoving_potential(p->gpart); @@ -149,8 +153,10 @@ void convert_part_potential(const struct engine* e, const struct part* p, * @param list The list of i/o properties to write. * @param num_fields The number of i/o fields to write. */ -void hydro_write_particles(const struct part* parts, const struct xpart* xparts, - struct io_props* list, int* num_fields) { +INLINE static void hydro_write_particles(const struct part* parts, + const struct xpart* xparts, + struct io_props* list, + int* num_fields) { *num_fields = 11; @@ -186,7 +192,7 @@ void hydro_write_particles(const struct part* parts, const struct xpart* xparts, * @brief Writes the current model of SPH to the file * @param h_grpsph The HDF5 group in which to write */ -void hydro_write_flavour(hid_t h_grpsph) { +INLINE static void hydro_write_flavour(hid_t h_grpsph) { /* Viscosity and thermal conduction */ /* Nothing in this minimal model... 
*/ @@ -204,6 +210,6 @@ void hydro_write_flavour(hid_t h_grpsph) { * * @return 1 if entropy is in 'internal energy', 0 otherwise. */ -int writeEntropyFlag() { return 0; } +INLINE static int writeEntropyFlag(void) { return 0; } #endif /* SWIFT_MINIMAL_MULTI_MAT_HYDRO_IO_H */ diff --git a/src/hydro/PressureEnergy/hydro.h b/src/hydro/PressureEnergy/hydro.h index 1e0d8208e82f7f48691d1df7603a6b02d1471c12..ea086daeeb1e93d7f1476302564fb4182a6fb611 100644 --- a/src/hydro/PressureEnergy/hydro.h +++ b/src/hydro/PressureEnergy/hydro.h @@ -403,7 +403,7 @@ __attribute__((always_inline)) INLINE static void hydro_part_has_no_neighbours( p->rho = p->mass * kernel_root * h_inv_dim; p->pressure_bar = p->mass * p->u * hydro_gamma_minus_one * kernel_root * h_inv_dim; - p->density.wcount = kernel_root * kernel_norm * h_inv_dim; + p->density.wcount = kernel_root * h_inv_dim; p->density.rho_dh = 0.f; p->density.wcount_dh = 0.f; p->density.pressure_bar_dh = 0.f; @@ -480,7 +480,7 @@ __attribute__((always_inline)) INLINE static void hydro_reset_acceleration( /* Reset the time derivatives. */ p->u_dt = 0.0f; p->force.h_dt = 0.0f; - p->force.v_sig = 0.0f; + p->force.v_sig = p->force.soundspeed; } /** diff --git a/src/hydro/PressureEnergy/hydro_io.h b/src/hydro/PressureEnergy/hydro_io.h index 776e7653ac3152e1594f25a33796a470dfcf69d3..78967faec218f0efffbb624c4e8d25af214aad94 100644 --- a/src/hydro/PressureEnergy/hydro_io.h +++ b/src/hydro/PressureEnergy/hydro_io.h @@ -43,8 +43,9 @@ * @param list The list of i/o properties to read. * @param num_fields The number of i/o fields to read. */ -void hydro_read_particles(struct part* parts, struct io_props* list, - int* num_fields) { +INLINE static void hydro_read_particles(struct part* parts, + struct io_props* list, + int* num_fields) { *num_fields = 8; @@ -67,26 +68,27 @@ void hydro_read_particles(struct part* parts, struct io_props* list, UNIT_CONV_DENSITY, parts, rho); } -void convert_u(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_u(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { ret[0] = hydro_get_comoving_internal_energy(p); } -void convert_S(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_S(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { ret[0] = hydro_get_comoving_entropy(p); } -void convert_P(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_P(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { ret[0] = hydro_get_comoving_pressure(p); } -void convert_part_pos(const struct engine* e, const struct part* p, - const struct xpart* xp, double* ret) { +INLINE static void convert_part_pos(const struct engine* e, + const struct part* p, + const struct xpart* xp, double* ret) { if (e->s->periodic) { ret[0] = box_wrap(p->x[0], 0.0, e->s->dim[0]); @@ -99,8 +101,9 @@ void convert_part_pos(const struct engine* e, const struct part* p, } } -void convert_part_vel(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_part_vel(const struct engine* e, + const struct part* p, + const struct xpart* xp, float* ret) { const int with_cosmology = (e->policy & engine_policy_cosmology); const struct cosmology* cosmo = e->cosmology; @@ -128,9 +131,9 @@ void convert_part_vel(const struct engine* e, const struct part* p, 
hydro_get_drifted_velocities(p, xp, dt_kick_hydro, dt_kick_grav, ret); /* Conversion from internal units to peculiar velocities */ - ret[0] *= cosmo->a2_inv; - ret[1] *= cosmo->a2_inv; - ret[2] *= cosmo->a2_inv; + ret[0] *= cosmo->a_inv; + ret[1] *= cosmo->a_inv; + ret[2] *= cosmo->a_inv; } /** @@ -140,8 +143,10 @@ void convert_part_vel(const struct engine* e, const struct part* p, * @param list The list of i/o properties to write. * @param num_fields The number of i/o fields to write. */ -void hydro_write_particles(const struct part* parts, const struct xpart* xparts, - struct io_props* list, int* num_fields) { +INLINE static void hydro_write_particles(const struct part* parts, + const struct xpart* xparts, + struct io_props* list, + int* num_fields) { *num_fields = 9; @@ -173,7 +178,7 @@ void hydro_write_particles(const struct part* parts, const struct xpart* xparts, * @brief Writes the current model of SPH to the file * @param h_grpsph The HDF5 group in which to write */ -void hydro_write_flavour(hid_t h_grpsph) { +INLINE static void hydro_write_flavour(hid_t h_grpsph) { /* Viscosity and thermal conduction */ /* Nothing in this minimal model... */ @@ -191,6 +196,6 @@ void hydro_write_flavour(hid_t h_grpsph) { * * @return 1 if entropy is in 'internal energy', 0 otherwise. */ -int writeEntropyFlag() { return 0; } +INLINE static int writeEntropyFlag(void) { return 0; } #endif /* SWIFT_MINIMAL_HYDRO_IO_H */ diff --git a/src/hydro/PressureEntropy/hydro.h b/src/hydro/PressureEntropy/hydro.h index 87d46c6d43f0d4f6de6d18f5400b38f0fc4d0f55..e4b7cf06e083638a94526cc1f9e7212cf19dfad4 100644 --- a/src/hydro/PressureEntropy/hydro.h +++ b/src/hydro/PressureEntropy/hydro.h @@ -357,7 +357,7 @@ __attribute__((always_inline)) INLINE static void hydro_part_has_no_neighbours( /* Re-set problematic values */ p->rho = p->mass * kernel_root * h_inv_dim; p->rho_bar = p->mass * kernel_root * h_inv_dim; - p->density.wcount = kernel_root * kernel_norm * h_inv_dim; + p->density.wcount = kernel_root * h_inv_dim; p->density.rho_dh = 0.f; p->density.wcount_dh = 0.f; p->density.pressure_dh = 0.f; @@ -441,7 +441,7 @@ __attribute__((always_inline)) INLINE static void hydro_reset_acceleration( p->force.h_dt = 0.0f; /* Reset maximal signal velocity */ - p->force.v_sig = 0.0f; + p->force.v_sig = p->force.soundspeed; } /** diff --git a/src/hydro/PressureEntropy/hydro_io.h b/src/hydro/PressureEntropy/hydro_io.h index 78371c1eb21fafed56e89d46690d0cf1e0f2a0f0..8c11bf6e334e18b10217e90f6573a42e40880955 100644 --- a/src/hydro/PressureEntropy/hydro_io.h +++ b/src/hydro/PressureEntropy/hydro_io.h @@ -42,8 +42,9 @@ * @param list The list of i/o properties to read. * @param num_fields The number of i/o fields to read. 
*/ -void hydro_read_particles(struct part* parts, struct io_props* list, - int* num_fields) { +INLINE static void hydro_read_particles(struct part* parts, + struct io_props* list, + int* num_fields) { *num_fields = 8; @@ -67,20 +68,21 @@ void hydro_read_particles(struct part* parts, struct io_props* list, UNIT_CONV_DENSITY, parts, rho); } -void convert_u(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_u(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { ret[0] = hydro_get_comoving_internal_energy(p); } -void convert_P(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_P(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { ret[0] = hydro_get_comoving_pressure(p); } -void convert_part_pos(const struct engine* e, const struct part* p, - const struct xpart* xp, double* ret) { +INLINE static void convert_part_pos(const struct engine* e, + const struct part* p, + const struct xpart* xp, double* ret) { if (e->s->periodic) { ret[0] = box_wrap(p->x[0], 0.0, e->s->dim[0]); @@ -93,8 +95,9 @@ void convert_part_pos(const struct engine* e, const struct part* p, } } -void convert_part_vel(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_part_vel(const struct engine* e, + const struct part* p, + const struct xpart* xp, float* ret) { const int with_cosmology = (e->policy & engine_policy_cosmology); const struct cosmology* cosmo = e->cosmology; @@ -122,13 +125,14 @@ void convert_part_vel(const struct engine* e, const struct part* p, hydro_get_drifted_velocities(p, xp, dt_kick_hydro, dt_kick_grav, ret); /* Conversion from internal units to peculiar velocities */ - ret[0] *= cosmo->a2_inv; - ret[1] *= cosmo->a2_inv; - ret[2] *= cosmo->a2_inv; + ret[0] *= cosmo->a_inv; + ret[1] *= cosmo->a_inv; + ret[2] *= cosmo->a_inv; } -void convert_part_potential(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_part_potential(const struct engine* e, + const struct part* p, + const struct xpart* xp, float* ret) { if (p->gpart != NULL) ret[0] = gravity_get_comoving_potential(p->gpart); @@ -143,8 +147,10 @@ void convert_part_potential(const struct engine* e, const struct part* p, * @param list The list of i/o properties to write. * @param num_fields The number of i/o fields to write. */ -void hydro_write_particles(const struct part* parts, const struct xpart* xparts, - struct io_props* list, int* num_fields) { +INLINE static void hydro_write_particles(const struct part* parts, + const struct xpart* xparts, + struct io_props* list, + int* num_fields) { *num_fields = 11; @@ -180,7 +186,7 @@ void hydro_write_particles(const struct part* parts, const struct xpart* xparts, * @brief Writes the current model of SPH to the file * @param h_grpsph The HDF5 group in which to write */ -void hydro_write_flavour(hid_t h_grpsph) { +INLINE static void hydro_write_flavour(hid_t h_grpsph) { /* Viscosity and thermal conduction */ /* Nothing in this minimal model... */ @@ -201,6 +207,6 @@ void hydro_write_flavour(hid_t h_grpsph) { * * @return 1 if entropy is in 'internal energy', 0 otherwise. 
*/ -int writeEntropyFlag() { return 0; } +INLINE static int writeEntropyFlag(void) { return 0; } #endif /* SWIFT_PRESSURE_ENTROPY_HYDRO_IO_H */ diff --git a/src/hydro/Shadowswift/hydro.h b/src/hydro/Shadowswift/hydro.h index 36078798cdd1ac68a456134fd7887408752f18c9..025779f17496e7bf30fdf12353c4381c7d6292ce 100644 --- a/src/hydro/Shadowswift/hydro.h +++ b/src/hydro/Shadowswift/hydro.h @@ -257,7 +257,7 @@ __attribute__((always_inline)) INLINE static void hydro_part_has_no_neighbours( const float h_inv_dim = pow_dimension(h_inv); /* 1/h^d */ /* Re-set problematic values */ - p->density.wcount = kernel_root * kernel_norm * h_inv_dim; + p->density.wcount = kernel_root * h_inv_dim; p->density.wcount_dh = 0.f; } diff --git a/src/hydro/Shadowswift/hydro_io.h b/src/hydro/Shadowswift/hydro_io.h index 8525d22025c1943529ddcd86cf3a42ba0ae4f5d4..1f6bb86e62c6a3359d1242328775c6e4067ef8f2 100644 --- a/src/hydro/Shadowswift/hydro_io.h +++ b/src/hydro/Shadowswift/hydro_io.h @@ -32,8 +32,9 @@ * @param list The list of i/o properties to read. * @param num_fields The number of i/o fields to read. */ -void hydro_read_particles(struct part* parts, struct io_props* list, - int* num_fields) { +INLINE static void hydro_read_particles(struct part* parts, + struct io_props* list, + int* num_fields) { *num_fields = 8; @@ -64,8 +65,8 @@ void hydro_read_particles(struct part* parts, struct io_props* list, * @param p Particle. * @return Internal energy of the particle */ -void convert_u(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_u(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { ret[0] = hydro_get_internal_energy(p); } @@ -76,8 +77,8 @@ void convert_u(const struct engine* e, const struct part* p, * @param p Particle. * @return Entropic function of the particle */ -void convert_A(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_A(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { ret[0] = hydro_get_entropy(p); } @@ -88,8 +89,8 @@ void convert_A(const struct engine* e, const struct part* p, * @param p Particle. * @return Total energy of the particle */ -void convert_Etot(const struct engine* e, const struct part* p, - const struct xpart* xp, float* ret) { +INLINE static void convert_Etot(const struct engine* e, const struct part* p, + const struct xpart* xp, float* ret) { #ifdef SHADOWFAX_TOTAL_ENERGY return p->conserved.energy; #else @@ -107,8 +108,9 @@ void convert_Etot(const struct engine* e, const struct part* p, #endif } -void convert_part_pos(const struct engine* e, const struct part* p, - const struct xpart* xp, double* ret) { +INLINE static void convert_part_pos(const struct engine* e, + const struct part* p, + const struct xpart* xp, double* ret) { if (e->s->periodic) { ret[0] = box_wrap(p->x[0], 0.0, e->s->dim[0]); @@ -128,8 +130,10 @@ void convert_part_pos(const struct engine* e, const struct part* p, * @param list The list of i/o properties to write. * @param num_fields The number of i/o fields to write. 
*/ -void hydro_write_particles(const struct part* parts, const struct xpart* xparts, - struct io_props* list, int* num_fields) { +INLINE static void hydro_write_particles(const struct part* parts, + const struct xpart* xparts, + struct io_props* list, + int* num_fields) { *num_fields = 13; @@ -168,7 +172,7 @@ void hydro_write_particles(const struct part* parts, const struct xpart* xparts, * @brief Writes the current model of SPH to the file * @param h_grpsph The HDF5 group in which to write */ -void hydro_write_flavour(hid_t h_grpsph) { +INLINE static void hydro_write_flavour(hid_t h_grpsph) { /* Gradient information */ io_write_attribute_s(h_grpsph, "Gradient reconstruction model", HYDRO_GRADIENT_IMPLEMENTATION); @@ -189,4 +193,4 @@ void hydro_write_flavour(hid_t h_grpsph) { * * @return 1 if entropy is in 'internal energy', 0 otherwise. */ -int writeEntropyFlag() { return 0; } +INLINE static int writeEntropyFlag(void) { return 0; } diff --git a/src/hydro_properties.c b/src/hydro_properties.c index e63679eaad3c7f61fe67e63326ca59a04c1caffb..c5448f77353e1859c1f8853394bbefbe26d0a3a9 100644 --- a/src/hydro_properties.c +++ b/src/hydro_properties.c @@ -52,7 +52,7 @@ void hydro_props_init(struct hydro_props *p, const struct phys_const *phys_const, const struct unit_system *us, - const struct swift_params *params) { + struct swift_params *params) { /* Kernel properties */ p->eta_neighbours = parser_get_param_float(params, "SPH:resolution_eta"); diff --git a/src/hydro_properties.h b/src/hydro_properties.h index 2799f6c86eec7f0ebf140643c5ab7fa9b60e6273..64a840692db677704b8617e962d7883505983cc0 100644 --- a/src/hydro_properties.h +++ b/src/hydro_properties.h @@ -86,7 +86,7 @@ void hydro_props_print(const struct hydro_props *p); void hydro_props_init(struct hydro_props *p, const struct phys_const *phys_const, const struct unit_system *us, - const struct swift_params *params); + struct swift_params *params); #if defined(HAVE_HDF5) void hydro_props_print_snapshot(hid_t h_grpsph, const struct hydro_props *p); diff --git a/src/parallel_io.c b/src/parallel_io.c index 6cd15be2948a0e46e5a1f8d79bdba8a443a5c454..d37c8632675dc13e487e0c80e2f7390f5c14e527 100644 --- a/src/parallel_io.c +++ b/src/parallel_io.c @@ -49,12 +49,13 @@ #include "io_properties.h" #include "kernel_hydro.h" #include "part.h" +#include "part_type.h" #include "stars_io.h" #include "units.h" #include "xmf.h" /* The current limit of ROMIO (the underlying MPI-IO layer) is 2GB */ -#define HDF5_PARALLEL_IO_MAX_BYTES 2000000000LL +#define HDF5_PARALLEL_IO_MAX_BYTES 2147000000LL /* Are we timing the i/o? */ //#define IO_SPEED_MEASUREMENT @@ -84,7 +85,7 @@ void readArray_chunk(hid_t h_data, hid_t h_plist_id, /* Can't handle writes of more than 2GB */ if (N * props.dimension * typeSize > HDF5_PARALLEL_IO_MAX_BYTES) - error("Dataset too large to be written in one pass!"); + error("Dataset too large to be read in one pass!"); /* Allocate temporary buffer */ void* temp = malloc(num_elements * typeSize); @@ -205,6 +206,57 @@ void readArray(hid_t grp, struct io_props props, size_t N, long long N_total, const hid_t h_data = H5Dopen2(grp, props.name, H5P_DEFAULT); if (h_data < 0) error("Error while opening data space '%s'.", props.name); +/* Parallel-HDF5 1.10.2 incorrectly reads data that was compressed */ +/* We detect this here and crash with an error message instead of */ +/* continuing with garbage data. 
*/ +#if H5_VERSION_LE(1, 10, 2) && H5_VERSION_GE(1, 10, 2) + if (mpi_rank == 0) { + + /* Recover the list of filters that were applied to the data */ + const hid_t h_plist = H5Dget_create_plist(h_data); + if (h_plist < 0) + error("Error getting property list for data set '%s'", props.name); + + /* Recover the number of filters in the list */ + const int n_filters = H5Pget_nfilters(h_plist); + + for (int n = 0; n < n_filters; ++n) { + + unsigned int flag; + size_t cd_nelmts = 32; + unsigned int* cd_values = malloc(cd_nelmts * sizeof(unsigned int)); + size_t namelen = 256; + char* name = calloc(namelen, sizeof(char)); + unsigned int filter_config; + + /* Recover the n^th filter in the list */ + const H5Z_filter_t filter = + H5Pget_filter(h_plist, n, &flag, &cd_nelmts, cd_values, namelen, name, + &filter_config); + if (filter < 0) + error("Error retrieving %d^th (%d) filter for data set '%s'", n, + n_filters, props.name); + + /* Now check whether the deflate filter had been applied */ + if (filter == H5Z_FILTER_DEFLATE) + error( + "HDF5 1.10.2 cannot correctly read data that was compressed with " + "the 'deflate' filter.\nThe field '%s' has had this filter applied " + "and the code would silently read garbage into the particle arrays " + "so we'd rather stop here. You can:\n - Recompile the code with an " + "earlier or older version of HDF5.\n - Use the 'h5repack' tool to " + "remove the filter from the ICs (e.g. h5repack -f NONE -i in_file " + "-o out_file).\n", + props.name); + + free(name); + free(cd_values); + } + + H5Pclose(h_plist); + } +#endif + /* Create property list for collective dataset read. */ const hid_t h_plist_id = H5Pcreate(H5P_DATASET_XFER); H5Pset_dxpl_mpio(h_plist_id, H5FD_MPIO_COLLECTIVE); @@ -838,6 +890,7 @@ void prepare_file(struct engine* e, const char* baseName, long long N_total[6], const struct xpart* xparts = e->s->xparts; const struct gpart* gparts = e->s->gparts; const struct spart* sparts = e->s->sparts; + struct swift_params* params = e->parameter_file; FILE* xmfFile = 0; int periodic = e->s->periodic; int numFiles = 1; @@ -965,7 +1018,14 @@ void prepare_file(struct engine* e, const char* baseName, long long N_total[6], h_grp = H5Gcreate(h_file, "/Parameters", H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT); if (h_grp < 0) error("Error while creating parameters group"); - parser_write_params_to_hdf5(e->parameter_file, h_grp); + parser_write_params_to_hdf5(e->parameter_file, h_grp, 1); + H5Gclose(h_grp); + + /* Print the runtime unused parameters */ + h_grp = H5Gcreate(h_file, "/UnusedParameters", H5P_DEFAULT, H5P_DEFAULT, + H5P_DEFAULT); + if (h_grp < 0) error("Error while creating parameters group"); + parser_write_params_to_hdf5(e->parameter_file, h_grp, 0); H5Gclose(h_grp); /* Print the system of Units used in the spashot */ @@ -1017,10 +1077,19 @@ void prepare_file(struct engine* e, const char* baseName, long long N_total[6], error("Particle Type %d not yet supported. Aborting", ptype); } - /* Prepare everything */ - for (int i = 0; i < num_fields; ++i) - prepareArray(e, h_grp, fileName, xmfFile, partTypeGroupName, list[i], - N_total[ptype], snapshot_units); + /* Prepare everything that is not cancelled */ + for (int i = 0; i < num_fields; ++i) { + + /* Did the user cancel this field? 
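+     A field is cancelled by setting a "SelectOutput:<FieldName>_<ParticleTypeName>"
+     entry to 0 in the run's YAML parameter file; any field that is not listed
+     defaults to 1 and is written. Purely as an illustration (the actual names
+     are taken from the i/o property list and part_type_names at run time),
+     such a section could look like:
+
+       SelectOutput:
+         Coordinates_Gas: 1
+         Masses_Gas:      0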
*/ + char field[PARSER_MAX_LINE_SIZE]; + sprintf(field, "SelectOutput:%s_%s", list[i].name, + part_type_names[ptype]); + int should_write = parser_get_opt_param_int(params, field, 1); + + if (should_write) + prepareArray(e, h_grp, fileName, xmfFile, partTypeGroupName, list[i], + N_total[ptype], snapshot_units); + } /* Close particle group */ H5Gclose(h_grp); @@ -1072,6 +1141,7 @@ void write_output_parallel(struct engine* e, const char* baseName, struct gpart* dmparts = NULL; const struct spart* sparts = e->s->sparts; const struct cooling_function_data* cooling = e->cooling_func; + struct swift_params* params = e->parameter_file; /* Number of unassociated gparts */ const size_t Ndm = Ntot > 0 ? Ntot - (Ngas + Nstars) : 0; @@ -1261,11 +1331,20 @@ void write_output_parallel(struct engine* e, const char* baseName, error("Particle Type %d not yet supported. Aborting", ptype); } - /* Write everything */ - for (int i = 0; i < num_fields; ++i) - writeArray(e, h_grp, fileName, partTypeGroupName, list[i], Nparticles, - N_total[ptype], mpi_rank, offset[ptype], internal_units, - snapshot_units); + /* Write everything that is not cancelled */ + for (int i = 0; i < num_fields; ++i) { + + /* Did the user cancel this field? */ + char field[PARSER_MAX_LINE_SIZE]; + sprintf(field, "SelectOutput:%s_%s", list[i].name, + part_type_names[ptype]); + int should_write = parser_get_opt_param_int(params, field, 1); + + if (should_write) + writeArray(e, h_grp, fileName, partTypeGroupName, list[i], Nparticles, + N_total[ptype], mpi_rank, offset[ptype], internal_units, + snapshot_units); + } /* Free temporary array */ if (dmparts) { diff --git a/src/parser.c b/src/parser.c index af9ef5fd6b7228dd4ad96640b59322175586862b..78d8aef2c3194acd0a9128867e6e5867a0cbc7b0 100644 --- a/src/parser.c +++ b/src/parser.c @@ -57,12 +57,23 @@ static void find_duplicate_section(const struct swift_params *params, static int lineNumber = 0; /** - * @brief Reads an input file and stores each parameter in a structure. + * @brief Initialize the parser structure. * * @param file_name Name of file to be read * @param params Structure to be populated from file */ +void parser_init(const char *file_name, struct swift_params *params) { + params->paramCount = 0; + params->sectionCount = 0; + strcpy(params->fileName, file_name); +} +/** + * @brief Reads an input file and stores each parameter in a structure. + * + * @param file_name Name of file to be read + * @param params Structure to be populated from file + */ void parser_read_file(const char *file_name, struct swift_params *params) { /* Open file for reading */ FILE *file = fopen(file_name, "r"); @@ -71,9 +82,7 @@ void parser_read_file(const char *file_name, struct swift_params *params) { char line[PARSER_MAX_LINE_SIZE]; /* Initialise parameter count. */ - params->paramCount = 0; - params->sectionCount = 0; - strcpy(params->fileName, file_name); + parser_init(file_name, params); /* Check if parameter file exits. */ if (file == NULL) { @@ -143,6 +152,7 @@ void parser_set_param(struct swift_params *params, const char *namevalue) { if (!updated) { strcpy(params->data[params->paramCount].name, name); strcpy(params->data[params->paramCount].value, value); + params->data[params->paramCount].used = 0; params->paramCount++; if (params->paramCount == PARSER_MAX_NO_OF_PARAMS) error("Too many parameters, current maximum is %d.", params->paramCount); @@ -359,6 +369,7 @@ static void parse_value(char *line, struct swift_params *params) { * section. 
*/ strcpy(params->data[params->paramCount].name, tmpStr); strcpy(params->data[params->paramCount].value, token); + params->data[params->paramCount].used = 0; if (params->paramCount == PARSER_MAX_NO_OF_PARAMS - 1) { error( "Maximal number of parameters in parameter file reached. Aborting " @@ -418,6 +429,7 @@ static void parse_section_param(char *line, int *isFirstParam, strcpy(params->data[params->paramCount].name, paramName); strcpy(params->data[params->paramCount].value, token); + params->data[params->paramCount].used = 0; if (params->paramCount == PARSER_MAX_NO_OF_PARAMS - 1) { error("Maximal number of parameters in parameter file reached. Aborting !"); } else { @@ -432,7 +444,7 @@ static void parse_section_param(char *line, int *isFirstParam, * @param name Name of the parameter to be found * @return Value of the parameter found */ -int parser_get_param_int(const struct swift_params *params, const char *name) { +int parser_get_param_int(struct swift_params *params, const char *name) { char str[PARSER_MAX_LINE_SIZE]; int retParam = 0; @@ -447,6 +459,9 @@ int parser_get_param_int(const struct swift_params *params, const char *name) { params->data[i].name, params->data[i].value, str); } + /* this parameter has been used */ + params->data[i].used = 1; + return retParam; } } @@ -463,8 +478,7 @@ int parser_get_param_int(const struct swift_params *params, const char *name) { * @param name Name of the parameter to be found * @return Value of the parameter found */ -char parser_get_param_char(const struct swift_params *params, - const char *name) { +char parser_get_param_char(struct swift_params *params, const char *name) { char str[PARSER_MAX_LINE_SIZE]; char retParam = 0; @@ -479,6 +493,9 @@ char parser_get_param_char(const struct swift_params *params, params->data[i].name, params->data[i].value, str); } + /* this parameter has been used */ + params->data[i].used = 1; + return retParam; } } @@ -495,8 +512,7 @@ char parser_get_param_char(const struct swift_params *params, * @param name Name of the parameter to be found * @return Value of the parameter found */ -float parser_get_param_float(const struct swift_params *params, - const char *name) { +float parser_get_param_float(struct swift_params *params, const char *name) { char str[PARSER_MAX_LINE_SIZE]; float retParam = 0.f; @@ -511,6 +527,9 @@ float parser_get_param_float(const struct swift_params *params, params->data[i].name, params->data[i].value, str); } + /* this parameter has been used */ + params->data[i].used = 1; + return retParam; } } @@ -527,8 +546,7 @@ float parser_get_param_float(const struct swift_params *params, * @param name Name of the parameter to be found * @return Value of the parameter found */ -double parser_get_param_double(const struct swift_params *params, - const char *name) { +double parser_get_param_double(struct swift_params *params, const char *name) { char str[PARSER_MAX_LINE_SIZE]; double retParam = 0.; @@ -542,6 +560,10 @@ double parser_get_param_double(const struct swift_params *params, "characters '%s'.", params->data[i].name, params->data[i].value, str); } + + /* this parameter has been used */ + params->data[i].used = 1; + return retParam; } } @@ -558,11 +580,14 @@ double parser_get_param_double(const struct swift_params *params, * @param name Name of the parameter to be found * @param retParam (return) Value of the parameter found */ -void parser_get_param_string(const struct swift_params *params, - const char *name, char *retParam) { +void parser_get_param_string(struct swift_params *params, const char 
*name, + char *retParam) { + for (int i = 0; i < params->paramCount; i++) { if (!strcmp(name, params->data[i].name)) { strcpy(retParam, params->data[i].value); + /* this parameter has been used */ + params->data[i].used = 1; return; } } @@ -578,8 +603,8 @@ void parser_get_param_string(const struct swift_params *params, * @param def Default value of the parameter of not found. * @return Value of the parameter found */ -int parser_get_opt_param_int(const struct swift_params *params, - const char *name, int def) { +int parser_get_opt_param_int(struct swift_params *params, const char *name, + int def) { char str[PARSER_MAX_LINE_SIZE]; int retParam = 0; @@ -594,10 +619,22 @@ int parser_get_opt_param_int(const struct swift_params *params, params->data[i].name, params->data[i].value, str); } + /* this parameter has been used */ + params->data[i].used = 1; + return retParam; } } + /* Generate string for new parameter */ + sprintf(str, "%s: %i", name, def); + + /* Add it to params */ + parser_set_param(params, str); + + /* Set parameter as used */ + params->data[params->paramCount - 1].used = 1; + return def; } @@ -609,8 +646,8 @@ int parser_get_opt_param_int(const struct swift_params *params, * @param def Default value of the parameter of not found. * @return Value of the parameter found */ -char parser_get_opt_param_char(const struct swift_params *params, - const char *name, char def) { +char parser_get_opt_param_char(struct swift_params *params, const char *name, + char def) { char str[PARSER_MAX_LINE_SIZE]; char retParam = 0; @@ -625,10 +662,22 @@ char parser_get_opt_param_char(const struct swift_params *params, params->data[i].name, params->data[i].value, str); } + /* this parameter has been used */ + params->data[i].used = 1; + return retParam; } } + /* Generate string for new parameter */ + sprintf(str, "%s: %c", name, def); + + /* Add it to params */ + parser_set_param(params, str); + + /* Set parameter as used */ + params->data[params->paramCount - 1].used = 1; + return def; } @@ -640,8 +689,8 @@ char parser_get_opt_param_char(const struct swift_params *params, * @param def Default value of the parameter of not found. * @return Value of the parameter found */ -float parser_get_opt_param_float(const struct swift_params *params, - const char *name, float def) { +float parser_get_opt_param_float(struct swift_params *params, const char *name, + float def) { char str[PARSER_MAX_LINE_SIZE]; float retParam = 0.f; @@ -656,10 +705,22 @@ float parser_get_opt_param_float(const struct swift_params *params, params->data[i].name, params->data[i].value, str); } + /* this parameter has been used */ + params->data[i].used = 1; + return retParam; } } + /* Generate string for new parameter */ + sprintf(str, "%s: %f", name, def); + + /* Add it to params */ + parser_set_param(params, str); + + /* Set parameter as used */ + params->data[params->paramCount - 1].used = 1; + return def; } @@ -671,7 +732,7 @@ float parser_get_opt_param_float(const struct swift_params *params, * @param def Default value of the parameter of not found. 
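* If the parameter is not found, the default value is appended to the
* parameter structure and marked as used, so that it is reported alongside
* the parameters that were actually read.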
* @return Value of the parameter found */ -double parser_get_opt_param_double(const struct swift_params *params, +double parser_get_opt_param_double(struct swift_params *params, const char *name, double def) { char str[PARSER_MAX_LINE_SIZE]; @@ -686,10 +747,23 @@ double parser_get_opt_param_double(const struct swift_params *params, "characters '%s'.", params->data[i].name, params->data[i].value, str); } + + /* this parameter has been used */ + params->data[i].used = 1; + return retParam; } } + /* Generate string for new parameter */ + sprintf(str, "%s: %lf", name, def); + + /* Add it to params */ + parser_set_param(params, str); + + /* Set parameter as used */ + params->data[params->paramCount - 1].used = 1; + return def; } @@ -701,16 +775,30 @@ double parser_get_opt_param_double(const struct swift_params *params, * @param def Default value of the parameter of not found. * @param retParam (return) Value of the parameter found */ -void parser_get_opt_param_string(const struct swift_params *params, - const char *name, char *retParam, - const char *def) { +void parser_get_opt_param_string(struct swift_params *params, const char *name, + char *retParam, const char *def) { + for (int i = 0; i < params->paramCount; i++) { if (!strcmp(name, params->data[i].name)) { strcpy(retParam, params->data[i].value); + + /* this parameter has been used */ + params->data[i].used = 1; + return; } } + /* Generate string for new parameter */ + char str[PARSER_MAX_LINE_SIZE]; + sprintf(str, "%s: %s", name, def); + + /* Add it to params */ + parser_set_param(params, str); + + /* Set parameter as used */ + params->data[params->paramCount - 1].used = 1; + strcpy(retParam, def); } @@ -727,6 +815,7 @@ void parser_print_params(const struct swift_params *params) { for (int i = 0; i < params->paramCount; i++) { printf("Parameter name: %s\n", params->data[i].name); printf("Parameter value: %s\n", params->data[i].value); + printf("Parameter used: %i\n", params->data[i].used); } } @@ -736,9 +825,10 @@ void parser_print_params(const struct swift_params *params) { * * @param params Structure that holds the parameters * @param file_name Name of file to be written + * @param write_used Write used fields or unused fields. */ void parser_write_params_to_file(const struct swift_params *params, - const char *file_name) { + const char *file_name, int write_used) { FILE *file = fopen(file_name, "w"); char section[PARSER_MAX_LINE_SIZE] = {0}; char param_name[PARSER_MAX_LINE_SIZE] = {0}; @@ -748,6 +838,16 @@ void parser_write_params_to_file(const struct swift_params *params, fprintf(file, "%s\n", PARSER_START_OF_FILE); for (int i = 0; i < params->paramCount; i++) { + if (write_used && !params->data[i].used) { +#ifdef SWIFT_DEBUG_CHECKS + message( + "Parameter `%s` was not used. " + "Only the parameter used are written.", + params->data[i].name); +#endif + continue; + } else if (!write_used && params->data[i].used) + continue; /* Check that the parameter name contains a section name. */ if (strchr(params->data[i].name, PARSER_VALUE_CHAR)) { /* Copy the parameter name into a temporary string and find the section @@ -773,16 +873,30 @@ void parser_write_params_to_file(const struct swift_params *params, } /* End of file identifier in YAML. 
*/ - fprintf(file, PARSER_END_OF_FILE); + fprintf(file, "%s\n", PARSER_END_OF_FILE); fclose(file); } #if defined(HAVE_HDF5) -void parser_write_params_to_hdf5(const struct swift_params *params, hid_t grp) { - for (int i = 0; i < params->paramCount; i++) +/** + * @brief Write the contents of the parameter structure to a hdf5 file + * + * @param params Structure that holds the parameters + * @param grp HDF5 group + * @param write_used Write used fields or unused fields. + */ +void parser_write_params_to_hdf5(const struct swift_params *params, hid_t grp, + int write_used) { + + for (int i = 0; i < params->paramCount; i++) { + if (write_used && !params->data[i].used) + continue; + else if (!write_used && params->data[i].used) + continue; io_write_attribute_s(grp, params->data[i].name, params->data[i].value); + } } #endif diff --git a/src/parser.h b/src/parser.h index 58f4a53ea2114dbed29f49e2b5ccca69239b45e3..2b06fce03cbfe2d6a73bc28960bc83bc822e4459 100644 --- a/src/parser.h +++ b/src/parser.h @@ -39,6 +39,7 @@ struct parameter { char name[PARSER_MAX_LINE_SIZE]; char value[PARSER_MAX_LINE_SIZE]; + int used; }; struct section { @@ -55,35 +56,34 @@ struct swift_params { }; /* Public API. */ +void parser_init(const char *file_name, struct swift_params *params); void parser_read_file(const char *file_name, struct swift_params *params); void parser_print_params(const struct swift_params *params); void parser_write_params_to_file(const struct swift_params *params, - const char *file_name); + const char *file_name, int write_all); void parser_set_param(struct swift_params *params, const char *desc); -char parser_get_param_char(const struct swift_params *params, const char *name); -int parser_get_param_int(const struct swift_params *params, const char *name); -float parser_get_param_float(const struct swift_params *params, - const char *name); -double parser_get_param_double(const struct swift_params *params, - const char *name); -void parser_get_param_string(const struct swift_params *params, - const char *name, char *retParam); +char parser_get_param_char(struct swift_params *params, const char *name); +int parser_get_param_int(struct swift_params *params, const char *name); +float parser_get_param_float(struct swift_params *params, const char *name); +double parser_get_param_double(struct swift_params *params, const char *name); +void parser_get_param_string(struct swift_params *params, const char *name, + char *retParam); -char parser_get_opt_param_char(const struct swift_params *params, - const char *name, char def); -int parser_get_opt_param_int(const struct swift_params *params, - const char *name, int def); -float parser_get_opt_param_float(const struct swift_params *params, - const char *name, float def); -double parser_get_opt_param_double(const struct swift_params *params, +char parser_get_opt_param_char(struct swift_params *params, const char *name, + char def); +int parser_get_opt_param_int(struct swift_params *params, const char *name, + int def); +float parser_get_opt_param_float(struct swift_params *params, const char *name, + float def); +double parser_get_opt_param_double(struct swift_params *params, const char *name, double def); -void parser_get_opt_param_string(const struct swift_params *params, - const char *name, char *retParam, - const char *def); +void parser_get_opt_param_string(struct swift_params *params, const char *name, + char *retParam, const char *def); #if defined(HAVE_HDF5) -void parser_write_params_to_hdf5(const struct swift_params *params, hid_t grp); +void 
parser_write_params_to_hdf5(const struct swift_params *params, hid_t grp, + int write_all); #endif /* Dump/restore. */ diff --git a/src/part.c b/src/part.c index 1b696a8cbc135fd2c128b5ad705a0e6e24a2d5c8..050e10e9cdd0ab56adcd34ba3e6f2d35c274f14a 100644 --- a/src/part.c +++ b/src/part.c @@ -259,7 +259,7 @@ MPI_Datatype multipole_mpi_type; /** * @brief Registers MPI particle types. */ -void part_create_mpi_types() { +void part_create_mpi_types(void) { /* This is not the recommended way of doing this. One should define the structure field by field diff --git a/src/part.h b/src/part.h index e6750ea864bf3785df0b4ebe011e0ad741d7b5c7..03ec331cb17b95b0133be568d6e857a44d1eaf73 100644 --- a/src/part.h +++ b/src/part.h @@ -104,7 +104,7 @@ extern MPI_Datatype gpart_mpi_type; extern MPI_Datatype spart_mpi_type; extern MPI_Datatype multipole_mpi_type; -void part_create_mpi_types(); +void part_create_mpi_types(void); #endif #endif /* SWIFT_PART_H */ diff --git a/src/partition.c b/src/partition.c index 8feae3b7a8dce7c78310a3b35762d31b439c69e3..85a51dddf2797e7d203da95abc42639c29f11aa6 100644 --- a/src/partition.c +++ b/src/partition.c @@ -1041,7 +1041,7 @@ void partition_initial_partition(struct partition *initial_partition, */ void partition_init(struct partition *partition, struct repartition *repartition, - const struct swift_params *params, int nr_nodes) { + struct swift_params *params, int nr_nodes) { #ifdef WITH_MPI diff --git a/src/partition.h b/src/partition.h index 3ad479c6b1b343106ac736e2d9c77aa9bc93cf60..ec7d670a43537c4717090b857b6e6ba9186b8f1c 100644 --- a/src/partition.h +++ b/src/partition.h @@ -76,7 +76,7 @@ int partition_space_to_space(double *oldh, double *oldcdim, int *oldnodeID, struct space *s); void partition_init(struct partition *partition, struct repartition *repartition, - const struct swift_params *params, int nr_nodes); + struct swift_params *params, int nr_nodes); /* Dump/restore. */ void partition_store_celllist(struct space *s, struct repartition *reparttype); diff --git a/src/physical_constants.c b/src/physical_constants.c index b1dbeaeecfbf2e056a68b7866766bb07efb5efba..2c0ea6191b20e7786b0e2c55356b6c9decf7b0a4 100644 --- a/src/physical_constants.c +++ b/src/physical_constants.c @@ -38,8 +38,7 @@ * @param params The parsed parameter file. * @param internal_const The physical constants to initialize. */ -void phys_const_init(const struct unit_system *us, - const struct swift_params *params, +void phys_const_init(const struct unit_system *us, struct swift_params *params, struct phys_const *internal_const) { /* Units are declared as {U_M, U_L, U_t, U_I, U_T} */ diff --git a/src/physical_constants.h b/src/physical_constants.h index b0f929632ba8a55a57376975597e444a8344e4fc..606e7eeb584fc670c5c690aa9dfa683330ea3644 100644 --- a/src/physical_constants.h +++ b/src/physical_constants.h @@ -89,8 +89,7 @@ struct phys_const { double const_earth_mass; }; -void phys_const_init(const struct unit_system* us, - const struct swift_params* params, +void phys_const_init(const struct unit_system* us, struct swift_params* params, struct phys_const* internal_const); void phys_const_print(const struct phys_const* internal_const); diff --git a/src/potential.c b/src/potential.c index 1fda6fc8752ff626a5262d7824ea68fd3bc16d46..a313598dae36569f8de9bf15078719886805a2a3 100644 --- a/src/potential.c +++ b/src/potential.c @@ -35,7 +35,7 @@ * @param s The #space we run in. 
* @param potential The external potential properties to initialize */ -void potential_init(const struct swift_params* parameter_file, +void potential_init(struct swift_params* parameter_file, const struct phys_const* phys_const, const struct unit_system* us, const struct space* s, struct external_potential* potential) { diff --git a/src/potential.h b/src/potential.h index 680d4e235fdf7a7666901f34a82f62feda4ae9bb..814b83c69180631db21e392704c0279808a6f03e 100644 --- a/src/potential.h +++ b/src/potential.h @@ -47,7 +47,7 @@ #endif /* Now, some generic functions, defined in the source file */ -void potential_init(const struct swift_params* parameter_file, +void potential_init(struct swift_params* parameter_file, const struct phys_const* phys_const, const struct unit_system* us, const struct space* s, struct external_potential* potential); diff --git a/src/potential/disc_patch/potential.h b/src/potential/disc_patch/potential.h index ab229d009c692db727e8f2341c3c49813f74f2b8..40c747314994cfdc4a38679d747e3351e4fbd4d1 100644 --- a/src/potential/disc_patch/potential.h +++ b/src/potential/disc_patch/potential.h @@ -269,9 +269,9 @@ external_gravity_get_potential_energy( * @param potential The external potential properties to initialize */ static INLINE void potential_init_backend( - const struct swift_params* parameter_file, - const struct phys_const* phys_const, const struct unit_system* us, - const struct space* s, struct external_potential* potential) { + struct swift_params* parameter_file, const struct phys_const* phys_const, + const struct unit_system* us, const struct space* s, + struct external_potential* potential) { potential->surface_density = parser_get_param_double( parameter_file, "DiscPatchPotential:surface_density"); diff --git a/src/potential/isothermal/potential.h b/src/potential/isothermal/potential.h index c974618d7b581884f871863bb83200b8cecee7a5..4267c9becc7251ffe276b69f078e374504c22aab 100644 --- a/src/potential/isothermal/potential.h +++ b/src/potential/isothermal/potential.h @@ -162,9 +162,9 @@ external_gravity_get_potential_energy( * @param potential The external potential properties to initialize */ static INLINE void potential_init_backend( - const struct swift_params* parameter_file, - const struct phys_const* phys_const, const struct unit_system* us, - const struct space* s, struct external_potential* potential) { + struct swift_params* parameter_file, const struct phys_const* phys_const, + const struct unit_system* us, const struct space* s, + struct external_potential* potential) { potential->x = s->dim[0] / 2. + diff --git a/src/potential/none/potential.h b/src/potential/none/potential.h index a8550cad702891ff211539c95c42eca57418c464..2303f2531d059660a063a92be999405f05e9e6aa 100644 --- a/src/potential/none/potential.h +++ b/src/potential/none/potential.h @@ -98,9 +98,9 @@ external_gravity_get_potential_energy( * @param potential The external potential properties to initialize */ static INLINE void potential_init_backend( - const struct swift_params* parameter_file, - const struct phys_const* phys_const, const struct unit_system* us, - const struct space* s, struct external_potential* potential) {} + struct swift_params* parameter_file, const struct phys_const* phys_const, + const struct unit_system* us, const struct space* s, + struct external_potential* potential) {} /** * @brief Prints the properties of the external potential to stdout. 
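The parser hunks above are the heart of this change: every parser_get_*() call now flags the parameter it reads as used, optional parameters missing from the file are added with their default value and flagged as well, and parser_write_params_to_file() / parser_write_params_to_hdf5() take a flag selecting whether the used or the unused entries are written. A minimal caller-side sketch, not part of the patch, assuming a hypothetical input file example.yml and reusing the used_parameters.yml / unused_parameters.yml names added to .gitignore:

#include "parser.h"

int main(void) {

  struct swift_params params;

  /* Parse the YAML parameter file; nothing is marked as used yet. */
  parser_read_file("example.yml", &params);

  /* Reading a parameter flags it as used. If an optional parameter is
   * absent, it is added to the structure with its default value and
   * flagged as used too. */
  const double eta =
      parser_get_opt_param_double(&params, "SPH:resolution_eta", 1.2348);

  /* Non-zero flag: write only the parameters that were actually read. */
  parser_write_params_to_file(&params, "used_parameters.yml", 1);

  /* Zero flag: write the parameters that were never touched. The same
   * split feeds the new /UnusedParameters group in the snapshots. */
  parser_write_params_to_file(&params, "unused_parameters.yml", 0);

  return (eta > 0.) ? 0 : 1;
}

This is also why the const qualifier on struct swift_params * is dropped throughout the patch: recording that a parameter was read means writing to the structure.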
diff --git a/src/potential/point_mass/potential.h b/src/potential/point_mass/potential.h index adea9d912056fd07e134fb98a7603030e897ec7a..db875842d51f6c0a28dd1308fd7dd1728e746ce4 100644 --- a/src/potential/point_mass/potential.h +++ b/src/potential/point_mass/potential.h @@ -152,9 +152,9 @@ external_gravity_get_potential_energy( * @param potential The external potential properties to initialize */ static INLINE void potential_init_backend( - const struct swift_params* parameter_file, - const struct phys_const* phys_const, const struct unit_system* us, - const struct space* s, struct external_potential* potential) { + struct swift_params* parameter_file, const struct phys_const* phys_const, + const struct unit_system* us, const struct space* s, + struct external_potential* potential) { potential->x = parser_get_param_double(parameter_file, "PointMassPotential:position_x"); diff --git a/src/potential/point_mass_ring/potential.h b/src/potential/point_mass_ring/potential.h index ebf047ea7c1f946536300f976713893e66295c59..551efe32521a5c5ee8068ba409dbb81547103e8f 100644 --- a/src/potential/point_mass_ring/potential.h +++ b/src/potential/point_mass_ring/potential.h @@ -192,9 +192,9 @@ external_gravity_get_potential_energy( * @param potential The external potential properties to initialize */ static INLINE void potential_init_backend( - const struct swift_params* parameter_file, - const struct phys_const* phys_const, const struct unit_system* us, - const struct space* s, struct external_potential* potential) { + struct swift_params* parameter_file, const struct phys_const* phys_const, + const struct unit_system* us, const struct space* s, + struct external_potential* potential) { potential->x = parser_get_param_double(parameter_file, "PointMassPotential:position_x"); diff --git a/src/potential/point_mass_softened/potential.h b/src/potential/point_mass_softened/potential.h index 83a79ea3cddbff37fdd70d58d70afcaf46f7bc0e..80959ec923cbedcbea5fba3293c8ae4f94f65679 100644 --- a/src/potential/point_mass_softened/potential.h +++ b/src/potential/point_mass_softened/potential.h @@ -179,9 +179,9 @@ external_gravity_get_potential_energy( * @param potential The external potential properties to initialize */ static INLINE void potential_init_backend( - const struct swift_params* parameter_file, - const struct phys_const* phys_const, const struct unit_system* us, - const struct space* s, struct external_potential* potential) { + struct swift_params* parameter_file, const struct phys_const* phys_const, + const struct unit_system* us, const struct space* s, + struct external_potential* potential) { potential->x = parser_get_param_double(parameter_file, "PointMassPotential:position_x"); diff --git a/src/potential/sine_wave/potential.h b/src/potential/sine_wave/potential.h index 1a4ee8aae8238c5db4c99eacb9e96bd967bcc7c4..8a1786baaf9ed0b0683ea16fdefee997ecb4eceb 100644 --- a/src/potential/sine_wave/potential.h +++ b/src/potential/sine_wave/potential.h @@ -117,9 +117,9 @@ external_gravity_get_potential_energy( * @param potential The external potential properties to initialize */ static INLINE void potential_init_backend( - const struct swift_params* parameter_file, - const struct phys_const* phys_const, const struct unit_system* us, - const struct space* s, struct external_potential* potential) { + struct swift_params* parameter_file, const struct phys_const* phys_const, + const struct unit_system* us, const struct space* s, + struct external_potential* potential) { potential->amplitude = 
parser_get_param_double(parameter_file, "SineWavePotential:amplitude"); diff --git a/src/profiler.c b/src/profiler.c index 1dd1a41cc336942d17790e96c8f883d65e54a51f..58fd279d312d3c752d65ccaceab803ace66fddac 100644 --- a/src/profiler.c +++ b/src/profiler.c @@ -146,8 +146,8 @@ void profiler_write_all_timing_info_headers(const struct engine *e, void profiler_write_timing_info(const struct engine *e, ticks time, FILE *file) { - fprintf(file, " %6d %14e %14e %10zu %10zu %10zu %21.3f\n", e->step, e->time, - e->time_step, e->updates, e->g_updates, e->s_updates, + fprintf(file, " %6d %14e %14e %10lld %10lld %10lld %21.3f\n", e->step, + e->time, e->time_step, e->updates, e->g_updates, e->s_updates, clocks_from_ticks(time)); fflush(file); } diff --git a/src/runner.c b/src/runner.c index 5a382a916abda9dddd45dcb5e703578457f0548a..96d68745a5a134b9a6c516f277a768364c9339b0 100644 --- a/src/runner.c +++ b/src/runner.c @@ -606,8 +606,10 @@ void runner_do_extra_ghost(struct runner *r, struct cell *c, int timer) { #ifdef EXTRA_HYDRO_LOOP struct part *restrict parts = c->parts; + struct xpart *restrict xparts = c->xparts; const int count = c->count; const struct engine *e = r->e; + const struct cosmology *cosmo = e->cosmology; TIMER_TIC; @@ -625,11 +627,23 @@ void runner_do_extra_ghost(struct runner *r, struct cell *c, int timer) { /* Get a direct pointer on the part. */ struct part *restrict p = &parts[i]; + struct xpart *restrict xp = &xparts[i]; if (part_is_active(p, e)) { - /* Get ready for a force calculation */ + /* Finish the gradient calculation */ hydro_end_gradient(p); + + /* As of here, particle force variables will be set. */ + + /* Compute variables required for the force loop */ + hydro_prepare_force(p, xp, cosmo); + + /* The particle force values are now set. Do _NOT_ + try to read any particle density variables! */ + + /* Prepare the particle for the force loop over neighbours */ + hydro_reset_acceleration(p); } } } @@ -710,9 +724,13 @@ void runner_do_ghost(struct runner *r, struct cell *c, int timer) { const float h_old_dim = pow_dimension(h_old); const float h_old_dim_minus_one = pow_dimension_minus_one(h_old); float h_new; + int has_no_neighbours = 0; if (p->density.wcount == 0.f) { /* No neighbours case */ + /* Flag that there were no neighbours */ + has_no_neighbours = 1; + /* Double h and try again */ h_new = 2.f * h_old; } else { @@ -729,7 +747,8 @@ void runner_do_ghost(struct runner *r, struct cell *c, int timer) { p->density.wcount_dh * h_old_dim + hydro_dimension * p->density.wcount * h_old_dim_minus_one; - h_new = h_old - f / f_prime; + /* Avoid floating point exception from f_prime = 0 */ + h_new = h_old - f / (f_prime + FLT_MIN); #ifdef SWIFT_DEBUG_CHECKS if ((f > 0.f && h_new > h_old) || (f < 0.f && h_new < h_old)) @@ -768,13 +787,30 @@ void runner_do_ghost(struct runner *r, struct cell *c, int timer) { p->h = hydro_h_max; /* Do some damage control if no neighbours at all were found */ - if (p->density.wcount == kernel_root * kernel_norm) + if (has_no_neighbours) { hydro_part_has_no_neighbours(p, xp, cosmo); + chemistry_part_has_no_neighbours(p, xp, chemistry, cosmo); + } } } - /* We now have a particle whose smoothing length has converged */ +/* We now have a particle whose smoothing length has converged */ + +#ifdef EXTRA_HYDRO_LOOP + + /* As of here, particle gradient variables will be set. */ + /* The force variables are set in the extra ghost. 
*/ + + /* Compute variables required for the gradient loop */ + hydro_prepare_gradient(p, xp, cosmo); + + /* The particle gradient values are now set. Do _NOT_ + try to read any particle density variables! */ + + /* Prepare the particle for the gradient loop over neighbours */ + hydro_reset_gradient(p); +#else /* As of here, particle force variables will be set. */ /* Compute variables required for the force loop */ @@ -785,6 +821,8 @@ void runner_do_ghost(struct runner *r, struct cell *c, int timer) { /* Prepare the particle for the force loop over neighbours */ hydro_reset_acceleration(p); + +#endif /* EXTRA_HYDRO_LOOP */ } /* We now need to treat the particles whose smoothing length had not @@ -1630,13 +1668,20 @@ void runner_do_end_force(struct runner *r, struct cell *c, int timer) { gravity_end_force(gp, const_G); #ifdef SWIFT_NO_GRAVITY_BELOW_ID + + /* Get the ID of the gpart */ + long long id = 0; + if (gp->type == swift_type_gas) + id = e->s->parts[-gp->id_or_neg_offset].id; + else if (gp->type == swift_type_star) + id = e->s->sparts[-gp->id_or_neg_offset].id; + else if (gp->type == swift_type_black_hole) + error("Unexisting type"); + else + id = gp->id_or_neg_offset; + /* Cancel gravity forces of these particles */ - if ((gp->type == swift_type_dark_matter && - gp->id_or_neg_offset < SWIFT_NO_GRAVITY_BELOW_ID) || - (gp->type == swift_type_gas && - parts[-gp->id_or_neg_offset].id < SWIFT_NO_GRAVITY_BELOW_ID) || - (gp->type == swift_type_star && - sparts[-gp->id_or_neg_offset].id < SWIFT_NO_GRAVITY_BELOW_ID)) { + if (id < SWIFT_NO_GRAVITY_BELOW_ID) { /* Don't move ! */ gp->a_grav[0] = 0.f; @@ -1653,14 +1698,27 @@ void runner_do_end_force(struct runner *r, struct cell *c, int timer) { /* Check that this gpart has interacted with all the other * particles (via direct or multipoles) in the box */ - if (gp->num_interacted != e->total_nr_gparts) + if (gp->num_interacted != e->total_nr_gparts) { + + /* Get the ID of the gpart */ + long long my_id = 0; + if (gp->type == swift_type_gas) + my_id = e->s->parts[-gp->id_or_neg_offset].id; + else if (gp->type == swift_type_star) + my_id = e->s->sparts[-gp->id_or_neg_offset].id; + else if (gp->type == swift_type_black_hole) + error("Unexisting type"); + else + my_id = gp->id_or_neg_offset; + error( "g-particle (id=%lld, type=%s) did not interact " - "gravitationally " - "with all other gparts gp->num_interacted=%lld, " - "total_gparts=%lld (local num_gparts=%zd)", - gp->id_or_neg_offset, part_type_names[gp->type], - gp->num_interacted, e->total_nr_gparts, e->s->nr_gparts); + "gravitationally with all other gparts " + "gp->num_interacted=%lld, total_gparts=%lld (local " + "num_gparts=%zd)", + my_id, part_type_names[gp->type], gp->num_interacted, + e->total_nr_gparts, e->s->nr_gparts); + } } #endif } diff --git a/src/serial_io.c b/src/serial_io.c index 9403caad7670b9af369f4b3598b8a05cf2d0d9e9..4074f9bd54754f4b50a0e62a6a54089efb4d5bfb 100644 --- a/src/serial_io.c +++ b/src/serial_io.c @@ -49,6 +49,7 @@ #include "io_properties.h" #include "kernel_hydro.h" #include "part.h" +#include "part_type.h" #include "stars_io.h" #include "units.h" #include "xmf.h" @@ -233,6 +234,11 @@ void prepareArray(const struct engine* e, hid_t grp, char* fileName, error("Error while setting chunk size (%llu, %llu) for field '%s'.", chunk_shape[0], chunk_shape[1], props.name); + /* Impose check-sum to verify data corruption */ + h_err = H5Pset_fletcher32(h_prop); + if (h_err < 0) + error("Error while setting checksum options for field '%s'.", props.name); + /* Impose data 
compression */ if (e->snapshot_compression > 0) { h_err = H5Pset_shuffle(h_prop); @@ -727,6 +733,7 @@ void write_output_serial(struct engine* e, const char* baseName, struct gpart* dmparts = NULL; const struct spart* sparts = e->s->sparts; const struct cooling_function_data* cooling = e->cooling_func; + struct swift_params* params = e->parameter_file; FILE* xmfFile = 0; /* Number of unassociated gparts */ @@ -872,7 +879,14 @@ void write_output_serial(struct engine* e, const char* baseName, h_grp = H5Gcreate(h_file, "/Parameters", H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT); if (h_grp < 0) error("Error while creating parameters group"); - parser_write_params_to_hdf5(e->parameter_file, h_grp); + parser_write_params_to_hdf5(e->parameter_file, h_grp, 1); + H5Gclose(h_grp); + + /* Print the runtime unused parameters */ + h_grp = H5Gcreate(h_file, "/UnusedParameters", H5P_DEFAULT, H5P_DEFAULT, + H5P_DEFAULT); + if (h_grp < 0) error("Error while creating parameters group"); + parser_write_params_to_hdf5(e->parameter_file, h_grp, 0); H5Gclose(h_grp); /* Print the system of Units used in the spashot */ @@ -1005,11 +1019,20 @@ void write_output_serial(struct engine* e, const char* baseName, error("Particle Type %d not yet supported. Aborting", ptype); } - /* Write everything */ - for (int i = 0; i < num_fields; ++i) - writeArray(e, h_grp, fileName, xmfFile, partTypeGroupName, list[i], - Nparticles, N_total[ptype], mpi_rank, offset[ptype], - internal_units, snapshot_units); + /* Write everything that is not cancelled */ + for (int i = 0; i < num_fields; ++i) { + + /* Did the user cancel this field? */ + char field[PARSER_MAX_LINE_SIZE]; + sprintf(field, "SelectOutput:%s_%s", list[i].name, + part_type_names[ptype]); + int should_write = parser_get_opt_param_int(params, field, 1); + + if (should_write) + writeArray(e, h_grp, fileName, xmfFile, partTypeGroupName, list[i], + Nparticles, N_total[ptype], mpi_rank, offset[ptype], + internal_units, snapshot_units); + } /* Free temporary array */ if (dmparts) { diff --git a/src/single_io.c b/src/single_io.c index d7afdd4a886ccde9701e7665f978e4e2ffa907aa..975487a1d954e0144f4675f4b90b7cd3b70f3a13 100644 --- a/src/single_io.c +++ b/src/single_io.c @@ -48,6 +48,7 @@ #include "io_properties.h" #include "kernel_hydro.h" #include "part.h" +#include "part_type.h" #include "stars_io.h" #include "units.h" #include "xmf.h" @@ -239,6 +240,11 @@ void writeArray(const struct engine* e, hid_t grp, char* fileName, error("Error while setting chunk size (%llu, %llu) for field '%s'.", chunk_shape[0], chunk_shape[1], props.name); + /* Impose check-sum to verify data corruption */ + h_err = H5Pset_fletcher32(h_prop); + if (h_err < 0) + error("Error while setting checksum options for field '%s'.", props.name); + /* Impose data compression */ if (e->snapshot_compression > 0) { h_err = H5Pset_shuffle(h_prop); @@ -315,8 +321,9 @@ void writeArray(const struct engine* e, hid_t grp, char* fileName, * @todo Read snapshots distributed in more than one file. 
* */ -void read_ic_single(char* fileName, const struct unit_system* internal_units, - double dim[3], struct part** parts, struct gpart** gparts, +void read_ic_single(const char* fileName, + const struct unit_system* internal_units, double dim[3], + struct part** parts, struct gpart** gparts, struct spart** sparts, size_t* Ngas, size_t* Ngparts, size_t* Nstars, int* periodic, int* flag_entropy, int with_hydro, int with_gravity, int with_stars, @@ -593,6 +600,7 @@ void write_output_single(struct engine* e, const char* baseName, struct gpart* dmparts = NULL; const struct spart* sparts = e->s->sparts; const struct cooling_function_data* cooling = e->cooling_func; + struct swift_params* params = e->parameter_file; /* Number of unassociated gparts */ const size_t Ndm = Ntot > 0 ? Ntot - (Ngas + Nstars) : 0; @@ -724,7 +732,14 @@ void write_output_single(struct engine* e, const char* baseName, h_grp = H5Gcreate(h_file, "/Parameters", H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT); if (h_grp < 0) error("Error while creating parameters group"); - parser_write_params_to_hdf5(e->parameter_file, h_grp); + parser_write_params_to_hdf5(e->parameter_file, h_grp, 1); + H5Gclose(h_grp); + + /* Print the runtime unused parameters */ + h_grp = H5Gcreate(h_file, "/UnusedParameters", H5P_DEFAULT, H5P_DEFAULT, + H5P_DEFAULT); + if (h_grp < 0) error("Error while creating parameters group"); + parser_write_params_to_hdf5(e->parameter_file, h_grp, 0); H5Gclose(h_grp); /* Print the system of Units used in the spashot */ @@ -823,10 +838,19 @@ void write_output_single(struct engine* e, const char* baseName, error("Particle Type %d not yet supported. Aborting", ptype); } - /* Write everything */ - for (int i = 0; i < num_fields; ++i) - writeArray(e, h_grp, fileName, xmfFile, partTypeGroupName, list[i], N, - internal_units, snapshot_units); + /* Write everything that is not cancelled */ + for (int i = 0; i < num_fields; ++i) { + + /* Did the user cancel this field? 
*/ + char field[PARSER_MAX_LINE_SIZE]; + sprintf(field, "SelectOutput:%s_%s", list[i].name, + part_type_names[ptype]); + int should_write = parser_get_opt_param_int(params, field, 1); + + if (should_write) + writeArray(e, h_grp, fileName, xmfFile, partTypeGroupName, list[i], N, + internal_units, snapshot_units); + } /* Free temporary array */ if (dmparts) { diff --git a/src/single_io.h b/src/single_io.h index 26b849716e3e018d9a10c5c5c513ad26c7ccb274..aa1a3b7de82e6f882b3a59064eb351e7c65c6aab 100644 --- a/src/single_io.h +++ b/src/single_io.h @@ -29,8 +29,9 @@ #include "part.h" #include "units.h" -void read_ic_single(char* fileName, const struct unit_system* internal_units, - double dim[3], struct part** parts, struct gpart** gparts, +void read_ic_single(const char* fileName, + const struct unit_system* internal_units, double dim[3], + struct part** parts, struct gpart** gparts, struct spart** sparts, size_t* Ngas, size_t* Ndm, size_t* Nstars, int* periodic, int* flag_entropy, int with_hydro, int with_gravity, int with_stars, diff --git a/src/sourceterms.c b/src/sourceterms.c index 994658740a50a764edd3988ef7d6b78e00546f8f..993045e61503e4e78b855816921bc057706b76d1 100644 --- a/src/sourceterms.c +++ b/src/sourceterms.c @@ -36,7 +36,7 @@ * @param us The current internal system of units * @param source the structure that has all the source term properties */ -void sourceterms_init(const struct swift_params *parameter_file, +void sourceterms_init(struct swift_params *parameter_file, struct unit_system *us, struct sourceterms *source) { #ifdef SOURCETERMS_SN_FEEDBACK supernova_init(parameter_file, us, source); diff --git a/src/sourceterms.h b/src/sourceterms.h index a5d0c3c727d70d50fb3388d5e5e3cc3d6362276f..407d2f19362531a3fd3537889593c484319919b5 100644 --- a/src/sourceterms.h +++ b/src/sourceterms.h @@ -41,7 +41,7 @@ struct sourceterms { #include "sourceterms/sn_feedback/sn_feedback.h" #endif -void sourceterms_init(const struct swift_params* parameter_file, +void sourceterms_init(struct swift_params* parameter_file, struct unit_system* us, struct sourceterms* source); void sourceterms_print(struct sourceterms* source); diff --git a/src/sourceterms/sn_feedback/sn_feedback.h b/src/sourceterms/sn_feedback/sn_feedback.h index f2f224ce871ebb768c318aef42a690861dd974df..411673c37e82ff89d906425d1cadaa135c46a38d 100644 --- a/src/sourceterms/sn_feedback/sn_feedback.h +++ b/src/sourceterms/sn_feedback/sn_feedback.h @@ -171,7 +171,7 @@ __attribute__((always_inline)) INLINE static void supernova_feedback_apply( */ __attribute__((always_inline)) INLINE static void supernova_init( - const struct swift_params* parameter_file, struct unit_system* us, + struct swift_params* parameter_file, struct unit_system* us, struct sourceterms* source) { source->supernova.time = parser_get_param_double(parameter_file, "SN:time"); source->supernova.energy = diff --git a/src/space.c b/src/space.c index 457d04f6066ea9ea1cd75e9a5eb5db9265207505..4a9f3bacf6c2bfbc07bc66034c49bbf2e9a4b434 100644 --- a/src/space.c +++ b/src/space.c @@ -2361,7 +2361,7 @@ void space_first_init_parts_mapper(void *restrict map_data, int count, const struct cosmology *cosmo = s->e->cosmology; const struct phys_const *phys_const = s->e->physical_constants; const struct unit_system *us = s->e->internal_units; - const float a_factor_vel = cosmo->a * cosmo->a; + const float a_factor_vel = cosmo->a; const struct hydro_props *hydro_props = s->e->hydro_properties; const float u_init = hydro_props->initial_internal_energy; @@ -2436,7 +2436,7 @@ void 
space_first_init_gparts_mapper(void *restrict map_data, int count, const struct space *restrict s = (struct space *)extra_data; const struct cosmology *cosmo = s->e->cosmology; - const float a_factor_vel = cosmo->a * cosmo->a; + const float a_factor_vel = cosmo->a; const struct gravity_props *grav_props = s->e->gravity_properties; for (int k = 0; k < count; k++) { @@ -2493,7 +2493,7 @@ void space_first_init_sparts_mapper(void *restrict map_data, int count, #endif const struct cosmology *cosmo = s->e->cosmology; - const float a_factor_vel = cosmo->a * cosmo->a; + const float a_factor_vel = cosmo->a; for (int k = 0; k < count; k++) { /* Convert velocities to internal units */ @@ -2648,7 +2648,7 @@ void space_convert_quantities(struct space *s, int verbose) { * parts with a cutoff below half the cell width are then split * recursively. */ -void space_init(struct space *s, const struct swift_params *params, +void space_init(struct space *s, struct swift_params *params, const struct cosmology *cosmo, double dim[3], struct part *parts, struct gpart *gparts, struct spart *sparts, size_t Npart, size_t Ngpart, size_t Nspart, int periodic, @@ -3122,7 +3122,7 @@ void space_check_cosmology(struct space *s, const struct cosmology *cosmo, if (fabs(Omega_m - cosmo->Omega_m) > 1e-3) error( "The matter content of the simulation does not match the cosmology " - "in the parameter file comso.Omega_m=%e Omega_m=%e", + "in the parameter file cosmo.Omega_m=%e Omega_m=%e", cosmo->Omega_m, Omega_m); } } diff --git a/src/space.h b/src/space.h index 546b06a609d30e5b9f40bf8ee9f92f03faf83576..c6b695a06042e4417ea419b2b86884815b903809 100644 --- a/src/space.h +++ b/src/space.h @@ -209,7 +209,7 @@ void space_gparts_sort(struct gpart *gparts, struct part *parts, void space_sparts_sort(struct spart *sparts, int *ind, int *counts, int num_bins, ptrdiff_t sparts_offset); void space_getcells(struct space *s, int nr_cells, struct cell **cells); -void space_init(struct space *s, const struct swift_params *params, +void space_init(struct space *s, struct swift_params *params, const struct cosmology *cosmo, double dim[3], struct part *parts, struct gpart *gparts, struct spart *sparts, size_t Npart, size_t Ngpart, size_t Nspart, int periodic, @@ -243,9 +243,9 @@ void space_gparts_get_cell_index(struct space *s, int *gind, int *cell_counts, void space_sparts_get_cell_index(struct space *s, int *sind, int *cell_counts, struct cell *cells, int verbose); void space_synchronize_particle_positions(struct space *s); -void space_do_parts_sort(); -void space_do_gparts_sort(); -void space_do_sparts_sort(); +void space_do_parts_sort(void); +void space_do_gparts_sort(void); +void space_do_sparts_sort(void); void space_first_init_parts(struct space *s, int verbose); void space_first_init_gparts(struct space *s, int verbose); void space_first_init_sparts(struct space *s, int verbose); diff --git a/src/stars/Default/star_io.h b/src/stars/Default/star_io.h index c3dc31096383533e1e15fa65615d2c9aac0f43e3..7ad29f0a935c002b1337c2a75d6f987c05c9bb43 100644 --- a/src/stars/Default/star_io.h +++ b/src/stars/Default/star_io.h @@ -28,8 +28,8 @@ * @param list The list of i/o properties to read. * @param num_fields The number of i/o fields to read. 
*/ -void star_read_particles(struct spart* sparts, struct io_props* list, - int* num_fields) { +INLINE static void star_read_particles(struct spart* sparts, + struct io_props* list, int* num_fields) { /* Say how much we want to read */ *num_fields = 4; @@ -52,8 +52,9 @@ void star_read_particles(struct spart* sparts, struct io_props* list, * @param list The list of i/o properties to write. * @param num_fields The number of i/o fields to write. */ -void star_write_particles(const struct spart* sparts, struct io_props* list, - int* num_fields) { +INLINE static void star_write_particles(const struct spart* sparts, + struct io_props* list, + int* num_fields) { /* Say how much we want to read */ *num_fields = 4; diff --git a/src/statistics.c b/src/statistics.c index 62a4f9a1420e88712e8fb527fc4d3db7f4b0abc0..bdca6cfb4ef84bb64aa4776bfc600b0727e0d606 100644 --- a/src/statistics.c +++ b/src/statistics.c @@ -396,7 +396,7 @@ void stats_add_MPI(void *in, void *inout, int *len, MPI_Datatype *datatype) { /** * @brief Registers MPI #statistics type and reduction function. */ -void stats_create_MPI_type() { +void stats_create_MPI_type(void) { /* This is not the recommended way of doing this. One should define the structure field by field diff --git a/src/statistics.h b/src/statistics.h index e8cddda1e855d156070b8a000cb15b515f127740..adc9f5b6a24a093419b7dd644404a68ef736a685 100644 --- a/src/statistics.h +++ b/src/statistics.h @@ -77,7 +77,7 @@ extern MPI_Datatype statistics_mpi_type; extern MPI_Op statistics_mpi_reduce_op; void stats_add_MPI(void* in, void* out, int* len, MPI_Datatype* datatype); -void stats_create_MPI_type(); +void stats_create_MPI_type(void); #endif #endif /* SWIFT_STATISTICS_H */ diff --git a/src/swift.h b/src/swift.h index 7691720942f32d29d3269ccdd7adbe8db32280bf..17f7c5c7e4e56ed1650d6cdd078060244c67b643 100644 --- a/src/swift.h +++ b/src/swift.h @@ -29,8 +29,10 @@ #include "cell.h" #include "chemistry.h" #include "clocks.h" +#include "common_io.h" #include "const.h" #include "cooling.h" +#include "cooling_struct.h" #include "cosmology.h" #include "cycle.h" #include "debug.h" diff --git a/src/timers.c b/src/timers.c index fec111dd939528bd0648609d8a1f5f83e595ec02..e3beda71310b6a177833db73e207179e5a4b5468 100644 --- a/src/timers.c +++ b/src/timers.c @@ -110,7 +110,7 @@ void timers_reset(unsigned long long mask) { * @brief Re-set all the timers. * */ -void timers_reset_all() { timers_reset(timers_mask_all); } +void timers_reset_all(void) { timers_reset(timers_mask_all); } /** * @brief Outputs all the timers to the timers dump file. @@ -145,4 +145,4 @@ void timers_open_file(int rank) { /** * @brief Close the file containing the timer info. */ -void timers_close_file() { fclose(timers_file); } +void timers_close_file(void) { fclose(timers_file); } diff --git a/src/timers.h b/src/timers.h index 38ede8251eb5d640282e728e17d9330956a1cba8..82132865769604a2ac2e7be3541e2f2f4164f6c3 100644 --- a/src/timers.h +++ b/src/timers.h @@ -119,10 +119,10 @@ INLINE static ticks timers_toc(unsigned int t, ticks tic) { #endif /* Function prototypes. 
*/ -void timers_reset_all(); +void timers_reset_all(void); void timers_reset(unsigned long long mask); void timers_open_file(int rank); -void timers_close_file(); +void timers_close_file(void); void timers_print(int step); #endif /* SWIFT_TIMERS_H */ diff --git a/src/tools.h b/src/tools.h index bb141101a3bf6fad38a83a15ea7f6bb5de86e9f8..a54510000d3c2843e8d60047752b19a46bd502d9 100644 --- a/src/tools.h +++ b/src/tools.h @@ -52,6 +52,6 @@ int compare_values(double a, double b, double threshold, double *absDiff, double *absSum, double *relDiff); int compare_particles(struct part a, struct part b, double threshold); -long get_maxrss(); +long get_maxrss(void); #endif /* SWIFT_TOOL_H */ diff --git a/src/units.c b/src/units.c index ae33b0c263dc014dbaf5406a8dcdc8ed254d26dd..48f0a3aee6e348b5df24ac41b308aebf6f70224a 100644 --- a/src/units.c +++ b/src/units.c @@ -77,8 +77,7 @@ void units_init(struct unit_system* us, double U_M_in_cgs, double U_L_in_cgs, * @param params The parsed parameter file. * @param category The section of the parameter file to read from. */ -void units_init_from_params(struct unit_system* us, - const struct swift_params* params, +void units_init_from_params(struct unit_system* us, struct swift_params* params, const char* category) { char buffer[200]; @@ -104,9 +103,8 @@ void units_init_from_params(struct unit_system* us, * @param category The section of the parameter file to read from. * @param def The default unit system to copy from if required. */ -void units_init_default(struct unit_system* us, - const struct swift_params* params, const char* category, - const struct unit_system* def) { +void units_init_default(struct unit_system* us, struct swift_params* params, + const char* category, const struct unit_system* def) { if (!def) error("Default unit_system not allocated"); diff --git a/src/units.h b/src/units.h index a6169d0a2f0e79156428f21d6096d66da7782837..829a1ce542500308cbc64a2463545fbd23921eef 100644 --- a/src/units.h +++ b/src/units.h @@ -98,11 +98,10 @@ enum unit_conversion_factor { void units_init_cgs(struct unit_system*); void units_init(struct unit_system* us, double U_M_in_cgs, double U_L_in_cgs, double U_t_in_cgs, double U_C_in_cgs, double U_T_in_cgs); -void units_init_from_params(struct unit_system*, const struct swift_params*, +void units_init_from_params(struct unit_system*, struct swift_params*, const char* category); -void units_init_default(struct unit_system* us, - const struct swift_params* params, const char* category, - const struct unit_system* def); +void units_init_default(struct unit_system* us, struct swift_params* params, + const char* category, const struct unit_system* def); int units_are_equal(const struct unit_system* a, const struct unit_system* b); diff --git a/src/vector.h b/src/vector.h index 9048e273759ae0c0978c8ddbf26a810d4761f464..a1ecddc6ed68ef659759665f15f25aa7e32dc908 100644 --- a/src/vector.h +++ b/src/vector.h @@ -493,7 +493,7 @@ __attribute__((always_inline)) INLINE vector vector_set1(const float x) { * @return temp set #vector. * @return A #vector filled with zeros. 
*/ -__attribute__((always_inline)) INLINE vector vector_setzero() { +__attribute__((always_inline)) INLINE vector vector_setzero(void) { vector temp; temp.v = vec_setzero(); diff --git a/src/version.c b/src/version.c index 54749721de96bde010f56965152c536b08672230..69f70b9aec3549c061c162f2ce183f8fafcc2e9f 100644 --- a/src/version.c +++ b/src/version.c @@ -146,7 +146,7 @@ const char *configuration_options(void) { static int initialised = 0; static const char *config = SWIFT_CONFIG_FLAGS; if (!initialised) { - snprintf(buf, 1024, "'%s'", config); + snprintf(buf, 1024, "'%.1021s'", config); initialised = 1; } return buf; @@ -162,7 +162,7 @@ const char *compilation_cflags(void) { static int initialised = 0; static const char *cflags = SWIFT_CFLAGS; if (!initialised) { - snprintf(buf, 1024, "'%s'", cflags); + snprintf(buf, 1024, "'%.1021s'", cflags); initialised = 1; } return buf; @@ -272,12 +272,12 @@ const char *mpi_version(void) { #else /* Use autoconf guessed value. */ static char lib_version[60] = {0}; - snprintf(lib_version, 60, "%s", SWIFT_MPI_LIBRARY); + snprintf(lib_version, 60, "%.60s", SWIFT_MPI_LIBRARY); #endif /* Numeric version. */ MPI_Get_version(&std_version, &std_subversion); - snprintf(version, 80, "%s (MPI std v%i.%i)", lib_version, std_version, + snprintf(version, 80, "%.60s (MPI std v%i.%i)", lib_version, std_version, std_subversion); #else sprintf(version, "Code was not compiled with MPI support"); @@ -345,7 +345,7 @@ const char *libgsl_version(void) { static char version[256] = {0}; #if defined(HAVE_LIBGSL) - sprintf(version, "%s", gsl_version); + sprintf(version, "%.255s", gsl_version); #else sprintf(version, "Unknown version"); #endif @@ -368,6 +368,26 @@ const char *thread_barrier_version(void) { return version; } +/** + * @brief return the allocator library used in SWIFT. 
+ * + * @result description of the allocation library + */ +const char *allocator_version(void) { + + static char version[256] = {0}; +#if defined(HAVE_TBBMALLOC) + sprintf(version, "TBB malloc"); +#elif defined(HAVE_TCMALLOC) + sprintf(version, "tc-malloc"); +#elif defined(HAVE_JEMALLOC) + sprintf(version, "je-malloc"); +#else + sprintf(version, "Compiler version (probably glibc)"); +#endif + return version; +} + /** * @brief Prints a greeting message to the standard output containing code * version and revision number diff --git a/src/version.h b/src/version.h index 3163f242c50e56c64cc709b13dfe926f93672a00..44119b6a3bbdf57c3f0195bae5ff329d05c61fd5 100644 --- a/src/version.h +++ b/src/version.h @@ -36,6 +36,7 @@ const char* hdf5_version(void); const char* fftw3_version(void); const char* libgsl_version(void); const char* thread_barrier_version(void); +const char* allocator_version(void); void greetings(void); #endif /* SWIFT_VERSION_H */ diff --git a/tests/Makefile.am b/tests/Makefile.am index 891eef3f518f83c17b66623e3dac1832512d31f3..e8f2a4f0ae39254c28e981fef5e36866e56395d8 100644 --- a/tests/Makefile.am +++ b/tests/Makefile.am @@ -17,7 +17,7 @@ # Add the source directory and the non-standard paths to the included library headers to CFLAGS AM_CFLAGS = -I$(top_srcdir)/src $(HDF5_CPPFLAGS) $(GSL_INCS) $(FFTW_INCS) -AM_LDFLAGS = ../src/.libs/libswiftsim.a $(HDF5_LDFLAGS) $(HDF5_LIBS) $(FFTW_LIBS) $(GRACKLE_LIBS) $(GSL_LIBS) $(PROFILER_LIBS) +AM_LDFLAGS = ../src/.libs/libswiftsim.a $(HDF5_LDFLAGS) $(HDF5_LIBS) $(FFTW_LIBS) $(TCMALLOC_LIBS) $(JEMALLOC_LIBS) $(TBBMALLOC_LIBS) $(GRACKLE_LIBS) $(GSL_LIBS) $(PROFILER_LIBS) # List of programs and scripts to run in the test suite TESTS = testGreetings testMaths testReading.sh testSingle testKernel testSymmetry \ @@ -27,7 +27,7 @@ TESTS = testGreetings testMaths testReading.sh testSingle testKernel testSymmetr testMatrixInversion testThreadpool testDump testLogger testInteractions.sh \ testVoronoi1D testVoronoi2D testVoronoi3D testGravityDerivatives \ testPeriodicBC.sh testPeriodicBCPerturbed.sh testPotentialSelf \ - testPotentialPair testEOS testUtilities + testPotentialPair testEOS testUtilities testSelectOutput.sh # List of test programs to compile check_PROGRAMS = testGreetings testReading testSingle testTimeIntegration \ @@ -37,7 +37,8 @@ check_PROGRAMS = testGreetings testReading testSingle testTimeIntegration \ testAdiabaticIndex testRiemannExact testRiemannTRRS \ testRiemannHLLC testMatrixInversion testDump testLogger \ testVoronoi1D testVoronoi2D testVoronoi3D testPeriodicBC \ - testGravityDerivatives testPotentialSelf testPotentialPair testEOS testUtilities + testGravityDerivatives testPotentialSelf testPotentialPair testEOS testUtilities \ + testSelectOutput # Rebuild tests when SWIFT is updated. 
$(check_PROGRAMS): ../src/.libs/libswiftsim.a @@ -49,6 +50,8 @@ testMaths_SOURCES = testMaths.c testReading_SOURCES = testReading.c +testSelectOutput_SOURCES = testSelectOutput.c + testSymmetry_SOURCES = testSymmetry.c # Added because of issues using memcmp on clang 4.x @@ -120,4 +123,4 @@ EXTRA_DIST = testReading.sh makeInput.py testActivePair.sh \ tolerance_27_normal.dat tolerance_27_perturbed.dat tolerance_27_perturbed_h.dat tolerance_27_perturbed_h2.dat \ tolerance_testInteractions.dat tolerance_pair_active.dat tolerance_pair_force_active.dat \ fft_params.yml tolerance_periodic_BC_normal.dat tolerance_periodic_BC_perturbed.dat \ - testEOS.sh testEOS_plot.sh + testEOS.sh testEOS_plot.sh testSelectOutput.sh selectOutput.yml diff --git a/tests/selectOutput.yml b/tests/selectOutput.yml new file mode 100644 index 0000000000000000000000000000000000000000..1778935146b19992e25efcb320d8cc523c6472a5 --- /dev/null +++ b/tests/selectOutput.yml @@ -0,0 +1,12 @@ +SelectOutput: + # Particle Type 0 + Coordinates_Gas: 1 # check if written when specified + Velocities_Gas: 0 # check if not written when specified + Masses_Gas: -5 # check warning if not 0 or 1 and if written + Pot_Gas: 1 # check warning if wrong name + # Density_Gas: 1 # check if written when not specified + +# Parameters for the hydrodynamics scheme +SPH: + resolution_eta: 1.2348 # Target smoothing length in units of the mean inter-particle separation (1.2348 == 48Ngbs with the cubic spline kernel). + CFL_condition: 0.1 # Courant-Friedrich-Levy condition for time integration. diff --git a/tests/test125cells.c b/tests/test125cells.c index a50b847308422cbf10f58c737811934979d21899..ddce3176d463f2ef754bf82b364858085e317e4b 100644 --- a/tests/test125cells.c +++ b/tests/test125cells.c @@ -268,7 +268,7 @@ struct cell *make_cell(size_t n, const double offset[3], double size, double h, const size_t count = n * n * n; const double volume = size * size * size; - struct cell *cell = malloc(sizeof(struct cell)); + struct cell *cell = (struct cell *)malloc(sizeof(struct cell)); bzero(cell, sizeof(struct cell)); if (posix_memalign((void **)&cell->parts, part_align, @@ -637,8 +637,8 @@ int main(int argc, char *argv[]) { main_cell = cells[62]; /* Construct the real solution */ - struct solution_part *solution = - malloc(main_cell->count * sizeof(struct solution_part)); + struct solution_part *solution = (struct solution_part *)malloc( + main_cell->count * sizeof(struct solution_part)); get_solution(main_cell, solution, rho, vel, press, size); ticks timings[27]; @@ -759,7 +759,7 @@ int main(int argc, char *argv[]) { /* Dump if necessary */ if (n == 0) { - sprintf(outputFileName, "swift_dopair_125_%s.dat", + sprintf(outputFileName, "swift_dopair_125_%.150s.dat", outputFileNameExtension); dump_particle_fields(outputFileName, main_cell, solution, 0); } @@ -876,7 +876,8 @@ int main(int argc, char *argv[]) { /* Output timing */ message("Brute force calculation took : %15lli ticks.", toc - tic); - sprintf(outputFileName, "brute_force_125_%s.dat", outputFileNameExtension); + sprintf(outputFileName, "brute_force_125_%.150s.dat", + outputFileNameExtension); dump_particle_fields(outputFileName, main_cell, solution, 0); /* Clean things to make the sanitizer happy ... 
*/ diff --git a/tests/test27cells.c b/tests/test27cells.c index e60262df71f6dc455e944b82a337261a57bcc9bc..ada1b782cfff3866bf26937391007947e9c9a175 100644 --- a/tests/test27cells.c +++ b/tests/test27cells.c @@ -98,7 +98,7 @@ struct cell *make_cell(size_t n, double *offset, double size, double h, const size_t count = n * n * n; const double volume = size * size * size; float h_max = 0.f; - struct cell *cell = malloc(sizeof(struct cell)); + struct cell *cell = (struct cell *)malloc(sizeof(struct cell)); bzero(cell, sizeof(struct cell)); if (posix_memalign((void **)&cell->parts, part_align, @@ -502,7 +502,7 @@ int main(int argc, char *argv[]) { #if defined(TEST_DOSELF_SUBSET) || defined(TEST_DOPAIR_SUBSET) int *pid = NULL; int count = 0; - if ((pid = malloc(sizeof(int) * main_cell->count)) == NULL) + if ((pid = (int *)malloc(sizeof(int) * main_cell->count)) == NULL) error("Can't allocate memory for pid."); for (int k = 0; k < main_cell->count; k++) if (part_is_active(&main_cell->parts[k], &engine)) { @@ -546,7 +546,7 @@ int main(int argc, char *argv[]) { /* Dump if necessary */ if (i % 50 == 0) { - sprintf(outputFileName, "swift_dopair_27_%s.dat", + sprintf(outputFileName, "swift_dopair_27_%.150s.dat", outputFileNameExtension); dump_particle_fields(outputFileName, main_cell, cells); } @@ -589,7 +589,7 @@ int main(int argc, char *argv[]) { end_calculation(main_cell, &cosmo); /* Dump */ - sprintf(outputFileName, "brute_force_27_%s.dat", outputFileNameExtension); + sprintf(outputFileName, "brute_force_27_%.150s.dat", outputFileNameExtension); dump_particle_fields(outputFileName, main_cell, cells); /* Output timing */ diff --git a/tests/testActivePair.c b/tests/testActivePair.c index 0453f6d5896eaa53b0f44a567d353d7d8e8fb7df..6889a18887894af0a9434f786df21dbf842e87e5 100644 --- a/tests/testActivePair.c +++ b/tests/testActivePair.c @@ -59,7 +59,7 @@ struct cell *make_cell(size_t n, double *offset, double size, double h, const size_t count = n * n * n; const double volume = size * size * size; float h_max = 0.f; - struct cell *cell = malloc(sizeof(struct cell)); + struct cell *cell = (struct cell *)malloc(sizeof(struct cell)); bzero(cell, sizeof(struct cell)); if (posix_memalign((void **)&cell->parts, part_align, @@ -578,8 +578,9 @@ int main(int argc, char *argv[]) { runner->e = &engine; /* Create output file names. */ - sprintf(swiftOutputFileName, "swift_dopair_%s.dat", outputFileNameExtension); - sprintf(bruteForceOutputFileName, "brute_force_pair_%s.dat", + sprintf(swiftOutputFileName, "swift_dopair_%.150s.dat", + outputFileNameExtension); + sprintf(bruteForceOutputFileName, "brute_force_pair_%.150s.dat", outputFileNameExtension); /* Delete files if they already exist. */ @@ -632,9 +633,9 @@ int main(int argc, char *argv[]) { finalise = &end_calculation_force; /* Create new output file names. */ - sprintf(swiftOutputFileName, "swift_dopair2_force_%s.dat", + sprintf(swiftOutputFileName, "swift_dopair2_force_%.150s.dat", outputFileNameExtension); - sprintf(bruteForceOutputFileName, "brute_force_dopair2_%s.dat", + sprintf(bruteForceOutputFileName, "brute_force_dopair2_%.150s.dat", outputFileNameExtension); /* Delete files if they already exist. */ diff --git a/tests/testAdiabaticIndex.c b/tests/testAdiabaticIndex.c index 64a60fd2aa1f85a9a28fa312922f5fd68daa62d7..60ecefa264f48bed2d4df205766dc392a1a03d0f 100644 --- a/tests/testAdiabaticIndex.c +++ b/tests/testAdiabaticIndex.c @@ -16,7 +16,6 @@ * along with this program. If not, see <http://www.gnu.org/licenses/>. 
* ******************************************************************************/ - #include "../config.h" #include <fenv.h> @@ -42,7 +41,7 @@ void check_value(float a, float b, const char* s) { * @brief Check that the pre-defined adiabatic index constants contain correct * values */ -void check_constants() { +void check_constants(void) { float val; val = 0.5 * (hydro_gamma + 1.0f) / hydro_gamma; @@ -115,7 +114,7 @@ void check_functions(float x) { /** * @brief Check adiabatic index constants and power functions */ -int main() { +int main(int argc, char* argv[]) { /* Initialize CPU frequency, this also starts time. */ unsigned long long cpufreq = 0; diff --git a/tests/testDump.c b/tests/testDump.c index fa68ef9869f2f3ac2ee790b9815e42f73976ac9f..f47a44256536d6ac1d9676c844f7081a6daa5ca4 100644 --- a/tests/testDump.c +++ b/tests/testDump.c @@ -38,7 +38,7 @@ void dump_mapper(void *map_data, int num_elements, void *extra_data) { struct dump *d = (struct dump *)extra_data; size_t offset; - char *out_string = dump_get(d, 7, &offset); + char *out_string = (char *)dump_get(d, 7, &offset); char out_buff[8]; /* modulo due to bug in gcc, should be removed */ snprintf(out_buff, 8, "%06zi\n", (offset / 7) % 1000000); diff --git a/tests/testEOS.c b/tests/testEOS.c index 2e72d3d1768f3a6ea4ab1665a099efeb28f8f3f9..595dd0726a0a4a1606390cd38eb06c71399acb78 100644 --- a/tests/testEOS.c +++ b/tests/testEOS.c @@ -86,7 +86,7 @@ int main(int argc, char *argv[]) { float rho, log_rho, log_u, P; struct unit_system us; const struct phys_const *phys_const = 0; // Unused placeholder - const struct swift_params *params = 0; // Unused placeholder + struct swift_params *params = 0; // Unused placeholder const float J_kg_to_erg_g = 1e4; // Convert J/kg to erg/g char filename[64]; // Output table params @@ -274,5 +274,5 @@ int main(int argc, char *argv[]) { return 0; } #else -int main() { return 0; } +int main(int argc, char *argv[]) { return 0; } #endif diff --git a/tests/testFFT.c b/tests/testFFT.c index b93ec9731687a1b08ea7c0abe075d302bd0e8786..5661ec6d98652cd8cbf229f1f345663d422ae588 100644 --- a/tests/testFFT.c +++ b/tests/testFFT.c @@ -22,7 +22,7 @@ #ifndef HAVE_FFTW -int main() { return 0; } +int main(int argc, char *argv[]) { return 0; } #else @@ -44,8 +44,7 @@ int is_close(double x, double y, double abs_err) { return (abs(x - y) < abs_err); } -int main() { - +int main(int argc, char *argv[]) { /* Initialize CPU frequency, this also starts time. 
*/ unsigned long long cpufreq = 0; clocks_set_cpufreq(cpufreq); @@ -69,7 +68,8 @@ int main() { gparts[0].mass = 1.f; /* Read the parameter file */ - struct swift_params *params = malloc(sizeof(struct swift_params)); + struct swift_params *params = + (struct swift_params *)malloc(sizeof(struct swift_params)); parser_read_file("fft_params.yml", params); struct cosmology cosmo; @@ -117,10 +117,10 @@ int main() { /* Now check that we got the right answer */ int nr_cells = space.nr_cells; - double *r = malloc(nr_cells * sizeof(double)); - double *m = malloc(nr_cells * sizeof(double)); - double *pot = malloc(nr_cells * sizeof(double)); - double *pot_exact = malloc(nr_cells * sizeof(double)); + double *r = (double *)malloc(nr_cells * sizeof(double)); + double *m = (double *)malloc(nr_cells * sizeof(double)); + double *pot = (double *)malloc(nr_cells * sizeof(double)); + double *pot_exact = (double *)malloc(nr_cells * sizeof(double)); FILE *file = fopen("potential.dat", "w"); for (int i = 0; i < nr_cells; ++i) { diff --git a/tests/testGravityDerivatives.c b/tests/testGravityDerivatives.c index 1e58dcc49a9fe277ddbc6982b71cfd741992e3b3..a6a709cb1a71d7b23fc1a9528aa718f378448265 100644 --- a/tests/testGravityDerivatives.c +++ b/tests/testGravityDerivatives.c @@ -924,7 +924,7 @@ void test(double x, double y, double tol, double min, const char* name) { /* message("'%s' (%e -- %e) OK!", name, x, y); */ } -int main() { +int main(int argc, char* argv[]) { /* Initialize CPU frequency, this also starts time. */ unsigned long long cpufreq = 0; diff --git a/tests/testGreetings.c b/tests/testGreetings.c index 2f17bddf5731692d515675d2a21f6c3b4a725ebf..ea2819d4616ed1f1d87c61065bf09fce4043243a 100644 --- a/tests/testGreetings.c +++ b/tests/testGreetings.c @@ -19,7 +19,7 @@ #include "swift.h" -int main() { +int main(int argc, char *argv[]) { greetings(); diff --git a/tests/testInteractions.c b/tests/testInteractions.c index 5473dc2588d66e0df2e3e3caddfc04ba3e6f7a2c..b8d4073c179238370684c2b0cf15944e613ce002 100644 --- a/tests/testInteractions.c +++ b/tests/testInteractions.c @@ -748,6 +748,6 @@ int main(int argc, char *argv[]) { #else -int main() { return 1; } +int main(int argc, char *argv[]) { return 1; } #endif diff --git a/tests/testKernel.c b/tests/testKernel.c index dc29d053c2049c9253290db81ea9991828bd5e1b..e3a13a4d54697f32c100b1f149a768a342da37a7 100644 --- a/tests/testKernel.c +++ b/tests/testKernel.c @@ -29,7 +29,7 @@ const int numPoints = (1 << 28); -int main() { +int main(int argc, char *argv[]) { /* Initialize CPU frequency, this also starts time. */ unsigned long long cpufreq = 0; diff --git a/tests/testKernelGrav.c b/tests/testKernelGrav.c index b4a5e4d9f1ff05d8f34840dd19b2a2ccb9ec79b5..36d65ae1d0cc4a7807f60203e8f057e6a9d83cb5 100644 --- a/tests/testKernelGrav.c +++ b/tests/testKernelGrav.c @@ -58,7 +58,7 @@ float gadget(float r, float epsilon) { } } -int main() { +int main(int argc, char *argv[]) { const float h = 3.f; const float r_max = 6.f; diff --git a/tests/testLogger.c b/tests/testLogger.c index 9ec08607383fdb192b7ba994e4af506fde12fea9..b954b67ad6044ae5ec734706f7a1a4ff181541d8 100644 --- a/tests/testLogger.c +++ b/tests/testLogger.c @@ -63,7 +63,7 @@ void test_log_parts(struct dump *d) { /* Recover the last part from the dump. 
*/ bzero(&p, sizeof(struct part)); size_t offset_old = offset; - int mask = logger_read_part(&p, &offset, d->data); + int mask = logger_read_part(&p, &offset, (const char *)d->data); printf( "Recovered part at offset %#016zx with mask %#04x: p.x[0]=%e, " "p.v[0]=%e.\n", @@ -76,7 +76,7 @@ void test_log_parts(struct dump *d) { /* Recover the second part from the dump (only position). */ bzero(&p, sizeof(struct part)); offset_old = offset; - mask = logger_read_part(&p, &offset, d->data); + mask = logger_read_part(&p, &offset, (const char *)d->data); printf( "Recovered part at offset %#016zx with mask %#04x: p.x[0]=%e, " "p.v[0]=%e.\n", @@ -89,7 +89,7 @@ void test_log_parts(struct dump *d) { /* Recover the first part from the dump. */ bzero(&p, sizeof(struct part)); offset_old = offset; - mask = logger_read_part(&p, &offset, d->data); + mask = logger_read_part(&p, &offset, (const char *)d->data); printf( "Recovered part at offset %#016zx with mask %#04x: p.x[0]=%e, " "p.v[0]=%e.\n", @@ -131,7 +131,7 @@ void test_log_gparts(struct dump *d) { /* Recover the last part from the dump. */ bzero(&p, sizeof(struct gpart)); size_t offset_old = offset; - int mask = logger_read_gpart(&p, &offset, d->data); + int mask = logger_read_gpart(&p, &offset, (const char *)d->data); printf( "Recovered gpart at offset %#016zx with mask %#04x: p.x[0]=%e, " "p.v[0]=%e.\n", @@ -144,7 +144,7 @@ void test_log_gparts(struct dump *d) { /* Recover the second part from the dump. */ bzero(&p, sizeof(struct gpart)); offset_old = offset; - mask = logger_read_gpart(&p, &offset, d->data); + mask = logger_read_gpart(&p, &offset, (const char *)d->data); printf( "Recovered gpart at offset %#016zx with mask %#04x: p.x[0]=%e, " "p.v[0]=%e.\n", @@ -157,7 +157,7 @@ void test_log_gparts(struct dump *d) { /* Recover the first part from the dump. */ bzero(&p, sizeof(struct gpart)); offset_old = offset; - mask = logger_read_gpart(&p, &offset, d->data); + mask = logger_read_gpart(&p, &offset, (const char *)d->data); printf( "Recovered gpart at offset %#016zx with mask %#04x: p.x[0]=%e, " "p.v[0]=%e.\n", @@ -189,7 +189,7 @@ void test_log_timestamps(struct dump *d) { /* Recover the three timestamps. 
*/ size_t offset_old = offset; t = 0; - int mask = logger_read_timestamp(&t, &offset, d->data); + int mask = logger_read_timestamp(&t, &offset, (const char *)d->data); printf("Recovered timestamp %020llu at offset %#016zx with mask %#04x.\n", t, offset_old, mask); if (t != 30) { @@ -199,7 +199,7 @@ void test_log_timestamps(struct dump *d) { offset_old = offset; t = 0; - mask = logger_read_timestamp(&t, &offset, d->data); + mask = logger_read_timestamp(&t, &offset, (const char *)d->data); printf("Recovered timestamp %020llu at offset %#016zx with mask %#04x.\n", t, offset_old, mask); if (t != 20) { @@ -209,7 +209,7 @@ void test_log_timestamps(struct dump *d) { offset_old = offset; t = 0; - mask = logger_read_timestamp(&t, &offset, d->data); + mask = logger_read_timestamp(&t, &offset, (const char *)d->data); printf("Recovered timestamp %020llu at offset %#016zx with mask %#04x.\n", t, offset_old, mask); if (t != 10) { diff --git a/tests/testMaths.c b/tests/testMaths.c index 3d8f9a8f9db0cf01276eff89aa44157008cbddc6..2abb3aa99902323597b3d20fb19769a8ea1bafbe 100644 --- a/tests/testMaths.c +++ b/tests/testMaths.c @@ -25,7 +25,7 @@ #include <math.h> #include <stdio.h> -int main() { +int main(int argc, char *argv[]) { const int numPoints = 60000; diff --git a/tests/testMatrixInversion.c b/tests/testMatrixInversion.c index 9a45cd52d6f5d3ec96cc6d3f34fd683971f4cf19..a15e0dab7ec793cf4a914b6eb89c63863ab24fb0 100644 --- a/tests/testMatrixInversion.c +++ b/tests/testMatrixInversion.c @@ -95,7 +95,7 @@ void multiply_matrices(float A[3][3], float B[3][3], float C[3][3]) { #endif } -int main() { +int main(int argc, char* argv[]) { float A[3][3], B[3][3], C[3][3]; setup_matrix(A); diff --git a/tests/testParser.c b/tests/testParser.c index f1211199924df728dfe57376781dc07fe862cec7..c50e9b1a5473c1406e97894f211cc94aa7689d12 100644 --- a/tests/testParser.c +++ b/tests/testParser.c @@ -35,9 +35,6 @@ int main(int argc, char *argv[]) { /* Print the contents of the structure to stdout. */ parser_print_params(¶m_file); - /* Print the contents of the structure to a file in YAML format. */ - parser_write_params_to_file(¶m_file, "parser_output.yml"); - /* Retrieve parameters and store them in variables defined above. * Have to specify the name of the parameter as it appears in the * input file: testParserInput.yaml.*/ @@ -50,7 +47,8 @@ int main(int argc, char *argv[]) { parser_get_param_double(¶m_file, "Simulation:start_time"); const int kernel = parser_get_param_int(¶m_file, "kernel"); - const int optional = parser_get_opt_param_int(¶m_file, "optional", 1); + const int optional = + parser_get_opt_param_int(¶m_file, "Simulation:optional", 1); char ic_file[PARSER_MAX_LINE_SIZE]; parser_get_param_string(¶m_file, "IO:ic_file", ic_file); @@ -62,6 +60,10 @@ int main(int argc, char *argv[]) { no_of_threads, no_of_time_steps, max_h, start_time, ic_file, kernel, optional); + /* Print the contents of the structure to a file in YAML format. 
*/ + parser_write_params_to_file(&param_file, "used_parser_output.yml", 1); + parser_write_params_to_file(&param_file, "unused_parser_output.yml", 0); + + assert(no_of_threads == 16); assert(no_of_time_steps == 10); assert(fabs(max_h - 1.1255) < 0.00001); diff --git a/tests/testPeriodicBC.c b/tests/testPeriodicBC.c index 385de9752f361f4f015eb64a466473324901030f..ffaa3bda0ccb62cd44169e228086267d2399c31f 100644 --- a/tests/testPeriodicBC.c +++ b/tests/testPeriodicBC.c @@ -78,7 +78,7 @@ struct cell *make_cell(size_t n, double *offset, double size, double h, enum velocity_types vel) { const size_t count = n * n * n; const double volume = size * size * size; - struct cell *cell = malloc(sizeof(struct cell)); + struct cell *cell = (struct cell *)malloc(sizeof(struct cell)); bzero(cell, sizeof(struct cell)); if (posix_memalign((void **)&cell->parts, part_align, @@ -512,9 +512,9 @@ int main(int argc, char *argv[]) { } /* Create output file names. */ - sprintf(swiftOutputFileName, "swift_periodic_BC_%s.dat", + sprintf(swiftOutputFileName, "swift_periodic_BC_%.150s.dat", outputFileNameExtension); - sprintf(bruteForceOutputFileName, "brute_force_periodic_BC_%s.dat", + sprintf(bruteForceOutputFileName, "brute_force_periodic_BC_%.150s.dat", outputFileNameExtension); /* Delete files if they already exist. */ diff --git a/tests/testPotentialPair.c b/tests/testPotentialPair.c index 53fc54ccdd63a9a9150b6701c1a76ac20af91d4c..1f7b0ab0c2577e3b2adc71ca8c121d275679b63d 100644 --- a/tests/testPotentialPair.c +++ b/tests/testPotentialPair.c @@ -82,7 +82,7 @@ double acceleration(double mass, double r, double H, double rlr) { return r * acc * (4. * x * S_prime(2 * x) - 2. * S(2. * x) + 2.); } -int main() { +int main(int argc, char *argv[]) { /* Initialize CPU frequency, this also starts time. */ unsigned long long cpufreq = 0; @@ -150,8 +150,10 @@ int main() { cj.ti_gravity_end_max = 8; /* Allocate multipoles */ - ci.multipole = malloc(sizeof(struct gravity_tensors)); - cj.multipole = malloc(sizeof(struct gravity_tensors)); + ci.multipole = + (struct gravity_tensors *)malloc(sizeof(struct gravity_tensors)); + cj.multipole = + (struct gravity_tensors *)malloc(sizeof(struct gravity_tensors)); bzero(ci.multipole, sizeof(struct gravity_tensors)); bzero(cj.multipole, sizeof(struct gravity_tensors)); diff --git a/tests/testPotentialSelf.c b/tests/testPotentialSelf.c index 6d31f079fa79f7463637ec71dc2c75f37a10b129..25c441b399c9c3ed71479d26bd373e053817036d 100644 --- a/tests/testPotentialSelf.c +++ b/tests/testPotentialSelf.c @@ -85,7 +85,7 @@ double acceleration(double mass, double r, double H, double rlr) { return r * acc * (4. * x * S_prime(2 * x) - 2. * S(2. * x) + 2.); } -int main() { +int main(int argc, char *argv[]) { /* Initialize CPU frequency, this also starts time. */ unsigned long long cpufreq = 0; diff --git a/tests/testReading.c b/tests/testReading.c index ca1e0ef69078c5e384a9cd4eab1098923ce9f279..6b4ca8717ec4508f7e23fde6f287877684c358b8 100644 --- a/tests/testReading.c +++ b/tests/testReading.c @@ -23,7 +23,7 @@ /* Includes.
*/ #include "swift.h" -int main() { +int main(int argc, char *argv[]) { size_t Ngas = 0, Ngpart = 0, Nspart = 0; int periodic = -1; diff --git a/tests/testRiemannExact.c b/tests/testRiemannExact.c index bce7c52d422f966e10d530cdcbc8f6d20431e153..aa630a76f5f82bd87dc15f00d7d90dff2405e749 100644 --- a/tests/testRiemannExact.c +++ b/tests/testRiemannExact.c @@ -169,7 +169,7 @@ void check_riemann_solution(struct riemann_statevector* WL, /** * @brief Check the exact Riemann solver on the Toro test problems */ -void check_riemann_exact() { +void check_riemann_exact(void) { struct riemann_statevector WL, WR, Whalf; /* Test 1 */ @@ -296,7 +296,7 @@ void check_riemann_exact() { /** * @brief Check the symmetry of the TRRS Riemann solver */ -void check_riemann_symmetry() { +void check_riemann_symmetry(void) { float WL[5], WR[5], Whalf1[5], Whalf2[5], n_unit1[3], n_unit2[3], n_norm, vij[3], totflux1[5], totflux2[5]; @@ -395,7 +395,7 @@ void check_riemann_symmetry() { /** * @brief Check the exact Riemann solver */ -int main() { +int main(int argc, char* argv[]) { /* Initialize CPU frequency, this also starts time. */ unsigned long long cpufreq = 0; diff --git a/tests/testRiemannHLLC.c b/tests/testRiemannHLLC.c index b988825eb0535fcdc46baa1db1203d0dbac3537a..0cce0f9d144f7be9000d368d6ac28e8e49c7d9aa 100644 --- a/tests/testRiemannHLLC.c +++ b/tests/testRiemannHLLC.c @@ -85,7 +85,7 @@ int are_symmetric(float a, float b) { /** * @brief Check the symmetry of the HLLC Riemann solver for a random setup */ -void check_riemann_symmetry() { +void check_riemann_symmetry(void) { float WL[5], WR[5], n_unit1[3], n_unit2[3], n_norm, vij[3], totflux1[5], totflux2[5]; @@ -150,7 +150,7 @@ void check_riemann_symmetry() { /** * @brief Check the HLLC Riemann solver */ -int main() { +int main(int argc, char *argv[]) { /* Initialize CPU frequency, this also starts time. 
*/ unsigned long long cpufreq = 0; diff --git a/tests/testRiemannTRRS.c b/tests/testRiemannTRRS.c index 4a0eac0be23581e175d2c0e599b786fd4508b14a..2c7098367a1ca8db84f097ad01aa2e1e411c433d 100644 --- a/tests/testRiemannTRRS.c +++ b/tests/testRiemannTRRS.c @@ -105,7 +105,7 @@ void check_riemann_solution(struct riemann_statevector* WL, /** * @brief Check the TRRS Riemann solver on the Toro test problems */ -void check_riemann_trrs() { +void check_riemann_trrs(void) { struct riemann_statevector WL, WR, Whalf; /* Test 1 */ @@ -232,7 +232,7 @@ void check_riemann_trrs() { /** * @brief Check the symmetry of the TRRS Riemann solver */ -void check_riemann_symmetry() { +void check_riemann_symmetry(void) { float WL[5], WR[5], Whalf1[5], Whalf2[5], n_unit1[3], n_unit2[3], n_norm, vij[3], totflux1[5], totflux2[5]; @@ -311,7 +311,7 @@ void check_riemann_symmetry() { /** * @brief Check the TRRS Riemann solver */ -int main() { +int main(int argc, char* argv[]) { /* check the TRRS Riemann solver */ check_riemann_trrs(); diff --git a/tests/testSPHStep.c b/tests/testSPHStep.c index 08d6abaa7521de2a7d12fd9672db0d24a5a20a97..63834d94b7696e160dd7ca487ab7e9f1e943abfb 100644 --- a/tests/testSPHStep.c +++ b/tests/testSPHStep.c @@ -27,7 +27,7 @@ */ struct cell *make_cell(size_t N, float cellSize, int offset[3], int id_offset) { size_t count = N * N * N; - struct cell *cell = malloc(sizeof(struct cell)); + struct cell *cell = (struct cell *)malloc(sizeof(struct cell)); bzero(cell, sizeof(struct cell)); struct part *part; struct xpart *xpart; @@ -93,7 +93,7 @@ void runner_dopair1_density(struct runner *r, struct cell *ci, struct cell *cj); void runner_dopair2_force(struct runner *r, struct cell *ci, struct cell *cj); /* Run a full time step integration for one cell */ -int main() { +int main(int argc, char *argv[]) { #ifndef DEFAULT_SPH return 0; diff --git a/tests/testSelectOutput.c b/tests/testSelectOutput.c new file mode 100644 index 0000000000000000000000000000000000000000..3bedddd03784bafddd4599c124929a6d88fbd9b0 --- /dev/null +++ b/tests/testSelectOutput.c @@ -0,0 +1,162 @@ +/******************************************************************************* + * This file is part of SWIFT. + * Copyright (C) 2015 Matthieu Schaller (matthieu.schaller@durham.ac.uk). + * + * This program is free software: you can redistribute it and/or modify + * it under the terms of the GNU Lesser General Public License as published + * by the Free Software Foundation, either version 3 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public License + * along with this program. If not, see <http://www.gnu.org/licenses/>. + * + ******************************************************************************/ + +/* Some standard headers. */ +#include <stdlib.h> + +/* Includes. 
*/ +#include "swift.h" + +void select_output_engine_init(struct engine *e, struct space *s, + struct cosmology *cosmo, + struct swift_params *params, + struct cooling_function_data *cooling, + struct hydro_props *hydro_properties) { + /* set structures */ + e->s = s; + e->cooling_func = cooling; + e->parameter_file = params; + e->cosmology = cosmo; + e->policy = engine_policy_hydro; + e->hydro_properties = hydro_properties; + + /* initialization of threadpool */ + threadpool_init(&e->threadpool, 1); + + /* set parameters */ + e->verbose = 1; + e->time = 0; + e->snapshot_output_count = 0; + e->snapshot_compression = 0; + e->snapshot_label_delta = 1; +}; + +void select_output_space_init(struct space *s, double *dim, int periodic, + size_t Ngas, size_t Nspart, size_t Ngpart, + struct part *parts, struct spart *sparts, + struct gpart *gparts) { + s->periodic = periodic; + for (int i = 0; i < 3; i++) { + s->dim[i] = dim[i]; + } + + /* init space particles */ + s->nr_parts = Ngas; + s->nr_sparts = Nspart; + s->nr_gparts = Ngpart; + + s->parts = parts; + s->gparts = gparts; + s->sparts = sparts; + + /* Allocate the extra parts array for the gas particles. */ + if (posix_memalign((void **)&s->xparts, xpart_align, + Ngas * sizeof(struct xpart)) != 0) + error("Failed to allocate xparts."); + bzero(s->xparts, Ngas * sizeof(struct xpart)); +}; + +void select_output_space_clean(struct space *s) { free(s->xparts); }; + +void select_output_engine_clean(struct engine *e) { + threadpool_clean(&e->threadpool); +} + +int main(int argc, char *argv[]) { + + /* Initialize CPU frequency, this also starts time. */ + unsigned long long cpufreq = 0; + clocks_set_cpufreq(cpufreq); + + char *base_name = "testSelectOutput"; + size_t Ngas = 0, Ngpart = 0, Nspart = 0; + int periodic = -1; + int flag_entropy_ICs = -1; + double dim[3]; + struct part *parts = NULL; + struct gpart *gparts = NULL; + struct spart *sparts = NULL; + + /* parse parameters */ + message("Reading parameters."); + struct swift_params param_file; + char *input_file = "selectOutput.yml"; + parser_read_file(input_file, &param_file); + + /* Default unit system */ + message("Initialization of the unit system."); + struct unit_system us; + units_init_cgs(&us); + + /* Default physical constants */ + message("Initialization of the physical constants."); + struct phys_const prog_const; + phys_const_init(&us, &param_file, &prog_const); + + /* Read data */ + message("Reading initial conditions."); + read_ic_single("input.hdf5", &us, dim, &parts, &gparts, &sparts, &Ngas, + &Ngpart, &Nspart, &periodic, &flag_entropy_ICs, 1, 0, 0, 0, 1., + 1, 0); + + /* pseudo initialization of the space */ + message("Initialization of the space."); + struct space s; + select_output_space_init(&s, dim, periodic, Ngas, Nspart, Ngpart, parts, + sparts, gparts); + + /* initialization of cosmology */ + message("Initialization of the cosmology."); + struct cosmology cosmo; + cosmology_init_no_cosmo(&cosmo); + + /* pseudo initialization of cooling */ + message("Initialization of the cooling."); + struct cooling_function_data cooling; + + /* pseudo initialization of hydro */ + message("Initialization of the hydro."); + struct hydro_props hydro_properties; + hydro_props_init(&hydro_properties, &prog_const, &us, &param_file); + + /* pseudo initialization of the engine */ + message("Initialization of the engine."); + struct engine e; + select_output_engine_init(&e, &s, &cosmo, &param_file, &cooling, + &hydro_properties); + + /* check output selection */ + message("Checking output parameters."); + long
long N_total[swift_type_count] = {Ngas, Ngpart, 0, 0, Nspart, 0}; + io_check_output_fields(&param_file, N_total); + + /* write output file */ + message("Writing output."); + write_output_single(&e, base_name, &us, &us); + + /* Clean-up */ + message("Cleaning memory."); + select_output_engine_clean(&e); + select_output_space_clean(&s); + cosmology_clean(&cosmo); + free(parts); + free(gparts); + + return 0; +} diff --git a/tests/testSelectOutput.py b/tests/testSelectOutput.py new file mode 100644 index 0000000000000000000000000000000000000000..aec7f4671fb2768acde768fd9929168559ebb3cb --- /dev/null +++ b/tests/testSelectOutput.py @@ -0,0 +1,54 @@ +############################################################################### + # This file is part of SWIFT. + # Copyright (c) 2015 Bert Vandenbroucke (bert.vandenbroucke@ugent.be) + # Matthieu Schaller (matthieu.schaller@durham.ac.uk) + # + # This program is free software: you can redistribute it and/or modify + # it under the terms of the GNU Lesser General Public License as published + # by the Free Software Foundation, either version 3 of the License, or + # (at your option) any later version. + # + # This program is distributed in the hope that it will be useful, + # but WITHOUT ANY WARRANTY; without even the implied warranty of + # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + # GNU General Public License for more details. + # + # You should have received a copy of the GNU Lesser General Public License + # along with this program. If not, see <http://www.gnu.org/licenses/>. + # + ############################################################################## + +# Check the output written by SWIFT + +import h5py + +filename = "testSelectOutput_0000.hdf5" +log_filename = "select_output.log" + +# Read the simulation data +sim = h5py.File(filename, "r") +part0 = sim["/PartType0"] + +# check presence / absence of fields +if "Velocities" in part0: + raise Exception("`Velocities` present in HDF5 but should not be written") + +if "Coordinates" not in part0: + raise Exception("`Coordinates` not present in HDF5 but should be written") + +if "Masses" not in part0: + raise Exception("`Masses` not present in HDF5 but should be written") + +if "Density" not in part0: + raise Exception("`Density` not present in HDF5 but should be written") + + +# check error detection +with open(log_filename, "r") as f: + data = f.read() + +if "SelectOutput:Masses_Gas" not in data: + raise Exception("Input error in `SelectOutput:Masses_Gas` not detected") + +if "SelectOutput:Pot_Gas" not in data: + raise Exception("Parameter name error not detected for `SelectOutput:Pot_Gas`") diff --git a/tests/testSelectOutput.sh.in b/tests/testSelectOutput.sh.in new file mode 100644 index 0000000000000000000000000000000000000000..85fd999643f82fd10d96013ad360a75a441a9e1a --- /dev/null +++ b/tests/testSelectOutput.sh.in @@ -0,0 +1,14 @@ +#!/bin/bash + +echo "Creating initial conditions" +python @srcdir@/makeInput.py + +echo "Generating output" +./testSelectOutput 2>&1 | tee select_output.log + +echo "Checking output" +python @srcdir@/testSelectOutput.py + +rm -f testSelectOutput_0000.hdf5 testSelectOutput.xmf select_output.log + +echo "Test passed" diff --git a/tests/testSingle.c b/tests/testSingle.c index e2ec35bc4382658be7754b9c11fc3a3dbe4bbdc1..52fe51c529b8c3b43f9c5f03fe44b5b742acfc07 100644 --- a/tests/testSingle.c +++ b/tests/testSingle.c @@ -142,6 +142,6 @@ int main(int argc, char *argv[]) { } #else -int main() { return 0; } +int main(int argc, char *argv[]) { return 0; }
#endif diff --git a/tests/testSymmetry.c b/tests/testSymmetry.c index 1ab493a7c149070dc667a2377ab205df7f873856..886290ab984603d0afb3201377611598cd7163e4 100644 --- a/tests/testSymmetry.c +++ b/tests/testSymmetry.c @@ -16,7 +16,6 @@ * along with this program. If not, see <http://www.gnu.org/licenses/>. * ******************************************************************************/ - #include "../config.h" #include <fenv.h> @@ -32,7 +31,7 @@ void print_bytes(void *p, size_t len) { printf(")\n"); } -void test() { +void test(void) { #if defined(SHADOWFAX_SPH) /* Initialize the Voronoi simulation box */ diff --git a/tests/testTimeIntegration.c b/tests/testTimeIntegration.c index 972e6f2323c0401c70de2990bcb088f95b3dfd83..2034c402a2d626a7b503613f6cade821ec438151 100644 --- a/tests/testTimeIntegration.c +++ b/tests/testTimeIntegration.c @@ -26,7 +26,7 @@ * @brief Test the kick-drift-kick leapfrog integration * via a Sun-Earth simulation */ -int main() { +int main(int argc, char *argv[]) { struct cell c; int i; @@ -63,10 +63,10 @@ int main() { /* Create a particle */ struct part *parts = NULL; - parts = malloc(sizeof(struct part)); + parts = (struct part *)malloc(sizeof(struct part)); bzero(parts, sizeof(struct part)); struct xpart *xparts = NULL; - xparts = malloc(sizeof(struct xpart)); + xparts = (struct xpart *)malloc(sizeof(struct xpart)); bzero(xparts, sizeof(struct xpart)); /* Put the particle on the orbit */ diff --git a/tests/testUtilities.c b/tests/testUtilities.c index b835faba9026661361c8828fce5c18beb2b80889..963e4d2233bbd56f7d61d5e2a0d2424006aa63ab 100644 --- a/tests/testUtilities.c +++ b/tests/testUtilities.c @@ -23,7 +23,7 @@ /** * @brief Test generic utility functions */ -int main() { +int main(int argc, char *argv[]) { /// Test find_value_in_monot_incr_array() int n = 100; float array[n]; diff --git a/tests/testVoronoi1D.c b/tests/testVoronoi1D.c index d16a36d9449d7bfdb2c74408efad61b219b1d7e3..083d9aaa279f241ae1ac4d0bfaeb2780a39574a4 100644 --- a/tests/testVoronoi1D.c +++ b/tests/testVoronoi1D.c @@ -16,10 +16,9 @@ * along with this program. If not, see <http://www.gnu.org/licenses/>. * ******************************************************************************/ - #include "hydro/Shadowswift/voronoi1d_algorithm.h" -int main() { +int main(int argc, char *argv[]) { double box_anchor[1] = {-0.5}; double box_side[1] = {2.}; diff --git a/tests/testVoronoi2D.c b/tests/testVoronoi2D.c index 509d3ab69976fa8618db389ebd87eedb9ea34409..60a71624904c11a3cdb3b90906189df60bfc6956 100644 --- a/tests/testVoronoi2D.c +++ b/tests/testVoronoi2D.c @@ -16,14 +16,13 @@ * along with this program. If not, see <http://www.gnu.org/licenses/>. * ******************************************************************************/ - #include "hydro/Shadowswift/voronoi2d_algorithm.h" #include "tools.h" /* Number of cells used to test the 2D interaction algorithm */ #define TESTVORONOI2D_NUMCELL 100 -int main() { +int main(int argc, char *argv[]) { /* initialize simulation box */ double anchor[3] = {-0.5f, -0.5f, -0.5f}; diff --git a/tests/testVoronoi3D.c b/tests/testVoronoi3D.c index b4f219a41368bb3ce4e8111ae44c43e7fa1f7441..db5c33aa6e4ef0792373febd5d773a6d1198db29 100644 --- a/tests/testVoronoi3D.c +++ b/tests/testVoronoi3D.c @@ -53,7 +53,7 @@ * * @return Volume of the simulation box as it is stored in the global variables. 
*/ -float voronoi_get_box_volume() { +float voronoi_get_box_volume(void) { return VORONOI3D_BOX_SIDE_X * VORONOI3D_BOX_SIDE_Y * VORONOI3D_BOX_SIDE_Z; } @@ -129,7 +129,7 @@ float voronoi_get_box_face(unsigned long long id, float *face_midpoint) { /** * @brief Check if voronoi_volume_tetrahedron() works */ -void test_voronoi_volume_tetrahedron() { +void test_voronoi_volume_tetrahedron(void) { float v1[3] = {0., 0., 0.}; float v2[3] = {0., 0., 1.}; float v3[3] = {0., 1., 0.}; @@ -142,7 +142,7 @@ void test_voronoi_volume_tetrahedron() { /** * @brief Check if voronoi_centroid_tetrahedron() works */ -void test_voronoi_centroid_tetrahedron() { +void test_voronoi_centroid_tetrahedron(void) { float v1[3] = {0., 0., 0.}; float v2[3] = {0., 0., 1.}; float v3[3] = {0., 1., 0.}; @@ -158,7 +158,7 @@ void test_voronoi_centroid_tetrahedron() { /** * @brief Check if voronoi_calculate_cell() works */ -void test_calculate_cell() { +void test_calculate_cell(void) { double box_anchor[3] = {VORONOI3D_BOX_ANCHOR_X, VORONOI3D_BOX_ANCHOR_Y, VORONOI3D_BOX_ANCHOR_Z}; @@ -234,7 +234,7 @@ void test_calculate_cell() { assert(cell.face_midpoints[5][2] == face_midpoint[2] - cell.x[2]); } -void test_paths() { +void test_paths(void) { float u, l, q; int up, us, uw, lp, ls, lw, qp, qs, qw; float r2, dx[3]; @@ -1240,7 +1240,7 @@ void set_coordinates(struct part *p, double x, double y, double z, } #endif -void test_degeneracies() { +void test_degeneracies(void) { #ifdef SHADOWFAX_SPH int idx = 0; /* make a small cube */ @@ -1308,7 +1308,7 @@ void test_degeneracies() { #endif } -int main() { +int main(int argc, char *argv[]) { /* Set the all enclosing simulation box dimensions */ double box_anchor[3] = {VORONOI3D_BOX_ANCHOR_X, VORONOI3D_BOX_ANCHOR_Y, diff --git a/theory/Cosmology/coordinates.tex b/theory/Cosmology/coordinates.tex index e3a22eaae025e6911e8aca92d6d29bf5fa82bf21..bc593606026217f345e2f77d90cee3f6632b3cef 100644 --- a/theory/Cosmology/coordinates.tex +++ b/theory/Cosmology/coordinates.tex @@ -40,19 +40,29 @@ $\Psi \equiv \frac{1}{2}a\dot{a}\mathbf{r}_i^2$ and obtain -\frac{\phi'}{a},\\ \phi' &= a\phi + \frac{1}{2}a^2\ddot{a}\mathbf{r}_i'^2.\nonumber \end{align} -Finally, we introduce the velocities $\mathbf{v}' \equiv -a^2\dot{\mathbf{r}'}$ that are used internally by the code. Note that these -velocities \emph{do not} have a physical interpretation. We caution that they -are not the peculiar velocities, nor the Hubble flow, nor the total -velocities\footnote{One additional inconvenience of our choice of +Finally, we introduce the velocities +$\mathbf{v}' \equiv a^2\dot{\mathbf{r}'}$ that are used internally by +the code. Note that these velocities \emph{do not} have a physical +interpretation. We caution that they are not the peculiar velocities +($\mathbf{v}_{\rm p} \equiv a\dot{\mathbf{r}'} = +\frac{1}{a}\mathbf{v}'$), nor the Hubble flow +($\mathbf{v}_{\rm H} \equiv \dot{a}\mathbf{r}'$), nor the total +velocities +($\mathbf{v}_{\rm tot} \equiv \mathbf{v}_{\rm p} + \mathbf{v}_{\rm H} += \dot{a}\mathbf{r}' + \frac{1}{a}\mathbf{v}'$) and also differ from +the convention used in \gadget snapshots +($\sqrt{a} \dot{\mathbf{r}'}$) and other related simulation +codes\footnote{One additional inconvenience of our choice of generalised coordinates is that our velocities $\mathbf{v}'$ and sound-speed $c'$ do not have the same dependencies on the scale-factor. 
The signal velocity entering the time-step calculation - will hence read $v_{\rm sig} = a\dot{\mathbf{r}'} + c = \frac{1}{a} \left( + will hence read + $v_{\rm sig} = a\dot{\mathbf{r}'} + c = \frac{1}{a} \left( |\mathbf{v}'| + a^{(5 - 3\gamma)/2}c'\right)$.}. -This choice implies that $\dot{v}' = a \ddot{r}$. Using the SPH -definition of density, $\rho_i' = -\sum_jm_jW(\mathbf{r}_{j}'-\mathbf{r}_{i}',h_i') = +% This choice implies that $\dot{v}' = a \ddot{r}$. + +Using the SPH definition of density, +$\rho_i' = \sum_jm_jW(\mathbf{r}_{j}'-\mathbf{r}_{i}',h_i') = \sum_jm_jW_{ij}'(h_i')$, we can follow \cite{Price2012} and apply the Euler-Lagrange equations to write \begin{alignat}{3} diff --git a/theory/Multipoles/bibliography.bib b/theory/Multipoles/bibliography.bib index c3d1289584cab55cd8e0d4d0765d70e22f0fcf2e..547b82159cef01e1da65efb32fc7e3d47a66112e 100644 --- a/theory/Multipoles/bibliography.bib +++ b/theory/Multipoles/bibliography.bib @@ -191,4 +191,61 @@ keywords = "adaptive algorithms" adsnote = {Provided by the SAO/NASA Astrophysics Data System} } +@ARTICLE{Hubber2011, + author = {{Hubber}, D.~A. and {Batty}, C.~P. and {McLeod}, A. and {Whitworth}, A.~P. + }, + title = "{SEREN - a new SPH code for star and planet formation simulations. Algorithms and tests}", + journal = {\aap}, + keywords = {hydrodynamics, methods: numerical, stars: formation}, + year = 2011, + month = may, + volume = 529, + eid = {A27}, + pages = {A27}, + doi = {10.1051/0004-6361/201014949}, + adsurl = {http://adsabs.harvard.edu/abs/2011A%26A...529A..27H}, + adsnote = {Provided by the SAO/NASA Astrophysics Data System} +} + + +@ARTICLE{Klessen1997, + author = {{Klessen}, R.}, + title = "{GRAPESPH with fully periodic boundary conditions - Fragmentation of molecular clouds}", + journal = {\mnras}, + keywords = {Molecular Clouds, Interstellar Matter, Fragmentation, Astronomical Models, Computer Programs, Boundary Conditions}, + year = 1997, + month = nov, + volume = 292, + pages = {11}, + doi = {10.1093/mnras/292.1.11}, + adsurl = {http://adsabs.harvard.edu/abs/1997MNRAS.292...11K}, + adsnote = {Provided by the SAO/NASA Astrophysics Data System} +} + +@ARTICLE{Hernquist1991, + author = {{Hernquist}, L. and {Bouchet}, F.~R. 
and {Suto}, Y.}, + title = "{Application of the Ewald method to cosmological N-body simulations}", + journal = {\apjs}, + keywords = {Computational Astrophysics, Galactic Structure, Hubble Constant, Many Body Problem, Astronomical Models, Boundary Conditions, Spatial Resolution}, + year = 1991, + month = feb, + volume = 75, + pages = {231-240}, + doi = {10.1086/191530}, + adsurl = {http://adsabs.harvard.edu/abs/1991ApJS...75..231H}, + adsnote = {Provided by the SAO/NASA Astrophysics Data System} +} + +@ARTICLE{Ewald1921, + author = {{Ewald}, P.~P.}, + title = "{Die Berechnung optischer und elektrostatischer Gitterpotentiale}", + journal = {Annalen der Physik}, + year = 1921, + volume = 369, + pages = {253-287}, + doi = {10.1002/andp.19213690304}, + adsurl = {http://adsabs.harvard.edu/abs/1921AnP...369..253E}, + adsnote = {Provided by the SAO/NASA Astrophysics Data System} +} + diff --git a/theory/Multipoles/exact_forces.tex b/theory/Multipoles/exact_forces.tex new file mode 100644 index 0000000000000000000000000000000000000000..2602c2db1c06eb496d4c450e0d33ee87d28831e8 --- /dev/null +++ b/theory/Multipoles/exact_forces.tex @@ -0,0 +1,20 @@ +\subsection{Exact forces for accuracy checks} +\label{ssec:exact_forces} + +To assess the accuracy of the gravity solver, \swift can also compute +the gravitational forces and potential for a subset of particles using +a simple direct summation method. This is obviously much slower and +should only be used for code testing purposes. The forces for a +selection of particles are computed every time-step if they are active +and dumped to a file alongside the forces computed by the FMM method. + +In the case where periodic boundary conditions are used, we apply the +\cite{Ewald1921} summation technique to include the contribution to +the forces of all the infinite periodic replications of the particle +distribution. We use the approximation to the infinite series of terms +proposed by \cite{Hernquist1991}\footnote{Note, however, that there is +a typo in their formula for the force correction terms. The correct +expression is given by \cite{Klessen1997} \citep[see +also][]{Hubber2011}.}, which we tabulate in one octant using 64 +equally spaced bins along each spatial direction spanning the +range $[0,L]$, where $L$ is the side-length of the box. diff --git a/theory/Multipoles/fmm_standalone.tex b/theory/Multipoles/fmm_standalone.tex index 65d6b522f3a6a1f9ede41091a39f4f5145cf041c..1b597fa636650cd09469b9952f7a14bdf22ce35f 100644 --- a/theory/Multipoles/fmm_standalone.tex +++ b/theory/Multipoles/fmm_standalone.tex @@ -35,6 +35,7 @@ Making gravity great again. \input{fmm_summary} %\input{gravity_derivatives} \input{mesh_summary} +\input{exact_forces} \bibliographystyle{mnras} \bibliography{./bibliography.bib}
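For reference, the velocity conventions introduced in the theory/Cosmology/coordinates.tex hunk above can be collected into explicit conversions from the internal velocity $\mathbf{v}' \equiv a^2\dot{\mathbf{r}'}$. The relations below only restate the definitions quoted in that hunk; the symbol $\mathbf{v}_{\rm snap}$ for the \gadget snapshot convention is introduced here purely for compactness:

\begin{align}
  \mathbf{v}_{\rm p} &\equiv a\dot{\mathbf{r}'} = \frac{1}{a}\,\mathbf{v}', &
  \mathbf{v}_{\rm H} &\equiv \dot{a}\,\mathbf{r}', \nonumber\\
  \mathbf{v}_{\rm tot} &\equiv \mathbf{v}_{\rm p} + \mathbf{v}_{\rm H}
    = \dot{a}\,\mathbf{r}' + \frac{1}{a}\,\mathbf{v}', &
  \mathbf{v}_{\rm snap} &\equiv \sqrt{a}\,\dot{\mathbf{r}'} = a^{-3/2}\,\mathbf{v}'.
\end{align}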
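The exact_forces.tex file added above describes the direct-summation accuracy check in words only. The following is a minimal, self-contained sketch of what such an O(N^2) comparison looks like for the non-periodic case; the names (struct test_part, direct_summation, force_error) and the Plummer-type softening are illustrative assumptions and do not correspond to the actual SWIFT data structures or routines, and the Ewald correction for periodic replicas discussed in the text is omitted.

/* Illustrative sketch only -- not the SWIFT implementation. Fills in
 * direct-summation accelerations with a Plummer-type softening and compares
 * them to a set of reference (e.g. FMM) accelerations. */
#include <math.h>
#include <stdio.h>

struct test_part {
  double x[3];       /* position */
  double mass;       /* mass */
  double a_fmm[3];   /* acceleration obtained from the tree/FMM solver */
  double a_exact[3]; /* acceleration from direct summation (filled below) */
};

/* Brute-force pairwise sum:
 * a_i = G * sum_j m_j (x_j - x_i) / (r^2 + eps^2)^(3/2). */
static void direct_summation(struct test_part *parts, int N, double G,
                             double eps) {
  for (int i = 0; i < N; ++i) {
    for (int k = 0; k < 3; ++k) parts[i].a_exact[k] = 0.;
    for (int j = 0; j < N; ++j) {
      if (j == i) continue;
      double dx[3], r2 = 0.;
      for (int k = 0; k < 3; ++k) {
        dx[k] = parts[j].x[k] - parts[i].x[k];
        r2 += dx[k] * dx[k];
      }
      const double r_inv3 = pow(r2 + eps * eps, -1.5);
      for (int k = 0; k < 3; ++k)
        parts[i].a_exact[k] += G * parts[j].mass * dx[k] * r_inv3;
    }
  }
}

/* Relative difference between the reference and the directly summed forces. */
static double force_error(const struct test_part *p) {
  double num = 0., den = 0.;
  for (int k = 0; k < 3; ++k) {
    const double d = p->a_fmm[k] - p->a_exact[k];
    num += d * d;
    den += p->a_exact[k] * p->a_exact[k];
  }
  return sqrt(num / den);
}

int main(void) {
  /* Two unit-mass particles separated by unit distance: with G = 1 and
   * eps = 0 the exact acceleration on each particle has magnitude 1. */
  struct test_part parts[2] = {
      {{0., 0., 0.}, 1., {1., 0., 0.}, {0., 0., 0.}},
      {{1., 0., 0.}, 1., {-1., 0., 0.}, {0., 0., 0.}}};
  direct_summation(parts, 2, /*G=*/1., /*eps=*/0.);
  for (int i = 0; i < 2; ++i)
    printf("particle %d: relative force error %e\n", i, force_error(&parts[i]));
  return 0;
}

In the actual check described in the text, the reference accelerations come from the FMM solver and the pair of force estimates is dumped to a file for later comparison.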