
Basic infrastructure for black holes

Merged: Matthieu Schaller requested to merge black_holes into master

Apologies in advance for the monster push...

This adds support for black hole particles. We can:

  • Read BHs from ICs,
  • Write BHs to snapshots,
  • Exchange BHs over the network,
  • Put the BHs in the correct cells,
  • Integrate them forward in time.

This should not break any of the simulations we have run so far, and the EAGLE-low-z examples should run with the new runtime flag --black-holes. This ought to work both with and without MPI.

Actual tasks using the BHs will follow, but having this in place first is helpful.
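
SWIFT is written in C. Purely as an illustration of what integrating such a collisionless particle type forward in time amounts to, here is a minimal sketch of a black-hole particle and its drift; all names are hypothetical, and the real particle structure introduced here carries far more state.

    /* Hypothetical, stripped-down black-hole particle; the real structure
     * carries far more state (smoothing length, link to its g-particle,
     * time-step data, ...). */
    struct bh_particle {
      long long id; /* unique particle ID */
      double x[3];  /* comoving position */
      float v[3];   /* peculiar velocity */
      float mass;
    };

    /* Drift the BH forward by dt_drift, as for any collisionless particle. */
    static void bh_drift(struct bh_particle *bp, double dt_drift) {
      for (int k = 0; k < 3; k++) bp->x[k] += (double)bp->v[k] * dt_drift;
    }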

Activity

  • Matthieu Schaller resolved all discussions


  • added 2 commits

    • e3523307 - Added the black hole directory to the Doxygen list of files.
    • 12d18a2c - Documentation fixes.


  • Thanks, Peter. I have fixed these.

  • added 1 commit

    • fc401a32 - Update the command line options in the documentations.


  • Ran an EAGLE_50 using:

    ./configure --with-parmetis --with-subgrid=EAGLE --disable-hand-vec --enable-sanitizer --enable-undefined-sanitizer

    and

    module load gnu_comp intel_mpi/2018 fftw/3.3.7 gsl/2.4 parallel_hdf5/1.10.3 parmetis/4.0.3
    
    mpirun -np 6 ../../swift_mpi --pin --cosmology --hydro --self-gravity --stars --cooling --black-holes -t 16 eagle_50.yml -v 2

    on COSMA6 and it failed with:

    [0001] [02079.1] common_io.c:io_collect_gparts_to_write():1583: Collected the wrong number of g-particles (68791027 vs. 68798829 expected)
    application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1

    This happened during the dump of the initial snapshot. The surrounding log:

    [0000] [01917.3] engine_dump_snapshot: Dumping snapshot at a=9.090909e-01
    [0003] [01917.4] engine_init_particles: took 1557046.750 ms.
    [0003] [01917.4] engine_dump_snapshot: Dumping snapshot at a=9.090909e-01
    [0000] [01921.4] write_output_parallel: Snapshot and internal units match. No conversion needed.
    [0000] [02001.3] writeArray: Need to redo one iteration for array 'ElementAbundance'
    [0000] [02025.2] writeArray: Need to redo one iteration for array 'SmoothedElementAbundance'
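
    (For what it's worth, the deficit is 68798829 - 68791027 = 7802 g-particles, which would be consistent with the g-particles linked to the new BHs being dropped from the collection; that is only a guess from the numbers. Schematically, the invariant behind the abort looks like the sketch below, with illustrative names rather than the actual common_io.c code.)

    /* Illustrative sketch, not the actual common_io.c code: every particle
     * type that carries a companion g-particle must be accounted for,
     * including the new black holes. error() is SWIFT's abort-with-message
     * macro. */
    void check_gpart_count(long long Ncollected, long long Ndm, long long Ngas,
                           long long Nstars, long long Nbh) {
      const long long Ntot_expected = Ndm + Ngas + Nstars + Nbh;
      if (Ncollected != Ntot_expected)
        error("Collected the wrong number of g-particles (%lld vs. %lld expected)",
              Ncollected, Ntot_expected);
    }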
  • Thanks! Do you know whether it works with serial HDF5?

  • Serial HDF5 worked for smaller volumes, so no, not for certain. I'll try that.

  • And the answer is yes, that is now at step 104. Will check on it later.

  • Great, thanks. That means it's the I/O only, then. I'll work on it.

  • That has now run to step 2500, so it is proceeding nicely. I don't know when I'll get another chance to look at this MR, so if you fix the I/O issue and need to make progress, just merge it in.

  • added 1 commit

    • b46a4c47 - Use the correct number of DM particles in the parallel io write.


  • I think I have fixed it. The same job is now up and running with parallel HDF5.
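
    For context, a plausible shape of the fix in b46a4c47 (a sketch, not the actual diff): with every gas, star and BH particle owning a companion g-particle, the dark-matter g-particles are whatever is left of the total once those links are subtracted, so forgetting the new BH term under-counts by exactly the number of BHs.

    /* Sketch with hypothetical names: DM g-particles are those left over
     * once the g-particles linked to gas, stars and black holes are
     * subtracted from the total. */
    const long long Ndm = Ntot_gparts - Ngas - Nstars - Nbh;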

  • added 1 commit

    • ed22584a - Add the black hole particles to the top-level count and offset i/o lists.

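    For readers unfamiliar with the parallel-HDF5 snapshot layout: each rank writes its slab of every dataset at a global offset, so the per-type counts and offsets must cover all particle types, now including the BHs. A minimal sketch of how such offsets can be derived with an exclusive prefix sum (illustrative only, not the actual change in ed22584a):

    #include <mpi.h>

    #define NUM_PTYPES 7 /* illustrative stand-in for SWIFT's type count */

    /* Each rank's write offset per particle type is the sum of the counts
     * on all lower ranks: an exclusive prefix sum over the communicator. */
    void compute_write_offsets(long long N_local[NUM_PTYPES],
                               long long offset[NUM_PTYPES], int rank) {
      MPI_Exscan(N_local, offset, NUM_PTYPES, MPI_LONG_LONG, MPI_SUM,
                 MPI_COMM_WORLD);
      /* MPI_Exscan leaves rank 0's output buffer untouched; it starts at 0. */
      if (rank == 0)
        for (int i = 0; i < NUM_PTYPES; i++) offset[i] = 0;
    }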
