
WIP: Lightcone outputs

John Helly requested to merge lightcone into master

This merge request modifies Swift to produce lightcone outputs.

In the .yml file we specify the observer position, the redshift limits and which particle types to include:

Lightcone:
  basename: lightcone
  z_min: 0.0
  z_max: 0.05
  observer_position: [150.0, 150.0, 150.0]
  use_gas:           0
  use_dm:            1
  use_dm_background: 0
  use_stars:         0
  use_black_hole:    0
  use_neutrino:      0

Before each time step, we calculate which periodic replications of the simulation box contain particles that could cross the lightcone during the time step. Every time a particle is drifted we loop over the list of replications and check whether the particle crossed the lightcone. If so, we add the particle to a buffer. At the end of the time step we check the size of the buffer and, if it is above some size threshold, we write the buffered particles to disk.
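As a rough sketch of the per-drift check (all names here are illustrative, not the actual code):

#include <math.h>

/* One periodic replication of the box, stored as a coordinate shift */
struct replication {
  double shift[3];
};

/* Called when one particle is drifted. x_old/x_new are the positions
   before and after the drift; r_lc_old/r_lc_new are the comoving
   distances to the lightcone at the start and end of the step,
   precomputed once per time step. */
void check_lightcone_crossing(
    const double x_old[3], const double x_new[3], const double observer[3],
    const struct replication *reps, int n_reps,
    double r_lc_old, double r_lc_new) {

  for (int i = 0; i < n_reps; i++) {

    /* Squared comoving distance from the observer in this replication */
    double r2_old = 0.0, r2_new = 0.0;
    for (int j = 0; j < 3; j++) {
      const double dx_old = x_old[j] + reps[i].shift[j] - observer[j];
      const double dx_new = x_new[j] + reps[i].shift[j] - observer[j];
      r2_old += dx_old * dx_old;
      r2_new += dx_new * dx_new;
    }

    /* The lightcone shrinks towards the observer as time advances, so a
       particle that was outside it before the drift and is inside it
       afterwards crossed during this step: add it to the buffer. */
    if (sqrt(r2_old) > r_lc_old && sqrt(r2_new) <= r_lc_new) {
      /* buffer_particle(...); */
    }
  }
}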

TODO: use the cell structure to narrow down which replications to search for each particle. (Now done.)

There are a few slightly tricky implementation details:

Restart files

Each MPI rank writes a separate output file. Just before dumping restart files, each rank flushes its particle buffers and closes the current file; any subsequent particles crossing the lightcone are written to a new file. The index of the current file is stored in the restart files, so if we crash and restart we will overwrite any lightcone files written after the restart dump. This should avoid losing or duplicating particles.
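Schematically, the pre-restart step amounts to something like this (hypothetical function and field names):

/* Per-rank lightcone state; current_file is saved in the restart files */
struct lightcone_props {
  int current_file; /* index of the output file currently being written */
  /* ... particle buffers, observer position, etc. ... */
};

void lightcone_flush_buffers(struct lightcone_props *props); /* hypothetical */
void lightcone_close_file(struct lightcone_props *props);    /* hypothetical */

/* Called on every rank just before the restart files are dumped */
void lightcone_prepare_for_restart(struct lightcone_props *props) {
  lightcone_flush_buffers(props); /* write out any buffered particles */
  lightcone_close_file(props);    /* nothing more goes into this file */
  props->current_file += 1;       /* later crossings start a new file, which
                                     is overwritten after a crash + restart */
}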

Buffering particles

Particles crossing the lightcone are identified during drift operations, which are part of the task system and execute on multiple threads, so the particle buffer needs to allow multiple threads to append particles simultaneously. There is also no obvious limit on how many particles we might need to buffer during one time step. Currently the particle buffer consists of a linked list of fixed-size data blocks, so extra storage can be allocated as needed. Most appends involve only an atomic increment and a memcpy(), but a lock is taken when we run out of space in the current block. I've tried to use gcc's __atomic intrinsics to make this work.
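Very roughly, the append path works like this (a simplified sketch of the scheme, not the actual code):

#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE 1024 /* particles per block */

struct buffer_block {
  struct buffer_block *next; /* previously filled block, if any */
  size_t count;              /* incremented atomically to claim slots;
                                may overshoot BLOCK_SIZE, so the number of
                                valid entries is min(count, BLOCK_SIZE) */
  char data[];               /* BLOCK_SIZE * element_size bytes */
};

struct particle_buffer {
  size_t element_size;
  struct buffer_block *current;
  char lock; /* taken only when a fresh block must be allocated */
};

void particle_buffer_append(struct particle_buffer *buf, const void *elem) {
  for (;;) {

    /* Fast path: atomically claim a slot in the current block */
    struct buffer_block *block =
        __atomic_load_n(&buf->current, __ATOMIC_ACQUIRE);
    const size_t i = __atomic_fetch_add(&block->count, 1, __ATOMIC_RELAXED);
    if (i < BLOCK_SIZE) {
      memcpy(block->data + i * buf->element_size, elem, buf->element_size);
      return;
    }

    /* Slow path: the block is full. Take the lock, prepend a fresh block
       if no other thread beat us to it, then retry the fast path. */
    while (__atomic_test_and_set(&buf->lock, __ATOMIC_ACQUIRE))
      ; /* spin */
    if (__atomic_load_n(&buf->current, __ATOMIC_RELAXED) == block) {
      struct buffer_block *fresh =
          malloc(sizeof(*fresh) + BLOCK_SIZE * buf->element_size);
      fresh->next = block;
      fresh->count = 0;
      __atomic_store_n(&buf->current, fresh, __ATOMIC_RELEASE);
    }
    __atomic_clear(&buf->lock, __ATOMIC_RELEASE);
  }
}

At the end of the time step a single thread can walk the list of blocks and write them out; no atomics are needed there because no appends are in flight.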

Multiple particle types and output quantities

Each particle type has its own buffer. In lightcone_io.c there is a struct to contain the data we keep about each particle type, a function to fill the struct from the part/gpart/spart etc., and a function to write the data to HDF5 datasets. New output quantities can be added by modifying these. Currently this doesn't use any of the infrastructure used for writing snapshots.
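The pattern, using dark matter as an example (struct and field names are placeholders here, not the actual definitions in lightcone_io.c):

/* Stand-in for SWIFT's gpart; the real struct has many more fields */
struct gpart {
  long long id;
  double v[3];
};

/* Data kept for each dark matter particle that crosses the lightcone */
struct lightcone_gpart_data {
  long long id; /* particle ID */
  double x[3];  /* position at the moment of crossing */
  float vel[3]; /* velocity */
};

/* Fill the output struct from a gpart; a companion function writes
   arrays of these structs to HDF5 datasets. */
void lightcone_store_gpart(const struct gpart *gp, const double x_cross[3],
                           struct lightcone_gpart_data *data) {
  data->id = gp->id;
  for (int i = 0; i < 3; i++) {
    data->x[i] = x_cross[i];
    data->vel[i] = (float)gp->v[i];
  }
}

Adding a new output quantity then means adding a field to the struct, setting it in the fill function, and adding a dataset in the write function.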

Healpix maps

Lightcone healpix maps are not implemented yet, but I think we could use the same buffering scheme to periodically update in-memory healpix maps distributed over all MPI ranks. We could wait until the buffer is suitably full to avoid doing extra communication on every time step. Or we could keep the maps on disk and periodically read, update and write them. Making restarts work might be tricky in the latter case (maybe we could make a backup copy of the maps to go with each dump?).
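For the in-memory variant, the periodic update might look something like this (purely a sketch; the pixel ownership scheme and the use of MPI_Alltoallv are assumptions, not decisions):

#include <stddef.h>

/* One buffered contribution to a map */
struct map_update {
  long long pixel;
  double value;
};

/* Each rank owns pixels [pix_first, pix_first + num_pix_local) of the
   full map. Updates destined for other ranks would first be exchanged,
   e.g. with MPI_Alltoallv; here we just apply the local ones. */
void healpix_map_apply(double *local_map, long long pix_first,
                       long long num_pix_local,
                       const struct map_update *updates, size_t n) {
  for (size_t i = 0; i < n; i++) {
    const long long p = updates[i].pixel - pix_first;
    if (p >= 0 && p < num_pix_local) local_map[p] += updates[i].value;
  }
}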

Multiple lightcones

This is also not implemented yet, but all of the lightcone-related data is stored in a lightcone_props struct, so we could have an array of these structs and call the lightcone functions on each element to make multiple lightcones. It's not clear to me whether we can specify multiple lightcones in the .yml file in a sensible way.
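On the C side this would presumably reduce to loops like the following (hypothetical sketch):

struct lightcone_props {
  /* ... all of the lightcone-related data, as above ... */
  int placeholder;
};

void lightcone_flush(struct lightcone_props *props); /* hypothetical */

/* Each lightcone operation becomes a loop over the array of structs */
void lightcones_flush_all(struct lightcone_props *lightcones,
                          int num_lightcones) {
  for (int i = 0; i < num_lightcones; i++) lightcone_flush(&lightcones[i]);
}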
