This merge request modifies Swift to produce lightcone outputs.
In the .yml file we specify the observer position and redshift limits:
```yaml
Lightcone:
  basename:            lightcone
  z_min:               0.0
  z_max:               0.05
  observer_position:   [150.0, 150.0, 150.0]
  use_gas:             0
  use_dm:              1
  use_dm_background:   0
  use_stars:           0
  use_black_hole:      0
  use_neutrino:        0
```
Before each time step, we calculate which periodic replications of the simulation box contain particles which could cross the lightcone during the time step. Every time a particle is drifted we loop over the list of replications and check if the particle crossed the lightcone. If so, we add the particle to a buffer. At the end of the time step we check the size of the buffer and if it is above some size threshold we write the particles to disk.
Done: the cell structure is now used to narrow down how many replications to search for each particle.
There are a few slightly tricky implementation details:
Each MPI rank writes a separate output file. Just before dumping restart files, each rank flushes its particle buffers and closes the current file; any subsequent particles crossing the lightcone are written to a new file. The current file index is stored in the restart files, so if we crash and restart we will overwrite the lightcone files written after the restart dump, which should avoid losing or duplicating particles.
Particles crossing the lightcone are identified during drift operations, which are part of the task system and execute on multiple threads, so the particle buffer must allow multiple threads to append particles simultaneously. There is also no obvious limit on how many particles we might need to buffer during one time step. The buffer therefore consists of a linked list of fixed-size data blocks, so extra storage can be allocated as needed. Most updates involve only an atomic increment and a memcpy(); a lock is taken only when we run out of space in the current block. I've tried to use gcc's __atomic intrinsics to make this work.
Multiple particle types and output quantities
Each particle type has its own buffer. In lightcone_io.c there is a struct containing the data we keep for each particle type, a function to fill the struct from the part/gpart/spart structs etc., and a function to write the data to HDF5 datasets. New output quantities can be added by modifying these. Currently this doesn't use any of the infrastructure used for writing snapshots.
Lightcone HEALPix maps are now implemented, using the same buffering scheme to periodically update in-memory HEALPix maps distributed over all MPI ranks.
Multiple lightcones are implemented but not yet tested. All of the lightcone-related data is stored in a lightcone_props struct, so we can keep an array of these structs and call the lightcone functions on each element to make multiple lightcones.