diff --git a/doc/RTD/source/AnalysisTools/index.rst b/doc/RTD/source/AnalysisTools/index.rst
index 7ba9f6256d841c7bb3bec42abab6a5c22ba4083e..f7f2f979666270ce371b532b6eab7bad3a23c9bd 100644
--- a/doc/RTD/source/AnalysisTools/index.rst
+++ b/doc/RTD/source/AnalysisTools/index.rst
@@ -58,24 +58,13 @@ can be made by calling the ``memuse_log_allocation()`` function directly.
 
 The output files are called ``memuse_report-step<n>.dat`` or
 ``memuse_report-rank<m>-step<n>.dat`` if running using MPI. These have a line
-for each allocation or free that records the time, memory address, step,
-whether an allocation or free, the label and when an allocation, the amount of
-memory. The comments in this file also record the actual memory use of the
-process (including threads) as reported by the operating system at the end of
-the step.
-
-To post process these files into a memory used timeline and get a report of
-the peak memory use, as well as the memory still in use at the end of the step
-there is an basic analysis script ``analyse_memuse_logs.py`` and two wrappers
-that process a directory of logs, these are ``./process_memuse_logs.sh`` and
-``./process_memuse_logs_MPI.sh`` for non-MPI and MPI runs respectively.
-
-Note that the process scripts process each step individually and also process
-all the logs as a single sequence. When interpreting these some care should be
-taken as they are not all the memory allocated, just important allocations in
-SWIFT and when looking at a single step the context of any previous steps is
-not used, so you only see allocations made in that step and the effect of any
-matching frees (so allocations made in previous steps that are freed in this
-step will not be understood and will be ignored, you need the global analysis
-to understand that).
+for each allocation or free that records the time, step, whether an allocation
+or free, the label, the amount of memory allocated or freed and the total of
+all (labelled) memory in use at that time.
+
+Comments at the end of this file also record the actual memory use of the
+process (including threads), as reported by the operating system at the end of
+the step, and the total memory still in use per label. Note that this
+includes memory still active from previous steps and that the running
+total is carried over from the previous dump.
 
diff --git a/doc/RTD/source/FriendsOfFriends/algorithm_description.rst b/doc/RTD/source/FriendsOfFriends/algorithm_description.rst
new file mode 100644
index 0000000000000000000000000000000000000000..bf7fd77fbc36f026088c317bf9b50c24c1406061
--- /dev/null
+++ b/doc/RTD/source/FriendsOfFriends/algorithm_description.rst
@@ -0,0 +1,62 @@
+.. Friends Of Friends
+   Matthieu Schaller 15th June 2019
+
+.. _fof_algorithm_description_label:
+
+Friends-Of-Friends Algorithm
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Friends-Of-Friends (FOF) is a common tool used to identify groups of
+particles in a simulation. This can, for instance, be used to identify
+haloes in a cosmological simulation.
+
+In practice, this is done by using a *linking length* ``l`` and
+demanding that any particle that finds another particle within a
+distance ``l`` is linked to it to form a group. A particle is linked
+directly to all other particles within a distance ``l`` (its
+*friends*) and indirectly to all particles that are linked to its
+friends (its *friends-of-friends*). This creates networks of linked particles
+which are called *groups*. The size (or length) of
+a group is the number of particles in that group. If a particle does not
+find any other particle within ``l`` then it forms its own group of
+size 1. For a given distribution of particles the resulting list of
+groups is unique and unambiguously defined.
+
+Small groups are typically discarded; the final catalogue only contains
+objects with a length above a minimal threshold, typically of the
+order of ``20`` particles, as smaller groups are often spurious.
+
+Once the groups have been identified, properties can be computed for
+each of them. The total mass or the centre of mass are common
+examples. These are then stored in catalogues alongside a unique
+identifier for each group.
+
+SWIFT implements FOF using a Union-Find approach. It also exploits the
+domain decomposition and tree structure that is created for the other
+parts of the code. The tree can be easily used to find neighbours of
+particles within the linking length.
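The union-find grouping described above can be sketched in a few lines. This is an illustrative O(N^2) pair loop, not SWIFT's implementation, which walks its neighbour-finding tree instead of comparing all pairs:

```python
import numpy as np

def friends_of_friends(positions, l):
    """Minimal FOF grouper using union-find with path compression.

    positions: (N, 3) array of particle coordinates; l: linking length.
    Returns one group label per particle. Illustrative sketch only.
    """
    n = len(positions)
    parent = np.arange(n)

    def find(i):
        # follow parents to the root, compressing the path as we go
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri

    l2 = l * l
    for i in range(n):
        # link particle i to every later particle within the linking length
        d2 = np.sum((positions[i + 1:] - positions[i]) ** 2, axis=1)
        for j in np.nonzero(d2 < l2)[0]:
            union(i, i + 1 + j)

    return np.array([find(i) for i in range(n)])
```

Two particles closer than ``l`` end up with the same label; an isolated particle forms its own group of size 1, matching the definition above.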
+
+Depending on the application, the choice of linking length and
+minimal group size can vary. For cosmological applications, bound
+structures (dark matter haloes) are traditionally identified using a
+linking length expressed as :math:`0.2` of the mean inter-particle
+separation :math:`d` in the simulation which is given by :math:`d =
+\sqrt[3]{\frac{V}{N}}`, where :math:`N` is the number of particles in
+the simulation and :math:`V` is the simulation (co-moving)
+volume. Usually only dark matter particles are considered for the
+number :math:`N`. Other particle types are linked but do not
+participate in the calculation of the linking length. Experience shows
+that this produces groups that are similar to the commonly adopted
+(but much more complex) definition of virialised haloes. A minimal
+group length of :math:`32` is often adopted in order to get a robust
+catalogue of haloes and compute a good halo mass function.
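The conventional linking length follows directly from the formula above. A small worked example, with illustrative values for the particle number and box size:

```python
# Toy numbers for a cosmological volume (illustrative, not a real run).
N = 512 ** 3                 # number of dark matter particles
V = 100.0 ** 3               # co-moving volume, e.g. a 100 Mpc box
d = (V / N) ** (1.0 / 3.0)   # mean inter-particle separation d = (V/N)^(1/3)
l = 0.2 * d                  # conventional FOF linking length
print(l)                     # 0.0390625, i.e. 0.2 * (100 / 512)
```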
+
+For non-cosmological applications of the FOF algorithm, the choice of
+the linking length is more difficult and left to the user. The choice
+of the minimal group size to consider is also application dependent.
+
+In SWIFT, FOF is also used to identify haloes in which to seed black
+holes in a cosmological run. When used in this mode, the code does not
+produce any outputs but uses the catalogue internally to convert some
+gas particles into black hole particles.
+
diff --git a/doc/RTD/source/FriendsOfFriends/index.rst b/doc/RTD/source/FriendsOfFriends/index.rst
new file mode 100644
index 0000000000000000000000000000000000000000..d3ee08c43150dddd5f9247e0d109ab37c565e2f2
--- /dev/null
+++ b/doc/RTD/source/FriendsOfFriends/index.rst
@@ -0,0 +1,20 @@
+.. Friends Of Friends
+   Matthieu Schaller 15th June 2019
+
+.. _Friends_Of_Friends_label:
+
+Friends-Of-Friends (FOF) Halo Finder
+====================================
+
+This section describes the Friends-Of-Friends group
+finder and its related parameters.
+
+.. toctree::
+   :maxdepth: 2
+   :caption: Contents:
+
+   algorithm_description
+   stand_alone_fof
+   on_the_fly_fof
+   parameter_description
+
diff --git a/doc/RTD/source/FriendsOfFriends/on_the_fly_fof.rst b/doc/RTD/source/FriendsOfFriends/on_the_fly_fof.rst
new file mode 100644
index 0000000000000000000000000000000000000000..b9b5272a393c85eeddd2f091c3cf00d416991b84
--- /dev/null
+++ b/doc/RTD/source/FriendsOfFriends/on_the_fly_fof.rst
@@ -0,0 +1,46 @@
+.. Friends Of Friends
+   Matthieu Schaller 15th June 2019
+
+.. _fof_on_the_fly_label:
+
+On-the-fly Friends-Of-Friends and Black Hole seeding
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The main purpose of the on-the-fly FOF is to identify haloes during a
+cosmological simulation in order to seed some of them with black holes
+based on physical considerations.
+
+**In this mode, no group catalogue is written to disk. The resulting list
+of haloes is only used internally by SWIFT.**
+
+Once the haloes have been identified by the FOF code, SWIFT will iterate
+over the list of groups and will check whether each halo obeys the
+following criteria:
+
+  * Is above a user-specified mass threshold (typically
+    :math:`10^{10}~\rm{M}_\odot` or more).
+  * Contains *at least* one gas particle.
+  * Does *not* contain any already existing black hole particle.
+
+If a group satisfies all these requirements then the *densest* gas particle
+(based on their SPH density) in the group will be converted to a black
+hole. In practice this means that the gas particle is removed and a black
+hole particle with the same dynamical mass is inserted at the same position
+in the domain and with the same velocity. Additional properties are copied
+from the gas particle to the black hole particle depending on the specifics
+of the sub-grid model in use (see :ref:`subgrid`).
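The seeding decision above can be sketched as follows. The group fields and helper names here are hypothetical stand-ins, and the actual gas-to-black-hole conversion depends on the sub-grid model in use:

```python
def should_seed(group, mass_threshold):
    """Return True if a FOF group qualifies for black hole seeding.

    `group` is a hypothetical dict of group properties; SWIFT's internal
    structures differ.
    """
    return (
        group["mass"] > mass_threshold      # above the user-specified threshold
        and group["n_gas"] > 0              # contains at least one gas particle
        and group["n_black_holes"] == 0     # no pre-existing black hole
    )

def pick_seed_particle(gas_densities):
    """Index of the densest gas particle (by SPH density) in the group."""
    return max(range(len(gas_densities)), key=lambda i: gas_densities[i])
```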
+
+Given that the halo mass considered for seeding black holes is usually many
+hundred times the mass of a single particle, it can be advantageous to
+raise the minimal group length required for a halo to appear in the catalogue
+of objects. This reduces the number of groups that are kept in memory and
+speeds up the seeding procedure by removing haloes that are obviously too
+small. For instance, in the case of EAGLE-like runs, the dark matter
+particle mass is around :math:`10^7~\rm{M}_\odot` and the minimal halo mass
+considered for seeding a black hole is of order
+:math:`10^{10}~\rm{M}_\odot`. Groups will hence need to have at least 1000
+particles to pass the first criterion outlined above. In this case, using a
+minimal group length of order 500 is beneficial over the more traditional
+value of 32 as it will reduce the number of haloes to consider by about a
+factor 10 (assuming a normal cosmology) whilst still being far enough from
+the actual user-defined mass limit to be safe.
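A back-of-the-envelope check of the EAGLE-like numbers quoted above:

```python
m_dm = 1.0e7            # dark matter particle mass in solar masses
m_seed_halo = 1.0e10    # minimal halo mass considered for seeding
min_particles = m_seed_halo / m_dm
print(min_particles)    # 1000.0: a min_group_size of ~500 sits safely below this
```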
diff --git a/doc/RTD/source/FriendsOfFriends/parameter_description.rst b/doc/RTD/source/FriendsOfFriends/parameter_description.rst
new file mode 100644
index 0000000000000000000000000000000000000000..bd5fc2b8c758b1b8df65158900c4a8dcdf6fb132
--- /dev/null
+++ b/doc/RTD/source/FriendsOfFriends/parameter_description.rst
@@ -0,0 +1,88 @@
+.. Friends Of Friends
+   Matthieu Schaller 15th June 2019
+
+.. _Fof_Parameter_Description_label:
+
+Friends-Of-Friends Parameters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``FOF`` section of the parameter file contains all the options related
+to the group finder. Some parameters only make sense in the stand-alone
+version and some only in the on-the-fly version of the module.
+
+The main parameter is the linking length that will be used. The easiest way
+to define it is to set the ratio of the linking length to the mean *dark
+matter* inter-particle separation (irrespective of whether there are other
+kinds of particles present in the run). This is done using the parameter
+``linking_length_ratio`` and the typical value used is
+``0.2``. Users can optionally overwrite the linking length by imposing an
+absolute value using the parameter ``absolute_linking_length``. This is
+expressed in internal units. This value will be ignored (and the ratio of
+the mean inter-particle separation will be used) when set to ``-1``.
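The way the two linking-length parameters combine can be sketched as below. The parameter names come from the ``FOF`` section; the function itself is illustrative, not SWIFT's implementation:

```python
def resolve_linking_length(linking_length_ratio, absolute_linking_length,
                           mean_separation):
    """Pick the effective linking length from the two FOF parameters.

    absolute_linking_length == -1 means "use the ratio of the mean
    inter-particle separation"; a positive value overrides the ratio.
    """
    if absolute_linking_length > 0.0:
        return absolute_linking_length
    return linking_length_ratio * mean_separation
```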
+
+The second important parameter is the minimal size of groups to retain in
+the catalogues. This is given in terms of number of particles (of all types)
+via the parameter ``min_group_size``. When analysing simulations, to
+identify haloes, the common practice is to set this to ``32`` in order
+not to pollute the catalogue with small, likely unbound, structures.
+When using the FOF code on-the-fly for black hole seeding, larger values
+are recommended as there is no need to store groups much smaller than the
+minimal halo mass (see below).
+
+------------------------
+
+In the case of black hole seeding, we run the FOF module on-the-fly during
+a cosmological simulation. The time of the first FOF call is controlled by
+the following two options:
+
+  * Time of the first FOF call (non-cosmological runs): ``time_first``,
+  * Scale-factor of the first FOF call (cosmological runs):
+    ``scale_factor_first``.
+
+One of those two parameters has to be provided depending on the type of
+run. In the case of non-cosmological runs, the time of the first FOF call
+is expressed in the internal units of time. Users also have to provide the
+difference in time (or scale-factor) between consecutive FOF calls:
+
+  * Time difference between consecutive FOF calls: ``delta_time``.
+
+The last parameter relevant to the on-the-fly FOF module is the minimal
+halo (group) mass considered for seeding black holes. This is specified by
+the parameter ``black_hole_seed_halo_mass_Msun`` which is expressed in
+solar masses.
+
+------------------------
+
+In the case of the stand-alone module, the four seeding parameters
+described above are ignored but an additional one needs to be
+specified. This is the name of the file in which the catalogue of groups will
+be written. This is specified by the parameter ``basename``. The linking
+length and minimal group size to keep are set in the way described above.
+
+There are two additional optional parameters that can be used to modify
+details of the stand-alone FOF code. The ``GroupID`` of the particles that
+do not belong to any groups is by default set to 2147483647
+(i.e. :math:`2^{31}-1`). A different value can be specified by changing the
+optional parameter ``group_id_default``. Similarly, the first group in the
+catalogue (i.e. the largest group) carries the ``GroupID`` 1. This can be
+changed by tweaking the optional parameter ``group_id_offset``.
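The default ``GroupID`` quoted above is the largest signed 32-bit integer, which a quick check confirms:

```python
# Default GroupID of ungrouped particles: the largest signed 32-bit integer.
group_id_default = 2 ** 31 - 1
print(group_id_default)  # 2147483647
```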
+
+
+------------------------
+
+A full FOF section of the YAML parameter file looks like:
+
+.. code:: YAML
+
+    # Parameters of the Friends-Of-Friends module
+    FOF:
+       basename:                        fof_output  # Filename for the FOF outputs.
+       scale_factor_first:              0.91        # Scale-factor of first FoF black hole seeding calls.
+       time_first:                      0.2         # Time of first FoF black hole seeding calls.
+       delta_time:                      1.005       # Time between consecutive FoF black hole seeding calls.
+       min_group_size:                  256         # The minimum no. of particles required for a group.
+       linking_length_ratio:            0.2         # Linking length in units of the mean inter-particle separation.
+       black_hole_seed_halo_mass_Msun:  1.5e10      # Minimal halo mass in which to seed a black hole (in solar masses).
+       absolute_linking_length:         -1.         # (Optional) Absolute linking length (in internal units).
+       group_id_default:                2147483647  # (Optional) Sets the group ID of particles in groups below the minimum size.
+       group_id_offset:                 1           # (Optional) Sets the offset of group ID labelling. Defaults to 1 if unspecified.
diff --git a/doc/RTD/source/FriendsOfFriends/stand_alone_fof.rst b/doc/RTD/source/FriendsOfFriends/stand_alone_fof.rst
new file mode 100644
index 0000000000000000000000000000000000000000..4d3a1b6eb838802d95df95e4c4c0dfe98adac2bc
--- /dev/null
+++ b/doc/RTD/source/FriendsOfFriends/stand_alone_fof.rst
@@ -0,0 +1,31 @@
+.. Friends Of Friends
+   Matthieu Schaller 15th June 2019
+
+.. _fof_stand_alone_label:
+
+Stand-Alone Friends-Of-Friends
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The stand-alone version of the Friends-Of-Friends module can be
+compiled by configuring the code with the option
+``--enable-stand-alone-fof``. The ``fof`` and ``fof_mpi`` executables
+will then be generated alongside the regular SWIFT ones.
+
+The executable takes a parameter file as an argument. It will then
+read the snapshot specified in the parameter file and extract all
+the dark matter particles by default. FOF is then run on these
+particles and a catalogue of groups is written to disk. Additional
+particle types can be read and processed by the stand-alone FOF
+code by adding any of the following runtime parameters to the
+command line:
+
+ * ``--hydro``: Read and process the gas particles,
+ * ``--stars``: Read and process the star particles,
+ * ``--black-holes``: Read and process the black hole particles.
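Assuming a parameter file named ``fof_params.yml`` (a hypothetical name; any YAML file with the required sections works), an invocation reading gas and star particles in addition to dark matter might look like:

```shell
# Run stand-alone FOF, additionally reading gas and star particles.
./fof --hydro --stars fof_params.yml
```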
+
+The group catalogues contain a unique identifier for each group as
+well as the number of particles in each group and their total mass (in
+internal units). The FOF code will also write a snapshot with an
+additional field for each particle. This contains the ``GroupID`` of
+each particle and can be used to find all the particles in a given
+halo and to link them to the information stored in the catalogue.
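Selecting all particles of one halo from the extra ``GroupID`` field can be done with a simple mask. The arrays below stand in for datasets read from the snapshot (e.g. with ``h5py``); the values are illustrative:

```python
import numpy as np

# Stand-ins for the GroupID and mass datasets of a snapshot.
group_ids = np.array([1, 1, 2, 1, 2147483647, 2])
masses = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])

members = group_ids == 1           # all particles in the largest group
halo_mass = masses[members].sum()  # total mass of that halo, in internal units
print(halo_mass)                   # 7.0
```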
diff --git a/doc/RTD/source/ParameterFiles/parameter_description.rst b/doc/RTD/source/ParameterFiles/parameter_description.rst
index f382bc88d9ac5cb5e4a5b840dd8a4769df1cb6fd..8965e67ee59213f9047d54e10e4ae01cce7e0527 100644
--- a/doc/RTD/source/ParameterFiles/parameter_description.rst
+++ b/doc/RTD/source/ParameterFiles/parameter_description.rst
@@ -517,6 +517,14 @@ following pages:
 * :ref:`Output_list_label` (to have snapshots not evenly spaced in time),
 * :ref:`Output_selection_label` (to select what particle fields to write).
 
+.. _Parameters_fof:
+
+Friends-Of-Friends (FOF)
+------------------------
+
+The parameters are described separately on the page
+:ref:`Fof_Parameter_Description_label` within the more general
+:ref:`Friends_Of_Friends_label` description.
 
 .. _Parameters_statistics:
 
diff --git a/doc/RTD/source/index.rst b/doc/RTD/source/index.rst
index b7c4205b0ac45680ee14e4a3941f03d116ab8c69..ffd59d8b8e4b79e3b60aa81a00153d968ba42da0 100644
--- a/doc/RTD/source/index.rst
+++ b/doc/RTD/source/index.rst
@@ -21,6 +21,7 @@ difference is the parameter file that will need to be adapted for SWIFT.
    Snapshots/index
    HydroSchemes/index
    SubgridModels/index
+   FriendsOfFriends/index
    EquationOfState/index
    ExternalPotentials/index
    NewOption/index
diff --git a/examples/EAGLE_ICs/EAGLE_12/eagle_12.yml b/examples/EAGLE_ICs/EAGLE_12/eagle_12.yml
index eccf8529098eea1e157619d07d3c588bb30ef289..4b807846de2871e1a6c9e7baf4793c9fd6475a9c 100644
--- a/examples/EAGLE_ICs/EAGLE_12/eagle_12.yml
+++ b/examples/EAGLE_ICs/EAGLE_12/eagle_12.yml
@@ -165,6 +165,7 @@ EAGLEFeedback:
 EAGLEAGN:
   subgrid_seed_mass_Msun:           1.5e5      # Black hole subgrid mass at creation time in solar masses.
   max_eddington_fraction:           1.         # Maximal allowed accretion rate in units of the Eddington rate.
+  viscous_alpha:                    1e6        # Normalisation constant of the Bondi viscous time-scale accretion reduction term.
   radiative_efficiency:             0.1        # Fraction of the accreted mass that gets radiated.
   coupling_efficiency:              0.15       # Fraction of the radiated energy that couples to the gas in feedback events.
   AGN_delta_T_K:                    3.16228e8  # Change in temperature to apply to the gas particle in an AGN feedback event in Kelvin.
diff --git a/examples/EAGLE_ICs/EAGLE_25/eagle_25.yml b/examples/EAGLE_ICs/EAGLE_25/eagle_25.yml
index a960af6dd9ff949053c5a053ae29a7bd97e19d9c..222e7a81ac7a979b6e9be9b4ac29c32342e80dae 100644
--- a/examples/EAGLE_ICs/EAGLE_25/eagle_25.yml
+++ b/examples/EAGLE_ICs/EAGLE_25/eagle_25.yml
@@ -166,6 +166,7 @@ EAGLEFeedback:
 EAGLEAGN:
   subgrid_seed_mass_Msun:           1.5e5      # Black hole subgrid mass at creation time in solar masses.
   max_eddington_fraction:           1.         # Maximal allowed accretion rate in units of the Eddington rate.
+  viscous_alpha:                    1e6        # Normalisation constant of the Bondi viscous time-scale accretion reduction term.
   radiative_efficiency:             0.1        # Fraction of the accreted mass that gets radiated.
   coupling_efficiency:              0.15       # Fraction of the radiated energy that couples to the gas in feedback events.
   AGN_delta_T_K:                    3.16228e8  # Change in temperature to apply to the gas particle in an AGN feedback event in Kelvin.
diff --git a/examples/EAGLE_ICs/EAGLE_50/eagle_50.yml b/examples/EAGLE_ICs/EAGLE_50/eagle_50.yml
index b31a77d87669d5786ab1fdb904e1f6683b9dece9..27320d1327c412cc803eb3f8859a7f569bc14175 100644
--- a/examples/EAGLE_ICs/EAGLE_50/eagle_50.yml
+++ b/examples/EAGLE_ICs/EAGLE_50/eagle_50.yml
@@ -166,6 +166,7 @@ EAGLEFeedback:
 EAGLEAGN:
   subgrid_seed_mass_Msun:           1.5e5      # Black hole subgrid mass at creation time in solar masses.
   max_eddington_fraction:           1.         # Maximal allowed accretion rate in units of the Eddington rate.
+  viscous_alpha:                    1e6        # Normalisation constant of the Bondi viscous time-scale accretion reduction term.
   radiative_efficiency:             0.1        # Fraction of the accreted mass that gets radiated.
   coupling_efficiency:              0.15       # Fraction of the radiated energy that couples to the gas in feedback events.
   AGN_delta_T_K:                    3.16228e8  # Change in temperature to apply to the gas particle in an AGN feedback event in Kelvin.
diff --git a/examples/EAGLE_low_z/EAGLE_12/eagle_12.yml b/examples/EAGLE_low_z/EAGLE_12/eagle_12.yml
index 24e66cd999b7380f11a201b35ea2f929c5ae7a93..faeda2a3acf4f04cd75512190666374351592276 100644
--- a/examples/EAGLE_low_z/EAGLE_12/eagle_12.yml
+++ b/examples/EAGLE_low_z/EAGLE_12/eagle_12.yml
@@ -30,6 +30,9 @@ Snapshots:
   delta_time:          1.01  # Time difference between consecutive outputs (in internal units)
   compression:         1
 
+Scheduler:
+  links_per_tasks: 500
+
 # Parameters governing the conserved quantities statistics
 Statistics:
   scale_factor_first:  0.92 # Scale-factor of the first stat dump (cosmological run)
@@ -154,6 +157,7 @@ EAGLEFeedback:
 EAGLEAGN:
   subgrid_seed_mass_Msun:           1.5e5      # Black hole subgrid mass at creation time in solar masses.
   max_eddington_fraction:           1.         # Maximal allowed accretion rate in units of the Eddington rate.
+  viscous_alpha:                    1e6        # Normalisation constant of the Bondi viscous time-scale accretion reduction term.
   radiative_efficiency:             0.1        # Fraction of the accreted mass that gets radiated.
   coupling_efficiency:              0.15       # Fraction of the radiated energy that couples to the gas in feedback events.
   AGN_delta_T_K:                    3.16228e8  # Change in temperature to apply to the gas particle in an AGN feedback event in Kelvin.
diff --git a/examples/EAGLE_low_z/EAGLE_25/eagle_25.yml b/examples/EAGLE_low_z/EAGLE_25/eagle_25.yml
index 7d47d39481e2f2794e83660ac0868d624e6b38ad..5736fb864bb950703c6861cefec063ceec7861ec 100644
--- a/examples/EAGLE_low_z/EAGLE_25/eagle_25.yml
+++ b/examples/EAGLE_low_z/EAGLE_25/eagle_25.yml
@@ -22,7 +22,10 @@ Cosmology:
   Omega_m:        0.307         # Matter density parameter
   Omega_lambda:   0.693         # Dark-energy density parameter
   Omega_b:        0.0455        # Baryon density parameter
-  
+
+Scheduler:
+  links_per_tasks: 500
+
 # Parameters governing the time integration
 TimeIntegration:
   time_begin: 0.    # The starting time of the simulation (in internal units).
@@ -161,6 +164,7 @@ EAGLEFeedback:
 EAGLEAGN:
   subgrid_seed_mass_Msun:           1.5e5      # Black hole subgrid mass at creation time in solar masses.
   max_eddington_fraction:           1.         # Maximal allowed accretion rate in units of the Eddington rate.
+  viscous_alpha:                    1e6        # Normalisation constant of the Bondi viscous time-scale accretion reduction term.
   radiative_efficiency:             0.1        # Fraction of the accreted mass that gets radiated.
   coupling_efficiency:              0.15       # Fraction of the radiated energy that couples to the gas in feedback events.
   AGN_delta_T_K:                    3.16228e8  # Change in temperature to apply to the gas particle in an AGN feedback event in Kelvin.
diff --git a/examples/EAGLE_low_z/EAGLE_50/eagle_50.yml b/examples/EAGLE_low_z/EAGLE_50/eagle_50.yml
index e36eb4ad8d814b7d37006578a2c2796f3ec934b9..cb2fa3b4dc0b2e3f04f04f73b6157e1400665c67 100644
--- a/examples/EAGLE_low_z/EAGLE_50/eagle_50.yml
+++ b/examples/EAGLE_low_z/EAGLE_50/eagle_50.yml
@@ -156,6 +156,7 @@ EAGLEFeedback:
 EAGLEAGN:
   subgrid_seed_mass_Msun:           1.5e5      # Black hole subgrid mass at creation time in solar masses.
   max_eddington_fraction:           1.         # Maximal allowed accretion rate in units of the Eddington rate.
+  viscous_alpha:                    1e6        # Normalisation constant of the Bondi viscous time-scale accretion reduction term.
   radiative_efficiency:             0.1        # Fraction of the accreted mass that gets radiated.
   coupling_efficiency:              0.15       # Fraction of the radiated energy that couples to the gas in feedback events.
   AGN_delta_T_K:                    3.16228e8  # Change in temperature to apply to the gas particle in an AGN feedback event in Kelvin.
diff --git a/examples/EAGLE_low_z/EAGLE_6/eagle_6.yml b/examples/EAGLE_low_z/EAGLE_6/eagle_6.yml
index caaebdb7c4876b0f70459a817f3ab4758d833216..d1bbfa4ef6c6b785265bfda7a9d929374af5f962 100644
--- a/examples/EAGLE_low_z/EAGLE_6/eagle_6.yml
+++ b/examples/EAGLE_low_z/EAGLE_6/eagle_6.yml
@@ -25,6 +25,7 @@ Cosmology:
   
 Scheduler:
   max_top_level_cells:    8
+  links_per_tasks: 500
   
 # Parameters governing the time integration
 TimeIntegration:
@@ -165,6 +166,7 @@ EAGLEFeedback:
 EAGLEAGN:
   subgrid_seed_mass_Msun:           1.5e5      # Black hole subgrid mass at creation time in solar masses.
   max_eddington_fraction:           1.         # Maximal allowed accretion rate in units of the Eddington rate.
+  viscous_alpha:                    1e6        # Normalisation constant of the Bondi viscous time-scale accretion reduction term.
   radiative_efficiency:             0.1        # Fraction of the accreted mass that gets radiated.
   coupling_efficiency:              0.15       # Fraction of the radiated energy that couples to the gas in feedback events.
   AGN_delta_T_K:                    3.16228e8  # Change in temperature to apply to the gas particle in an AGN feedback event in Kelvin.
diff --git a/examples/HydroTests/BlobTest_3D/README.md b/examples/HydroTests/BlobTest_3D/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..2bd59abd7a028a230a581f3bf8b765192923d903
--- /dev/null
+++ b/examples/HydroTests/BlobTest_3D/README.md
@@ -0,0 +1,21 @@
+Blob Test
+=========
+
+This test is very similar to the one presented in Agertz+ 2007
+(https://ui.adsabs.harvard.edu/abs/2007MNRAS.380..963A/abstract) that tests
+un-seeded fluid mixing.
+
+You can use the makeIC.py script to generate the initial conditions, and
+then the makeMovie.py script to look at the results. Note that this is a very
+dynamic test; similarly to the Kelvin-Helmholtz test, it does _not_ have an
+explicit answer and is best viewed as a movie, interpreted with more than a
+grain of salt.
+
+The expected results are:
+
++ For schemes with surface tension terms, the blob stays together
++ For schemes without surface tension terms, the blob breaks up.
+
+This is at much lower resolution than the original test in Agertz+ 2007.
+To change the resolution, you will need to edit the `makeIC.py` file.
+A similar resolution to the GAD\_10M run presented in that paper can be obtained
+by using `num_on_side=128`.
diff --git a/examples/HydroTests/BlobTest_3D/blob.yml b/examples/HydroTests/BlobTest_3D/blob.yml
new file mode 100644
index 0000000000000000000000000000000000000000..b3196182824e9a1edac3f40e48617f31c4874bff
--- /dev/null
+++ b/examples/HydroTests/BlobTest_3D/blob.yml
@@ -0,0 +1,36 @@
+# Define the system of units to use internally. 
+InternalUnitSystem:
+  UnitMass_in_cgs:     1   # Grams
+  UnitLength_in_cgs:   1   # Centimeters
+  UnitVelocity_in_cgs: 1   # Centimeters per second
+  UnitCurrent_in_cgs:  1   # Amperes
+  UnitTemp_in_cgs:     1   # Kelvin
+
+# Parameters governing the time integration
+TimeIntegration:
+  time_begin: 0.    # The starting time of the simulation (in internal units).
+  time_end:   10   # The end time of the simulation (in internal units).
+  dt_min:     1e-7  # The minimal time-step size of the simulation (in internal units).
+  dt_max:     1e-2  # The maximal time-step size of the simulation (in internal units).
+
+# Parameters governing the snapshots
+Snapshots:
+  basename:            blob # Common part of the name of output files
+  time_first:          0.       # Time of the first output (in internal units)
+  delta_time:          0.1      # Time difference between consecutive outputs (in internal units)
+  compression:         1
+  
+# Parameters governing the conserved quantities statistics
+Statistics:
+  delta_time:          1e-2 # Time between statistics output
+
+# Parameters for the hydrodynamics scheme
+SPH:
+  resolution_eta:        1.2348   # Target smoothing length in units of the mean inter-particle separation (1.2348 == 48Ngbs with the cubic spline kernel).
+  CFL_condition:         0.1      # Courant-Friedrichs-Lewy condition for time integration.
+
+# Parameters related to the initial conditions
+InitialConditions:
+  file_name:  ./blob.hdf5       # The file to read
+  periodic:   1
+  cleanup_smoothing_lengths: 1
diff --git a/examples/HydroTests/BlobTest_3D/makeIC.py b/examples/HydroTests/BlobTest_3D/makeIC.py
new file mode 100644
index 0000000000000000000000000000000000000000..859612f8a67e2546cf9e79e2dd7284d9ed335179
--- /dev/null
+++ b/examples/HydroTests/BlobTest_3D/makeIC.py
@@ -0,0 +1,169 @@
+"""
+Creates the BCC ICs for the blob test.
+"""
+
+import numpy as np
+import h5py
+
+from unyt import cm, g, s, erg
+from unyt.unit_systems import cgs_unit_system
+
+from swiftsimio import Writer
+
+
+def generate_cube(num_on_side, side_length=1.0):
+    """
+    Generates a cube
+    """
+
+    values = np.linspace(0.0, side_length, num_on_side + 1)[:-1]
+
+    positions = np.empty((num_on_side ** 3, 3), dtype=float)
+
+    for x in range(num_on_side):
+        for y in range(num_on_side):
+            for z in range(num_on_side):
+                index = x * num_on_side + y * num_on_side ** 2 + z
+
+                positions[index, 0] = values[x]
+                positions[index, 1] = values[y]
+                positions[index, 2] = values[z]
+
+    return positions
+
+
+def generate_bcc_lattice(num_on_side, side_length=1.0):
+    cube = generate_cube(num_on_side // 2, side_length)
+
+    mips = side_length / num_on_side
+
+    positions = np.concatenate([cube, cube + mips * 0.5])
+
+    return positions
+
+
+def generate_outside_of_blob(
+    num_on_side, num_copies_x, side_length, blob_centre_x, blob_radius, initial_velocity
+):
+    """
+    Generates the area outside of the blob, including a
+    cut out for where the blob will fit in.
+    """
+
+    bcc_lattice = generate_bcc_lattice(num_on_side)
+
+    # We now need to duplicate.
+    bcc_lattice_full = np.concatenate(
+        [bcc_lattice + x * np.array([1.0, 0.0, 0.0]) for x in range(num_copies_x)]
+    )
+
+    # Now size it appropriately
+    bcc_lattice_full *= side_length
+
+    # Now we need to chop out the region that our blob will live in
+    dx = bcc_lattice_full - np.array(
+        [blob_centre_x, 0.5 * side_length, 0.5 * side_length]
+    )
+    squared_radius = np.sum(dx * dx, axis=1)
+
+    cut_out_mask = squared_radius > blob_radius ** 2
+
+    positions = bcc_lattice_full[cut_out_mask]
+
+    # Now we can generate the velocities
+    velocities = np.zeros_like(positions)
+    velocities[:, 0] = initial_velocity
+
+    return positions, velocities
+
+
+def generate_blob(num_on_side, side_length, blob_centre_x, blob_radius):
+    """
+    Generate the positions and velocities for the blob.
+    """
+
+    bcc_lattice = generate_bcc_lattice(num_on_side)
+
+    # Update to respect side length
+    bcc_lattice *= side_length
+
+    # Find the radii
+    squared_radius = np.sum((bcc_lattice - 0.5 * side_length) ** 2, axis=1)
+    inside_sphere = squared_radius < blob_radius * blob_radius
+
+    # Now select out particles
+    bcc_lattice = bcc_lattice[inside_sphere]
+
+    # Move to the correct x_position
+    bcc_lattice[:, 0] = bcc_lattice[:, 0] + blob_centre_x - 0.5 * side_length
+    positions = bcc_lattice
+
+    # Generate velocities
+    velocities = np.zeros_like(positions)
+
+    return positions, velocities
+
+
+def write_out_ics(
+    filename,
+    num_on_side,
+    side_length=1.0,
+    duplications=4,
+    blob_radius=0.1,
+    blob_location_x=0.5,
+    inside_density=10,
+    velocity_outside=2.7,  # Interpreted as a Mach number, not a velocity
+):
+    x = Writer(
+        cgs_unit_system, [side_length * duplications, side_length, side_length] * cm
+    )
+
+    positions_outside, velocities_outside = generate_outside_of_blob(
+        num_on_side,
+        duplications,
+        side_length,
+        blob_location_x,
+        blob_radius,
+        initial_velocity=1.0,
+    )
+    positions_inside, velocities_inside = generate_blob(
+        int(num_on_side * np.cbrt(inside_density)),
+        side_length,
+        blob_location_x,
+        blob_radius,
+    )
+
+    coordinates = np.concatenate([positions_inside, positions_outside])
+
+    velocities = np.concatenate([velocities_inside, velocities_outside])
+
+    x.gas.coordinates = coordinates * cm
+
+    x.gas.velocities = velocities * cm / s
+
+    x.gas.masses = (
+        np.ones(coordinates.shape[0], dtype=float) * (side_length / num_on_side) * g
+    )
+
+    # Set to be (1 / n) c_s
+    x.gas.internal_energy = (
+        np.ones(coordinates.shape[0], dtype=float)
+        * ((1.0 / velocity_outside) * x.gas.velocities[-1][0]) ** 2
+        / (5.0 / 3.0 - 1)
+    )
+
+    # Need to correct the particles inside the sphere so that we are in pressure equilibrium everywhere
+    x.gas.internal_energy[: len(positions_inside)] /= inside_density
+
+    x.gas.generate_smoothing_lengths(
+        boxsize=(duplications / 2) * side_length * cm, dimension=3
+    )
+
+    x.write(filename)
+
+    return
+
+
+if __name__ == "__main__":
+    write_out_ics(filename="blob.hdf5", num_on_side=64)
+
diff --git a/examples/HydroTests/BlobTest_3D/makeMovie.py b/examples/HydroTests/BlobTest_3D/makeMovie.py
new file mode 100644
index 0000000000000000000000000000000000000000..9ae4a538e000fa7006f093b67d325ff433e97089
--- /dev/null
+++ b/examples/HydroTests/BlobTest_3D/makeMovie.py
@@ -0,0 +1,169 @@
+"""
+Makes a movie of the output of the blob test.
+
+Josh Borrow (joshua.borrow@durham.ac.uk) 2019
+
+LGPLv3
+"""
+
+from swiftsimio import load
+from swiftsimio.visualisation import scatter
+
+from p_tqdm import p_map
+
+import matplotlib.pyplot as plt
+import numpy as np
+
+from matplotlib.colors import LogNorm
+from matplotlib.animation import FuncAnimation
+
+info_frames = 15
+start_frame = 0
+end_frame = 101
+resolution = 1024
+snapshot_name = "blob"
+cmap = "Spectral_r"
+text_args = dict(color="black")
+# plot = "pressure"
+# name = "Pressure $P$"
+plot = "density"
+name = "Fluid Density $\\rho$"
+
+
+def get_image(n):
+    """
+    Gets the image for snapshot n, and also returns the associated
+    SWIFT metadata object.
+    """
+    filename = f"{snapshot_name}_{n:04d}.hdf5"
+
+    data = load(filename)
+    boxsize = data.metadata.boxsize[0].value
+
+    output = np.zeros((resolution, resolution * 4), dtype=float)
+
+    x, y, _ = data.gas.coordinates.value.T
+
+    # This is an oblong box but we can only make squares!
+    for box, box_edges in enumerate([[0.0, 1.1], [0.9, 2.1], [1.9, 3.1], [2.9, 4.0]]):
+        mask = np.logical_and(x >= box_edges[0], x <= box_edges[1])
+        masked_x = x[mask] - np.float64(box)
+        masked_y = y[mask]
+
+        hsml = data.gas.smoothing_length.value[mask]
+
+        if plot == "density":
+            mass = data.gas.masses.value[mask]
+            image = scatter(x=masked_y, y=masked_x, m=mass, h=hsml, res=resolution)
+        else:
+            quantity = getattr(data.gas, plot).value[mask]
+            # Need to divide out the particle density for non-projected density quantities
+            image = scatter(
+                x=masked_y, y=masked_x, m=quantity, h=hsml, res=resolution
+            ) / scatter(
+                x=masked_y, y=masked_x, m=np.ones_like(quantity), h=hsml, res=resolution
+            )
+
+        output[:, box * resolution : (box + 1) * resolution] = image
+
+    return output, data.metadata
+
+
+def get_data_dump(metadata):
+    """
+    Gets a big data dump from the SWIFT metadata
+    """
+
+    try:
+        viscosity = metadata.viscosity_info
+    except AttributeError:
+        viscosity = "No info"
+
+    try:
+        diffusion = metadata.diffusion_info
+    except AttributeError:
+        diffusion = "No info"
+
+    output = (
+        "$\\bf{Blob}$ $\\bf{Test}$\n\n"
+        "$\\bf{SWIFT}$\n"
+        + metadata.code_info
+        + "\n\n"
+        + "$\\bf{Compiler}$\n"
+        + metadata.compiler_info
+        + "\n\n"
+        + "$\\bf{Hydrodynamics}$\n"
+        + metadata.hydro_info
+        + "\n\n"
+        + "$\\bf{Viscosity}$\n"
+        + viscosity
+        + "\n\n"
+        + "$\\bf{Diffusion}$\n"
+        + diffusion
+    )
+
+    return output
+
+
+def time_formatter(metadata):
+    return f"$t = {metadata.t:2.2f}$"
+
+
+# Generate the frames and unpack our variables
+images, metadata = zip(*p_map(get_image, list(range(start_frame, end_frame))))
+
+# The edges are funny because of the non-periodicity.
+central_region = images[0][
+    resolution // 10 : resolution - resolution // 10,
+    resolution // 10 : resolution - resolution // 10,
+]
+norm = LogNorm(vmin=np.min(central_region), vmax=np.max(central_region), clip=True)
+
+fig, ax = plt.subplots(figsize=(8 * 4, 8), dpi=resolution // 8)
+
+fig.subplots_adjust(0, 0, 1, 1)
+ax.axis("off")
+
+# Set up the initial state
+image = ax.imshow(np.zeros_like(images[0]), norm=norm, cmap=cmap, origin="lower")
+
+description_text = ax.text(
+    0.5,
+    0.5,
+    get_data_dump(metadata[0]),
+    va="center",
+    ha="center",
+    **text_args,
+    transform=ax.transAxes,
+)
+
+time_text = ax.text(
+    (1 - 0.025 * 0.25),
+    0.975,
+    time_formatter(metadata[0]),
+    **text_args,
+    va="top",
+    ha="right",
+    transform=ax.transAxes,
+)
+
+info_text = ax.text(
+    0.025 * 0.25, 0.975, name, **text_args, va="top", ha="left", transform=ax.transAxes
+)
+
+
+def animate(n):
+    # Display just our original frames at t < 0
+    if n >= 0:
+        image.set_array(images[n])
+        description_text.set_text("")
+        time_text.set_text(time_formatter(metadata[n]))
+
+    return (image,)
+
+
+animation = FuncAnimation(
+    fig, animate, range(start_frame - info_frames, end_frame), interval=40
+)
+
+animation.save("blob.mp4")
diff --git a/examples/HydroTests/BlobTest_3D/run.sh b/examples/HydroTests/BlobTest_3D/run.sh
new file mode 100755
index 0000000000000000000000000000000000000000..b59594227ceee4f95db133665381acee2f7efbcf
--- /dev/null
+++ b/examples/HydroTests/BlobTest_3D/run.sh
@@ -0,0 +1,13 @@
+#!/bin/bash
+
+# Generate the initial conditions if they are not present.
+if [ ! -e blob.hdf5 ]
+then
+    echo "Generating initial conditions for the Blob test example..."
+    python makeIC.py
+fi
+
+# Run SWIFT
+../../swift --hydro --threads=4 blob.yml
+
+python makeMovie.py
diff --git a/examples/SantaBarbara/SantaBarbara-256/plotTempEvolution.py b/examples/SantaBarbara/SantaBarbara-256/plotTempEvolution.py
index 90de6cb712744359dbdfbf07cc4ed81546ea38bf..dab4b2c90a7b751c8d143ed38c614473c951988a 100644
--- a/examples/SantaBarbara/SantaBarbara-256/plotTempEvolution.py
+++ b/examples/SantaBarbara/SantaBarbara-256/plotTempEvolution.py
@@ -28,32 +28,34 @@ n_snapshots = 153
 snapname = "santabarbara"
 
 import matplotlib
+
 matplotlib.use("Agg")
 from pylab import *
 import h5py
 import os.path
 
 # Plot parameters
-params = {'axes.labelsize': 10,
-'axes.titlesize': 10,
-'font.size': 9,
-'legend.fontsize': 9,
-'xtick.labelsize': 10,
-'ytick.labelsize': 10,
-'text.usetex': True,
- 'figure.figsize' : (3.15,3.15),
-'figure.subplot.left'    : 0.14,
-'figure.subplot.right'   : 0.99,
-'figure.subplot.bottom'  : 0.12,
-'figure.subplot.top'     : 0.99,
-'figure.subplot.wspace'  : 0.15,
-'figure.subplot.hspace'  : 0.12,
-'lines.markersize' : 6,
-'lines.linewidth' : 2.,
-'text.latex.unicode': True
+params = {
+    "axes.labelsize": 10,
+    "axes.titlesize": 10,
+    "font.size": 9,
+    "legend.fontsize": 9,
+    "xtick.labelsize": 10,
+    "ytick.labelsize": 10,
+    "text.usetex": True,
+    "figure.figsize": (3.15, 3.15),
+    "figure.subplot.left": 0.14,
+    "figure.subplot.right": 0.99,
+    "figure.subplot.bottom": 0.12,
+    "figure.subplot.top": 0.99,
+    "figure.subplot.wspace": 0.15,
+    "figure.subplot.hspace": 0.12,
+    "lines.markersize": 6,
+    "lines.linewidth": 2.0,
+    "text.latex.unicode": True,
 }
 rcParams.update(params)
-rc('font',**{'family':'sans-serif','sans-serif':['Times']})
+rc("font", **{"family": "sans-serif", "sans-serif": ["Times"]})
 
 # Read the simulation data
 sim = h5py.File("%s_0000.hdf5" % snapname, "r")
@@ -63,9 +65,10 @@ scheme = sim["/HydroScheme"].attrs["Scheme"][0]
 kernel = sim["/HydroScheme"].attrs["Kernel function"][0]
 neighbours = sim["/HydroScheme"].attrs["Kernel target N_ngb"][0]
 eta = sim["/HydroScheme"].attrs["Kernel eta"][0]
-alpha = sim["/HydroScheme"].attrs["Alpha viscosity"][0]
 H_mass_fraction = sim["/HydroScheme"].attrs["Hydrogen mass fraction"][0]
-H_transition_temp = sim["/HydroScheme"].attrs["Hydrogen ionization transition temperature"][0]
+H_transition_temp = sim["/HydroScheme"].attrs[
+    "Hydrogen ionization transition temperature"
+][0]
 T_initial = sim["/HydroScheme"].attrs["Initial temperature"][0]
 T_minimal = sim["/HydroScheme"].attrs["Minimal temperature"][0]
 git = sim["Code"].attrs["Git Revision"]
@@ -85,25 +88,26 @@ unit_time_in_si = unit_time_in_cgs
 # Primordial mean molecular weight as a function of temperature
 def mu(T, H_frac=H_mass_fraction, T_trans=H_transition_temp):
     if T > T_trans:
-        return 4. / (8. - 5. * (1. - H_frac))
+        return 4.0 / (8.0 - 5.0 * (1.0 - H_frac))
     else:
-        return 4. / (1. + 3. * H_frac)
-    
+        return 4.0 / (1.0 + 3.0 * H_frac)
+
+
+# Temperature of some primordial gas with a given internal energy
 def T(u, H_frac=H_mass_fraction, T_trans=H_transition_temp):
-    T_over_mu = (gas_gamma - 1.) * u * mH_in_kg / k_in_J_K
+    T_over_mu = (gas_gamma - 1.0) * u * mH_in_kg / k_in_J_K
     ret = np.ones(np.size(u)) * T_trans
 
     # Enough energy to be ionized?
-    mask_ionized = (T_over_mu > (T_trans+1) / mu(T_trans+1, H_frac, T_trans))
-    if np.sum(mask_ionized)  > 0:
-        ret[mask_ionized] = T_over_mu[mask_ionized] * mu(T_trans*10, H_frac, T_trans)
+    mask_ionized = T_over_mu > (T_trans + 1) / mu(T_trans + 1, H_frac, T_trans)
+    if np.sum(mask_ionized) > 0:
+        ret[mask_ionized] = T_over_mu[mask_ionized] * mu(T_trans * 10, H_frac, T_trans)
 
     # Neutral gas?
-    mask_neutral = (T_over_mu < (T_trans-1) / mu((T_trans-1), H_frac, T_trans))
-    if np.sum(mask_neutral)  > 0:
+    mask_neutral = T_over_mu < (T_trans - 1) / mu((T_trans - 1), H_frac, T_trans)
+    if np.sum(mask_neutral) > 0:
         ret[mask_neutral] = T_over_mu[mask_neutral] * mu(0, H_frac, T_trans)
-        
+
     return ret
 
 
@@ -119,7 +123,7 @@ T_max = np.zeros(n_snapshots)
 
 # Loop over all the snapshots
 for i in range(n_snapshots):
-    sim = h5py.File("%s_%04d.hdf5"% (snapname, i), "r")
+    sim = h5py.File("%s_%04d.hdf5" % (snapname, i), "r")
 
     z[i] = sim["/Cosmology"].attrs["Redshift"][0]
     a[i] = sim["/Cosmology"].attrs["Scale-factor"][0]
@@ -127,8 +131,8 @@ for i in range(n_snapshots):
     u = sim["/PartType0/InternalEnergy"][:]
 
     # Compute the temperature
-    u *= (unit_length_in_si**2 / unit_time_in_si**2)
-    u /= a[i]**(3 * (gas_gamma - 1.))
+    u *= unit_length_in_si ** 2 / unit_time_in_si ** 2
+    u /= a[i] ** (3 * (gas_gamma - 1.0))
     Temp = T(u)
 
     # Gather statistics
@@ -142,34 +146,56 @@ for i in range(n_snapshots):
 
 # CMB evolution
 a_evol = np.logspace(-3, 0, 60)
-T_cmb = (1. / a_evol)**2 * 2.72
+T_cmb = (1.0 / a_evol) ** 2 * 2.72
 
 # Plot the interesting quantities
 figure()
 subplot(111, xscale="log", yscale="log")
 
-fill_between(a, T_mean-T_std, T_mean+T_std, color='C0', alpha=0.1)
-plot(a, T_max, ls='-.', color='C0', lw=1., label="${\\rm max}~T$")
-plot(a, T_min, ls=':', color='C0', lw=1., label="${\\rm min}~T$")
-plot(a, T_mean, color='C0', label="${\\rm mean}~T$", lw=1.5)
-fill_between(a, 10**(T_log_mean-T_log_std), 10**(T_log_mean+T_log_std), color='C1', alpha=0.1)
-plot(a, 10**T_log_mean, color='C1', label="${\\rm mean}~{\\rm log} T$", lw=1.5)
-plot(a, T_median, color='C2', label="${\\rm median}~T$", lw=1.5)
+fill_between(a, T_mean - T_std, T_mean + T_std, color="C0", alpha=0.1)
+plot(a, T_max, ls="-.", color="C0", lw=1.0, label="${\\rm max}~T$")
+plot(a, T_min, ls=":", color="C0", lw=1.0, label="${\\rm min}~T$")
+plot(a, T_mean, color="C0", label="${\\rm mean}~T$", lw=1.5)
+fill_between(
+    a,
+    10 ** (T_log_mean - T_log_std),
+    10 ** (T_log_mean + T_log_std),
+    color="C1",
+    alpha=0.1,
+)
+plot(a, 10 ** T_log_mean, color="C1", label="${\\rm mean}~{\\rm log} T$", lw=1.5)
+plot(a, T_median, color="C2", label="${\\rm median}~T$", lw=1.5)
 
 legend(loc="upper left", frameon=False, handlelength=1.5)
 
 # Expected lines
-plot([1e-10, 1e10], [H_transition_temp, H_transition_temp], 'k--', lw=0.5, alpha=0.7)
-text(2.5e-2, H_transition_temp*1.07, "$T_{\\rm HII\\rightarrow HI}$", va="bottom", alpha=0.7, fontsize=8)
-plot([1e-10, 1e10], [T_minimal, T_minimal], 'k--', lw=0.5, alpha=0.7)
-text(1e-2, T_minimal*0.8, "$T_{\\rm min}$", va="top", alpha=0.7, fontsize=8)
-plot(a_evol, T_cmb, 'k--', lw=0.5, alpha=0.7)
-text(a_evol[20], T_cmb[20]*0.55, "$(1+z)^2\\times T_{\\rm CMB,0}$", rotation=-34, alpha=0.7, fontsize=8, va="top", bbox=dict(facecolor='w', edgecolor='none', pad=1.0, alpha=0.9))
-
-
-redshift_ticks = np.array([0., 1., 2., 5., 10., 20., 50., 100.])
+plot([1e-10, 1e10], [H_transition_temp, H_transition_temp], "k--", lw=0.5, alpha=0.7)
+text(
+    2.5e-2,
+    H_transition_temp * 1.07,
+    "$T_{\\rm HII\\rightarrow HI}$",
+    va="bottom",
+    alpha=0.7,
+    fontsize=8,
+)
+plot([1e-10, 1e10], [T_minimal, T_minimal], "k--", lw=0.5, alpha=0.7)
+text(1e-2, T_minimal * 0.8, "$T_{\\rm min}$", va="top", alpha=0.7, fontsize=8)
+plot(a_evol, T_cmb, "k--", lw=0.5, alpha=0.7)
+text(
+    a_evol[20],
+    T_cmb[20] * 0.55,
+    "$(1+z)^2\\times T_{\\rm CMB,0}$",
+    rotation=-34,
+    alpha=0.7,
+    fontsize=8,
+    va="top",
+    bbox=dict(facecolor="w", edgecolor="none", pad=1.0, alpha=0.9),
+)
+
+
+redshift_ticks = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
 redshift_labels = ["$0$", "$1$", "$2$", "$5$", "$10$", "$20$", "$50$", "$100$"]
-a_ticks = 1. / (redshift_ticks + 1.)
+a_ticks = 1.0 / (redshift_ticks + 1.0)
 
 xticks(a_ticks, redshift_labels)
 minorticks_off()
@@ -180,4 +206,3 @@ xlim(9e-3, 1.1)
 ylim(20, 2.5e7)
 
 savefig("Temperature_evolution.png", dpi=200)
-
diff --git a/examples/SubgridTests/BlackHoleSwallowing/check_masses.py b/examples/SubgridTests/BlackHoleSwallowing/check_masses.py
new file mode 100644
index 0000000000000000000000000000000000000000..c7f1d7b2c1f90efa285c13c75f0e5243f36e49ea
--- /dev/null
+++ b/examples/SubgridTests/BlackHoleSwallowing/check_masses.py
@@ -0,0 +1,70 @@
+import h5py as h5
+import numpy as np
+
+
+f = h5.File("bh_swallowing_serial_0001.hdf5", "r")
+f_mpi = h5.File("bh_swallowing_mpi_0001.hdf5", "r")
+f_ics = h5.File("bh_swallowing.hdf5", "r")
+
+m = np.array(f["/PartType5/Masses"])
+m_mpi = np.array(f_mpi["/PartType5/Masses"])
+m_ics = np.array(f_ics["/PartType5/Masses"])
+
+ids = np.array(f["/PartType5/ParticleIDs"])
+ids_mpi = np.array(f_mpi["/PartType5/ParticleIDs"])
+ids_ics = np.array(f_ics["/PartType5/ParticleIDs"])
+
+rank = np.argsort(ids)
+rank_mpi = np.argsort(ids_mpi)
+rank_ics = np.argsort(ids_ics)
+
+m = m[rank]
+m_mpi = m_mpi[rank_mpi]
+m_ics = m_ics[rank_ics]
+
+ids = ids[rank]
+ids_mpi = ids_mpi[rank_mpi]
+ids_ics = ids_ics[rank_ics]
+
+
+# Check
+# print(ids - ids_mpi)
+# print(ids - ids_ics)
+
+count_wrong = 0
+
+print("ID -- ICs mass -- serial mass -- MPI mass")
+for i in range(np.size(ids)):
+    print("%14d %5f %5f %5f" % (ids[i], m_ics[i], m[i], m_mpi[i]), end=" ")
+
+    if m[i] > m_ics[i] * 1.1:
+        print("#", end=" ")
+    else:
+        print(" ", end=" ")
+
+    if m[i] != m_mpi[i]:
+        count_wrong += 1
+        print(" <---")
+    else:
+        print("")
+
+print("Found", count_wrong, "wrong BH masses.")
+
+
+ids_gas_mpi = f_mpi["/PartType0/ParticleIDs"][:]
+ids_removed_mpi = np.loadtxt("list_mpi.txt")
+
+for i in range(np.size(ids_removed_mpi)):
+    result = np.where(ids_gas_mpi == ids_removed_mpi[i])
+    print(result)
+
+
+ids_gas = f["/PartType0/ParticleIDs"][:]
+ids_removed = np.loadtxt("list.txt")
+
+for i in range(np.size(ids_removed)):
+    result = np.where(ids_gas == ids_removed[i])
+    print(result)
+
+# rho_gas = f["/PartType0/Density"][:]
+# print(np.mean(rho_gas), np.std(rho_gas))
diff --git a/examples/SubgridTests/BlackHoleSwallowing/getGlass.sh b/examples/SubgridTests/BlackHoleSwallowing/getGlass.sh
new file mode 100755
index 0000000000000000000000000000000000000000..ffd92e88deae6e91237059adac2a6c2067caee46
--- /dev/null
+++ b/examples/SubgridTests/BlackHoleSwallowing/getGlass.sh
@@ -0,0 +1,2 @@
+#!/bin/bash
+wget http://virgodb.cosma.dur.ac.uk/swift-webstorage/ICs/glassCube_32.hdf5
diff --git a/examples/SubgridTests/BlackHoleSwallowing/makeIC.py b/examples/SubgridTests/BlackHoleSwallowing/makeIC.py
new file mode 100644
index 0000000000000000000000000000000000000000..5490924003ad3b86b042cc3b62c12f97ec60f8d4
--- /dev/null
+++ b/examples/SubgridTests/BlackHoleSwallowing/makeIC.py
@@ -0,0 +1,154 @@
+###############################################################################
+ # This file is part of SWIFT.
+ # Copyright (c) 2016 Matthieu Schaller (matthieu.schaller@durham.ac.uk)
+ # 
+ # This program is free software: you can redistribute it and/or modify
+ # it under the terms of the GNU Lesser General Public License as published
+ # by the Free Software Foundation, either version 3 of the License, or
+ # (at your option) any later version.
+ # 
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ # GNU General Public License for more details.
+ # 
+ # You should have received a copy of the GNU Lesser General Public License
+ # along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ # 
+ ##############################################################################
+
+import h5py
+from numpy import *
+
+# Some constants
+solar_mass_cgs = 1.988480e33 
+kpc_in_cm = 3.085678e21
+mp_cgs = 1.67e-24
+boltzmann_k_cgs = 1.38e-16
+
+# Parameters
+gamma = 5. / 3.                          # Gas adiabatic index
+rho_cgs = mp_cgs                         # Background density
+u0_cgs = 1.2e12                          # Desired initial internal energy (1.2e12 ~ 10^4 K)
+P_cgs = rho_cgs * u0_cgs * (gamma - 1.)  # Background pressure
+fileName = "bh_swallowing.hdf5"
+
+# Units
+unit_l_cgs = 3.085678e24  # kpc
+unit_m_cgs = 1.988480e43  # 10^10 Msun
+unit_v_cgs = 1e5          # km / s
+unit_A_cgs = 1.
+unit_T_cgs = 1.
+unit_t_cgs = unit_l_cgs / unit_v_cgs
+
+boxsize_cgs = 10. * kpc_in_cm
+vol_cgs = boxsize_cgs**3
+
+#---------------------------------------------------
+glass = h5py.File("glassCube_32.hdf5", "r")
+
+# Read particle positions and h from the glass
+pos = glass["/PartType0/Coordinates"][:,:]
+eps = 1e-6
+pos = (pos - pos.min()) / (pos.max() - pos.min() + eps) * boxsize_cgs / unit_l_cgs
+h = glass["/PartType0/SmoothingLength"][:] * boxsize_cgs / unit_l_cgs
+
+numPart = size(h)
+
+# Generate extra arrays
+v = zeros((numPart, 3))
+ids = linspace(1, numPart, numPart)
+m_cgs = zeros(numPart)
+u_cgs = zeros(numPart)
+m = zeros(numPart)
+u = zeros(numPart)
+
+m_cgs[:] = rho_cgs * vol_cgs / numPart    
+u_cgs[:] = P_cgs / (rho_cgs * (gamma - 1))
+
+# BHs
+bh_pos = zeros((2, 3))
+bh_pos[0,:] = 0.5 * boxsize_cgs / unit_l_cgs
+
+# Offset of the second BH from the first (which sits at the box centre)
+diff = [4.812127e-03, 4.908179e-03, -4.878537e-03] - bh_pos[0, :]
+
+bh_pos[1, :] = 0.5 * boxsize_cgs / unit_l_cgs + diff
+
+print(bh_pos)
+
+# BHs don't move
+bh_v = zeros((2, 3))
+bh_v[:,:] = 0.
+
+# increase mass to keep it at center
+bh_m_cgs = ones(2) * m_cgs[0]
+bh_ids = linspace(numPart + 1, numPart + 2, 2, dtype='L')
+bh_h = ones(2) * h.max()
+
+print(bh_ids)
+
+#--------------------------------------------------
+
+# Check quantities are correct for debugging
+print("boxsize kpc " + str(boxsize_cgs/kpc_in_cm))
+print("density cm^-3 " + str(rho_cgs/mp_cgs))
+print("initial temperature K " + str(u_cgs[0] / boltzmann_k_cgs*((gamma - 1)*rho_cgs)))
+
+# Convert to internal units
+bh_m = bh_m_cgs/unit_m_cgs
+m[:] = m_cgs/unit_m_cgs
+u[:] = u_cgs*unit_v_cgs**-2
+boxsize = boxsize_cgs/unit_l_cgs
+
+
+#--------------------------------------------------
+
+#File
+file = h5py.File(fileName, 'w')
+
+# Header
+grp = file.create_group("/Header")
+grp.attrs["BoxSize"] = [boxsize]*3
+grp.attrs["NumPart_Total"] =  [numPart, 0, 0, 0, 0, 2]
+#grp.attrs["NumPart_Total"] =  [numPart, 0, 0, 0, 0, 0]
+grp.attrs["NumPart_Total_HighWord"] = [0, 0, 0, 0, 0, 0]
+grp.attrs["NumPart_ThisFile"] = [numPart, 0, 0, 0, 0, 2]
+#grp.attrs["NumPart_ThisFile"] = [numPart, 0, 0, 0, 0, 0]
+grp.attrs["Time"] = 0.0
+grp.attrs["NumFilesPerSnapshot"] = 1
+grp.attrs["MassTable"] = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
+grp.attrs["Flag_Entropy_ICs"] = 0
+grp.attrs["Dimension"] = 3
+
+#Runtime parameters
+grp = file.create_group("/RuntimePars")
+grp.attrs["PeriodicBoundariesOn"] = 0
+
+#Units
+grp = file.create_group("/Units")
+grp.attrs["Unit length in cgs (U_L)"] = unit_l_cgs
+grp.attrs["Unit mass in cgs (U_M)"] = unit_m_cgs
+grp.attrs["Unit time in cgs (U_t)"] = unit_t_cgs
+grp.attrs["Unit current in cgs (U_I)"] = unit_A_cgs
+grp.attrs["Unit temperature in cgs (U_T)"] = unit_T_cgs
+
+#Particle group
+grp = file.create_group("/PartType0")
+grp.create_dataset('Coordinates', data=pos, dtype='d')
+grp.create_dataset('Velocities', data=v, dtype='f')
+grp.create_dataset('Masses', data=m, dtype='f')
+grp.create_dataset('SmoothingLength', data=h, dtype='f')
+grp.create_dataset('InternalEnergy', data=u, dtype='f')
+grp.create_dataset('ParticleIDs', data=ids, dtype='L')
+
+# Black hole group (PartType5)
+grp = file.create_group("/PartType5")
+grp.create_dataset("Coordinates", (2,3),data=bh_pos, dtype="d")
+grp.create_dataset('Velocities', (2,3),data=bh_v, dtype='f')
+grp.create_dataset('Masses', (2,1),data=bh_m, dtype='f')
+grp.create_dataset('SmoothingLength', (2,1),data=bh_h, dtype='f')
+grp.create_dataset('ParticleIDs', (2,1),data=bh_ids[::-1], dtype='L')
+
+file.close()
diff --git a/examples/SubgridTests/BlackHoleSwallowing/swallowing.yml b/examples/SubgridTests/BlackHoleSwallowing/swallowing.yml
new file mode 100644
index 0000000000000000000000000000000000000000..e0637308e9cc9631e7cd042920163a98bf7e0bf0
--- /dev/null
+++ b/examples/SubgridTests/BlackHoleSwallowing/swallowing.yml
@@ -0,0 +1,92 @@
+# Define the system of units to use internally. 
+InternalUnitSystem:
+  UnitMass_in_cgs:     1.98848e43    # 10^10 M_sun in grams
+  UnitLength_in_cgs:   3.08567758e24 # Mpc in centimeters
+  UnitVelocity_in_cgs: 1e5           # km/s in centimeters per second
+  UnitCurrent_in_cgs:  1.            # Amperes
+  UnitTemp_in_cgs:     1.            # Kelvin
+
+# Cosmological parameters
+Cosmology:
+  h:              0.6777        # Reduced Hubble constant
+  a_begin:        0.5           # Initial scale-factor of the simulation
+  a_end:          1.0           # Final scale factor of the simulation
+  Omega_m:        0.307         # Matter density parameter
+  Omega_lambda:   0.693         # Dark-energy density parameter
+  Omega_b:        0.0455        # Baryon density parameter
+
+# Parameters governing the time integration
+TimeIntegration:
+  time_begin: 0      # The starting time of the simulation (in internal units).
+  time_end:   1.3e-2 # The end time of the simulation (in internal units).
+  dt_min:     1e-10  # The minimal time-step size of the simulation (in internal units).
+  dt_max:     1e-5   # The maximal time-step size of the simulation (in internal units).
+
+# Parameters governing the snapshots
+Snapshots:
+  basename:            bh_swallowing_serial # Common part of the name of output files
+  time_first:          0.            # Time of the first output (in internal units)
+  delta_time:          3.e-5         # Time difference between consecutive outputs without cosmology (internal units)
+  compression:         1
+
+# Parameters governing the conserved quantities statistics
+Statistics:
+  time_first:          0.
+  delta_time:          1.e-5  # non cosmology time between statistics output
+
+# Parameters for the hydrodynamics scheme
+SPH:
+  resolution_eta:        1.2348   # Target smoothing length in units of the mean inter-particle separation 
+  CFL_condition:         0.1      # Courant-Friedrichs-Lewy condition for time integration.
+  minimal_temperature:   10.      # Kelvin
+
+# Parameters related to the initial conditions
+InitialConditions:
+  file_name:  ./bh_swallowing.hdf5       # The file to read
+  periodic:   1
+  
+Scheduler:
+  max_top_level_cells: 8
+  tasks_per_cell:      500			
+
+# Parameters for the EAGLE "equation of state"
+EAGLEEntropyFloor:
+  Jeans_density_threshold_H_p_cm3: 0.1       # Physical density above which the EAGLE Jeans limiter entropy floor kicks in expressed in Hydrogen atoms per cm^3.
+  Jeans_over_density_threshold:    10.       # Overdensity above which the EAGLE Jeans limiter entropy floor can kick in.
+  Jeans_temperature_norm_K:        8000      # Temperature of the EAGLE Jeans limiter entropy floor at the density threshold expressed in Kelvin.
+  Jeans_gamma_effective:           1.3333333 # Slope of the EAGLE Jeans limiter entropy floor
+  Cool_density_threshold_H_p_cm3: 1e-5       # Physical density above which the EAGLE Cool limiter entropy floor kicks in expressed in Hydrogen atoms per cm^3.
+  Cool_over_density_threshold:    10.        # Overdensity above which the EAGLE Cool limiter entropy floor can kick in.
+  Cool_temperature_norm_K:        8000       # Temperature of the EAGLE Cool limiter entropy floor at the density threshold expressed in Kelvin.
+  Cool_gamma_effective:           1.         # Slope of the EAGLE Cool limiter entropy floor
+
+# Metallicities read in for the gas and stars
+EAGLEChemistry:              
+  init_abundance_metal:      0.01
+  init_abundance_Hydrogen:   0.752
+  init_abundance_Helium:     0.248
+  init_abundance_Carbon:     0.0
+  init_abundance_Nitrogen:   0.0
+  init_abundance_Oxygen:     0.0
+  init_abundance_Neon:       0.0
+  init_abundance_Magnesium:  0.0
+  init_abundance_Silicon:    0.0
+  init_abundance_Iron:       0.0
+
+# Standard EAGLE cooling options
+EAGLECooling:
+  dir_name:                ./coolingtables/  # Location of the Wiersma+08 cooling tables
+  H_reion_z:               11.5              # Redshift of Hydrogen re-ionization
+  H_reion_eV_p_H:          2.0
+  He_reion_z_centre:       3.5               # Redshift of the centre of the Helium re-ionization Gaussian
+  He_reion_z_sigma:        0.5               # Spread in redshift of the Helium re-ionization Gaussian
+  He_reion_eV_p_H:         2.0               # Energy injected by Helium re-ionization in electron-volts per Hydrogen atom
+
+# EAGLE AGN model
+EAGLEAGN:
+  subgrid_seed_mass_Msun:           1.5e5      # Black hole subgrid mass at creation time in solar masses.
+  max_eddington_fraction:           1.         # Maximal allowed accretion rate in units of the Eddington rate.
+  radiative_efficiency:             0.1        # Fraction of the accreted mass that gets radiated.
+  coupling_efficiency:              0.15       # Fraction of the radiated energy that couples to the gas in feedback events.
+  AGN_delta_T_K:                    3.16228e8  # Change in temperature to apply to the gas particle in an AGN feedback event in Kelvin.
+  AGN_num_ngb_to_heat:              1.         # Target number of gas neighbours to heat in an AGN feedback event.
diff --git a/examples/SubgridTests/BlackHoleSwallowing/swallowing_mpi.yml b/examples/SubgridTests/BlackHoleSwallowing/swallowing_mpi.yml
new file mode 100644
index 0000000000000000000000000000000000000000..8db524c9caa7f0941e34d6a44976076abc7f2dba
--- /dev/null
+++ b/examples/SubgridTests/BlackHoleSwallowing/swallowing_mpi.yml
@@ -0,0 +1,92 @@
+# Define the system of units to use internally. 
+InternalUnitSystem:
+  UnitMass_in_cgs:     1.98848e43    # 10^10 M_sun in grams
+  UnitLength_in_cgs:   3.08567758e24 # Mpc in centimeters
+  UnitVelocity_in_cgs: 1e5           # km/s in centimeters per second
+  UnitCurrent_in_cgs:  1.            # Amperes
+  UnitTemp_in_cgs:     1.            # Kelvin
+
+# Cosmological parameters
+Cosmology:
+  h:              0.6777        # Reduced Hubble constant
+  a_begin:        0.5           # Initial scale-factor of the simulation
+  a_end:          1.0           # Final scale factor of the simulation
+  Omega_m:        0.307         # Matter density parameter
+  Omega_lambda:   0.693         # Dark-energy density parameter
+  Omega_b:        0.0455        # Baryon density parameter
+
+# Parameters governing the time integration
+TimeIntegration:
+  time_begin: 0      # The starting time of the simulation (in internal units).
+  time_end:   1.3e-2 # The end time of the simulation (in internal units).
+  dt_min:     1e-10  # The minimal time-step size of the simulation (in internal units).
+  dt_max:     1e-5   # The maximal time-step size of the simulation (in internal units).
+
+# Parameters governing the snapshots
+Snapshots:
+  basename:            bh_swallowing_mpi # Common part of the name of output files
+  time_first:          0.            # Time of the first output (in internal units)
+  delta_time:          3.e-5         # Time difference between consecutive outputs without cosmology (internal units)
+  compression:         1
+
+# Parameters governing the conserved quantities statistics
+Statistics:
+  time_first:          0.
+  delta_time:          1.e-5  # non cosmology time between statistics output
+
+# Parameters for the hydrodynamics scheme
+SPH:
+  resolution_eta:        1.2348   # Target smoothing length in units of the mean inter-particle separation 
+  CFL_condition:         0.1      # Courant-Friedrichs-Lewy condition for time integration.
+  minimal_temperature:   10.      # Kelvin
+
+# Parameters related to the initial conditions
+InitialConditions:
+  file_name:  ./bh_swallowing.hdf5       # The file to read
+  periodic:   1
+  
+Scheduler:
+  max_top_level_cells: 8
+  tasks_per_cell:      500			
+
+# Parameters for the EAGLE "equation of state"
+EAGLEEntropyFloor:
+  Jeans_density_threshold_H_p_cm3: 0.1       # Physical density above which the EAGLE Jeans limiter entropy floor kicks in expressed in Hydrogen atoms per cm^3.
+  Jeans_over_density_threshold:    10.       # Overdensity above which the EAGLE Jeans limiter entropy floor can kick in.
+  Jeans_temperature_norm_K:        8000      # Temperature of the EAGLE Jeans limiter entropy floor at the density threshold expressed in Kelvin.
+  Jeans_gamma_effective:           1.3333333 # Slope of the EAGLE Jeans limiter entropy floor
+  Cool_density_threshold_H_p_cm3: 1e-5       # Physical density above which the EAGLE Cool limiter entropy floor kicks in expressed in Hydrogen atoms per cm^3.
+  Cool_over_density_threshold:    10.        # Overdensity above which the EAGLE Cool limiter entropy floor can kick in.
+  Cool_temperature_norm_K:        8000       # Temperature of the EAGLE Cool limiter entropy floor at the density threshold expressed in Kelvin.
+  Cool_gamma_effective:           1.         # Slope of the EAGLE Cool limiter entropy floor
+
+# Metallicities read in for the gas and stars
+EAGLEChemistry:              
+  init_abundance_metal:      0.01
+  init_abundance_Hydrogen:   0.752
+  init_abundance_Helium:     0.248
+  init_abundance_Carbon:     0.0
+  init_abundance_Nitrogen:   0.0
+  init_abundance_Oxygen:     0.0
+  init_abundance_Neon:       0.0
+  init_abundance_Magnesium:  0.0
+  init_abundance_Silicon:    0.0
+  init_abundance_Iron:       0.0
+
+# Standard EAGLE cooling options
+EAGLECooling:
+  dir_name:                ./coolingtables/  # Location of the Wiersma+08 cooling tables
+  H_reion_z:               11.5              # Redshift of Hydrogen re-ionization
+  H_reion_eV_p_H:          2.0
+  He_reion_z_centre:       3.5               # Redshift of the centre of the Helium re-ionization Gaussian
+  He_reion_z_sigma:        0.5               # Spread in redshift of the Helium re-ionization Gaussian
+  He_reion_eV_p_H:         2.0               # Energy injected by Helium re-ionization in electron-volts per Hydrogen atom
+
+# EAGLE AGN model
+EAGLEAGN:
+  subgrid_seed_mass_Msun:           1.5e5      # Black hole subgrid mass at creation time in solar masses.
+  max_eddington_fraction:           1.         # Maximal allowed accretion rate in units of the Eddington rate.
+  radiative_efficiency:             0.1        # Fraction of the accreted mass that gets radiated.
+  coupling_efficiency:              0.15       # Fraction of the radiated energy that couples to the gas in feedback events.
+  AGN_delta_T_K:                    3.16228e8  # Change in temperature to apply to the gas particle in an AGN feedback event in Kelvin.
+  AGN_num_ngb_to_heat:              1.         # Target number of gas neighbours to heat in an AGN feedback event.
diff --git a/examples/parameter_example.yml b/examples/parameter_example.yml
index eec64eb7e7c03640e344ec37419b06e3b3c1b663..0898771a755b34af1a51a9772b35a7a44295f28c 100644
--- a/examples/parameter_example.yml
+++ b/examples/parameter_example.yml
@@ -396,6 +396,7 @@ EAGLEFeedback:
 EAGLEAGN:
   subgrid_seed_mass_Msun:           1.5e5      # Black hole subgrid mass at creation time in solar masses.
   max_eddington_fraction:           1.         # Maximal allowed accretion rate in units of the Eddington rate.
+  viscous_alpha:                    1e6        # Normalisation constant of the Bondi viscous time-scale accretion reduction term
   radiative_efficiency:             0.1        # Fraction of the accreted mass that gets radiated.
   coupling_efficiency:              0.15       # Fraction of the radiated energy that couples to the gas in feedback events.
   AGN_delta_T_K:                    3.16228e8  # Change in temperature to apply to the gas particle in an AGN feedback event in Kelvin.
diff --git a/src/Makefile.am b/src/Makefile.am
index 7ae03c4d677f4d60ac580c3e9a8c7ec1f8305398..7556089b828cb901f0ba2b1e576cf4793e3e1e8e 100644
--- a/src/Makefile.am
+++ b/src/Makefile.am
@@ -53,7 +53,7 @@ include_HEADERS = space.h runner.h queue.h task.h lock.h cell.h part.h const.h \
     star_formation_struct.h star_formation.h star_formation_iact.h \
     star_formation_logger.h star_formation_logger_struct.h \
     velociraptor_struct.h velociraptor_io.h random.h memuse.h black_holes.h black_holes_io.h \
-    black_holes_properties.h feedback.h feedback_struct.h feedback_properties.h
+    black_holes_properties.h black_holes_struct.h feedback.h feedback_struct.h feedback_properties.h
 
 # source files for EAGLE cooling
 EAGLE_COOLING_SOURCES =
@@ -218,6 +218,7 @@ nobase_noinst_HEADERS = align.h approx_math.h atomic.h barrier.h cycle.h error.h
                  black_holes/Default/black_holes.h black_holes/Default/black_holes_io.h \
 		 black_holes/Default/black_holes_part.h black_holes/Default/black_holes_iact.h \
                  black_holes/Default/black_holes_properties.h \
+                 black_holes/Default/black_holes_struct.h \
                  black_holes/EAGLE/black_holes.h black_holes/EAGLE/black_holes_io.h \
 		 black_holes/EAGLE/black_holes_part.h black_holes/EAGLE/black_holes_iact.h \
                  black_holes/EAGLE/black_holes_properties.h 
diff --git a/src/black_holes/Default/black_holes.h b/src/black_holes/Default/black_holes.h
index fa7ece225dbed571b5bfb4f695f1fb50e4a59422..ebc5a36841e07fe96d81319914f065516ef43517 100644
--- a/src/black_holes/Default/black_holes.h
+++ b/src/black_holes/Default/black_holes.h
@@ -23,6 +23,7 @@
 
 /* Local includes */
 #include "black_holes_properties.h"
+#include "black_holes_struct.h"
 #include "dimension.h"
 #include "kernel_hydro.h"
 #include "minmax.h"
@@ -139,6 +140,58 @@ black_holes_bpart_has_no_neighbours(struct bpart* restrict bp,
   bp->density.wcount_dh = 0.f;
 }
 
+/**
+ * @brief Update the properties of a black hole particle by swallowing
+ * a gas particle.
+ *
+ * @param bp The #bpart to update.
+ * @param p The #part that is swallowed.
+ * @param xp The #xpart that is swallowed.
+ * @param cosmo The current cosmological model.
+ */
+__attribute__((always_inline)) INLINE static void black_holes_swallow_part(
+    struct bpart* bp, const struct part* p, const struct xpart* xp,
+    const struct cosmology* cosmo) {
+
+  /* Nothing to do here: No swallowing in the default model */
+}
+
+/**
+ * @brief Update a given #part's BH data field to mark the particle as
+ * not yet having been swallowed.
+ *
+ * @param p_data The #part's #black_holes_part_data structure.
+ */
+__attribute__((always_inline)) INLINE static void
+black_holes_mark_as_not_swallowed(struct black_holes_part_data* p_data) {
+
+  /* Nothing to do here: No swallowing in the default model */
+}
+
+/**
+ * @brief Update a given #part's BH data field to mark the particle as
+ * having been swallowed.
+ *
+ * @param p_data The #part's #black_holes_part_data structure.
+ */
+__attribute__((always_inline)) INLINE static void black_holes_mark_as_swallowed(
+    struct black_holes_part_data* p_data) {
+
+  /* Nothing to do here: No swallowing in the default model */
+}
+
+/**
+ * @brief Return the ID of the BH that should swallow this #part.
+ *
+ * @param p_data The #part's #black_holes_part_data structure.
+ */
+__attribute__((always_inline)) INLINE static long long
+black_holes_get_swallow_id(struct black_holes_part_data* p_data) {
+
+  /* Return a non-existing ID */
+  return -1;
+}
+
 /**
  * @brief Compute the accretion rate of the black hole and all the quantities
  * required for the feedback loop.
diff --git a/src/black_holes/Default/black_holes_iact.h b/src/black_holes/Default/black_holes_iact.h
index 4aeb27b4e13a614036962a564d99a80c6990ebec..37e3facc4a14a8eaa1b87e1c60dfeae22dfecf44 100644
--- a/src/black_holes/Default/black_holes_iact.h
+++ b/src/black_holes/Default/black_holes_iact.h
@@ -63,6 +63,28 @@ __attribute__((always_inline)) INLINE static void runner_iact_nonsym_bh_density(
 #endif
 }
 
+/**
+ * @brief Swallowing interaction between two particles (non-symmetric).
+ *
+ * Function used to flag the gas particles that will be swallowed
+ * by the black hole particle.
+ *
+ * @param r2 Comoving square distance between the two particles.
+ * @param dx Comoving vector separating both particles (pi - pj).
+ * @param hi Comoving smoothing-length of particle i.
+ * @param hj Comoving smoothing-length of particle j.
+ * @param bi First particle (black hole).
+ * @param pj Second particle (gas)
+ * @param xpj The extended data of the second particle.
+ * @param cosmo The cosmological model.
+ * @param ti_current Current integer time value (for random numbers).
+ */
+__attribute__((always_inline)) INLINE static void runner_iact_nonsym_bh_swallow(
+    const float r2, const float *dx, const float hi, const float hj,
+    struct bpart *restrict bi, struct part *restrict pj,
+    struct xpart *restrict xpj, const struct cosmology *cosmo,
+    const integertime_t ti_current) {}
+
 /**
  * @brief Feedback interaction between two particles (non-symmetric).
  *
diff --git a/src/black_holes/Default/black_holes_part.h b/src/black_holes/Default/black_holes_part.h
index 360eb45d9e2ccb14e8ff55b6d286e7bb91ef89cc..db8c9b18cb8bac72cd7b3b239930aa2306e5171d 100644
--- a/src/black_holes/Default/black_holes_part.h
+++ b/src/black_holes/Default/black_holes_part.h
@@ -19,8 +19,7 @@
 #ifndef SWIFT_DEFAULT_BLACK_HOLE_PART_H
 #define SWIFT_DEFAULT_BLACK_HOLE_PART_H
 
-/* Some standard headers. */
-#include <stdlib.h>
+#include "chemistry_struct.h"
 
 /**
  * @brief Particle fields for the black hole particles.
@@ -63,6 +62,10 @@ struct bpart {
 
   } density;
 
+  /*! Chemistry information (e.g. metal content at birth, swallowed metal
+   * content, etc.) */
+  struct chemistry_bpart_data chemistry_data;
+
 #ifdef SWIFT_DEBUG_CHECKS
 
   /* Time of the last drift */
diff --git a/src/black_holes/Default/black_holes_struct.h b/src/black_holes/Default/black_holes_struct.h
new file mode 100644
index 0000000000000000000000000000000000000000..3fdb0ee882b075d40a8725fa52b221d48dcbd722
--- /dev/null
+++ b/src/black_holes/Default/black_holes_struct.h
@@ -0,0 +1,27 @@
+/*******************************************************************************
+ * This file is part of SWIFT.
+ * Copyright (c) 2019 Matthieu Schaller (schaller@strw.leidenuniv.nl)
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation, either version 3 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ ******************************************************************************/
+#ifndef SWIFT_BLACK_HOLES_STRUCT_DEFAULT_H
+#define SWIFT_BLACK_HOLES_STRUCT_DEFAULT_H
+
+/**
+ * @brief Black holes-related fields carried by each *gas* particle.
+ */
+struct black_holes_part_data {};
+
+#endif /* SWIFT_BLACK_HOLES_STRUCT_DEFAULT_H */
diff --git a/src/black_holes/EAGLE/black_holes.h b/src/black_holes/EAGLE/black_holes.h
index 6e9be26bc6dea438df80e15054ebe5303ecf5b63..a9a40b1cfb9604045913c90a25ed5b3771b0f3e4 100644
--- a/src/black_holes/EAGLE/black_holes.h
+++ b/src/black_holes/EAGLE/black_holes.h
@@ -21,6 +21,7 @@
 
 /* Local includes */
 #include "black_holes_properties.h"
+#include "black_holes_struct.h"
 #include "cosmology.h"
 #include "dimension.h"
 #include "kernel_hydro.h"
@@ -81,6 +82,9 @@ __attribute__((always_inline)) INLINE static void black_holes_init_bpart(
   bp->velocity_gas[0] = 0.f;
   bp->velocity_gas[1] = 0.f;
   bp->velocity_gas[2] = 0.f;
+  bp->circular_velocity_gas[0] = 0.f;
+  bp->circular_velocity_gas[1] = 0.f;
+  bp->circular_velocity_gas[2] = 0.f;
   bp->ngb_mass = 0.f;
   bp->num_ngbs = 0;
 }
@@ -101,7 +105,7 @@ __attribute__((always_inline)) INLINE static void black_holes_predict_extra(
  * @param bp The particle.
  */
 __attribute__((always_inline)) INLINE static void
-black_holes_reset_predicted_values(struct bpart* restrict bp) {}
+black_holes_reset_predicted_values(struct bpart* bp) {}
 
 /**
  * @brief Kick the additional variables
@@ -135,6 +139,9 @@ __attribute__((always_inline)) INLINE static void black_holes_end_density(
   bp->velocity_gas[0] *= h_inv_dim;
   bp->velocity_gas[1] *= h_inv_dim;
   bp->velocity_gas[2] *= h_inv_dim;
+  bp->circular_velocity_gas[0] *= h_inv_dim;
+  bp->circular_velocity_gas[1] *= h_inv_dim;
+  bp->circular_velocity_gas[2] *= h_inv_dim;
 
   const float rho_inv = 1.f / bp->rho_gas;
 
@@ -143,6 +150,9 @@ __attribute__((always_inline)) INLINE static void black_holes_end_density(
   bp->velocity_gas[0] *= rho_inv;
   bp->velocity_gas[1] *= rho_inv;
   bp->velocity_gas[2] *= rho_inv;
+  bp->circular_velocity_gas[0] *= rho_inv * h_inv;
+  bp->circular_velocity_gas[1] *= rho_inv * h_inv;
+  bp->circular_velocity_gas[2] *= rho_inv * h_inv;
 }
 
 /**
@@ -153,7 +163,7 @@ __attribute__((always_inline)) INLINE static void black_holes_end_density(
  * @param cosmo The current cosmological model.
  */
 __attribute__((always_inline)) INLINE static void
-black_holes_bpart_has_no_neighbours(struct bpart* restrict bp,
+black_holes_bpart_has_no_neighbours(struct bpart* bp,
                                     const struct cosmology* cosmo) {
 
   /* Some smoothing length multiples. */
@@ -166,6 +176,99 @@ black_holes_bpart_has_no_neighbours(struct bpart* restrict bp,
   bp->density.wcount_dh = 0.f;
 }
 
+/**
+ * @brief Update the properties of a black hole particle by swallowing
+ * a gas particle.
+ *
+ * @param bp The #bpart to update.
+ * @param p The #part that is swallowed.
+ * @param xp The #xpart that is swallowed.
+ * @param cosmo The current cosmological model.
+ */
+__attribute__((always_inline)) INLINE static void black_holes_swallow_part(
+    struct bpart* bp, const struct part* p, const struct xpart* xp,
+    const struct cosmology* cosmo) {
+
+  /* Get the current dynamical masses */
+  const float gas_mass = hydro_get_mass(p);
+  const float BH_mass = bp->mass;
+
+  /* Increase the dynamical mass of the BH. */
+  bp->mass += gas_mass;
+  bp->gpart->mass += gas_mass;
+
+  /* Update the BH momentum */
+  const float BH_mom[3] = {BH_mass * bp->v[0] + gas_mass * p->v[0],
+                           BH_mass * bp->v[1] + gas_mass * p->v[1],
+                           BH_mass * bp->v[2] + gas_mass * p->v[2]};
+
+  bp->v[0] = BH_mom[0] / bp->mass;
+  bp->v[1] = BH_mom[1] / bp->mass;
+  bp->v[2] = BH_mom[2] / bp->mass;
+  bp->gpart->v_full[0] = bp->v[0];
+  bp->gpart->v_full[1] = bp->v[1];
+  bp->gpart->v_full[2] = bp->v[2];
+
+  /* Update the BH metal masses */
+  struct chemistry_bpart_data* bp_chem = &bp->chemistry_data;
+  const struct chemistry_part_data* p_chem = &p->chemistry_data;
+
+  bp_chem->metal_mass_total += p_chem->metal_mass_fraction_total * gas_mass;
+  for (int i = 0; i < chemistry_element_count; ++i) {
+    bp_chem->metal_mass[i] += p_chem->metal_mass_fraction[i] * gas_mass;
+  }
+  bp_chem->mass_from_SNIa += p_chem->mass_from_SNIa;
+  bp_chem->mass_from_SNII += p_chem->mass_from_SNII;
+  bp_chem->mass_from_AGB += p_chem->mass_from_AGB;
+  bp_chem->metal_mass_from_SNIa +=
+      p_chem->metal_mass_fraction_from_SNIa * gas_mass;
+  bp_chem->metal_mass_from_SNII +=
+      p_chem->metal_mass_fraction_from_SNII * gas_mass;
+  bp_chem->metal_mass_from_AGB +=
+      p_chem->metal_mass_fraction_from_AGB * gas_mass;
+  bp_chem->iron_mass_from_SNIa +=
+      p_chem->iron_mass_fraction_from_SNIa * gas_mass;
+
+  /* This BH lost a neighbour */
+  bp->num_ngbs--;
+  bp->ngb_mass -= gas_mass;
+}
+
+/**
+ * @brief Update a given #part's BH data field to mark the particle as
+ * not yet having been swallowed.
+ *
+ * @param p_data The #part's #black_holes_part_data structure.
+ */
+__attribute__((always_inline)) INLINE static void
+black_holes_mark_as_not_swallowed(struct black_holes_part_data* p_data) {
+
+  p_data->swallow_id = -1;
+}
+
+/**
+ * @brief Update a given #part's BH data field to mark the particle as
+ * having been swallowed.
+ *
+ * @param p_data The #part's #black_holes_part_data structure.
+ */
+__attribute__((always_inline)) INLINE static void black_holes_mark_as_swallowed(
+    struct black_holes_part_data* p_data) {
+
+  p_data->swallow_id = -2;
+}
+
+/**
+ * @brief Return the ID of the BH that should swallow this #part.
+ *
+ * @param p_data The #part's #black_holes_part_data structure.
+ */
+__attribute__((always_inline)) INLINE static long long
+black_holes_get_swallow_id(struct black_holes_part_data* p_data) {
+
+  return p_data->swallow_id;
+}
+
 /**
  * @brief Compute the accretion rate of the black hole and all the quantities
  * required for the feedback loop.
@@ -181,6 +284,8 @@ __attribute__((always_inline)) INLINE static void black_holes_prepare_feedback(
     const struct phys_const* constants, const struct cosmology* cosmo,
     const double dt) {
 
+  if (dt == 0.) return;
+
   /* Gather some physical constants (all in internal units) */
   const double G = constants->const_newton_G;
   const double c = constants->const_speed_light_c;
@@ -194,12 +299,13 @@ __attribute__((always_inline)) INLINE static void black_holes_prepare_feedback(
   const double num_ngbs_to_heat = props->num_ngbs_to_heat;
   const double delta_T = props->AGN_delta_T_desired;
   const double delta_u = delta_T * props->temp_to_u_factor;
+  const double alpha_visc = props->alpha_visc;
 
   /* (Subgrid) mass of the BH (internal units) */
   const double BH_mass = bp->subgrid_mass;
 
   /* Convert the quantities we gathered to physical frame (all internal units)
-   */
+   * Note: for the velocities this means peculiar velocities */
   const double gas_rho_phys = bp->rho_gas * cosmo->a3_inv;
   const double gas_c_phys = bp->sound_speed_gas * cosmo->a_factor_sound_speed;
   const double gas_v_peculiar[3] = {bp->velocity_gas[0] * cosmo->a_inv,
@@ -210,6 +316,11 @@ __attribute__((always_inline)) INLINE static void black_holes_prepare_feedback(
                                    bp->v[1] * cosmo->a_inv,
                                    bp->v[2] * cosmo->a_inv};
 
+  const double gas_v_circular[3] = {
+      bp->circular_velocity_gas[0] * cosmo->a_inv,
+      bp->circular_velocity_gas[1] * cosmo->a_inv,
+      bp->circular_velocity_gas[2] * cosmo->a_inv};
+
   /* Difference in peculiar velocity between the gas and the BH
    * Note that there is no need for a Hubble flow term here. We are
    * computing the gas velocity at the position of the black hole. */
@@ -220,13 +331,31 @@ __attribute__((always_inline)) INLINE static void black_holes_prepare_feedback(
                               v_diff_peculiar[1] * v_diff_peculiar[1] +
                               v_diff_peculiar[2] * v_diff_peculiar[2];
 
+  /* Norm of the circular velocity of the gas around the BH */
+  const double tangential_velocity2 = gas_v_circular[0] * gas_v_circular[0] +
+                                      gas_v_circular[1] * gas_v_circular[1] +
+                                      gas_v_circular[2] * gas_v_circular[2];
+  const double tangential_velocity = sqrt(tangential_velocity2);
+
   /* We can now compute the Bondi accretion rate (internal units) */
   const double gas_c_phys2 = gas_c_phys * gas_c_phys;
   const double denominator2 = v_diff_norm2 + gas_c_phys2;
   const double denominator_inv = 1. / sqrt(denominator2);
-  const double Bondi_rate = 4. * M_PI * G * G * BH_mass * BH_mass *
-                            gas_rho_phys * denominator_inv * denominator_inv *
-                            denominator_inv;
+  double Bondi_rate = 4. * M_PI * G * G * BH_mass * BH_mass * gas_rho_phys *
+                      denominator_inv * denominator_inv * denominator_inv;
+
+  /* Compute the reduction factor from Rosas-Guevara et al. (2015) */
+  const double Bondi_radius = G * BH_mass / gas_c_phys2;
+  const double Bondi_time = Bondi_radius / gas_c_phys;
+  const double r_times_v_tang = Bondi_radius * tangential_velocity;
+  const double r_times_v_tang_3 =
+      r_times_v_tang * r_times_v_tang * r_times_v_tang;
+  const double viscous_time = 2. * M_PI * r_times_v_tang_3 /
+                              (1e-6 * alpha_visc * G * G * BH_mass * BH_mass);
+  const double f_visc = min(Bondi_time / viscous_time, 1.);
+
+  /* Limit the Bondi rate by the Bondi-to-viscous time-scale ratio */
+  Bondi_rate *= f_visc;
 
   /* Compute the Eddington rate (internal units) */
   const double Eddington_rate =
@@ -245,7 +374,9 @@ __attribute__((always_inline)) INLINE static void black_holes_prepare_feedback(
   bp->total_accreted_mass += mass_rate * dt;
   bp->energy_reservoir += luminosity * epsilon_f * dt;
 
-  /* Energy required to have a feedback event */
+  /* Energy required to have a feedback event
+   * Note that we have subtracted the particles we swallowed from the ngb_mass
+   * and num_ngbs accumulators. */
   const double mean_ngb_mass = bp->ngb_mass / ((double)bp->num_ngbs);
   const double E_feedback_event = num_ngbs_to_heat * delta_u * mean_ngb_mass;
 
@@ -327,7 +458,8 @@ INLINE static void black_holes_create_from_gas(
     const struct part* p) {
 
   /* All the non-basic properties of the black hole have been zeroed
-     in the FOF code. We just update the rest. */
+   * in the FOF code. We update them here.
+   * (i.e. position, velocity, mass, time-step have been set) */
 
   /* Birth time */
   bp->formation_scale_factor = cosmo->a;
@@ -335,6 +467,31 @@ INLINE static void black_holes_create_from_gas(
   /* Initial seed mass */
   bp->subgrid_mass = props->subgrid_seed_mass;
 
+  /* We haven't accreted anything yet */
+  bp->total_accreted_mass = 0.f;
+
+  /* Initial metal masses */
+  const float gas_mass = hydro_get_mass(p);
+
+  struct chemistry_bpart_data* bp_chem = &bp->chemistry_data;
+  const struct chemistry_part_data* p_chem = &p->chemistry_data;
+
+  bp_chem->metal_mass_total = p_chem->metal_mass_fraction_total * gas_mass;
+  for (int i = 0; i < chemistry_element_count; ++i) {
+    bp_chem->metal_mass[i] = p_chem->metal_mass_fraction[i] * gas_mass;
+  }
+  bp_chem->mass_from_SNIa = p_chem->mass_from_SNIa;
+  bp_chem->mass_from_SNII = p_chem->mass_from_SNII;
+  bp_chem->mass_from_AGB = p_chem->mass_from_AGB;
+  bp_chem->metal_mass_from_SNIa =
+      p_chem->metal_mass_fraction_from_SNIa * gas_mass;
+  bp_chem->metal_mass_from_SNII =
+      p_chem->metal_mass_fraction_from_SNII * gas_mass;
+  bp_chem->metal_mass_from_AGB =
+      p_chem->metal_mass_fraction_from_AGB * gas_mass;
+  bp_chem->iron_mass_from_SNIa =
+      p_chem->iron_mass_fraction_from_SNIa * gas_mass;
+
   /* First initialisation */
   black_holes_init_bpart(bp);
 }
diff --git a/src/black_holes/EAGLE/black_holes_iact.h b/src/black_holes/EAGLE/black_holes_iact.h
index 0db85b3924fee6b4a8d0dcc3263c1746788219cc..cba68a999433bd0eedc1050cbdf00e5e17a535d7 100644
--- a/src/black_holes/EAGLE/black_holes_iact.h
+++ b/src/black_holes/EAGLE/black_holes_iact.h
@@ -22,6 +22,7 @@
 /* Local includes */
 #include "hydro.h"
 #include "random.h"
+#include "space.h"
 
 /**
  * @brief Density interaction between two particles (non-symmetric).
@@ -83,6 +84,12 @@ __attribute__((always_inline)) INLINE static void runner_iact_nonsym_bh_density(
   bi->velocity_gas[1] += mj * vj[1] * wi;
   bi->velocity_gas[2] += mj * vj[2] * wi;
 
+  /* Contribution to the circular velocity */
+  const float dv[3] = {bi->v[0] - vj[0], bi->v[1] - vj[1], bi->v[2] - vj[2]};
+  bi->circular_velocity_gas[0] += mj * wi * (dx[1] * dv[2] - dx[2] * dv[1]);
+  bi->circular_velocity_gas[1] += mj * wi * (dx[2] * dv[0] - dx[0] * dv[2]);
+  bi->circular_velocity_gas[2] += mj * wi * (dx[0] * dv[1] - dx[1] * dv[0]);
+
 #ifdef DEBUG_INTERACTIONS_BH
   /* Update ngb counters */
   if (si->num_ngb_density < MAX_NUM_OF_NEIGHBOURS_BH)
@@ -93,6 +100,75 @@ __attribute__((always_inline)) INLINE static void runner_iact_nonsym_bh_density(
 #endif
 }
 
+/**
+ * @brief Swallowing interaction between two particles (non-symmetric).
+ *
+ * Function used to flag the gas particles that will be swallowed
+ * by the black hole particle.
+ *
+ * @param r2 Comoving square distance between the two particles.
+ * @param dx Comoving vector separating both particles (pi - pj).
+ * @param hi Comoving smoothing-length of particle i.
+ * @param hj Comoving smoothing-length of particle j.
+ * @param bi First particle (black hole).
+ * @param pj Second particle (gas)
+ * @param xpj The extended data of the second particle.
+ * @param cosmo The cosmological model.
+ * @param ti_current Current integer time value (for random numbers).
+ */
+__attribute__((always_inline)) INLINE static void runner_iact_nonsym_bh_swallow(
+    const float r2, const float *dx, const float hi, const float hj,
+    struct bpart *restrict bi, struct part *restrict pj,
+    struct xpart *restrict xpj, const struct cosmology *cosmo,
+    const integertime_t ti_current) {
+
+  float wi;
+
+  /* Get r and 1/r. */
+  const float r_inv = 1.0f / sqrtf(r2);
+  const float r = r2 * r_inv;
+
+  /* Compute the kernel function */
+  const float hi_inv = 1.0f / hi;
+  const float hi_inv_dim = pow_dimension(hi_inv);
+  const float ui = r * hi_inv;
+  kernel_eval(ui, &wi);
+
+  /* Is the BH hungry? */
+  if (bi->subgrid_mass > bi->mass) {
+
+    /* Probability to swallow this particle
+     * Recall that in SWIFT the SPH kernel is recovered by computing
+     * kernel_eval() and multiplying by (1/h^d) */
+    const float prob =
+        (bi->subgrid_mass - bi->mass) * hi_inv_dim * wi / bi->rho_gas;
+
+    /* Draw a random number (Note mixing both IDs) */
+    const float rand = random_unit_interval(bi->id + pj->id, ti_current,
+                                            random_number_BH_swallow);
+
+    /* Are we lucky? */
+    if (rand < prob) {
+
+      /* This particle is swallowed by the BH with the largest ID of all the
+       * candidates wanting to swallow it */
+      if (pj->black_holes_data.swallow_id < bi->id) {
+
+        message("BH %lld wants to swallow gas particle %lld", bi->id, pj->id);
+
+        pj->black_holes_data.swallow_id = bi->id;
+
+      } else {
+
+        message(
+            "BH %lld wants to swallow gas particle %lld BUT CANNOT (old "
+            "swallow id=%lld)",
+            bi->id, pj->id, pj->black_holes_data.swallow_id);
+      }
+    }
+  }
+}
+
 /**
  * @brief Feedback interaction between two particles (non-symmetric).
  *
@@ -136,7 +212,12 @@ runner_iact_nonsym_bh_feedback(const float r2, const float *dx, const float hi,
       hydro_set_drifted_physical_internal_energy(pj, cosmo, u_new);
 
       /* Impose maximal viscosity */
-      hydro_set_viscosity_alpha_max_feedback(pj);
+      hydro_diffusive_feedback_reset(pj);
+
+      /* message( */
+      /*     "We did some AGN heating! id %llu BH id %llu probability " */
+      /*     " %.5e  random_num %.5e du %.5e du/ini %.5e", */
+      /*     pj->id, bi->id, prob, rand, delta_u, delta_u / u_init); */
     }
   }
 
diff --git a/src/black_holes/EAGLE/black_holes_part.h b/src/black_holes/EAGLE/black_holes_part.h
index b6d15dfa06be1c4572845d1b9656b053439f70d1..598edffecae228a47f57224be9727ee50ccc4074 100644
--- a/src/black_holes/EAGLE/black_holes_part.h
+++ b/src/black_holes/EAGLE/black_holes_part.h
@@ -19,6 +19,8 @@
 #ifndef SWIFT_EAGLE_BLACK_HOLE_PART_H
 #define SWIFT_EAGLE_BLACK_HOLE_PART_H
 
+#include "chemistry_struct.h"
+
 /**
  * @brief Particle fields for the black hole particles.
  *
@@ -73,7 +75,8 @@ struct bpart {
   /*! Subgrid mass of the black hole */
   float subgrid_mass;
 
-  /*! Total accreted mass of the black hole */
+  /*! Total accreted mass of the black hole (including accreted mass onto BHs
+   * that were merged) */
   float total_accreted_mass;
 
   /*! Energy reservoir for feedback */
@@ -91,6 +94,9 @@ struct bpart {
   /*! Smoothed velocity (peculiar) of the gas surrounding the black hole */
   float velocity_gas[3];
 
+  /*! Curl of the velocity field around the black hole */
+  float circular_velocity_gas[3];
+
   /*! Total mass of the gas neighbours. */
   float ngb_mass;
 
@@ -108,6 +114,10 @@ struct bpart {
 
   } to_distribute;
 
+  /*! Chemistry information (e.g. metal content at birth, swallowed metal
+   * content, etc.) */
+  struct chemistry_bpart_data chemistry_data;
+
 #ifdef SWIFT_DEBUG_CHECKS
 
   /* Time of the last drift */
diff --git a/src/black_holes/EAGLE/black_holes_properties.h b/src/black_holes/EAGLE/black_holes_properties.h
index 6cb0b4ea7056c19b7c131a9d226675b5c888b5ce..50981b43e1ec23927d9958c95ec396fe4a4e4d3a 100644
--- a/src/black_holes/EAGLE/black_holes_properties.h
+++ b/src/black_holes/EAGLE/black_holes_properties.h
@@ -63,6 +63,9 @@ struct black_holes_props {
   /*! Feedback coupling efficiency of the black holes. */
   float epsilon_f;
 
+  /*! Normalisation of the viscous angular momentum accretion reduction */
+  float alpha_visc;
+
   /* ---- Properties of the feedback model ------- */
 
   /*! Temperature increase induced by AGN feedback (Kelvin) */
@@ -144,6 +147,7 @@ INLINE static void black_holes_props_init(struct black_holes_props *bp,
       parser_get_param_float(params, "EAGLEAGN:radiative_efficiency");
   bp->epsilon_f =
       parser_get_param_float(params, "EAGLEAGN:coupling_efficiency");
+  bp->alpha_visc = parser_get_param_float(params, "EAGLEAGN:viscous_alpha");
 
   /* Feedback parameters ---------------------------------- */
 
diff --git a/src/black_holes/EAGLE/black_holes_struct.h b/src/black_holes/EAGLE/black_holes_struct.h
new file mode 100644
index 0000000000000000000000000000000000000000..cf0e9c3975c1a1d9bda0065a5072465ae886297f
--- /dev/null
+++ b/src/black_holes/EAGLE/black_holes_struct.h
@@ -0,0 +1,31 @@
+/*******************************************************************************
+ * This file is part of SWIFT.
+ * Copyright (c) 2019 Matthieu Schaller (schaller@strw.leidenuniv.nl)
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation, either version 3 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ ******************************************************************************/
+#ifndef SWIFT_BLACK_HOLES_STRUCT_EAGLE_H
+#define SWIFT_BLACK_HOLES_STRUCT_EAGLE_H
+
+/**
+ * @brief Black holes-related fields carried by each *gas* particle.
+ */
+struct black_holes_part_data {
+
+  /*! ID of the black-hole that will swallow this #part. */
+  long long swallow_id;
+};
+
+#endif /* SWIFT_BLACK_HOLES_STRUCT_EAGLE_H */
diff --git a/src/black_holes_struct.h b/src/black_holes_struct.h
new file mode 100644
index 0000000000000000000000000000000000000000..5535caf23eb9d6eaed3774faa96e75f6f9f73625
--- /dev/null
+++ b/src/black_holes_struct.h
@@ -0,0 +1,39 @@
+/*******************************************************************************
+ * This file is part of SWIFT.
+ * Copyright (c) 2019 Matthieu Schaller (schaller@strw.leidenuniv.nl)
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation, either version 3 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ ******************************************************************************/
+#ifndef SWIFT_BLACK_HOLES_STRUCT_H
+#define SWIFT_BLACK_HOLES_STRUCT_H
+
+/**
+ * @file src/black_holes_struct.h
+ * @brief Branches between the different black holes data structures.
+ */
+
+/* Config parameters. */
+#include "../config.h"
+
+/* Import the right black holes definition */
+#if defined(BLACK_HOLES_NONE)
+#include "./black_holes/Default/black_holes_struct.h"
+#elif defined(BLACK_HOLES_EAGLE)
+#include "./black_holes/EAGLE/black_holes_struct.h"
+#else
+#error "Invalid choice of black holes function."
+#endif
+
+#endif /* SWIFT_BLACK_HOLES_STRUCT_H */
diff --git a/src/cache.h b/src/cache.h
index e5a62f33b3eb492f9da6e0e98ed767d6b8de32dd..e2e5185d5ac8e12761037eee1a79a26f827ad843 100644
--- a/src/cache.h
+++ b/src/cache.h
@@ -346,7 +346,7 @@ __attribute__((always_inline)) INLINE int cache_read_particles_subset_self(
  */
 __attribute__((always_inline)) INLINE void cache_read_particles_subset_pair(
     const struct cell *restrict const ci, struct cache *restrict const ci_cache,
-    const struct entry *restrict sort_i, int *first_pi, int *last_pi,
+    const struct sort_entry *restrict sort_i, int *first_pi, int *last_pi,
     const double *loc, const int flipped) {
 
 #if defined(GADGET2_SPH)
@@ -608,9 +608,10 @@ __attribute__((always_inline)) INLINE int cache_read_force_particles(
 __attribute__((always_inline)) INLINE void cache_read_two_partial_cells_sorted(
     const struct cell *restrict const ci, const struct cell *restrict const cj,
     struct cache *restrict const ci_cache,
-    struct cache *restrict const cj_cache, const struct entry *restrict sort_i,
-    const struct entry *restrict sort_j, const double *restrict const shift,
-    int *first_pi, int *last_pj) {
+    struct cache *restrict const cj_cache,
+    const struct sort_entry *restrict sort_i,
+    const struct sort_entry *restrict sort_j,
+    const double *restrict const shift, int *first_pi, int *last_pj) {
 
   /* Make the number of particles to be read a multiple of the vector size.
    * This eliminates serial remainder loops where possible when populating the
@@ -860,8 +861,9 @@ __attribute__((always_inline)) INLINE void
 cache_read_two_partial_cells_sorted_force(
     const struct cell *const ci, const struct cell *const cj,
     struct cache *const ci_cache, struct cache *const cj_cache,
-    const struct entry *restrict sort_i, const struct entry *restrict sort_j,
-    const double *const shift, int *first_pi, int *last_pj) {
+    const struct sort_entry *restrict sort_i,
+    const struct sort_entry *restrict sort_j, const double *const shift,
+    int *first_pi, int *last_pj) {
 
   /* Make the number of particles to be read a multiple of the vector size.
    * This eliminates serial remainder loops where possible when populating the
diff --git a/src/cell.c b/src/cell.c
index fca6a04ea746d6d5c995318c7433c8ff2ccb98d2..115a0e253e187758b448909558c02809dbd32f9f 100644
--- a/src/cell.c
+++ b/src/cell.c
@@ -590,6 +590,28 @@ int cell_pack_tags(const struct cell *c, int *tags) {
 #endif
 }
 
+void cell_pack_part_swallow(const struct cell *c,
+                            struct black_holes_part_data *data) {
+
+  const size_t count = c->hydro.count;
+  const struct part *parts = c->hydro.parts;
+
+  for (size_t i = 0; i < count; ++i) {
+    data[i] = parts[i].black_holes_data;
+  }
+}
+
+void cell_unpack_part_swallow(struct cell *c,
+                              const struct black_holes_part_data *data) {
+
+  const size_t count = c->hydro.count;
+  struct part *parts = c->hydro.parts;
+
+  for (size_t i = 0; i < count; ++i) {
+    parts[i].black_holes_data = data[i];
+  }
+}
+
 /**
  * @brief Unpack the data of a given cell and its sub-cells.
  *
@@ -1382,6 +1404,67 @@ int cell_slocktree(struct cell *c) {
   }
 }
 
+/**
+ * @brief Lock a cell for access to its array of #bpart and hold its parents.
+ *
+ * @param c The #cell.
+ * @return 0 on success, 1 on failure
+ */
+int cell_blocktree(struct cell *c) {
+  TIMER_TIC
+
+  /* First of all, try to lock this cell. */
+  if (c->black_holes.hold || lock_trylock(&c->black_holes.lock) != 0) {
+    TIMER_TOC(timer_locktree);
+    return 1;
+  }
+
+  /* Did somebody hold this cell in the meantime? */
+  if (c->black_holes.hold) {
+    /* Unlock this cell. */
+    if (lock_unlock(&c->black_holes.lock) != 0) error("Failed to unlock cell.");
+
+    /* Admit defeat. */
+    TIMER_TOC(timer_locktree);
+    return 1;
+  }
+
+  /* Climb up the tree and lock/hold/unlock. */
+  struct cell *finger;
+  for (finger = c->parent; finger != NULL; finger = finger->parent) {
+    /* Lock this cell. */
+    if (lock_trylock(&finger->black_holes.lock) != 0) break;
+
+    /* Increment the hold. */
+    atomic_inc(&finger->black_holes.hold);
+
+    /* Unlock the cell. */
+    if (lock_unlock(&finger->black_holes.lock) != 0)
+      error("Failed to unlock cell.");
+  }
+
+  /* If we reached the top of the tree, we're done. */
+  if (finger == NULL) {
+    TIMER_TOC(timer_locktree);
+    return 0;
+  }
+
+  /* Otherwise, we hit a snag. */
+  else {
+    /* Undo the holds up to finger. */
+    for (struct cell *finger2 = c->parent; finger2 != finger;
+         finger2 = finger2->parent)
+      atomic_dec(&finger2->black_holes.hold);
+
+    /* Unlock this cell. */
+    if (lock_unlock(&c->black_holes.lock) != 0) error("Failed to unlock cell.");
+
+    /* Admit defeat. */
+    TIMER_TOC(timer_locktree);
+    return 1;
+  }
+}
+
 /**
  * @brief Unlock a cell's parents for access to #part array.
  *
@@ -1454,6 +1537,24 @@ void cell_sunlocktree(struct cell *c) {
   TIMER_TOC(timer_locktree);
 }
 
+/**
+ * @brief Unlock a cell's parents for access to #bpart array.
+ *
+ * @param c The #cell.
+ */
+void cell_bunlocktree(struct cell *c) {
+  TIMER_TIC
+
+  /* First of all, try to unlock this cell. */
+  if (lock_unlock(&c->black_holes.lock) != 0) error("Failed to unlock cell.");
+
+  /* Climb up the tree and unhold the parents. */
+  for (struct cell *finger = c->parent; finger != NULL; finger = finger->parent)
+    atomic_dec(&finger->black_holes.hold);
+
+  TIMER_TOC(timer_locktree);
+}
+
 /**
  * @brief Sort the parts into eight bins along the given pivots.
  *
@@ -1886,6 +1987,8 @@ void cell_clean_links(struct cell *c, void *data) {
   c->stars.density = NULL;
   c->stars.feedback = NULL;
   c->black_holes.density = NULL;
+  c->black_holes.swallow = NULL;
+  c->black_holes.do_swallow = NULL;
   c->black_holes.feedback = NULL;
 }
 
@@ -2903,20 +3006,16 @@ void cell_activate_subcell_black_holes_tasks(struct cell *ci, struct cell *cj,
     }
 
     /* Otherwise, activate the drifts. */
-    else {
-      if (cell_is_active_black_holes(ci, e)) {
-
-        /* Activate the drifts if the cells are local. */
-        if (ci->nodeID == engine_rank) cell_activate_drift_bpart(ci, s);
-        if (cj->nodeID == engine_rank) cell_activate_drift_part(cj, s);
-      }
+    else if (cell_is_active_black_holes(ci, e) ||
+             cell_is_active_black_holes(cj, e)) {
 
-      if (cell_is_active_black_holes(cj, e)) {
+      /* Activate the drifts if the cells are local. */
+      if (ci->nodeID == engine_rank) cell_activate_drift_bpart(ci, s);
+      if (cj->nodeID == engine_rank) cell_activate_drift_part(cj, s);
 
-        /* Activate the drifts if the cells are local. */
-        if (ci->nodeID == engine_rank) cell_activate_drift_part(ci, s);
-        if (cj->nodeID == engine_rank) cell_activate_drift_bpart(cj, s);
-      }
+      /* Activate the drifts if the cells are local. */
+      if (ci->nodeID == engine_rank) cell_activate_drift_part(ci, s);
+      if (cj->nodeID == engine_rank) cell_activate_drift_bpart(cj, s);
     }
  } /* Otherwise, pair interaction */
 }
@@ -3727,6 +3826,7 @@ int cell_unskip_stars_tasks(struct cell *c, struct scheduler *s,
  * @return 1 If the space needs rebuilding. 0 otherwise.
  */
 int cell_unskip_black_holes_tasks(struct cell *c, struct scheduler *s) {
+
   struct engine *e = s->space->e;
   const int nodeID = e->nodeID;
   int rebuild = 0;
@@ -3750,39 +3850,35 @@ int cell_unskip_black_holes_tasks(struct cell *c, struct scheduler *s) {
     const int cj_nodeID = nodeID;
 #endif
 
-    /* Activate the drifts */
-    if (t->type == task_type_self && ci_active) {
-      cell_activate_drift_part(ci, s);
-      cell_activate_drift_bpart(ci, s);
-    }
-
     /* Only activate tasks that involve a local active cell. */
     if ((ci_active || cj_active) &&
         (ci_nodeID == nodeID || cj_nodeID == nodeID)) {
+
       scheduler_activate(s, t);
 
-      if (t->type == task_type_pair) {
-        /* Do ci */
-        if (ci_active) {
+      /* Activate the drifts */
+      if (t->type == task_type_self) {
+        cell_activate_drift_part(ci, s);
+        cell_activate_drift_bpart(ci, s);
+      }
 
-          /* Activate the drift tasks. */
-          if (ci_nodeID == nodeID) cell_activate_drift_bpart(ci, s);
-          if (cj_nodeID == nodeID) cell_activate_drift_part(cj, s);
-        }
+      /* Activate the drifts */
+      else if (t->type == task_type_pair) {
 
-        /* Do cj */
-        if (cj_active) {
+        /* Activate the drift tasks. */
+        if (ci_nodeID == nodeID) cell_activate_drift_bpart(ci, s);
+        if (ci_nodeID == nodeID) cell_activate_drift_part(ci, s);
 
-          /* Activate the drift tasks. */
-          if (cj_nodeID == nodeID) cell_activate_drift_bpart(cj, s);
-          if (ci_nodeID == nodeID) cell_activate_drift_part(ci, s);
-        }
+        if (cj_nodeID == nodeID) cell_activate_drift_part(cj, s);
+        if (cj_nodeID == nodeID) cell_activate_drift_bpart(cj, s);
       }
 
+      /* Activate the sub-cell tasks. */
       else if (t->type == task_type_sub_self) {
         cell_activate_subcell_black_holes_tasks(ci, NULL, s);
       }
 
+      /* Activate the sub-cell tasks. */
       else if (t->type == task_type_sub_pair) {
         cell_activate_subcell_black_holes_tasks(ci, cj, s);
       }
@@ -3790,76 +3886,99 @@ int cell_unskip_black_holes_tasks(struct cell *c, struct scheduler *s) {
 
     /* Only interested in pair interactions as of here. */
     if (t->type == task_type_pair || t->type == task_type_sub_pair) {
+
       /* Check whether there was too much particle motion, i.e. the
          cell neighbour conditions were violated. */
       if (cell_need_rebuild_for_black_holes_pair(ci, cj)) rebuild = 1;
       if (cell_need_rebuild_for_black_holes_pair(cj, ci)) rebuild = 1;
 
+      scheduler_activate(s, ci->hydro.super->black_holes.swallow_ghost[0]);
+      scheduler_activate(s, cj->hydro.super->black_holes.swallow_ghost[0]);
+
 #ifdef WITH_MPI
       /* Activate the send/recv tasks. */
       if (ci_nodeID != nodeID) {
-        if (cj_active) {
-          scheduler_activate_recv(s, ci->mpi.recv, task_subtype_xv);
 
-          /* If the local cell is active, more stuff will be needed. */
-          scheduler_activate_send(s, cj->mpi.send, task_subtype_bpart,
-                                  ci_nodeID);
-          cell_activate_drift_bpart(cj, s);
+        /* Receive the foreign parts to compute BH accretion rates and do the
+         * swallowing */
+        scheduler_activate_recv(s, ci->mpi.recv, task_subtype_rho);
+        scheduler_activate_recv(s, ci->mpi.recv, task_subtype_part_swallow);
 
-          /* If the local cell is active, send its ti_end values. */
-          scheduler_activate_send(s, cj->mpi.send, task_subtype_tend_bpart,
-                                  ci_nodeID);
-        }
+        /* Send the local BHs to tag the particles to swallow and to do feedback
+         */
+        scheduler_activate_send(s, cj->mpi.send, task_subtype_bpart_rho,
+                                ci_nodeID);
+        scheduler_activate_send(s, cj->mpi.send, task_subtype_bpart_feedback,
+                                ci_nodeID);
 
-        if (ci_active) {
-          scheduler_activate_recv(s, ci->mpi.recv, task_subtype_bpart);
+        /* Drift before you send */
+        cell_activate_drift_bpart(cj, s);
 
-          /* If the foreign cell is active, we want its ti_end values. */
-          scheduler_activate_recv(s, ci->mpi.recv, task_subtype_tend_bpart);
+        /* Send the new BH time-steps */
+        scheduler_activate_send(s, cj->mpi.send, task_subtype_tend_bpart,
+                                ci_nodeID);
 
-          /* Is the foreign cell active and will need stuff from us? */
-          scheduler_activate_send(s, cj->mpi.send, task_subtype_xv, ci_nodeID);
+        /* Receive the foreign BHs to tag particles to swallow and for feedback
+         */
+        scheduler_activate_recv(s, ci->mpi.recv, task_subtype_bpart_rho);
+        scheduler_activate_recv(s, ci->mpi.recv, task_subtype_bpart_feedback);
 
-          /* Drift the cell which will be sent; note that not all sent
-             particles will be drifted, only those that are needed. */
-          cell_activate_drift_part(cj, s);
-        }
-
-      } else if (cj_nodeID != nodeID) {
-        /* If the local cell is active, receive data from the foreign cell. */
-        if (ci_active) {
-          scheduler_activate_recv(s, cj->mpi.recv, task_subtype_xv);
+        /* Receive the foreign BH time-steps */
+        scheduler_activate_recv(s, ci->mpi.recv, task_subtype_tend_bpart);
 
-          /* If the local cell is active, more stuff will be needed. */
-          scheduler_activate_send(s, ci->mpi.send, task_subtype_bpart,
-                                  cj_nodeID);
-          cell_activate_drift_bpart(ci, s);
+        /* Send the local part information */
+        scheduler_activate_send(s, cj->mpi.send, task_subtype_rho, ci_nodeID);
+        scheduler_activate_send(s, cj->mpi.send, task_subtype_part_swallow,
+                                ci_nodeID);
 
-          /* If the local cell is active, send its ti_end values. */
-          scheduler_activate_send(s, ci->mpi.send, task_subtype_tend_bpart,
-                                  cj_nodeID);
-        }
-
-        if (cj_active) {
-          scheduler_activate_recv(s, cj->mpi.recv, task_subtype_bpart);
+        /* Drift the cell which will be sent; note that not all sent
+           particles will be drifted, only those that are needed. */
+        cell_activate_drift_part(cj, s);
 
-          /* If the foreign cell is active, we want its ti_end values. */
-          scheduler_activate_recv(s, cj->mpi.recv, task_subtype_tend_bpart);
-
-          /* Is the foreign cell active and will need stuff from us? */
-          scheduler_activate_send(s, ci->mpi.send, task_subtype_xv, cj_nodeID);
+      } else if (cj_nodeID != nodeID) {
 
-          /* Drift the cell which will be sent; note that not all sent
-             particles will be drifted, only those that are needed. */
-          cell_activate_drift_part(ci, s);
-        }
+        /* Receive the foreign parts to compute BH accretion rates and do the
+         * swallowing */
+        scheduler_activate_recv(s, cj->mpi.recv, task_subtype_rho);
+        scheduler_activate_recv(s, cj->mpi.recv, task_subtype_part_swallow);
+
+        /* Send the local BHs to tag the particles to swallow and to do feedback
+         */
+        scheduler_activate_send(s, ci->mpi.send, task_subtype_bpart_rho,
+                                cj_nodeID);
+        scheduler_activate_send(s, ci->mpi.send, task_subtype_bpart_feedback,
+                                cj_nodeID);
+
+        /* Drift before you send */
+        cell_activate_drift_bpart(ci, s);
+
+        /* Send the new BH time-steps */
+        scheduler_activate_send(s, ci->mpi.send, task_subtype_tend_bpart,
+                                cj_nodeID);
+
+        /* Receive the foreign BHs to tag particles to swallow and for feedback
+         */
+        scheduler_activate_recv(s, cj->mpi.recv, task_subtype_bpart_rho);
+        scheduler_activate_recv(s, cj->mpi.recv, task_subtype_bpart_feedback);
+
+        /* Receive the foreign BH time-steps */
+        scheduler_activate_recv(s, cj->mpi.recv, task_subtype_tend_bpart);
+
+        /* Send the local part information */
+        scheduler_activate_send(s, ci->mpi.send, task_subtype_rho, cj_nodeID);
+        scheduler_activate_send(s, ci->mpi.send, task_subtype_part_swallow,
+                                cj_nodeID);
+
+        /* Drift the cell which will be sent; note that not all sent
+           particles will be drifted, only those that are needed. */
+        cell_activate_drift_part(ci, s);
       }
 #endif
     }
   }
 
-  /* Un-skip the feedback tasks involved with this cell. */
-  for (struct link *l = c->black_holes.feedback; l != NULL; l = l->next) {
+  /* Un-skip the swallow tasks involved with this cell. */
+  for (struct link *l = c->black_holes.swallow; l != NULL; l = l->next) {
     struct task *t = l->t;
     struct cell *ci = t->ci;
     struct cell *cj = t->cj;
@@ -3873,36 +3992,80 @@ int cell_unskip_black_holes_tasks(struct cell *c, struct scheduler *s) {
     const int cj_nodeID = nodeID;
 #endif
 
-    if (t->type == task_type_self && ci_active) {
+    /* Only activate tasks that involve a local active cell. */
+    if ((ci_active || cj_active) &&
+        (ci_nodeID == nodeID || cj_nodeID == nodeID)) {
+
       scheduler_activate(s, t);
     }
+  }
+
+  /* Un-skip the do_swallow tasks involved with this cell. */
+  for (struct link *l = c->black_holes.do_swallow; l != NULL; l = l->next) {
+    struct task *t = l->t;
+    struct cell *ci = t->ci;
+    struct cell *cj = t->cj;
+    const int ci_active = cell_is_active_black_holes(ci, e);
+    const int cj_active = (cj != NULL) ? cell_is_active_black_holes(cj, e) : 0;
+#ifdef WITH_MPI
+    const int ci_nodeID = ci->nodeID;
+    const int cj_nodeID = (cj != NULL) ? cj->nodeID : -1;
+#else
+    const int ci_nodeID = nodeID;
+    const int cj_nodeID = nodeID;
+#endif
+
+    /* Only activate tasks that involve a local active cell. */
+    if ((ci_active || cj_active) &&
+        (ci_nodeID == nodeID || cj_nodeID == nodeID)) {
 
-    else if (t->type == task_type_sub_self && ci_active) {
       scheduler_activate(s, t);
     }
+  }
 
-    else if (t->type == task_type_pair || t->type == task_type_sub_pair) {
-      /* We only want to activate the task if the cell is active and is
-         going to update some gas on the *local* node */
-      if ((ci_nodeID == nodeID && cj_nodeID == nodeID) &&
-          (ci_active || cj_active)) {
-        scheduler_activate(s, t);
+  /* Un-skip the feedback tasks involved with this cell. */
+  for (struct link *l = c->black_holes.feedback; l != NULL; l = l->next) {
+    struct task *t = l->t;
+    struct cell *ci = t->ci;
+    struct cell *cj = t->cj;
+    const int ci_active = cell_is_active_black_holes(ci, e);
+    const int cj_active = (cj != NULL) ? cell_is_active_black_holes(cj, e) : 0;
+#ifdef WITH_MPI
+    const int ci_nodeID = ci->nodeID;
+    const int cj_nodeID = (cj != NULL) ? cj->nodeID : -1;
+#else
+    const int ci_nodeID = nodeID;
+    const int cj_nodeID = nodeID;
+#endif
 
-      } else if ((ci_nodeID == nodeID && cj_nodeID != nodeID) && (cj_active)) {
-        scheduler_activate(s, t);
+    /* Only activate tasks that involve a local active cell. */
+    if ((ci_active || cj_active) &&
+        (ci_nodeID == nodeID || cj_nodeID == nodeID)) {
 
-      } else if ((ci_nodeID != nodeID && cj_nodeID == nodeID) && (ci_active)) {
-        scheduler_activate(s, t);
-      }
+      scheduler_activate(s, t);
     }
-
-    /* Nothing more to do here, all drifts activated above */
   }
 
   /* Unskip all the other task types. */
   if (c->nodeID == nodeID && cell_is_active_black_holes(c, e)) {
-    if (c->black_holes.ghost != NULL)
-      scheduler_activate(s, c->black_holes.ghost);
+
+    if (c->black_holes.density_ghost != NULL)
+      scheduler_activate(s, c->black_holes.density_ghost);
+    if (c->black_holes.swallow_ghost[0] != NULL)
+      scheduler_activate(s, c->black_holes.swallow_ghost[0]);
+    if (c->black_holes.swallow_ghost[1] != NULL)
+      scheduler_activate(s, c->black_holes.swallow_ghost[1]);
     if (c->black_holes.black_holes_in != NULL)
       scheduler_activate(s, c->black_holes.black_holes_in);
     if (c->black_holes.black_holes_out != NULL)
@@ -3920,7 +4083,8 @@ int cell_unskip_black_holes_tasks(struct cell *c, struct scheduler *s) {
  * @brief Set the super-cell pointers for all cells in a hierarchy.
  *
  * @param c The top-level #cell to play with.
- * @param super Pointer to the deepest cell with tasks in this part of the tree.
+ * @param super Pointer to the deepest cell with tasks in this part of the
+ * tree.
  * @param with_hydro Are we running with hydrodynamics on?
  * @param with_grav Are we running with gravity on?
  */
@@ -3945,8 +4109,8 @@ void cell_set_super(struct cell *c, struct cell *super, const int with_hydro,
  * @brief Set the super-cell pointers for all cells in a hierarchy.
  *
  * @param c The top-level #cell to play with.
- * @param super_hydro Pointer to the deepest cell with tasks in this part of the
- * tree.
+ * @param super_hydro Pointer to the deepest cell with tasks in this part of
+ * the tree.
  */
 void cell_set_super_hydro(struct cell *c, struct cell *super_hydro) {
   /* Are we in a cell with some kind of self/pair task ? */
@@ -4177,8 +4341,14 @@ void cell_drift_part(struct cell *c, const struct engine *e, int force) {
           hydro_remove_part(p, xp);
 
           /* Remove the particle entirely */
+          struct gpart *gp = p->gpart;
           cell_remove_part(e, c, p, xp);
 
+          /* and its gravity counterpart */
+          if (gp != NULL) {
+            cell_remove_gpart(e, c, gp);
+          }
+
           continue;
         }
       }
@@ -4197,9 +4367,12 @@ void cell_drift_part(struct cell *c, const struct engine *e, int force) {
                              xp->x_diff_sort[2] * xp->x_diff_sort[2];
       dx2_max_sort = max(dx2_max_sort, dx2_sort);
 
-      /* Maximal smoothing length */
+      /* Update the maximal smoothing length in the cell */
       cell_h_max = max(cell_h_max, p->h);
 
+      /* Mark the particle as not swallowed */
+      black_holes_mark_as_not_swallowed(&p->black_holes_data);
+
       /* Get ready for a density calculation */
       if (part_is_active(p, e)) {
         hydro_init_part(p, &e->s->hs);
@@ -4754,7 +4927,8 @@ void cell_clear_stars_sort_flags(struct cell *c, const int clear_unused_flags) {
     cell_clear_flag(c, cell_flag_do_stars_sub_sort);
   }
 
-  /* Indicate that the cell is not sorted and cancel the pointer sorting arrays.
+  /* Indicate that the cell is not sorted and cancel the pointer sorting
+   * arrays.
    */
   c->stars.sorted = 0;
   cell_free_stars_sorts(c);
@@ -4788,7 +4962,8 @@ void cell_clear_hydro_sort_flags(struct cell *c, const int clear_unused_flags) {
     cell_clear_flag(c, cell_flag_do_hydro_sub_sort);
   }
 
-  /* Indicate that the cell is not sorted and cancel the pointer sorting arrays.
+  /* Indicate that the cell is not sorted and cancel the pointer sorting
+   * arrays.
    */
   c->hydro.sorted = 0;
   cell_free_hydro_sorts(c);
@@ -4938,9 +5113,9 @@ void cell_recursively_shift_sparts(struct cell *c,
  * @param e The #engine.
  * @param c The leaf-cell in which to add the #spart.
  *
- * @return A pointer to the newly added #spart. The spart has a been zeroed and
- * given a position within the cell as well as set to the minimal active time
- * bin.
+ * @return A pointer to the newly added #spart. The spart has been zeroed
+ * and given a position within the cell as well as set to the minimal active
+ * time bin.
  */
 struct spart *cell_add_spart(struct engine *e, struct cell *const c) {
   /* Perform some basic consitency checks */
@@ -5020,7 +5195,8 @@ struct spart *cell_add_spart(struct engine *e, struct cell *const c) {
    * current cell*/
   cell_recursively_shift_sparts(top, progeny, /* main_branch=*/1);
 
-  /* Make sure the gravity will be recomputed for this particle in the next step
+  /* Make sure the gravity will be recomputed for this particle in the next
+   * step
    */
   struct cell *top2 = c;
   while (top2->parent != NULL) {
@@ -5060,7 +5236,8 @@ struct spart *cell_add_spart(struct engine *e, struct cell *const c) {
 /**
  * @brief "Remove" a gas particle from the calculation.
  *
- * The particle is inhibited and will officially be removed at the next rebuild.
+ * The particle is inhibited and will officially be removed at the next
+ * rebuild.
  *
  * @param e The #engine running on this node.
  * @param c The #cell from which to remove the particle.
@@ -5085,12 +5262,17 @@ void cell_remove_part(const struct engine *e, struct cell *c, struct part *p,
 
   /* Un-link the part */
   p->gpart = NULL;
+
+  /* Update the space-wide counters */
+  const size_t one = 1;
+  atomic_add(&e->s->nr_inhibited_parts, one);
 }
 
 /**
  * @brief "Remove" a gravity particle from the calculation.
  *
- * The particle is inhibited and will officially be removed at the next rebuild.
+ * The particle is inhibited and will officially be removed at the next
+ * rebuild.
  *
  * @param e The #engine running on this node.
  * @param c The #cell from which to remove the particle.
@@ -5104,12 +5286,17 @@ void cell_remove_gpart(const struct engine *e, struct cell *c,
 
   /* Mark the particle as inhibited */
   gp->time_bin = time_bin_inhibited;
+
+  /* Update the space-wide counters */
+  const size_t one = 1;
+  atomic_add(&e->s->nr_inhibited_gparts, one);
 }
 
 /**
  * @brief "Remove" a star particle from the calculation.
  *
- * The particle is inhibited and will officially be removed at the next rebuild.
+ * The particle is inhibited and will officially be removed at the next
+ * rebuild.
  *
  * @param e The #engine running on this node.
  * @param c The #cell from which to remove the particle.
@@ -5131,12 +5318,17 @@ void cell_remove_spart(const struct engine *e, struct cell *c,
 
   /* Un-link the spart */
   sp->gpart = NULL;
+
+  /* Update the space-wide counters */
+  const size_t one = 1;
+  atomic_add(&e->s->nr_inhibited_sparts, one);
 }
 
 /**
  * @brief "Remove" a black hole particle from the calculation.
  *
- * The particle is inhibited and will officially be removed at the next rebuild.
+ * The particle is inhibited and will officially be removed at the next
+ * rebuild.
  *
  * @param e The #engine running on this node.
  * @param c The #cell from which to remove the particle.
@@ -5159,6 +5351,10 @@ void cell_remove_bpart(const struct engine *e, struct cell *c,
 
   /* Un-link the bpart */
   bp->gpart = NULL;
+
+  /* Update the space-wide counters */
+  const size_t one = 1;
+  atomic_add(&e->s->nr_inhibited_bparts, one);
 }
 
 /**
@@ -5168,7 +5364,8 @@ void cell_remove_bpart(const struct engine *e, struct cell *c,
  * Note that the #part is not destroyed. The pointer is still valid
  * after this call and the properties of the #part are not altered
  * apart from the time-bin and #gpart pointer.
- * The particle is inhibited and will officially be removed at the next rebuild.
+ * The particle is inhibited and will officially be removed at the next
+ * rebuild.
  *
  * @param e The #engine running on this node.
  * @param c The #cell from which to remove the particle.
@@ -5204,6 +5401,9 @@ struct gpart *cell_convert_part_to_gpart(const struct engine *e, struct cell *c,
   gp->ti_kick = p->ti_kick;
 #endif
 
+  /* Update the space-wide counters */
+  atomic_inc(&e->s->nr_inhibited_parts);
+
   return gp;
 }
 
@@ -5214,7 +5414,8 @@ struct gpart *cell_convert_part_to_gpart(const struct engine *e, struct cell *c,
  * Note that the #spart is not destroyed. The pointer is still valid
  * after this call and the properties of the #spart are not altered
  * apart from the time-bin and #gpart pointer.
- * The particle is inhibited and will officially be removed at the next rebuild.
+ * The particle is inhibited and will officially be removed at the next
+ * rebuild.
  *
  * @param e The #engine running on this node.
  * @param c The #cell from which to remove the particle.
@@ -5249,6 +5450,9 @@ struct gpart *cell_convert_spart_to_gpart(const struct engine *e,
   gp->ti_kick = sp->ti_kick;
 #endif
 
+  /* Update the space-wide counters */
+  atomic_inc(&e->s->nr_inhibited_sparts);
+
   return gp;
 }
 
@@ -5259,7 +5463,8 @@ struct gpart *cell_convert_spart_to_gpart(const struct engine *e,
  * Note that the #part is not destroyed. The pointer is still valid
  * after this call and the properties of the #part are not altered
  * apart from the time-bin and #gpart pointer.
- * The particle is inhibited and will officially be removed at the next rebuild.
+ * The particle is inhibited and will officially be removed at the next
+ * rebuild.
  *
  * @param e The #engine.
  * @param c The #cell from which to remove the #part.
@@ -5325,8 +5530,8 @@ struct spart *cell_convert_part_to_spart(struct engine *e, struct cell *c,
 }
 
 /**
- * @brief Re-arrange the #part in a top-level cell such that all the extra ones
- * for on-the-fly creation are located at the end of the array.
+ * @brief Re-arrange the #part in a top-level cell such that all the extra
+ * ones for on-the-fly creation are located at the end of the array.
  *
  * @param c The #cell to sort.
  * @param parts_offset The offset between the first #part in the array and the
@@ -5377,12 +5582,13 @@ void cell_reorder_extra_parts(struct cell *c, const ptrdiff_t parts_offset) {
 }
 
 /**
- * @brief Re-arrange the #spart in a top-level cell such that all the extra ones
- * for on-the-fly creation are located at the end of the array.
+ * @brief Re-arrange the #spart in a top-level cell such that all the extra
+ * ones for on-the-fly creation are located at the end of the array.
  *
  * @param c The #cell to sort.
- * @param sparts_offset The offset between the first #spart in the array and the
- * first #spart in the global array in the space structure (for re-linking).
+ * @param sparts_offset The offset between the first #spart in the array and
+ * the first #spart in the global array in the space structure (for
+ * re-linking).
  */
 void cell_reorder_extra_sparts(struct cell *c, const ptrdiff_t sparts_offset) {
   struct spart *sparts = c->stars.parts;
@@ -5432,8 +5638,8 @@ void cell_reorder_extra_sparts(struct cell *c, const ptrdiff_t sparts_offset) {
 }
 
 /**
- * @brief Re-arrange the #gpart in a top-level cell such that all the extra ones
- * for on-the-fly creation are located at the end of the array.
+ * @brief Re-arrange the #gpart in a top-level cell such that all the extra
+ * ones for on-the-fly creation are located at the end of the array.
  *
  * @param c The #cell to sort.
  * @param parts The global array of #part (for re-linking).
diff --git a/src/cell.h b/src/cell.h
index 4f8bed5fa5e337842bf37a0b382e3915a806a474..6fcc0afb79dc3b76c70acdcbde87c6d2264aba27 100644
--- a/src/cell.h
+++ b/src/cell.h
@@ -332,7 +332,7 @@ struct cell {
     struct xpart *xparts;
 
     /*! Pointer for the sorted indices. */
-    struct entry *sort[13];
+    struct sort_entry *sort[13];
 
     /*! Super cell, i.e. the highest-level parent cell that has a hydro
      * pair/self tasks */
@@ -420,9 +420,6 @@ struct cell {
     /*! Number of #part updated in this cell. */
     int updated;
 
-    /*! Number of #part inhibited in this cell. */
-    int inhibited;
-
     /*! Is the #part data of this cell being used in a sub-cell? */
     int hold;
 
@@ -521,9 +518,6 @@ struct cell {
     /*! Number of #gpart updated in this cell. */
     int updated;
 
-    /*! Number of #gpart inhibited in this cell. */
-    int inhibited;
-
     /*! Is the #gpart data of this cell being used in a sub-cell? */
     int phold;
 
@@ -600,7 +594,7 @@ struct cell {
     float dx_max_sort_old;
 
     /*! Pointer for the sorted indices. */
-    struct entry *sort[13];
+    struct sort_entry *sort[13];
 
     /*! Bit mask of sort directions that will be needed in the next timestep. */
     uint16_t requires_sorts;
@@ -624,9 +618,6 @@ struct cell {
     /*! Number of #spart updated in this cell. */
     int updated;
 
-    /*! Number of #spart inhibited in this cell. */
-    int inhibited;
-
     /*! Is the #spart data of this cell being used in a sub-cell? */
     int hold;
 
@@ -657,12 +648,22 @@ struct cell {
     struct task *black_holes_out;
 
     /*! The star ghost task itself */
-    struct task *ghost;
+    struct task *density_ghost;
 
-    /*! Linked list of the tasks computing this cell's star density. */
+    /*! The black hole swallow ghost tasks */
+    struct task *swallow_ghost[2];
+
+    /*! Linked list of the tasks computing this cell's BH density. */
     struct link *density;
 
-    /*! Linked list of the tasks computing this cell's star feedback. */
+    /*! Linked list of the tasks computing this cell's BH swallowing and
+     * merging. */
+    struct link *swallow;
+
+    /*! Linked list of the tasks processing the particles to swallow */
+    struct link *do_swallow;
+
+    /*! Linked list of the tasks computing this cell's BH feedback. */
     struct link *feedback;
 
     /*! Max smoothing length in this cell. */
@@ -703,9 +704,6 @@ struct cell {
     /*! Number of #bpart updated in this cell. */
     int updated;
 
-    /*! Number of #bpart inhibited in this cell. */
-    int inhibited;
-
     /*! Is the #bpart data of this cell being used in a sub-cell? */
     int hold;
 
@@ -808,9 +806,15 @@ int cell_mlocktree(struct cell *c);
 void cell_munlocktree(struct cell *c);
 int cell_slocktree(struct cell *c);
 void cell_sunlocktree(struct cell *c);
+int cell_blocktree(struct cell *c);
+void cell_bunlocktree(struct cell *c);
 int cell_pack(struct cell *c, struct pcell *pc, const int with_gravity);
 int cell_unpack(struct pcell *pc, struct cell *c, struct space *s,
                 const int with_gravity);
+void cell_pack_part_swallow(const struct cell *c,
+                            struct black_holes_part_data *data);
+void cell_unpack_part_swallow(struct cell *c,
+                              const struct black_holes_part_data *data);
 int cell_pack_tags(const struct cell *c, int *tags);
 int cell_unpack_tags(const int *tags, struct cell *c);
 int cell_pack_end_step_hydro(struct cell *c, struct pcell_step_hydro *pcell);
@@ -1261,8 +1265,8 @@ __attribute__((always_inline)) INLINE static void cell_malloc_hydro_sorts(
    * on the same dimensions), so we need separate allocations per dimension. */
   for (int j = 0; j < 13; j++) {
     if ((flags & (1 << j)) && c->hydro.sort[j] == NULL) {
-      if ((c->hydro.sort[j] = (struct entry *)swift_malloc(
-               "hydro.sort", sizeof(struct entry) * (count + 1))) == NULL)
+      if ((c->hydro.sort[j] = (struct sort_entry *)swift_malloc(
+               "hydro.sort", sizeof(struct sort_entry) * (count + 1))) == NULL)
         error("Failed to allocate sort memory.");
     }
   }
@@ -1299,8 +1303,8 @@ __attribute__((always_inline)) INLINE static void cell_malloc_stars_sorts(
    * on the same dimensions), so we need separate allocations per dimension. */
   for (int j = 0; j < 13; j++) {
     if ((flags & (1 << j)) && c->stars.sort[j] == NULL) {
-      if ((c->stars.sort[j] = (struct entry *)swift_malloc(
-               "stars.sort", sizeof(struct entry) * (count + 1))) == NULL)
+      if ((c->stars.sort[j] = (struct sort_entry *)swift_malloc(
+               "stars.sort", sizeof(struct sort_entry) * (count + 1))) == NULL)
         error("Failed to allocate sort memory.");
     }
   }
diff --git a/src/chemistry/EAGLE/chemistry_io.h b/src/chemistry/EAGLE/chemistry_io.h
index 62de85d47329a588d658cc61db72510337988638..023791133d6fee78497f27ad741bcfcd1eeec5ce 100644
--- a/src/chemistry/EAGLE/chemistry_io.h
+++ b/src/chemistry/EAGLE/chemistry_io.h
@@ -166,6 +166,50 @@ INLINE static int chemistry_write_sparticles(const struct spart* sparts,
   return 12;
 }
 
+/**
+ * @brief Specifies which black hole particle fields to write to a dataset
+ *
+ * @param bparts The black hole particle array.
+ * @param list The list of i/o properties to write.
+ *
+ * @return Returns the number of fields to write.
+ */
+INLINE static int chemistry_write_bparticles(const struct bpart* bparts,
+                                             struct io_props* list) {
+
+  /* List what we want to write */
+  list[0] =
+      io_make_output_field("ElementMass", FLOAT, chemistry_element_count,
+                           UNIT_CONV_MASS, bparts, chemistry_data.metal_mass);
+
+  list[1] = io_make_output_field("MetalMass", FLOAT, chemistry_element_count,
+                                 UNIT_CONV_MASS, bparts,
+                                 chemistry_data.metal_mass_total);
+
+  list[2] = io_make_output_field("MassFromSNIa", FLOAT, 1, UNIT_CONV_MASS,
+                                 bparts, chemistry_data.mass_from_SNIa);
+
+  list[3] = io_make_output_field("MassFromSNII", FLOAT, 1, UNIT_CONV_MASS,
+                                 bparts, chemistry_data.mass_from_SNII);
+
+  list[4] = io_make_output_field("MassFromAGB", FLOAT, 1, UNIT_CONV_MASS,
+                                 bparts, chemistry_data.mass_from_AGB);
+
+  list[5] = io_make_output_field("MetalMassFromSNIa", FLOAT, 1, UNIT_CONV_MASS,
+                                 bparts, chemistry_data.metal_mass_from_SNIa);
+
+  list[6] = io_make_output_field("MetalMassFromSNII", FLOAT, 1, UNIT_CONV_MASS,
+                                 bparts, chemistry_data.metal_mass_from_SNII);
+
+  list[7] = io_make_output_field("MetalMassFromAGB", FLOAT, 1, UNIT_CONV_MASS,
+                                 bparts, chemistry_data.metal_mass_from_AGB);
+
+  list[8] = io_make_output_field("IronMassFromSNIa", FLOAT, 1, UNIT_CONV_MASS,
+                                 bparts, chemistry_data.iron_mass_from_SNIa);
+
+  return 9;
+}
+
 #ifdef HAVE_HDF5
 
 /**
diff --git a/src/chemistry/EAGLE/chemistry_struct.h b/src/chemistry/EAGLE/chemistry_struct.h
index f5e47347f8b6a910640624ddfc0b5968242eedf6..32d5faaac08886684ebe78ea3696f7aaacab48ea 100644
--- a/src/chemistry/EAGLE/chemistry_struct.h
+++ b/src/chemistry/EAGLE/chemistry_struct.h
@@ -89,4 +89,37 @@ struct chemistry_part_data {
   float smoothed_iron_mass_fraction_from_SNIa;
 };
 
+/**
+ * @brief Chemical abundances traced by the #bpart in the EAGLE model.
+ */
+struct chemistry_bpart_data {
+
+  /*! Mass in a given element */
+  float metal_mass[chemistry_element_count];
+
+  /*! Mass in *all* metals */
+  float metal_mass_total;
+
+  /*! Mass coming from SNIa */
+  float mass_from_SNIa;
+
+  /*! Mass coming from AGB */
+  float mass_from_AGB;
+
+  /*! Mass coming from SNII */
+  float mass_from_SNII;
+
+  /*! Metal mass coming from SNIa */
+  float metal_mass_from_SNIa;
+
+  /*! Metal mass coming from AGB */
+  float metal_mass_from_AGB;
+
+  /*! Metal mass coming from SNII */
+  float metal_mass_from_SNII;
+
+  /*! Iron mass coming from SNIa */
+  float iron_mass_from_SNIa;
+};
+
 #endif /* SWIFT_CHEMISTRY_STRUCT_EAGLE_H */
diff --git a/src/chemistry/GEAR/chemistry_io.h b/src/chemistry/GEAR/chemistry_io.h
index b29f7db65d3ab7dce4b0dcafed06c97a9e621bfe..7d21be4d138f40b626c29a9711166496f34b75d3 100644
--- a/src/chemistry/GEAR/chemistry_io.h
+++ b/src/chemistry/GEAR/chemistry_io.h
@@ -113,6 +113,21 @@ INLINE static int chemistry_write_sparticles(const struct spart* sparts,
   return 3;
 }
 
+/**
+ * @brief Specifies which black hole particle fields to write to a dataset
+ *
+ * @param bparts The black hole particle array.
+ * @param list The list of i/o properties to write.
+ *
+ * @return Returns the number of fields to write.
+ */
+INLINE static int chemistry_write_bparticles(const struct bpart* bparts,
+                                             struct io_props* list) {
+
+  /* No fields to write here */
+  return 0;
+}
+
 #ifdef HAVE_HDF5
 
 /**
diff --git a/src/chemistry/GEAR/chemistry_struct.h b/src/chemistry/GEAR/chemistry_struct.h
index c3443126cf0ac72c76774d9000fe218378cc6663..105b72d7fd731356a6f1135ce41b03bb2f898acf 100644
--- a/src/chemistry/GEAR/chemistry_struct.h
+++ b/src/chemistry/GEAR/chemistry_struct.h
@@ -58,4 +58,9 @@ struct chemistry_part_data {
   float Z;
 };
 
+/**
+ * @brief Chemical abundances traced by the #bpart in the GEAR model.
+ */
+struct chemistry_bpart_data {};
+
 #endif /* SWIFT_CHEMISTRY_STRUCT_GEAR_H */
diff --git a/src/chemistry/none/chemistry_io.h b/src/chemistry/none/chemistry_io.h
index c6e5b7b769cec667fee8c3fc674f3b4aa35929c1..698d92c7ca7479400d331bd6adf8d695782a3094 100644
--- a/src/chemistry/none/chemistry_io.h
+++ b/src/chemistry/none/chemistry_io.h
@@ -72,6 +72,23 @@ INLINE static int chemistry_write_sparticles(const struct spart* sparts,
   return 0;
 }
 
+/**
+ * @brief Specifies which bparticle fields to write to a dataset
+ *
+ * @param bparts The bparticle array.
+ * @param list The list of i/o properties to write.
+ *
+ * @return Returns the number of fields to write.
+ */
+INLINE static int chemistry_write_bparticles(const struct bpart* bparts,
+                                             struct io_props* list) {
+
+  /* update list according to hydro_io */
+
+  /* Return the number of fields to write */
+  return 0;
+}
+
 #ifdef HAVE_HDF5
 
 /**
diff --git a/src/chemistry/none/chemistry_struct.h b/src/chemistry/none/chemistry_struct.h
index a80699d66cccd2e290555006d45216ff53d9c99c..3b065bcdf6cbf2fec6f9cb213180e36267fa6a8f 100644
--- a/src/chemistry/none/chemistry_struct.h
+++ b/src/chemistry/none/chemistry_struct.h
@@ -43,4 +43,11 @@ struct chemistry_global_data {};
  */
 struct chemistry_part_data {};
 
+/**
+ * @brief Chemistry properties carried by the #bpart.
+ *
+ * Nothing here.
+ */
+struct chemistry_bpart_data {};
+
 #endif /* SWIFT_CHEMISTRY_STRUCT_NONE_H */
diff --git a/src/common_io.c b/src/common_io.c
index 537f06e53837619d5f5011628aa649fce90164cb..140dfc14593aa8964845e3bb73cbf01c2176e534 100644
--- a/src/common_io.c
+++ b/src/common_io.c
@@ -387,6 +387,59 @@ void io_write_engine_policy(hid_t h_file, const struct engine* e) {
   H5Gclose(h_grp);
 }
 
+static long long cell_count_non_inhibited_gas(const struct cell* c) {
+  const int total_count = c->hydro.count;
+  struct part* parts = c->hydro.parts;
+  long long count = 0;
+  for (int i = 0; i < total_count; ++i) {
+    if (parts[i].time_bin != time_bin_inhibited &&
+        parts[i].time_bin != time_bin_not_created) {
+      ++count;
+    }
+  }
+  return count;
+}
+
+static long long cell_count_non_inhibited_dark_matter(const struct cell* c) {
+  const int total_count = c->grav.count;
+  struct gpart* gparts = c->grav.parts;
+  long long count = 0;
+  for (int i = 0; i < total_count; ++i) {
+    if (gparts[i].time_bin != time_bin_inhibited &&
+        gparts[i].time_bin != time_bin_not_created &&
+        gparts[i].type == swift_type_dark_matter) {
+      ++count;
+    }
+  }
+  return count;
+}
+
+static long long cell_count_non_inhibited_stars(const struct cell* c) {
+  const int total_count = c->stars.count;
+  struct spart* sparts = c->stars.parts;
+  long long count = 0;
+  for (int i = 0; i < total_count; ++i) {
+    if (sparts[i].time_bin != time_bin_inhibited &&
+        sparts[i].time_bin != time_bin_not_created) {
+      ++count;
+    }
+  }
+  return count;
+}
+
+static long long cell_count_non_inhibited_black_holes(const struct cell* c) {
+  const int total_count = c->black_holes.count;
+  struct bpart* bparts = c->black_holes.parts;
+  long long count = 0;
+  for (int i = 0; i < total_count; ++i) {
+    if (bparts[i].time_bin != time_bin_inhibited &&
+        bparts[i].time_bin != time_bin_not_created) {
+      ++count;
+    }
+  }
+  return count;
+}
+
 void io_write_cell_offsets(hid_t h_grp, const int cdim[3],
                            const struct cell* cells_top, const int nr_cells,
                            const double width[3], const int nodeID,
@@ -438,15 +491,10 @@ void io_write_cell_offsets(hid_t h_grp, const int cdim[3],
       centres[i * 3 + 2] = cells_top[i].loc[2] + cell_width[2] * 0.5;
 
       /* Count real particles that will be written */
-      count_part[i] = cells_top[i].hydro.count - cells_top[i].hydro.inhibited;
-      count_gpart[i] = cells_top[i].grav.count - cells_top[i].grav.inhibited;
-      count_spart[i] = cells_top[i].stars.count - cells_top[i].stars.inhibited;
-      count_bpart[i] = cells_top[i].stars.count - cells_top[i].stars.inhibited;
-
-      /* Only count DM gpart (gpart without friends) */
-      count_gpart[i] -= count_part[i];
-      count_gpart[i] -= count_spart[i];
-      count_gpart[i] -= count_bpart[i];
+      count_part[i] = cell_count_non_inhibited_gas(&cells_top[i]);
+      count_gpart[i] = cell_count_non_inhibited_dark_matter(&cells_top[i]);
+      count_spart[i] = cell_count_non_inhibited_stars(&cells_top[i]);
+      count_bpart[i] = cell_count_non_inhibited_black_holes(&cells_top[i]);
 
       /* Offsets including the global offset of all particles on this MPI rank
        */
@@ -454,7 +502,8 @@ void io_write_cell_offsets(hid_t h_grp, const int cdim[3],
       offset_gpart[i] =
           local_offset_gpart + global_offsets[swift_type_dark_matter];
       offset_spart[i] = local_offset_spart + global_offsets[swift_type_stars];
-      offset_bpart[i] = local_offset_bpart + global_offsets[swift_type_stars];
+      offset_bpart[i] =
+          local_offset_bpart + global_offsets[swift_type_black_hole];
 
       local_offset_part += count_part[i];
       local_offset_gpart += count_gpart[i];
diff --git a/src/engine.c b/src/engine.c
index 7d2a0cd4c2bd89803713bba71fd13d0396c12b82..d26953f1642ded4c916bb9cd7e98481688d3e211 100644
--- a/src/engine.c
+++ b/src/engine.c
@@ -2715,7 +2715,7 @@ void engine_collect_end_of_step_recurse_hydro(struct cell *c,
 #endif
 
   /* Counters for the different quantities. */
-  size_t updated = 0, inhibited = 0;
+  size_t updated = 0;
   integertime_t ti_hydro_end_min = max_nr_timesteps, ti_hydro_end_max = 0,
                 ti_hydro_beg_max = 0;
 
@@ -2739,7 +2739,6 @@ void engine_collect_end_of_step_recurse_hydro(struct cell *c,
       ti_hydro_beg_max = max(ti_hydro_beg_max, cp->hydro.ti_beg_max);
 
       updated += cp->hydro.updated;
-      inhibited += cp->hydro.inhibited;
 
       /* Check if the cell is inactive and in that case reorder the SFH */
       if (!cell_is_starting_hydro(cp, e)) {
@@ -2759,7 +2758,6 @@ void engine_collect_end_of_step_recurse_hydro(struct cell *c,
   c->hydro.ti_end_max = ti_hydro_end_max;
   c->hydro.ti_beg_max = ti_hydro_beg_max;
   c->hydro.updated = updated;
-  c->hydro.inhibited = inhibited;
 
   /* Store the star formation history in the parent cell */
   star_formation_logger_add(&c->stars.sfh, &sfh_updated);
@@ -2790,7 +2789,7 @@ void engine_collect_end_of_step_recurse_grav(struct cell *c,
 #endif
 
   /* Counters for the different quantities. */
-  size_t updated = 0, inhibited = 0;
+  size_t updated = 0;
   integertime_t ti_grav_end_min = max_nr_timesteps, ti_grav_end_max = 0,
                 ti_grav_beg_max = 0;
 
@@ -2808,7 +2807,6 @@ void engine_collect_end_of_step_recurse_grav(struct cell *c,
       ti_grav_beg_max = max(ti_grav_beg_max, cp->grav.ti_beg_max);
 
       updated += cp->grav.updated;
-      inhibited += cp->grav.inhibited;
 
       /* Collected, so clear for next time. */
       cp->grav.updated = 0;
@@ -2820,7 +2818,6 @@ void engine_collect_end_of_step_recurse_grav(struct cell *c,
   c->grav.ti_end_max = ti_grav_end_max;
   c->grav.ti_beg_max = ti_grav_beg_max;
   c->grav.updated = updated;
-  c->grav.inhibited = inhibited;
 }
 
 /**
@@ -2847,7 +2844,7 @@ void engine_collect_end_of_step_recurse_stars(struct cell *c,
 #endif
 
   /* Counters for the different quantities. */
-  size_t updated = 0, inhibited = 0;
+  size_t updated = 0;
   integertime_t ti_stars_end_min = max_nr_timesteps, ti_stars_end_max = 0,
                 ti_stars_beg_max = 0;
 
@@ -2865,7 +2862,6 @@ void engine_collect_end_of_step_recurse_stars(struct cell *c,
       ti_stars_beg_max = max(ti_stars_beg_max, cp->stars.ti_beg_max);
 
       updated += cp->stars.updated;
-      inhibited += cp->stars.inhibited;
 
       /* Collected, so clear for next time. */
       cp->stars.updated = 0;
@@ -2877,7 +2873,6 @@ void engine_collect_end_of_step_recurse_stars(struct cell *c,
   c->stars.ti_end_max = ti_stars_end_max;
   c->stars.ti_beg_max = ti_stars_beg_max;
   c->stars.updated = updated;
-  c->stars.inhibited = inhibited;
 }
 
 /**
@@ -2904,7 +2899,7 @@ void engine_collect_end_of_step_recurse_black_holes(struct cell *c,
 #endif
 
   /* Counters for the different quantities. */
-  size_t updated = 0, inhibited = 0;
+  size_t updated = 0;
   integertime_t ti_black_holes_end_min = max_nr_timesteps,
                 ti_black_holes_end_max = 0, ti_black_holes_beg_max = 0;
 
@@ -2925,7 +2920,6 @@ void engine_collect_end_of_step_recurse_black_holes(struct cell *c,
           max(ti_black_holes_beg_max, cp->black_holes.ti_beg_max);
 
       updated += cp->black_holes.updated;
-      inhibited += cp->black_holes.inhibited;
 
       /* Collected, so clear for next time. */
       cp->black_holes.updated = 0;
@@ -2937,7 +2931,6 @@ void engine_collect_end_of_step_recurse_black_holes(struct cell *c,
   c->black_holes.ti_end_max = ti_black_holes_end_max;
   c->black_holes.ti_beg_max = ti_black_holes_beg_max;
   c->black_holes.updated = updated;
-  c->black_holes.inhibited = inhibited;
 }
 
 /**
@@ -2968,7 +2961,6 @@ void engine_collect_end_of_step_mapper(void *map_data, int num_elements,
 
   /* Local collectible */
   size_t updated = 0, g_updated = 0, s_updated = 0, b_updated = 0;
-  size_t inhibited = 0, g_inhibited = 0, s_inhibited = 0, b_inhibited = 0;
   integertime_t ti_hydro_end_min = max_nr_timesteps, ti_hydro_end_max = 0,
                 ti_hydro_beg_max = 0;
   integertime_t ti_gravity_end_min = max_nr_timesteps, ti_gravity_end_max = 0,
@@ -3033,11 +3025,6 @@ void engine_collect_end_of_step_mapper(void *map_data, int num_elements,
       s_updated += c->stars.updated;
       b_updated += c->black_holes.updated;
 
-      inhibited += c->hydro.inhibited;
-      g_inhibited += c->grav.inhibited;
-      s_inhibited += c->stars.inhibited;
-      b_inhibited += c->black_holes.inhibited;
-
       /* Check if the cell is inactive and in that case reorder the SFH */
       if (!cell_is_starting_hydro(c, e)) {
         star_formation_logger_log_inactive_cell(&c->stars.sfh);
@@ -3063,11 +3050,6 @@ void engine_collect_end_of_step_mapper(void *map_data, int num_elements,
     data->s_updated += s_updated;
     data->b_updated += b_updated;
 
-    data->inhibited += inhibited;
-    data->g_inhibited += g_inhibited;
-    data->s_inhibited += s_inhibited;
-    data->b_inhibited += b_inhibited;
-
     /* Add the SFH information from this engine to the global data */
     star_formation_logger_add(sfh_top, &sfh_updated);
 
@@ -3124,8 +3106,6 @@ void engine_collect_end_of_step(struct engine *e, int apply) {
   struct space *s = e->s;
   struct end_of_step_data data;
   data.updated = 0, data.g_updated = 0, data.s_updated = 0, data.b_updated = 0;
-  data.inhibited = 0, data.g_inhibited = 0, data.s_inhibited = 0,
-  data.b_inhibited = 0;
   data.ti_hydro_end_min = max_nr_timesteps, data.ti_hydro_end_max = 0,
   data.ti_hydro_beg_max = 0;
   data.ti_gravity_end_min = max_nr_timesteps, data.ti_gravity_end_max = 0,
@@ -3144,11 +3124,12 @@ void engine_collect_end_of_step(struct engine *e, int apply) {
                  s->local_cells_with_tasks_top, s->nr_local_cells_with_tasks,
                  sizeof(int), 0, &data);
 
-  /* Store the local number of inhibited particles */
-  s->nr_inhibited_parts = data.inhibited;
-  s->nr_inhibited_gparts = data.g_inhibited;
-  s->nr_inhibited_sparts = data.s_inhibited;
-  s->nr_inhibited_bparts = data.b_inhibited;
+  /* Get the number of inhibited particles from the space-wide counters
+   * since these have been updated atomically during the time-steps. */
+  data.inhibited = s->nr_inhibited_parts;
+  data.g_inhibited = s->nr_inhibited_gparts;
+  data.s_inhibited = s->nr_inhibited_sparts;
+  data.b_inhibited = s->nr_inhibited_bparts;
 
   /* Store these in the temporary collection group. */
   collectgroup1_init(
@@ -3184,11 +3165,12 @@ void engine_collect_end_of_step(struct engine *e, int apply) {
       error("Failed to get same ti_gravity_end_min, is %lld, should be %lld",
             in_i[1], e->collect_group1.ti_gravity_end_min);
 
-    long long in_ll[3], out_ll[3];
+    long long in_ll[4], out_ll[4];
     out_ll[0] = data.updated;
     out_ll[1] = data.g_updated;
     out_ll[2] = data.s_updated;
-    if (MPI_Allreduce(out_ll, in_ll, 3, MPI_LONG_LONG_INT, MPI_SUM,
+    out_ll[3] = data.b_updated;
+    if (MPI_Allreduce(out_ll, in_ll, 4, MPI_LONG_LONG_INT, MPI_SUM,
                       MPI_COMM_WORLD) != MPI_SUCCESS)
       error("Failed to aggregate particle counts.");
     if (in_ll[0] != (long long)e->collect_group1.updated)
@@ -3200,11 +3182,15 @@ void engine_collect_end_of_step(struct engine *e, int apply) {
     if (in_ll[2] != (long long)e->collect_group1.s_updated)
       error("Failed to get same s_updated, is %lld, should be %lld", in_ll[2],
             e->collect_group1.s_updated);
+    if (in_ll[3] != (long long)e->collect_group1.b_updated)
+      error("Failed to get same b_updated, is %lld, should be %lld", in_ll[3],
+            e->collect_group1.b_updated);
 
     out_ll[0] = data.inhibited;
     out_ll[1] = data.g_inhibited;
     out_ll[2] = data.s_inhibited;
-    if (MPI_Allreduce(out_ll, in_ll, 3, MPI_LONG_LONG_INT, MPI_SUM,
+    out_ll[3] = data.b_inhibited;
+    if (MPI_Allreduce(out_ll, in_ll, 4, MPI_LONG_LONG_INT, MPI_SUM,
                       MPI_COMM_WORLD) != MPI_SUCCESS)
       error("Failed to aggregate particle counts.");
     if (in_ll[0] != (long long)e->collect_group1.inhibited)
@@ -3216,6 +3202,9 @@ void engine_collect_end_of_step(struct engine *e, int apply) {
     if (in_ll[2] != (long long)e->collect_group1.s_inhibited)
       error("Failed to get same s_inhibited, is %lld, should be %lld", in_ll[2],
             e->collect_group1.s_inhibited);
+    if (in_ll[3] != (long long)e->collect_group1.b_inhibited)
+      error("Failed to get same b_inhibited, is %lld, should be %lld", in_ll[3],
+            e->collect_group1.b_inhibited);
 
     int buff = 0;
     if (MPI_Allreduce(&e->forcerebuild, &buff, 1, MPI_INT, MPI_MAX,
@@ -3336,9 +3325,17 @@ void engine_skip_force_and_kick(struct engine *e) {
         t->type == task_type_stars_in || t->type == task_type_stars_out ||
         t->type == task_type_star_formation ||
         t->type == task_type_extra_ghost ||
+        t->type == task_type_bh_swallow_ghost1 ||
+        t->type == task_type_bh_swallow_ghost2 ||
         t->subtype == task_subtype_gradient ||
         t->subtype == task_subtype_stars_feedback ||
         t->subtype == task_subtype_bh_feedback ||
+        t->subtype == task_subtype_bh_swallow ||
+        t->subtype == task_subtype_do_swallow ||
+        t->subtype == task_subtype_bpart_rho ||
+        t->subtype == task_subtype_part_swallow ||
+        t->subtype == task_subtype_bpart_swallow ||
+        t->subtype == task_subtype_bpart_feedback ||
         t->subtype == task_subtype_tend_part ||
         t->subtype == task_subtype_tend_gpart ||
         t->subtype == task_subtype_tend_spart ||
@@ -3571,6 +3568,7 @@ void engine_init_particles(struct engine *e, int flag_entropy_ICs,
 #ifdef SWIFT_DEBUG_CHECKS
   /* Make sure all woken-up particles have been processed */
   space_check_limiter(e->s);
+  space_check_swallow(e->s);
 #endif
 
   /* Recover the (integer) end of the next time-step */
@@ -3859,6 +3857,7 @@ void engine_step(struct engine *e) {
   /* Make sure all woken-up particles have been processed */
   space_check_limiter(e->s);
   space_check_sort_flags(e->s);
+  space_check_swallow(e->s);
 #endif
 
   /* Collect information about the next time-step */
@@ -6491,8 +6490,9 @@ void engine_activate_fof_tasks(struct engine *e) {
 void engine_fof(struct engine *e, const int dump_results,
                 const int seed_black_holes) {
 
+#ifdef WITH_MPI
   MPI_Barrier(MPI_COMM_WORLD);
-  
+#endif
   ticks tic = getticks();
 
   /* Compute number of DM particles */
@@ -6504,7 +6504,7 @@ void engine_fof(struct engine *e, const int dump_results,
   fof_allocate(e->s, total_nr_dmparts, e->fof_properties);
 
   message("FOF allocate (FOF SCALING) took: %.3f %s.",
-      clocks_from_ticks(getticks() - tic), clocks_getunit());
+          clocks_from_ticks(getticks() - tic), clocks_getunit());
 
   /* Make FOF tasks */
   engine_make_fof_tasks(e);
@@ -6513,14 +6513,16 @@ void engine_fof(struct engine *e, const int dump_results,
   engine_activate_fof_tasks(e);
 
   ticks tic_local_fof = getticks();
-  
+
   /* Perform local FOF tasks. */
   engine_launch(e);
 
   message("Local FOF search (tasks FOF SCALING) took: %.3f %s.",
-      clocks_from_ticks(getticks() - tic_local_fof), clocks_getunit());
+          clocks_from_ticks(getticks() - tic_local_fof), clocks_getunit());
 
-  message("FOF allocate + task creation + local search (FOF SCALING) took: %.3f %s.",
+  message(
+      "FOF allocate + task creation + local search (FOF SCALING) took: %.3f "
+      "%s.",
       clocks_from_ticks(getticks() - tic), clocks_getunit());
 
   /* Perform FOF search over foreign particles and
@@ -6538,8 +6540,11 @@ void engine_fof(struct engine *e, const int dump_results,
   /* ... and find the next FOF time */
   if (seed_black_holes) engine_compute_next_fof_time(e);
 
+#ifdef WITH_MPI
   MPI_Barrier(MPI_COMM_WORLD);
+#endif
 
-  message("Complete FOF search took: %.3f %s.",
-      clocks_from_ticks(getticks() - tic), clocks_getunit());
+  if (engine_rank == 0)
+    message("Complete FOF search took: %.3f %s.",
+            clocks_from_ticks(getticks() - tic), clocks_getunit());
 }
diff --git a/src/engine.h b/src/engine.h
index be96db7e9fe6e9caf387cf7bd2cbcf9ff997e445..8296933561b5f7664904d439ec6ca20f1e37a4b8 100644
--- a/src/engine.h
+++ b/src/engine.h
@@ -538,6 +538,7 @@ int engine_estimate_nr_tasks(const struct engine *e);
 void engine_print_task_counts(const struct engine *e);
 void engine_fof(struct engine *e, const int dump_results,
                 const int seed_black_holes);
+void engine_activate_gpart_comms(struct engine *e);
 
 /* Function prototypes, engine_maketasks.c. */
 void engine_maketasks(struct engine *e);
diff --git a/src/engine_maketasks.c b/src/engine_maketasks.c
index 150ec107f999cc390c34f54d3b5835919cc83cf8..af119c9370d83eace384690b9b58aaa95e49867e 100644
--- a/src/engine_maketasks.c
+++ b/src/engine_maketasks.c
@@ -318,11 +318,18 @@ void engine_addtasks_send_stars(struct engine *e, struct cell *ci,
  * @param e The #engine.
  * @param ci The sending #cell.
  * @param cj Dummy cell containing the nodeID of the receiving node.
+ * @param t_rho The density comm. task, if it has already been created.
+ * @param t_swallow The BH swallow comm. task, if it has already been created.
+ * @param t_gas_swallow The gas swallow comm. task, if it has already been
+ * created.
  * @param t_feedback The send_feed #task, if it has already been created.
  * @param t_ti The recv_ti_end #task, if it has already been created.
  */
 void engine_addtasks_send_black_holes(struct engine *e, struct cell *ci,
-                                      struct cell *cj, struct task *t_feedback,
+                                      struct cell *cj, struct task *t_rho,
+                                      struct task *t_swallow,
+                                      struct task *t_gas_swallow,
+                                      struct task *t_feedback,
                                       struct task *t_ti) {
 
 #ifdef WITH_MPI
@@ -340,31 +347,60 @@ void engine_addtasks_send_black_holes(struct engine *e, struct cell *ci,
   /* If so, attach send tasks. */
   if (l != NULL) {
 
-    if (t_feedback == NULL) {
+    if (t_rho == NULL) {
 
       /* Make sure this cell is tagged. */
       cell_ensure_tagged(ci);
 
       /* Create the tasks and their dependencies? */
-      t_feedback = scheduler_addtask(s, task_type_send, task_subtype_bpart,
-                                     ci->mpi.tag, 0, ci, cj);
+      t_rho = scheduler_addtask(s, task_type_send, task_subtype_bpart_rho,
+                                ci->mpi.tag, 0, ci, cj);
+
+      /* t_swallow = */
+      /*     scheduler_addtask(s, task_type_send, task_subtype_bpart_swallow, */
+      /*                       ci->mpi.tag, 0, ci, cj); */
+
+      t_gas_swallow = scheduler_addtask(
+          s, task_type_send, task_subtype_part_swallow, ci->mpi.tag, 0, ci, cj);
+
+      t_feedback =
+          scheduler_addtask(s, task_type_send, task_subtype_bpart_feedback,
+                            ci->mpi.tag, 0, ci, cj);
 
       t_ti = scheduler_addtask(s, task_type_send, task_subtype_tend_bpart,
                                ci->mpi.tag, 0, ci, cj);
 
-      /* The send_black_holes task should unlock the super_cell's kick task. */
+      /* The send_black_holes task should unlock the super_cell's BH exit point
+       * task. */
       scheduler_addunlock(s, t_feedback,
                           ci->hydro.super->black_holes.black_holes_out);
 
+      scheduler_addunlock(s, ci->hydro.super->black_holes.swallow_ghost[1],
+                          t_feedback);
+
       /* Ghost before you send */
-      scheduler_addunlock(s, ci->hydro.super->black_holes.ghost, t_feedback);
+      scheduler_addunlock(s, ci->hydro.super->black_holes.density_ghost, t_rho);
+      scheduler_addunlock(s, t_rho,
+                          ci->hydro.super->black_holes.swallow_ghost[0]);
 
       /* Drift before you send */
-      scheduler_addunlock(s, ci->hydro.super->black_holes.drift, t_feedback);
+      /* scheduler_addunlock(s, ci->hydro.super->black_holes.swallow_ghost[0],
+       */
+      /*                     t_swallow); */
+      /* scheduler_addunlock(s, t_swallow, */
+      /*                     ci->hydro.super->black_holes.swallow_ghost[1]); */
+
+      scheduler_addunlock(s, ci->hydro.super->black_holes.swallow_ghost[0],
+                          t_gas_swallow);
+      scheduler_addunlock(s, t_gas_swallow,
+                          ci->hydro.super->black_holes.swallow_ghost[1]);
 
       scheduler_addunlock(s, ci->super->timestep, t_ti);
     }
 
+    engine_addlink(e, &ci->mpi.send, t_rho);
+    // engine_addlink(e, &ci->mpi.send, t_swallow);
+    engine_addlink(e, &ci->mpi.send, t_gas_swallow);
     engine_addlink(e, &ci->mpi.send, t_feedback);
     engine_addlink(e, &ci->mpi.send, t_ti);
   }
@@ -373,7 +409,8 @@ void engine_addtasks_send_black_holes(struct engine *e, struct cell *ci,
   if (ci->split)
     for (int k = 0; k < 8; k++)
       if (ci->progeny[k] != NULL)
-        engine_addtasks_send_black_holes(e, ci->progeny[k], cj, t_feedback,
+        engine_addtasks_send_black_holes(e, ci->progeny[k], cj, t_rho,
+                                         t_swallow, t_gas_swallow, t_feedback,
                                          t_ti);
 
 #else
@@ -460,10 +497,10 @@ void engine_addtasks_recv_hydro(struct engine *e, struct cell *c,
       scheduler_addunlock(s, t_rho, l->t);
     }
 
-    /* Make sure the part have been sent before the BHs compute their densities.
-     */
+    /* Make sure the parts have been received before the BHs compute their
+     * accretion rates (depends on particles' rho). */
     for (struct link *l = c->black_holes.density; l != NULL; l = l->next) {
-      scheduler_addunlock(s, t_xv, l->t);
+      scheduler_addunlock(s, t_rho, l->t);
     }
   }
 
@@ -571,10 +608,17 @@ void engine_addtasks_recv_stars(struct engine *e, struct cell *c,
  *
  * @param e The #engine.
  * @param c The foreign #cell.
+ * @param t_rho The density comm. task, if it has already been created.
+ * @param t_swallow The BH swallow comm. task, if it has already been created.
+ * @param t_gas_swallow The gas swallow comm. task, if it has already been
+ * created.
  * @param t_feedback The recv_feed #task, if it has already been created.
  * @param t_ti The recv_ti_end #task, if it has already been created.
  */
 void engine_addtasks_recv_black_holes(struct engine *e, struct cell *c,
+                                      struct task *t_rho,
+                                      struct task *t_swallow,
+                                      struct task *t_gas_swallow,
                                       struct task *t_feedback,
                                       struct task *t_ti) {
 
@@ -582,7 +626,7 @@ void engine_addtasks_recv_black_holes(struct engine *e, struct cell *c,
   struct scheduler *s = &e->sched;
 
   /* Have we reached a level where there are any black_holes tasks ? */
-  if (t_feedback == NULL && c->black_holes.density != NULL) {
+  if (t_rho == NULL && c->black_holes.density != NULL) {
 
 #ifdef SWIFT_DEBUG_CHECKS
     /* Make sure this cell has a valid tag. */
@@ -590,14 +634,27 @@ void engine_addtasks_recv_black_holes(struct engine *e, struct cell *c,
 #endif  // SWIFT_DEBUG_CHECKS
 
     /* Create the tasks. */
-    t_feedback = scheduler_addtask(s, task_type_recv, task_subtype_bpart,
-                                   c->mpi.tag, 0, c, NULL);
+    t_rho = scheduler_addtask(s, task_type_recv, task_subtype_bpart_rho,
+                              c->mpi.tag, 0, c, NULL);
+
+    /* t_swallow = scheduler_addtask(s, task_type_recv,
+     * task_subtype_bpart_swallow, */
+    /*                               c->mpi.tag, 0, c, NULL); */
+
+    t_gas_swallow = scheduler_addtask(
+        s, task_type_recv, task_subtype_part_swallow, c->mpi.tag, 0, c, NULL);
+
+    t_feedback = scheduler_addtask(
+        s, task_type_recv, task_subtype_bpart_feedback, c->mpi.tag, 0, c, NULL);
 
     t_ti = scheduler_addtask(s, task_type_recv, task_subtype_tend_bpart,
                              c->mpi.tag, 0, c, NULL);
   }
 
-  if (t_feedback != NULL) {
+  if (t_rho != NULL) {
+    engine_addlink(e, &c->mpi.recv, t_rho);
+    // engine_addlink(e, &c->mpi.recv, t_swallow);
+    engine_addlink(e, &c->mpi.recv, t_gas_swallow);
     engine_addlink(e, &c->mpi.recv, t_feedback);
     engine_addlink(e, &c->mpi.recv, t_ti);
 
@@ -606,9 +663,23 @@ void engine_addtasks_recv_black_holes(struct engine *e, struct cell *c,
 #endif
 
     for (struct link *l = c->black_holes.density; l != NULL; l = l->next) {
-      scheduler_addunlock(s, l->t, t_feedback);
+      scheduler_addunlock(s, l->t, t_rho);
     }
 
+    for (struct link *l = c->hydro.force; l != NULL; l = l->next) {
+      scheduler_addunlock(s, l->t, t_gas_swallow);
+    }
+
+    for (struct link *l = c->black_holes.swallow; l != NULL; l = l->next) {
+      scheduler_addunlock(s, t_rho, l->t);
+      // scheduler_addunlock(s, l->t, t_swallow);
+      scheduler_addunlock(s, l->t, t_gas_swallow);
+    }
+    for (struct link *l = c->black_holes.do_swallow; l != NULL; l = l->next) {
+      // scheduler_addunlock(s, t_swallow, l->t);
+      scheduler_addunlock(s, t_gas_swallow, l->t);
+      scheduler_addunlock(s, l->t, t_feedback);
+    }
     for (struct link *l = c->black_holes.feedback; l != NULL; l = l->next) {
       scheduler_addunlock(s, t_feedback, l->t);
       scheduler_addunlock(s, l->t, t_ti);
@@ -619,7 +690,8 @@ void engine_addtasks_recv_black_holes(struct engine *e, struct cell *c,
   if (c->split)
     for (int k = 0; k < 8; k++)
       if (c->progeny[k] != NULL)
-        engine_addtasks_recv_black_holes(e, c->progeny[k], t_feedback, t_ti);
+        engine_addtasks_recv_black_holes(e, c->progeny[k], t_rho, t_swallow,
+                                         t_gas_swallow, t_feedback, t_ti);
 
 #else
   error("SWIFT was not compiled with MPI support.");
@@ -925,6 +997,12 @@ void engine_make_hierarchical_tasks_hydro(struct engine *e, struct cell *c) {
                                          task_subtype_none, 0, 0, c, NULL);
     }
 
+    if (with_black_holes) {
+      c->black_holes.swallow_ghost[0] =
+          scheduler_addtask(s, task_type_bh_swallow_ghost1, task_subtype_none,
+                            0, /* implicit = */ 1, c, NULL);
+    }
+
     /* Local tasks only... */
     if (c->nodeID == e->nodeID) {
 
@@ -1012,8 +1090,11 @@ void engine_make_hierarchical_tasks_hydro(struct engine *e, struct cell *c) {
             scheduler_addtask(s, task_type_bh_out, task_subtype_none, 0,
                               /* implicit = */ 1, c, NULL);
 
-        c->black_holes.ghost = scheduler_addtask(
-            s, task_type_bh_ghost, task_subtype_none, 0, 0, c, NULL);
+        c->black_holes.density_ghost = scheduler_addtask(
+            s, task_type_bh_density_ghost, task_subtype_none, 0, 0, c, NULL);
+
+        c->black_holes.swallow_ghost[1] = scheduler_addtask(
+            s, task_type_bh_swallow_ghost2, task_subtype_none, 0, 0, c, NULL);
 
         scheduler_addunlock(s, c->super->kick2, c->black_holes.black_holes_in);
         scheduler_addunlock(s, c->black_holes.black_holes_out,
@@ -1636,6 +1717,8 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
   struct task *t_star_density = NULL;
   struct task *t_star_feedback = NULL;
   struct task *t_bh_density = NULL;
+  struct task *t_bh_swallow = NULL;
+  struct task *t_do_swallow = NULL;
   struct task *t_bh_feedback = NULL;
 
   for (int ind = 0; ind < num_elements; ind++) {
@@ -1660,6 +1743,8 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
     /* Self-interaction? */
     else if (t_type == task_type_self && t_subtype == task_subtype_density) {
 
+      const int bcount_i = ci->black_holes.count;
+
       /* Make the self-density tasks depend on the drift only. */
       scheduler_addunlock(sched, ci->hydro.super->hydro.drift, t);
 
@@ -1684,9 +1769,13 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
       }
 
       /* The black hole feedback tasks */
-      if (with_black_holes) {
+      if (with_black_holes && bcount_i > 0) {
         t_bh_density = scheduler_addtask(
             sched, task_type_self, task_subtype_bh_density, flags, 0, ci, NULL);
+        t_bh_swallow = scheduler_addtask(
+            sched, task_type_self, task_subtype_bh_swallow, flags, 0, ci, NULL);
+        t_do_swallow = scheduler_addtask(
+            sched, task_type_self, task_subtype_do_swallow, flags, 0, ci, NULL);
         t_bh_feedback =
             scheduler_addtask(sched, task_type_self, task_subtype_bh_feedback,
                               flags, 0, ci, NULL);
@@ -1701,8 +1790,10 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
         engine_addlink(e, &ci->stars.density, t_star_density);
         engine_addlink(e, &ci->stars.feedback, t_star_feedback);
       }
-      if (with_black_holes) {
+      if (with_black_holes && bcount_i > 0) {
         engine_addlink(e, &ci->black_holes.density, t_bh_density);
+        engine_addlink(e, &ci->black_holes.swallow, t_bh_swallow);
+        engine_addlink(e, &ci->black_holes.do_swallow, t_do_swallow);
         engine_addlink(e, &ci->black_holes.feedback, t_bh_feedback);
       }
 
@@ -1745,7 +1836,7 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
                             ci->hydro.super->stars.stars_out);
       }
 
-      if (with_black_holes) {
+      if (with_black_holes && bcount_i > 0) {
 
         scheduler_addunlock(sched, ci->hydro.super->black_holes.drift,
                             t_bh_density);
@@ -1753,8 +1844,20 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
         scheduler_addunlock(sched, ci->hydro.super->black_holes.black_holes_in,
                             t_bh_density);
         scheduler_addunlock(sched, t_bh_density,
-                            ci->hydro.super->black_holes.ghost);
-        scheduler_addunlock(sched, ci->hydro.super->black_holes.ghost,
+                            ci->hydro.super->black_holes.density_ghost);
+
+        scheduler_addunlock(sched, ci->hydro.super->black_holes.density_ghost,
+                            t_bh_swallow);
+        scheduler_addunlock(sched, t_bh_swallow,
+                            ci->hydro.super->black_holes.swallow_ghost[0]);
+
+        scheduler_addunlock(
+            sched, ci->hydro.super->black_holes.swallow_ghost[0], t_do_swallow);
+        scheduler_addunlock(sched, t_do_swallow,
+                            ci->hydro.super->black_holes.swallow_ghost[1]);
+
+        scheduler_addunlock(sched,
+                            ci->hydro.super->black_holes.swallow_ghost[1],
                             t_bh_feedback);
         scheduler_addunlock(sched, t_bh_feedback,
                             ci->hydro.super->black_holes.black_holes_out);
@@ -1770,6 +1873,9 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
     /* Otherwise, pair interaction? */
     else if (t_type == task_type_pair && t_subtype == task_subtype_density) {
 
+      const int bcount_i = ci->black_holes.count;
+      const int bcount_j = cj->black_holes.count;
+
       /* Make all density tasks depend on the drift */
       if (ci->nodeID == nodeID) {
         scheduler_addunlock(sched, ci->hydro.super->hydro.drift, t);
@@ -1805,9 +1911,13 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
       }
 
       /* The black hole feedback tasks */
-      if (with_black_holes) {
+      if (with_black_holes && (bcount_i > 0 || bcount_j > 0)) {
         t_bh_density = scheduler_addtask(
             sched, task_type_pair, task_subtype_bh_density, flags, 0, ci, cj);
+        t_bh_swallow = scheduler_addtask(
+            sched, task_type_pair, task_subtype_bh_swallow, flags, 0, ci, cj);
+        t_do_swallow = scheduler_addtask(
+            sched, task_type_pair, task_subtype_do_swallow, flags, 0, ci, cj);
         t_bh_feedback = scheduler_addtask(
             sched, task_type_pair, task_subtype_bh_feedback, flags, 0, ci, cj);
       }
@@ -1824,9 +1934,13 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
         engine_addlink(e, &ci->stars.feedback, t_star_feedback);
         engine_addlink(e, &cj->stars.feedback, t_star_feedback);
       }
-      if (with_black_holes) {
+      if (with_black_holes && (bcount_i > 0 || bcount_j > 0)) {
         engine_addlink(e, &ci->black_holes.density, t_bh_density);
         engine_addlink(e, &cj->black_holes.density, t_bh_density);
+        engine_addlink(e, &ci->black_holes.swallow, t_bh_swallow);
+        engine_addlink(e, &cj->black_holes.swallow, t_bh_swallow);
+        engine_addlink(e, &ci->black_holes.do_swallow, t_do_swallow);
+        engine_addlink(e, &cj->black_holes.do_swallow, t_do_swallow);
         engine_addlink(e, &ci->black_holes.feedback, t_bh_feedback);
         engine_addlink(e, &cj->black_holes.feedback, t_bh_feedback);
       }
@@ -1898,7 +2012,7 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
                               ci->hydro.super->stars.stars_out);
         }
 
-        if (with_black_holes) {
+        if (with_black_holes && (bcount_i > 0 || bcount_j > 0)) {
 
           scheduler_addunlock(sched, ci->hydro.super->black_holes.drift,
                               t_bh_density);
@@ -1907,8 +2021,21 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
           scheduler_addunlock(
               sched, ci->hydro.super->black_holes.black_holes_in, t_bh_density);
           scheduler_addunlock(sched, t_bh_density,
-                              ci->hydro.super->black_holes.ghost);
-          scheduler_addunlock(sched, ci->hydro.super->black_holes.ghost,
+                              ci->hydro.super->black_holes.density_ghost);
+
+          scheduler_addunlock(sched, ci->hydro.super->black_holes.density_ghost,
+                              t_bh_swallow);
+          scheduler_addunlock(sched, t_bh_swallow,
+                              ci->hydro.super->black_holes.swallow_ghost[0]);
+
+          scheduler_addunlock(sched,
+                              ci->hydro.super->black_holes.swallow_ghost[0],
+                              t_do_swallow);
+          scheduler_addunlock(sched, t_do_swallow,
+                              ci->hydro.super->black_holes.swallow_ghost[1]);
+
+          scheduler_addunlock(sched,
+                              ci->hydro.super->black_holes.swallow_ghost[1],
                               t_bh_feedback);
           scheduler_addunlock(sched, t_bh_feedback,
                               ci->hydro.super->black_holes.black_holes_out);
@@ -1924,6 +2051,11 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
           scheduler_addunlock(sched, ci->hydro.super->stars.sorts,
                               t_star_feedback);
         }
+
+        if (with_black_holes && (bcount_i > 0 || bcount_j > 0)) {
+          scheduler_addunlock(sched, t_bh_swallow,
+                              ci->hydro.super->black_holes.swallow_ghost[0]);
+        }
       }
 
       if (cj->nodeID == nodeID) {
@@ -1950,7 +2082,7 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
                                 cj->hydro.super->stars.stars_out);
           }
 
-          if (with_black_holes) {
+          if (with_black_holes && (bcount_i > 0 || bcount_j > 0)) {
 
             scheduler_addunlock(sched, cj->hydro.super->black_holes.drift,
                                 t_bh_density);
@@ -1960,8 +2092,22 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
                                 cj->hydro.super->black_holes.black_holes_in,
                                 t_bh_density);
             scheduler_addunlock(sched, t_bh_density,
-                                cj->hydro.super->black_holes.ghost);
-            scheduler_addunlock(sched, cj->hydro.super->black_holes.ghost,
+                                cj->hydro.super->black_holes.density_ghost);
+
+            scheduler_addunlock(sched,
+                                cj->hydro.super->black_holes.density_ghost,
+                                t_bh_swallow);
+            scheduler_addunlock(sched, t_bh_swallow,
+                                cj->hydro.super->black_holes.swallow_ghost[0]);
+
+            scheduler_addunlock(sched,
+                                cj->hydro.super->black_holes.swallow_ghost[0],
+                                t_do_swallow);
+            scheduler_addunlock(sched, t_do_swallow,
+                                cj->hydro.super->black_holes.swallow_ghost[1]);
+
+            scheduler_addunlock(sched,
+                                cj->hydro.super->black_holes.swallow_ghost[1],
                                 t_bh_feedback);
             scheduler_addunlock(sched, t_bh_feedback,
                                 cj->hydro.super->black_holes.black_holes_out);
@@ -1978,6 +2124,12 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
           scheduler_addunlock(sched, cj->hydro.super->stars.sorts,
                               t_star_feedback);
         }
+
+        if (with_black_holes && (bcount_i > 0 || bcount_j > 0)) {
+
+          scheduler_addunlock(sched, t_bh_swallow,
+                              cj->hydro.super->black_holes.swallow_ghost[0]);
+        }
       }
     }
 
@@ -1985,6 +2137,8 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
     else if (t_type == task_type_sub_self &&
              t_subtype == task_subtype_density) {
 
+      const int bcount_i = ci->black_holes.count;
+
       /* Make all density tasks depend on the drift and sorts. */
       scheduler_addunlock(sched, ci->hydro.super->hydro.drift, t);
       scheduler_addunlock(sched, ci->hydro.super->hydro.sorts, t);
@@ -2010,10 +2164,18 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
       }
 
       /* The black hole feedback tasks */
-      if (with_black_holes) {
+      if (with_black_holes && bcount_i > 0) {
         t_bh_density =
             scheduler_addtask(sched, task_type_sub_self,
                               task_subtype_bh_density, flags, 0, ci, NULL);
+        t_bh_swallow =
+            scheduler_addtask(sched, task_type_sub_self,
+                              task_subtype_bh_swallow, flags, 0, ci, NULL);
+
+        t_do_swallow =
+            scheduler_addtask(sched, task_type_sub_self,
+                              task_subtype_do_swallow, flags, 0, ci, NULL);
+
         t_bh_feedback =
             scheduler_addtask(sched, task_type_sub_self,
                               task_subtype_bh_feedback, flags, 0, ci, NULL);
@@ -2028,8 +2190,10 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
         engine_addlink(e, &ci->stars.density, t_star_density);
         engine_addlink(e, &ci->stars.feedback, t_star_feedback);
       }
-      if (with_black_holes) {
+      if (with_black_holes && bcount_i > 0) {
         engine_addlink(e, &ci->black_holes.density, t_bh_density);
+        engine_addlink(e, &ci->black_holes.swallow, t_bh_swallow);
+        engine_addlink(e, &ci->black_holes.do_swallow, t_do_swallow);
         engine_addlink(e, &ci->black_holes.feedback, t_bh_feedback);
       }
 
@@ -2078,7 +2242,7 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
                             ci->hydro.super->stars.stars_out);
       }
 
-      if (with_black_holes) {
+      if (with_black_holes && bcount_i > 0) {
 
         scheduler_addunlock(sched, ci->hydro.super->black_holes.drift,
                             t_bh_density);
@@ -2086,8 +2250,20 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
         scheduler_addunlock(sched, ci->hydro.super->black_holes.black_holes_in,
                             t_bh_density);
         scheduler_addunlock(sched, t_bh_density,
-                            ci->hydro.super->black_holes.ghost);
-        scheduler_addunlock(sched, ci->hydro.super->black_holes.ghost,
+                            ci->hydro.super->black_holes.density_ghost);
+
+        scheduler_addunlock(sched, ci->hydro.super->black_holes.density_ghost,
+                            t_bh_swallow);
+        scheduler_addunlock(sched, t_bh_swallow,
+                            ci->hydro.super->black_holes.swallow_ghost[0]);
+
+        scheduler_addunlock(
+            sched, ci->hydro.super->black_holes.swallow_ghost[0], t_do_swallow);
+        scheduler_addunlock(sched, t_do_swallow,
+                            ci->hydro.super->black_holes.swallow_ghost[1]);
+
+        scheduler_addunlock(sched,
+                            ci->hydro.super->black_holes.swallow_ghost[1],
                             t_bh_feedback);
         scheduler_addunlock(sched, t_bh_feedback,
                             ci->hydro.super->black_holes.black_holes_out);
@@ -2105,6 +2281,9 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
     else if (t_type == task_type_sub_pair &&
              t_subtype == task_subtype_density) {
 
+      const int bcount_i = ci->black_holes.count;
+      const int bcount_j = cj->black_holes.count;
+
       /* Make all density tasks depend on the drift */
       if (ci->nodeID == nodeID) {
         scheduler_addunlock(sched, ci->hydro.super->hydro.drift, t);
@@ -2140,10 +2319,16 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
       }
 
       /* The black hole feedback tasks */
-      if (with_black_holes) {
+      if (with_black_holes && (bcount_i > 0 || bcount_j > 0)) {
         t_bh_density =
             scheduler_addtask(sched, task_type_sub_pair,
                               task_subtype_bh_density, flags, 0, ci, cj);
+        t_bh_swallow =
+            scheduler_addtask(sched, task_type_sub_pair,
+                              task_subtype_bh_swallow, flags, 0, ci, cj);
+        t_do_swallow =
+            scheduler_addtask(sched, task_type_sub_pair,
+                              task_subtype_do_swallow, flags, 0, ci, cj);
         t_bh_feedback =
             scheduler_addtask(sched, task_type_sub_pair,
                               task_subtype_bh_feedback, flags, 0, ci, cj);
@@ -2161,9 +2346,13 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
         engine_addlink(e, &ci->stars.feedback, t_star_feedback);
         engine_addlink(e, &cj->stars.feedback, t_star_feedback);
       }
-      if (with_black_holes) {
+      if (with_black_holes && (bcount_i > 0 || bcount_j > 0)) {
         engine_addlink(e, &ci->black_holes.density, t_bh_density);
         engine_addlink(e, &cj->black_holes.density, t_bh_density);
+        engine_addlink(e, &ci->black_holes.swallow, t_bh_swallow);
+        engine_addlink(e, &cj->black_holes.swallow, t_bh_swallow);
+        engine_addlink(e, &ci->black_holes.do_swallow, t_do_swallow);
+        engine_addlink(e, &cj->black_holes.do_swallow, t_do_swallow);
         engine_addlink(e, &ci->black_holes.feedback, t_bh_feedback);
         engine_addlink(e, &cj->black_holes.feedback, t_bh_feedback);
       }
@@ -2234,7 +2423,7 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
                               ci->hydro.super->stars.stars_out);
         }
 
-        if (with_black_holes) {
+        if (with_black_holes && (bcount_i > 0 || bcount_j > 0)) {
 
           scheduler_addunlock(sched, ci->hydro.super->black_holes.drift,
                               t_bh_density);
@@ -2243,8 +2432,21 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
           scheduler_addunlock(
               sched, ci->hydro.super->black_holes.black_holes_in, t_bh_density);
           scheduler_addunlock(sched, t_bh_density,
-                              ci->hydro.super->black_holes.ghost);
-          scheduler_addunlock(sched, ci->hydro.super->black_holes.ghost,
+                              ci->hydro.super->black_holes.density_ghost);
+
+          scheduler_addunlock(sched, ci->hydro.super->black_holes.density_ghost,
+                              t_bh_swallow);
+          scheduler_addunlock(sched, t_bh_swallow,
+                              ci->hydro.super->black_holes.swallow_ghost[0]);
+
+          scheduler_addunlock(sched,
+                              ci->hydro.super->black_holes.swallow_ghost[0],
+                              t_do_swallow);
+          scheduler_addunlock(sched, t_do_swallow,
+                              ci->hydro.super->black_holes.swallow_ghost[1]);
+
+          scheduler_addunlock(sched,
+                              ci->hydro.super->black_holes.swallow_ghost[1],
                               t_bh_feedback);
           scheduler_addunlock(sched, t_bh_feedback,
                               ci->hydro.super->black_holes.black_holes_out);
@@ -2262,6 +2464,11 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
           scheduler_addunlock(sched, ci->hydro.super->stars.sorts,
                               t_star_feedback);
         }
+        if (with_black_holes && (bcount_i > 0 || bcount_j > 0)) {
+
+          scheduler_addunlock(sched, t_bh_swallow,
+                              ci->hydro.super->black_holes.swallow_ghost[0]);
+        }
       }
 
       if (cj->nodeID == nodeID) {
@@ -2288,7 +2495,7 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
                                 cj->hydro.super->stars.stars_out);
           }
 
-          if (with_black_holes) {
+          if (with_black_holes && (bcount_i > 0 || bcount_j > 0)) {
 
             scheduler_addunlock(sched, cj->hydro.super->black_holes.drift,
                                 t_bh_density);
@@ -2298,8 +2505,22 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
                                 cj->hydro.super->black_holes.black_holes_in,
                                 t_bh_density);
             scheduler_addunlock(sched, t_bh_density,
-                                cj->hydro.super->black_holes.ghost);
-            scheduler_addunlock(sched, cj->hydro.super->black_holes.ghost,
+                                cj->hydro.super->black_holes.density_ghost);
+
+            scheduler_addunlock(sched,
+                                cj->hydro.super->black_holes.density_ghost,
+                                t_bh_swallow);
+            scheduler_addunlock(sched, t_bh_swallow,
+                                cj->hydro.super->black_holes.swallow_ghost[0]);
+
+            scheduler_addunlock(sched,
+                                cj->hydro.super->black_holes.swallow_ghost[0],
+                                t_do_swallow);
+            scheduler_addunlock(sched, t_do_swallow,
+                                cj->hydro.super->black_holes.swallow_ghost[1]);
+
+            scheduler_addunlock(sched,
+                                cj->hydro.super->black_holes.swallow_ghost[1],
                                 t_bh_feedback);
             scheduler_addunlock(sched, t_bh_feedback,
                                 cj->hydro.super->black_holes.black_holes_out);
@@ -2316,6 +2537,11 @@ void engine_make_extra_hydroloop_tasks_mapper(void *map_data, int num_elements,
           scheduler_addunlock(sched, cj->hydro.super->stars.sorts,
                               t_star_feedback);
         }
+
+        if (with_black_holes && (bcount_i > 0 || bcount_j > 0)) {
+          scheduler_addunlock(sched, t_bh_swallow,
+                              cj->hydro.super->black_holes.swallow_ghost[0]);
+        }
       }
     }
   }
@@ -2489,7 +2715,10 @@ void engine_addtasks_send_mapper(void *map_data, int num_elements,
      * connection. */
     if ((e->policy & engine_policy_black_holes) &&
         (type & proxy_cell_type_hydro))
-      engine_addtasks_send_black_holes(e, ci, cj, /*t_feedback=*/NULL,
+      engine_addtasks_send_black_holes(e, ci, cj, /*t_rho=*/NULL,
+                                       /*t_swallow=*/NULL,
+                                       /*t_gas_swallow=*/NULL,
+                                       /*t_feedback=*/NULL,
                                        /*t_ti=*/NULL);
 
     /* Add the send tasks for the cells in the proxy that have a gravity
@@ -2528,7 +2757,11 @@ void engine_addtasks_recv_mapper(void *map_data, int num_elements,
      * connection. */
     if ((e->policy & engine_policy_black_holes) &&
         (type & proxy_cell_type_hydro))
-      engine_addtasks_recv_black_holes(e, ci, NULL, NULL);
+      engine_addtasks_recv_black_holes(e, ci, /*t_rho=*/NULL,
+                                       /*t_swallow=*/NULL,
+                                       /*t_gas_swallow=*/NULL,
+                                       /*t_feedback=*/NULL,
+                                       /*t_ti=*/NULL);
 
     /* Add the recv tasks for the cells in the proxy that have a gravity
      * connection. */
diff --git a/src/engine_marktasks.c b/src/engine_marktasks.c
index b8d96edbbd8e5385f0d02dfc98116bd267920370..73cb820d6b221940b22e2b7ef37ad613f51d8ff8 100644
--- a/src/engine_marktasks.c
+++ b/src/engine_marktasks.c
@@ -194,6 +194,30 @@ void engine_marktasks_mapper(void *map_data, int num_elements,
         }
       }
 
+      else if (t_type == task_type_self &&
+               t_subtype == task_subtype_bh_swallow) {
+        if (ci_active_black_holes) {
+          scheduler_activate(s, t);
+        }
+      }
+
+      else if (t_type == task_type_sub_self &&
+               t_subtype == task_subtype_bh_swallow) {
+        if (ci_active_black_holes) scheduler_activate(s, t);
+      }
+
+      else if (t_type == task_type_self &&
+               t_subtype == task_subtype_do_swallow) {
+        if (ci_active_black_holes) {
+          scheduler_activate(s, t);
+        }
+      }
+
+      else if (t_type == task_type_sub_self &&
+               t_subtype == task_subtype_do_swallow) {
+        if (ci_active_black_holes) scheduler_activate(s, t);
+      }
+
       else if (t_type == task_type_self &&
                t_subtype == task_subtype_bh_feedback) {
         if (ci_active_black_holes) {
@@ -378,30 +402,22 @@ void engine_marktasks_mapper(void *map_data, int num_elements,
       }
 
       /* Black_Holes density */
-      else if ((t_subtype == task_subtype_bh_density) &&
+      else if ((t_subtype == task_subtype_bh_density ||
+                t_subtype == task_subtype_bh_swallow ||
+                t_subtype == task_subtype_do_swallow ||
+                t_subtype == task_subtype_bh_feedback) &&
                (ci_active_black_holes || cj_active_black_holes) &&
                (ci_nodeID == nodeID || cj_nodeID == nodeID)) {
 
         scheduler_activate(s, t);
 
         /* Set the correct sorting flags */
-        if (t_type == task_type_pair) {
-
-          /* Do ci */
-          if (ci_active_black_holes) {
-
-            /* Activate the drift tasks. */
-            if (ci_nodeID == nodeID) cell_activate_drift_bpart(ci, s);
-            if (cj_nodeID == nodeID) cell_activate_drift_part(cj, s);
-          }
-
-          /* Do cj */
-          if (cj_active_black_holes) {
+        if (t_type == task_type_pair && t_subtype == task_subtype_bh_density) {
+          if (ci_nodeID == nodeID) cell_activate_drift_bpart(ci, s);
+          if (ci_nodeID == nodeID) cell_activate_drift_part(ci, s);
 
-            /* Activate the drift tasks. */
-            if (ci_nodeID == nodeID) cell_activate_drift_part(ci, s);
-            if (cj_nodeID == nodeID) cell_activate_drift_bpart(cj, s);
-          }
+          if (cj_nodeID == nodeID) cell_activate_drift_part(cj, s);
+          if (cj_nodeID == nodeID) cell_activate_drift_bpart(cj, s);
         }
 
         /* Store current values of dx_max and h_max. */
@@ -411,28 +427,6 @@ void engine_marktasks_mapper(void *map_data, int num_elements,
         }
       }
 
-      /* Black_Holes feedback */
-      else if (t_subtype == task_subtype_bh_feedback) {
-
-        /* We only want to activate the task if the cell is active and is
-           going to update some gas on the *local* node */
-        if ((ci_nodeID == nodeID && cj_nodeID == nodeID) &&
-            (ci_active_black_holes || cj_active_black_holes)) {
-
-          scheduler_activate(s, t);
-
-        } else if ((ci_nodeID == nodeID && cj_nodeID != nodeID) &&
-                   (cj_active_black_holes)) {
-
-          scheduler_activate(s, t);
-
-        } else if ((ci_nodeID != nodeID && cj_nodeID == nodeID) &&
-                   (ci_active_black_holes)) {
-
-          scheduler_activate(s, t);
-        }
-      }
-
       /* Gravity */
       else if ((t_subtype == task_subtype_grav) &&
                ((ci_active_gravity && ci_nodeID == nodeID) ||
@@ -669,68 +663,86 @@ void engine_marktasks_mapper(void *map_data, int num_elements,
         if (cell_need_rebuild_for_black_holes_pair(ci, cj)) *rebuild_space = 1;
         if (cell_need_rebuild_for_black_holes_pair(cj, ci)) *rebuild_space = 1;
 
+        scheduler_activate(s, ci->hydro.super->black_holes.swallow_ghost[0]);
+        scheduler_activate(s, cj->hydro.super->black_holes.swallow_ghost[0]);
+
 #ifdef WITH_MPI
         /* Activate the send/recv tasks. */
         if (ci_nodeID != nodeID) {
 
-          if (cj_active_black_holes) {
-            scheduler_activate_recv(s, ci->mpi.recv, task_subtype_xv);
+          /* Receive the foreign parts to compute BH accretion rates and do the
+           * swallowing */
+          scheduler_activate_recv(s, ci->mpi.recv, task_subtype_rho);
+          scheduler_activate_recv(s, ci->mpi.recv, task_subtype_part_swallow);
 
-            /* If the local cell is active, more stuff will be needed. */
-            scheduler_activate_send(s, cj->mpi.send, task_subtype_bpart,
-                                    ci_nodeID);
-            cell_activate_drift_bpart(cj, s);
+          /* Send the local BHs to tag the particles to swallow and to do
+           * feedback */
+          scheduler_activate_send(s, cj->mpi.send, task_subtype_bpart_rho,
+                                  ci_nodeID);
+          scheduler_activate_send(s, cj->mpi.send, task_subtype_bpart_feedback,
+                                  ci_nodeID);
 
-            /* If the local cell is active, send its ti_end values. */
-            scheduler_activate_send(s, cj->mpi.send, task_subtype_tend_bpart,
-                                    ci_nodeID);
-          }
+          /* Drift before you send */
+          cell_activate_drift_bpart(cj, s);
 
-          if (ci_active_black_holes) {
-            scheduler_activate_recv(s, ci->mpi.recv, task_subtype_bpart);
+          /* Send the new BH time-steps */
+          scheduler_activate_send(s, cj->mpi.send, task_subtype_tend_bpart,
+                                  ci_nodeID);
 
-            /* If the foreign cell is active, we want its ti_end values. */
-            scheduler_activate_recv(s, ci->mpi.recv, task_subtype_tend_bpart);
+          /* Receive the foreign BHs to tag particles to swallow and for
+           * feedback */
+          scheduler_activate_recv(s, ci->mpi.recv, task_subtype_bpart_rho);
+          scheduler_activate_recv(s, ci->mpi.recv, task_subtype_bpart_feedback);
 
-            /* Is the foreign cell active and will need stuff from us? */
-            scheduler_activate_send(s, cj->mpi.send, task_subtype_xv,
-                                    ci_nodeID);
+          /* Receive the foreign BH time-steps */
+          scheduler_activate_recv(s, ci->mpi.recv, task_subtype_tend_bpart);
 
-            /* Drift the cell which will be sent; note that not all sent
-               particles will be drifted, only those that are needed. */
-            cell_activate_drift_part(cj, s);
-          }
+          /* Send the local part information */
+          scheduler_activate_send(s, cj->mpi.send, task_subtype_rho, ci_nodeID);
+          scheduler_activate_send(s, cj->mpi.send, task_subtype_part_swallow,
+                                  ci_nodeID);
+
+          /* Drift the cell which will be sent; note that not all sent
+             particles will be drifted, only those that are needed. */
+          cell_activate_drift_part(cj, s);
 
         } else if (cj_nodeID != nodeID) {
 
-          /* If the local cell is active, receive data from the foreign cell. */
-          if (ci_active_black_holes) {
-            scheduler_activate_recv(s, cj->mpi.recv, task_subtype_xv);
+          /* Receive the foreign parts to compute BH accretion rates and do the
+           * swallowing */
+          scheduler_activate_recv(s, cj->mpi.recv, task_subtype_rho);
+          scheduler_activate_recv(s, cj->mpi.recv, task_subtype_part_swallow);
 
-            /* If the local cell is active, more stuff will be needed. */
-            scheduler_activate_send(s, ci->mpi.send, task_subtype_bpart,
-                                    cj_nodeID);
-            cell_activate_drift_bpart(ci, s);
+          /* Send the local BHs to tag the particles to swallow and to do
+           * feedback */
+          scheduler_activate_send(s, ci->mpi.send, task_subtype_bpart_rho,
+                                  cj_nodeID);
+          scheduler_activate_send(s, ci->mpi.send, task_subtype_bpart_feedback,
+                                  cj_nodeID);
 
-            /* If the local cell is active, send its ti_end values. */
-            scheduler_activate_send(s, ci->mpi.send, task_subtype_tend_bpart,
-                                    cj_nodeID);
-          }
+          /* Drift before you send */
+          cell_activate_drift_bpart(ci, s);
 
-          if (cj_active_black_holes) {
-            scheduler_activate_recv(s, cj->mpi.recv, task_subtype_bpart);
+          /* Send the new BH time-steps */
+          scheduler_activate_send(s, ci->mpi.send, task_subtype_tend_bpart,
+                                  cj_nodeID);
 
-            /* If the foreign cell is active, we want its ti_end values. */
-            scheduler_activate_recv(s, cj->mpi.recv, task_subtype_tend_bpart);
+          /* Receive the foreign BHs to tag particles to swallow and for
+           * feedback */
+          scheduler_activate_recv(s, cj->mpi.recv, task_subtype_bpart_rho);
+          scheduler_activate_recv(s, cj->mpi.recv, task_subtype_bpart_feedback);
 
-            /* Is the foreign cell active and will need stuff from us? */
-            scheduler_activate_send(s, ci->mpi.send, task_subtype_xv,
-                                    cj_nodeID);
+          /* Receive the foreign BH time-steps */
+          scheduler_activate_recv(s, cj->mpi.recv, task_subtype_tend_bpart);
 
-            /* Drift the cell which will be sent; note that not all sent
-               particles will be drifted, only those that are needed. */
-            cell_activate_drift_part(ci, s);
-          }
+          /* Send the local part information */
+          scheduler_activate_send(s, ci->mpi.send, task_subtype_rho, cj_nodeID);
+          scheduler_activate_send(s, ci->mpi.send, task_subtype_part_swallow,
+                                  cj_nodeID);
+
+          /* Drift the cell which will be sent; note that not all sent
+             particles will be drifted, only those that are needed. */
+          cell_activate_drift_part(ci, s);
         }
 #endif
       }
@@ -878,7 +890,9 @@ void engine_marktasks_mapper(void *map_data, int num_elements,
     }
 
     /* Black hole ghost tasks ? */
-    else if (t_type == task_type_bh_ghost) {
+    else if (t_type == task_type_bh_density_ghost ||
+             t_type == task_type_bh_swallow_ghost1 ||
+             t_type == task_type_bh_swallow_ghost2) {
       if (cell_is_active_black_holes(t->ci, e)) scheduler_activate(s, t);
     }
 
diff --git a/src/feedback/EAGLE/feedback.c b/src/feedback/EAGLE/feedback.c
index 531f887a74bca8565e83a15336c538d5eeb846ae..cf6848760fb9827718befe48e43ab53cfe77f8ef 100644
--- a/src/feedback/EAGLE/feedback.c
+++ b/src/feedback/EAGLE/feedback.c
@@ -140,9 +140,13 @@ INLINE static void compute_SNII_feedback(
   /* Time after birth considered for SNII feedback (internal units) */
   const double SNII_wind_delay = feedback_props->SNII_wind_delay;
 
-  /* Are we doing feedback this step? */
-  if (star_age <= SNII_wind_delay && (star_age + dt) > SNII_wind_delay) {
+  /* Are we doing feedback this step?
+   * Note that since the ages are calculated using an interpolation table we
+   * must allow some tolerance here. */
+  if (star_age <= SNII_wind_delay &&
+      (star_age + 1.0001 * dt) > SNII_wind_delay) {
 
+    /* Make sure a star does not do feedback twice! */
     if (sp->f_E != -1.f) {
 #ifdef SWIFT_DEBUG_CHECKS
       message("Star has already done feedback! sp->id=%lld age=%e d=%e", sp->id,
diff --git a/src/feedback/EAGLE/feedback_iact.h b/src/feedback/EAGLE/feedback_iact.h
index 05d8ff43d446ba19fcd8f514cdb657e8119c323c..fa5507df74777077ebcbf9cab6de0750768460eb 100644
--- a/src/feedback/EAGLE/feedback_iact.h
+++ b/src/feedback/EAGLE/feedback_iact.h
@@ -301,7 +301,7 @@ runner_iact_nonsym_feedback_apply(const float r2, const float *dx,
       hydro_set_drifted_physical_internal_energy(pj, cosmo, u_new);
 
       /* Impose maximal viscosity */
-      hydro_set_viscosity_alpha_max_feedback(pj);
+      hydro_diffusive_feedback_reset(pj);
 
       /* message( */
       /*     "We did some heating! id %llu star id %llu probability %.5e " */
diff --git a/src/fof.c b/src/fof.c
index 93797508731cc6b0d8eaf17f4f3c25b6a0be8210..9dbe7416e2eb537dff2216a17bf85bbac616084b 100644
--- a/src/fof.c
+++ b/src/fof.c
@@ -50,6 +50,9 @@
 #define UNION_BY_SIZE_OVER_MPI (1)
 #define FOF_COMPRESS_PATHS_MIN_LENGTH (2)
 
+/* Are we timing the FOF? */
+#define WITH_FOF_TIMING
+
 /**
  * @brief Properties of a group used for black hole seeding
  */
@@ -373,6 +376,7 @@ __attribute__((always_inline)) INLINE static size_t fof_find_global(
 
 #endif /* WITH_MPI */
 
+#ifndef WITH_FOF_TIMING
 /**
  * @brief   Finds the local root ID of the group a particle exists in
  * when group_index contains globally unique identifiers -
@@ -385,28 +389,29 @@ __attribute__((always_inline)) INLINE static size_t fof_find_global(
  * @param nr_gparts The number of g-particles on this node.
  * @param group_index Array of group root indices.
  */
-//__attribute__((always_inline)) INLINE static size_t fof_find_local(
-//    const size_t i, const size_t nr_gparts, const size_t *group_index) {
-//#ifdef WITH_MPI
-//  size_t root = node_offset + i;
-//
-//  while ((group_index[root - node_offset] != root) &&
-//         (group_index[root - node_offset] >= node_offset) &&
-//         (group_index[root - node_offset] < node_offset + nr_gparts)) {
-//    root = group_index[root - node_offset];
-//  }
-//
-//  return root - node_offset;
-//#else
-//  size_t root = i;
-//
-//  while ((group_index[root] != root) && (group_index[root] < nr_gparts)) {
-//    root = group_index[root];
-//  }
-//
-//  return root;
-//#endif
-//}
+__attribute__((always_inline)) INLINE static size_t fof_find_local(
+    const size_t i, const size_t nr_gparts, const size_t *group_index) {
+#ifdef WITH_MPI
+  size_t root = node_offset + i;
+
+  while ((group_index[root - node_offset] != root) &&
+         (group_index[root - node_offset] >= node_offset) &&
+         (group_index[root - node_offset] < node_offset + nr_gparts)) {
+    root = group_index[root - node_offset];
+  }
+
+  return root - node_offset;
+#else
+  size_t root = i;
+
+  while ((group_index[root] != root) && (group_index[root] < nr_gparts)) {
+    root = group_index[root];
+  }
+
+  return root;
+#endif
+}
+#endif /* #ifndef WITH_FOF_TIMING */
 
 /**
  * @brief Finds the local root ID of the group a particle exists in.
@@ -2105,30 +2110,10 @@ void fof_search_foreign_cells(struct fof_props *props, const struct space *s) {
   if (verbose)
     message(
         "Finding local/foreign cell pairs and initialising particle roots "
-        "took: "
-        "%.3f %s.",
+        "took: %.3f %s.",
         clocks_from_ticks(getticks() - tic), clocks_getunit());
 
-  tic = getticks();
-
-  struct scheduler *sched = &e->sched;
-  struct task *tasks = sched->tasks;
-
-  /* Activate the send and receive tasks for the gparts. */
-  for (int i = 0; i < sched->nr_tasks; i++) {
-
-    struct task *t = &tasks[i];
-
-    if ((t->type == task_type_send && t->subtype == task_subtype_gpart) ||
-        (t->type == task_type_recv && t->subtype == task_subtype_gpart)) {
-      scheduler_activate(sched, t);
-    } else
-      t->skip = 1;
-  }
-
-  if (verbose)
-    message("MPI send/recv task activation took: %.3f %s.",
-            clocks_from_ticks(getticks() - tic), clocks_getunit());
+  engine_activate_gpart_comms(e);
 
   ticks local_fof_tic = getticks();
 
@@ -2411,7 +2396,11 @@ void fof_search_tree(struct fof_props *props,
 
   const size_t nr_gparts = s->nr_gparts;
   const size_t min_group_size = props->min_group_size;
-  //const size_t group_id_offset = props->group_id_offset;
+#ifndef WITH_FOF_TIMING
+  const size_t group_id_offset = props->group_id_offset;
+  const size_t group_id_default = props->group_id_default;
+#endif
+
 #ifdef WITH_MPI
   const int nr_nodes = s->e->nr_nodes;
 #endif
@@ -2431,8 +2420,6 @@ void fof_search_tree(struct fof_props *props,
   if (engine_rank == 0 && verbose)
     message("Size of hash table element: %ld", sizeof(hashmap_element_t));
 
-  //const size_t group_id_default = props->group_id_default;
-
 #ifdef WITH_MPI
 
   /* Reset global variable */
@@ -2441,15 +2428,15 @@ void fof_search_tree(struct fof_props *props,
   /* Determine number of gparts on lower numbered MPI ranks */
   long long nr_gparts_cumulative;
   long long nr_gparts_local = s->nr_gparts;
-  
+
   ticks comms_tic = getticks();
 
   MPI_Scan(&nr_gparts_local, &nr_gparts_cumulative, 1, MPI_LONG_LONG, MPI_SUM,
            MPI_COMM_WORLD);
-  
-  //if (verbose)
+
+  // if (verbose)
   message("MPI_Scan Imbalance took: %.3f %s.",
-      clocks_from_ticks(getticks() - comms_tic), clocks_getunit());
+          clocks_from_ticks(getticks() - comms_tic), clocks_getunit());
 
   node_offset = nr_gparts_cumulative - nr_gparts_local;
 
@@ -2470,8 +2457,8 @@ void fof_search_tree(struct fof_props *props,
                  nr_gparts, sizeof(struct gpart), 0, s);
 
   message("FOF calc group size took (FOF SCALING): %.3f %s.",
-      clocks_from_ticks(getticks() - tic_calc_group_size),
-      clocks_getunit());
+          clocks_from_ticks(getticks() - tic_calc_group_size),
+          clocks_getunit());
 
 #ifdef WITH_MPI
   if (nr_nodes > 1) {
@@ -2481,16 +2468,21 @@ void fof_search_tree(struct fof_props *props,
     /* Search for group links across MPI domains. */
     fof_search_foreign_cells(props, s);
 
-    message("fof_search_foreign_cells() took (FOF SCALING): %.3f %s.",       
+    message("fof_search_foreign_cells() took (FOF SCALING): %.3f %s.",
             clocks_from_ticks(getticks() - tic_mpi), clocks_getunit());
-    
-    message("fof_search_foreign_cells() + calc_group_size took (FOF SCALING): %.3f %s.",       
-            clocks_from_ticks(getticks() - tic_total), clocks_getunit());
+
+    message(
+        "fof_search_foreign_cells() + calc_group_size took (FOF SCALING): %.3f "
+        "%s.",
+        clocks_from_ticks(getticks() - tic_total), clocks_getunit());
   }
 #endif
 
-  size_t num_groups_local = 0;//, num_parts_in_groups_local = 0,
-  //       max_group_size_local = 0;
+  size_t num_groups_local = 0;
+#ifndef WITH_FOF_TIMING
+  size_t num_parts_in_groups_local = 0;
+  size_t max_group_size_local = 0;
+#endif
 
   ticks tic_num_groups_calc = getticks();
 
@@ -2506,256 +2498,264 @@ void fof_search_tree(struct fof_props *props,
       num_groups_local++;
 #endif
 
-//    /* Find the total number of particles in groups. */
-//    if (group_size[i] >= min_group_size)
-//      num_parts_in_groups_local += group_size[i];
-//
-//    /* Find the largest group. */
-//    if (group_size[i] > max_group_size_local)
-//      max_group_size_local = group_size[i];
+#ifndef WITH_FOF_TIMING
+    /* Find the total number of particles in groups. */
+    if (group_size[i] >= min_group_size)
+      num_parts_in_groups_local += group_size[i];
+
+    /* Find the largest group. */
+    if (group_size[i] > max_group_size_local)
+      max_group_size_local = group_size[i];
+#endif
   }
     
-  message("Calculating the total no. of local groups took: (FOF SCALING): %.3f %s.",       
-            clocks_from_ticks(getticks() - tic_num_groups_calc), clocks_getunit());
+  message(
+      "Calculating the total no. of local groups took: (FOF SCALING): %.3f %s.",
+      clocks_from_ticks(getticks() - tic_num_groups_calc), clocks_getunit());
 
   /* Sort the groups in descending order based upon size and re-label their IDs
    * 0-num_groups. */
-//  struct group_length *high_group_sizes = NULL;
-//  int group_count = 0;
-//
-//  if (swift_memalign("fof_high_group_sizes", (void **)&high_group_sizes, 32,
-//                     num_groups_local * sizeof(struct group_length)) != 0)
-//    error("Failed to allocate list of large groups.");
-//
-//  /* Store the group_sizes and their offset. */
-//  for (size_t i = 0; i < nr_gparts; i++) {
-//
-//#ifdef WITH_MPI
-//    if (group_index[i] == i + node_offset && group_size[i] >= min_group_size) {
-//      high_group_sizes[group_count].index = node_offset + i;
-//      high_group_sizes[group_count++].size = group_size[i];
-//    }
-//#else
-//    if (group_index[i] == i && group_size[i] >= min_group_size) {
-//      high_group_sizes[group_count].index = i;
-//      high_group_sizes[group_count++].size = group_size[i];
-//    }
-//#endif
-//  }
-
-//  ticks tic = getticks();
-//
-//  /* Find global properties. */
+#ifndef WITH_FOF_TIMING
+  struct group_length *high_group_sizes = NULL;
+  int group_count = 0;
+
+  if (swift_memalign("fof_high_group_sizes", (void **)&high_group_sizes, 32,
+                     num_groups_local * sizeof(struct group_length)) != 0)
+    error("Failed to allocate list of large groups.");
+
+  /* Store the group_sizes and their offset. */
+  for (size_t i = 0; i < nr_gparts; i++) {
+
+#ifdef WITH_MPI
+    if (group_index[i] == i + node_offset && group_size[i] >= min_group_size) {
+      high_group_sizes[group_count].index = node_offset + i;
+      high_group_sizes[group_count++].size = group_size[i];
+    }
+#else
+    if (group_index[i] == i && group_size[i] >= min_group_size) {
+      high_group_sizes[group_count].index = i;
+      high_group_sizes[group_count++].size = group_size[i];
+    }
+#endif
+  }
+
+  ticks tic = getticks();
+#endif /* #ifndef WITH_FOF_TIMING */
+
+  /* Find global properties. */
 #ifdef WITH_MPI
   MPI_Allreduce(&num_groups_local, &num_groups, 1, MPI_INT, MPI_SUM,
                 MPI_COMM_WORLD);
 
-  message("Finding the total no. of groups took: (FOF SCALING): %.3f %s.",       
-            clocks_from_ticks(getticks() - tic_num_groups_calc), clocks_getunit());
+  message("Finding the total no. of groups took: (FOF SCALING): %.3f %s.",
+          clocks_from_ticks(getticks() - tic_num_groups_calc),
+          clocks_getunit());
 
-  //MPI_Reduce(&num_parts_in_groups_local, &num_parts_in_groups, 1, MPI_INT,
-  //           MPI_SUM, 0, MPI_COMM_WORLD);
-  //MPI_Reduce(&max_group_size_local, &max_group_size, 1, MPI_INT, MPI_MAX, 0,
-  //           MPI_COMM_WORLD);
+#ifndef WITH_FOF_TIMING
+  MPI_Reduce(&num_parts_in_groups_local, &num_parts_in_groups, 1, MPI_INT,
+             MPI_SUM, 0, MPI_COMM_WORLD);
+  MPI_Reduce(&max_group_size_local, &max_group_size, 1, MPI_INT, MPI_MAX, 0,
+             MPI_COMM_WORLD);
+#endif /* #ifndef WITH_FOF_TIMING */
 #else
   num_groups = num_groups_local;
-  //num_parts_in_groups = num_parts_in_groups_local;
-  //max_group_size = max_group_size_local;
+
+#ifndef WITH_FOF_TIMING
+  num_parts_in_groups = num_parts_in_groups_local;
+  max_group_size = max_group_size_local;
+#endif /* #ifndef WITH_FOF_TIMING */
 #endif /* WITH_MPI */
   props->num_groups = num_groups;
-//
-//  /* Find number of groups on lower numbered MPI ranks */
-//#ifdef WITH_MPI
-//  long long nglocal = num_groups_local;
-//  long long ngsum;
-//  MPI_Scan(&nglocal, &ngsum, 1, MPI_LONG_LONG, MPI_SUM, MPI_COMM_WORLD);
-//  const size_t num_groups_prev = (size_t)(ngsum - nglocal);
-//#endif /* WITH_MPI */
-//
-//  /* Sort local groups into descending order of size */
-//  qsort(high_group_sizes, num_groups_local, sizeof(struct group_length),
-//        cmp_func_group_size);
-//
-//  /* Set default group ID for all particles */
-//  for (size_t i = 0; i < nr_gparts; i++) gparts[i].group_id = group_id_default;
-//
-//  /*
-//    Assign final group IDs to local root particles where the global root is on
-//    this node and the group is large enough. Within a node IDs are assigned in
-//    descending order of particle number.
-//  */
-//  for (size_t i = 0; i < num_groups_local; i++) {
-//#ifdef WITH_MPI
-//    gparts[high_group_sizes[i].index - node_offset].group_id =
-//        group_id_offset + i + num_groups_prev;
-//#else
-//    gparts[high_group_sizes[i].index].group_id = group_id_offset + i;
-//#endif
-//  }
-//
-//#ifdef WITH_MPI
-//  /*
-//     Now, for each local root where the global root is on some other node
-//     AND the total size of the group is >= min_group_size we need to retrieve
-//     the gparts.group_id we just assigned to the global root.
-//
-//     Will do that by sending the group_index of these lcoal roots to the node
-//     where their global root is stored and receiving back the new group_id
-//     associated with that particle.
-//  */
-//
-//  /*
-//     Identify local roots with global root on another node and large enough
-//     group_size. Store index of the local and global roots in these cases.
-//
-//     NOTE: if group_size only contains the total FoF mass for global roots,
-//     then we have to communicate ALL fragments where the global root is not
-//     on this node. Hence the commented out extra conditions below.
-//  */
-//  size_t nsend = 0;
-//  for (size_t i = 0; i < nr_gparts; i += 1) {
-//    if ((!is_local(group_index[i],
-//                   nr_gparts))) { /* && (group_size[i] >= min_group_size)) { */
-//      nsend += 1;
-//    }
-//  }
-//  struct fof_final_index *fof_index_send =
-//      (struct fof_final_index *)swift_malloc(
-//          "fof_index_send", sizeof(struct fof_final_index) * nsend);
-//  nsend = 0;
-//  for (size_t i = 0; i < nr_gparts; i += 1) {
-//    if ((!is_local(group_index[i],
-//                   nr_gparts))) { /* && (group_size[i] >= min_group_size)) { */
-//      fof_index_send[nsend].local_root = node_offset + i;
-//      fof_index_send[nsend].global_root = group_index[i];
-//      nsend += 1;
-//    }
-//  }
-//
-//  /* Sort by global root - this puts the groups in order of which node they're
-//   * stored on */
-//  qsort(fof_index_send, nsend, sizeof(struct fof_final_index),
-//        compare_fof_final_index_global_root);
-//
-//  /* Determine range of global indexes (i.e. particles) on each node */
-//  size_t *num_on_node = (size_t *)malloc(nr_nodes * sizeof(size_t));
-//  MPI_Allgather(&nr_gparts, sizeof(size_t), MPI_BYTE, num_on_node,
-//                sizeof(size_t), MPI_BYTE, MPI_COMM_WORLD);
-//  size_t *first_on_node = (size_t *)malloc(nr_nodes * sizeof(size_t));
-//  first_on_node[0] = 0;
-//  for (int i = 1; i < nr_nodes; i += 1)
-//    first_on_node[i] = first_on_node[i - 1] + num_on_node[i - 1];
-//
-//  /* Determine how many entries go to each node */
-//  int *sendcount = (int *)malloc(nr_nodes * sizeof(int));
-//  for (int i = 0; i < nr_nodes; i += 1) sendcount[i] = 0;
-//  int dest = 0;
-//  for (size_t i = 0; i < nsend; i += 1) {
-//    while ((fof_index_send[i].global_root >=
-//            first_on_node[dest] + num_on_node[dest]) ||
-//           (num_on_node[dest] == 0))
-//      dest += 1;
-//    if (dest >= nr_nodes) error("Node index out of range!");
-//    sendcount[dest] += 1;
-//  }
-//
-//  int *recvcount = NULL, *sendoffset = NULL, *recvoffset = NULL;
-//  size_t nrecv = 0;
-//
-//  fof_compute_send_recv_offsets(nr_nodes, sendcount, &recvcount, &sendoffset,
-//                                &recvoffset, &nrecv);
-//
-//  struct fof_final_index *fof_index_recv =
-//      (struct fof_final_index *)swift_malloc(
-//          "fof_index_recv", nrecv * sizeof(struct fof_final_index));
-//
-//  /* Exchange group indexes */
-//  MPI_Alltoallv(fof_index_send, sendcount, sendoffset, fof_final_index_type,
-//                fof_index_recv, recvcount, recvoffset, fof_final_index_type,
-//                MPI_COMM_WORLD);
-//
-//  /* For each received global root, look up the group ID we assigned and store
-//   * it in the struct */
-//  for (size_t i = 0; i < nrecv; i += 1) {
-//    if ((fof_index_recv[i].global_root < node_offset) ||
-//        (fof_index_recv[i].global_root >= node_offset + nr_gparts)) {
-//      error("Received global root index out of range!");
-//    }
-//    fof_index_recv[i].global_root =
-//        gparts[fof_index_recv[i].global_root - node_offset].group_id;
-//  }
-//
-//  /* Send the result back */
-//  MPI_Alltoallv(fof_index_recv, recvcount, recvoffset, fof_final_index_type,
-//                fof_index_send, sendcount, sendoffset, fof_final_index_type,
-//                MPI_COMM_WORLD);
-//
-//  /* Update local gparts.group_id */
-//  for (size_t i = 0; i < nsend; i += 1) {
-//    if ((fof_index_send[i].local_root < node_offset) ||
-//        (fof_index_send[i].local_root >= node_offset + nr_gparts)) {
-//      error("Sent local root index out of range!");
-//    }
-//    gparts[fof_index_send[i].local_root - node_offset].group_id =
-//        fof_index_send[i].global_root;
-//  }
-//
-//  free(sendcount);
-//  free(recvcount);
-//  free(sendoffset);
-//  free(recvoffset);
-//  swift_free("fof_index_send", fof_index_send);
-//  swift_free("fof_index_recv", fof_index_recv);
-//
-//#endif /* WITH_MPI */
-//
-//  /* Assign every particle the group_id of its local root. */
-//  for (size_t i = 0; i < nr_gparts; i++) {
-//    const size_t root = fof_find_local(i, nr_gparts, group_index);
-//    gparts[i].group_id = gparts[root].group_id;
-//  }
-//
-//  if (verbose)
-//    message("Group sorting took: %.3f %s.", clocks_from_ticks(getticks() - tic),
-//            clocks_getunit());
-//
-//  /* Allocate and initialise a group mass array. */
-//  if (swift_memalign("group_mass", (void **)&props->group_mass, 32,
-//                     num_groups_local * sizeof(double)) != 0)
-//    error("Failed to allocate list of group masses for FOF search.");
-//
-//  bzero(props->group_mass, num_groups_local * sizeof(double));
-//
-//  ticks tic_seeding = getticks();
-//
-//  double *group_mass = props->group_mass;
-//#ifdef WITH_MPI
-//  fof_calc_group_mass(props, s, num_groups_local, num_groups_prev, num_on_node,
-//                      first_on_node, group_mass);
-//  free(num_on_node);
-//  free(first_on_node);
-//#else
-//  fof_calc_group_mass(props, s, num_groups_local, 0, NULL, NULL, group_mass);
-//#endif
-//
-//  if (verbose)
-//    message("Black hole seeding took: %.3f %s.",
-//            clocks_from_ticks(getticks() - tic_seeding), clocks_getunit());
-//
-//  /* Dump group data. */
-//  if (dump_results) {
-//    fof_dump_group_data(props, output_file_name, s, num_groups_local,
-//                        high_group_sizes);
-//  }
-//
-//  /* Seed black holes */
-//  if (seed_black_holes) {
-//    fof_seed_black_holes(props, bh_props, constants, cosmo, s, num_groups_local,
-//                         high_group_sizes);
-//  }
+
+#ifndef WITH_FOF_TIMING
+
+  /* Find number of groups on lower numbered MPI ranks */
+#ifdef WITH_MPI
+  long long nglocal = num_groups_local;
+  long long ngsum;
+  MPI_Scan(&nglocal, &ngsum, 1, MPI_LONG_LONG, MPI_SUM, MPI_COMM_WORLD);
+  const size_t num_groups_prev = (size_t)(ngsum - nglocal);
+#endif /* WITH_MPI */
+
+  /* Set default group ID for all particles */
+  for (size_t i = 0; i < nr_gparts; i++) gparts[i].group_id = group_id_default;
+
+  /* Assign final group IDs to local root particles where the global root is
+   * on this node and the group is large enough. Within a node IDs are
+   * assigned in descending order of particle number. */
+  for (size_t i = 0; i < num_groups_local; i++) {
+#ifdef WITH_MPI
+    gparts[high_group_sizes[i].index - node_offset].group_id =
+        group_id_offset + i + num_groups_prev;
+#else
+    gparts[high_group_sizes[i].index].group_id = group_id_offset + i;
+#endif
+  }
+
+#ifdef WITH_MPI
+
+  /* Now, for each local root where the global root is on some other node
+   * AND the total size of the group is >= min_group_size we need to
+   * retrieve the gparts.group_id we just assigned to the global root.
+   *
+   * Will do that by sending the group_index of these local roots to the
+   * node where their global root is stored and receiving back the new
+   * group_id associated with that particle.
+   *
+   * Identify local roots with global root on another node and large enough
+   * group_size. Store index of the local and global roots in these cases.
+   *
+   * NOTE: if group_size only contains the total FoF mass for global roots,
+   * then we have to communicate ALL fragments where the global root is not
+   * on this node. Hence the commented-out extra conditions below. */
+  size_t nsend = 0;
+  for (size_t i = 0; i < nr_gparts; i += 1) {
+    if ((!is_local(group_index[i],
+                   nr_gparts))) { /* && (group_size[i] >= min_group_size)) { */
+      nsend += 1;
+    }
+  }
+  struct fof_final_index *fof_index_send =
+      (struct fof_final_index *)swift_malloc(
+          "fof_index_send", sizeof(struct fof_final_index) * nsend);
+  nsend = 0;
+  for (size_t i = 0; i < nr_gparts; i += 1) {
+    if ((!is_local(group_index[i],
+                   nr_gparts))) { /* && (group_size[i] >= min_group_size)) { */
+      fof_index_send[nsend].local_root = node_offset + i;
+      fof_index_send[nsend].global_root = group_index[i];
+      nsend += 1;
+    }
+  }
+
+  /* Sort by global root - this puts the groups in order of which node they're
+   * stored on */
+  qsort(fof_index_send, nsend, sizeof(struct fof_final_index),
+        compare_fof_final_index_global_root);
+
+  /* Determine range of global indexes (i.e. particles) on each node */
+  size_t *num_on_node = (size_t *)malloc(nr_nodes * sizeof(size_t));
+  MPI_Allgather(&nr_gparts, sizeof(size_t), MPI_BYTE, num_on_node,
+                sizeof(size_t), MPI_BYTE, MPI_COMM_WORLD);
+  size_t *first_on_node = (size_t *)malloc(nr_nodes * sizeof(size_t));
+  first_on_node[0] = 0;
+  for (int i = 1; i < nr_nodes; i += 1)
+    first_on_node[i] = first_on_node[i - 1] + num_on_node[i - 1];
+
+  /* Determine how many entries go to each node */
+  int *sendcount = (int *)malloc(nr_nodes * sizeof(int));
+  for (int i = 0; i < nr_nodes; i += 1) sendcount[i] = 0;
+  int dest = 0;
+  for (size_t i = 0; i < nsend; i += 1) {
+    while ((fof_index_send[i].global_root >=
+            first_on_node[dest] + num_on_node[dest]) ||
+           (num_on_node[dest] == 0))
+      dest += 1;
+    if (dest >= nr_nodes) error("Node index out of range!");
+    sendcount[dest] += 1;
+  }
+
+  int *recvcount = NULL, *sendoffset = NULL, *recvoffset = NULL;
+  size_t nrecv = 0;
+
+  fof_compute_send_recv_offsets(nr_nodes, sendcount, &recvcount, &sendoffset,
+                                &recvoffset, &nrecv);
+
+  struct fof_final_index *fof_index_recv =
+      (struct fof_final_index *)swift_malloc(
+          "fof_index_recv", nrecv * sizeof(struct fof_final_index));
+
+  /* Exchange group indexes */
+  MPI_Alltoallv(fof_index_send, sendcount, sendoffset, fof_final_index_type,
+                fof_index_recv, recvcount, recvoffset, fof_final_index_type,
+                MPI_COMM_WORLD);
+
+  /* For each received global root, look up the group ID we assigned and store
+   * it in the struct */
+  for (size_t i = 0; i < nrecv; i += 1) {
+    if ((fof_index_recv[i].global_root < node_offset) ||
+        (fof_index_recv[i].global_root >= node_offset + nr_gparts)) {
+      error("Received global root index out of range!");
+    }
+    fof_index_recv[i].global_root =
+        gparts[fof_index_recv[i].global_root - node_offset].group_id;
+  }
+
+  /* Send the result back */
+  MPI_Alltoallv(fof_index_recv, recvcount, recvoffset, fof_final_index_type,
+                fof_index_send, sendcount, sendoffset, fof_final_index_type,
+                MPI_COMM_WORLD);
+
+  /* Update local gparts.group_id */
+  for (size_t i = 0; i < nsend; i += 1) {
+    if ((fof_index_send[i].local_root < node_offset) ||
+        (fof_index_send[i].local_root >= node_offset + nr_gparts)) {
+      error("Sent local root index out of range!");
+    }
+    gparts[fof_index_send[i].local_root - node_offset].group_id =
+        fof_index_send[i].global_root;
+  }
+
+  free(sendcount);
+  free(recvcount);
+  free(sendoffset);
+  free(recvoffset);
+  swift_free("fof_index_send", fof_index_send);
+  swift_free("fof_index_recv", fof_index_recv);
+
+#endif /* WITH_MPI */
+
+  /* Assign every particle the group_id of its local root. */
+  for (size_t i = 0; i < nr_gparts; i++) {
+    const size_t root = fof_find_local(i, nr_gparts, group_index);
+    gparts[i].group_id = gparts[root].group_id;
+  }
+
+  if (verbose)
+    message("Group sorting took: %.3f %s.", clocks_from_ticks(getticks() - tic),
+            clocks_getunit());
+
+  /* Allocate and initialise a group mass array. */
+  if (swift_memalign("group_mass", (void **)&props->group_mass, 32,
+                     num_groups_local * sizeof(double)) != 0)
+    error("Failed to allocate list of group masses for FOF search.");
+
+  bzero(props->group_mass, num_groups_local * sizeof(double));
+
+  ticks tic_seeding = getticks();
+
+  double *group_mass = props->group_mass;
+#ifdef WITH_MPI
+  fof_calc_group_mass(props, s, num_groups_local, num_groups_prev, num_on_node,
+                      first_on_node, group_mass);
+  free(num_on_node);
+  free(first_on_node);
+#else
+  fof_calc_group_mass(props, s, num_groups_local, 0, NULL, NULL, group_mass);
+#endif
+
+  if (verbose)
+    message("Black hole seeding took: %.3f %s.",
+            clocks_from_ticks(getticks() - tic_seeding), clocks_getunit());
+
+  /* Dump group data. */
+  if (dump_results) {
+    fof_dump_group_data(props, output_file_name, s, num_groups_local,
+                        high_group_sizes);
+  }
+
+  /* Seed black holes */
+  if (seed_black_holes) {
+    fof_seed_black_holes(props, bh_props, constants, cosmo, s, num_groups_local,
+                         high_group_sizes);
+  }
 
   /* Free the left-overs */
-  //swift_free("fof_high_group_sizes", high_group_sizes);
+  swift_free("fof_high_group_sizes", high_group_sizes);
+#endif /* #ifndef WITH_FOF_TIMING */
   swift_free("fof_group_index", props->group_index);
   swift_free("fof_group_size", props->group_size);
   swift_free("fof_group_mass", props->group_mass);
diff --git a/src/hashmap.h b/src/hashmap.h
index 40bb309e552266942afdacd79b5cb8268132a74f..a4b89ebaf9434ce6f753ba124c0c59969b807b56 100644
--- a/src/hashmap.h
+++ b/src/hashmap.h
@@ -127,10 +127,11 @@ void hashmap_init(hashmap_t *m);
  * capacity, since it will grow if too many collisions occur. As a rule
  * of thumb, allocate twice as many elements as you think you will need.
  *
- * @param size New table size. If zero, the current size will be increase
- *             by a fixed rate.
+ * @param m The hashmap to grow.
+ * @param new_size New table size. If zero, the current size will be increased
+ *                 by a fixed rate.
  */
-void hashmap_grow(hashmap_t *m, size_t size);
+void hashmap_grow(hashmap_t *m, size_t new_size);
 
 /**
  * @brief Add a key/value pair to the hashmap, overwriting whatever was
diff --git a/src/hydro/AnarchyDU/hydro.h b/src/hydro/AnarchyDU/hydro.h
index 8f14e583f6841ffaf73840b8c04bc6bc5ec96ddf..53c66e316136b172f7610dd6f066c6e841421ced 100644
--- a/src/hydro/AnarchyDU/hydro.h
+++ b/src/hydro/AnarchyDU/hydro.h
@@ -423,15 +423,18 @@ __attribute__((always_inline)) INLINE static void hydro_set_viscosity_alpha(
 }
 
 /**
- * @brief Update the value of the viscosity alpha to the
+ * @brief Update the value of the diffusive coefficients to the
  *        feedback reset value for the scheme.
  *
  * @param p the particle of interest
  */
 __attribute__((always_inline)) INLINE static void
-hydro_set_viscosity_alpha_max_feedback(struct part *restrict p) {
+hydro_diffusive_feedback_reset(struct part *restrict p) {
+  /* Set the viscosity to the max, and the diffusion to the min */
   hydro_set_viscosity_alpha(p,
                             hydro_props_default_viscosity_alpha_feedback_reset);
+
+  p->diffusion.alpha = hydro_props_default_diffusion_alpha_feedback_reset;
 }
 
 /**
diff --git a/src/hydro/AnarchyDU/hydro_parameters.h b/src/hydro/AnarchyDU/hydro_parameters.h
index d2ce2167cb5cf02a1189bbbf8a6327ed5f5bff04..63af6503145066e757f3a3827db0f71d9f841770 100644
--- a/src/hydro/AnarchyDU/hydro_parameters.h
+++ b/src/hydro/AnarchyDU/hydro_parameters.h
@@ -76,6 +76,14 @@
  * sets the decay length for the diffusion. */
 #define hydro_props_default_viscosity_length 0.25f
 
+/* Diffusion parameters -- FIXED -- MUST BE DEFINED AT COMPILE-TIME */
+
+/*! The diffusion that the particles are reset to after being hit by a
+ * feedback event. This should be set to the same value as the
+ * hydro_props_default_diffusion_alpha in fixed schemes, and likely
+ * to hydro_props_default_diffusion_alpha_min in variable schemes. */
+#define hydro_props_default_diffusion_alpha_feedback_reset 0.0f
+
 /* Diffusion parameters -- Defaults; can be changed at run-time */
 
 /*! The "initial" diffusion, or the fixed value for non-variable
diff --git a/src/hydro/AnarchyDU/hydro_part.h b/src/hydro/AnarchyDU/hydro_part.h
index 98bb058af9881fd050e6b9f2101766870ed26a46..c30f778b80d91d576052082c6e849fa9d1a4f38e 100644
--- a/src/hydro/AnarchyDU/hydro_part.h
+++ b/src/hydro/AnarchyDU/hydro_part.h
@@ -27,6 +27,7 @@
  *        Price 2008 thermal diffusion (particle definition)
  */
 
+#include "black_holes_struct.h"
 #include "chemistry_struct.h"
 #include "cooling_struct.h"
 #include "star_formation_struct.h"
@@ -190,6 +191,9 @@ struct part {
   /* Chemistry information */
   struct chemistry_part_data chemistry_data;
 
+  /*! Black holes information (e.g. swallowing ID) */
+  struct black_holes_part_data black_holes_data;
+
   /*! Time-step length */
   timebin_t time_bin;
 
diff --git a/src/hydro/AnarchyPU/hydro.h b/src/hydro/AnarchyPU/hydro.h
index 460461d06af954f9058e254fcdf2e9a8ced21d14..9ba58828dd84df205539550c221746dfac37ed21 100644
--- a/src/hydro/AnarchyPU/hydro.h
+++ b/src/hydro/AnarchyPU/hydro.h
@@ -431,15 +431,18 @@ __attribute__((always_inline)) INLINE static void hydro_set_viscosity_alpha(
 }
 
 /**
- * @brief Update the value of the viscosity alpha to the
+ * @brief Update the value of the diffusive coefficients to the
  *        feedback reset value for the scheme.
  *
  * @param p the particle of interest
  */
 __attribute__((always_inline)) INLINE static void
-hydro_set_viscosity_alpha_max_feedback(struct part *restrict p) {
+hydro_diffusive_feedback_reset(struct part *restrict p) {
+  /* Set the viscosity to the max, and the diffusion to the min */
   hydro_set_viscosity_alpha(p,
                             hydro_props_default_viscosity_alpha_feedback_reset);
+
+  p->diffusion.alpha = hydro_props_default_diffusion_alpha_feedback_reset;
 }
 
 /**
diff --git a/src/hydro/AnarchyPU/hydro_parameters.h b/src/hydro/AnarchyPU/hydro_parameters.h
index ec316abbaa0bbe579d56c216736ccd8a0c3c2e21..1f1e1c2de1c02a1c0f1969eb00244dfc50a0ecc3 100644
--- a/src/hydro/AnarchyPU/hydro_parameters.h
+++ b/src/hydro/AnarchyPU/hydro_parameters.h
@@ -76,6 +76,14 @@
  * sets the decay length for the diffusion. */
 #define hydro_props_default_viscosity_length 0.25f
 
+/* Diffusion parameters -- FIXED -- MUST BE DEFINED AT COMPILE-TIME */
+
+/*! The diffusion that the particles are reset to after being hit by a
+ * feedback event. This should be set to the same value as the
+ * hydro_props_default_diffusion_alpha in fixed schemes, and likely
+ * to hydro_props_default_diffusion_alpha_min in variable schemes. */
+#define hydro_props_default_diffusion_alpha_feedback_reset 0.0f
+
 /* Diffusion parameters -- Defaults; can be changed at run-time */
 
 /* The "initial" diffusion, or the fixed value for non-variable
diff --git a/src/hydro/AnarchyPU/hydro_part.h b/src/hydro/AnarchyPU/hydro_part.h
index 3c59f2d7c1b8610eeef816c674038dcab5554130..2ba76967ac074df68e6beddd51713ae0505430f6 100644
--- a/src/hydro/AnarchyPU/hydro_part.h
+++ b/src/hydro/AnarchyPU/hydro_part.h
@@ -19,6 +19,7 @@
  ******************************************************************************/
 #ifndef SWIFT_ANARCHY_PU_HYDRO_PART_H
 #define SWIFT_ANARCHY_PU_HYDRO_PART_H
+
 /**
  * @file AnarchyPU/hydro_part.h
  * @brief P-U conservative implementation of SPH,
@@ -28,6 +29,7 @@
  * See AnarchyPU/hydro.h for references.
  */
 
+#include "black_holes_struct.h"
 #include "chemistry_struct.h"
 #include "cooling_struct.h"
 #include "star_formation_struct.h"
@@ -194,6 +196,9 @@ struct part {
   /* Chemistry information */
   struct chemistry_part_data chemistry_data;
 
+  /*! Black holes information (e.g. swallowing ID) */
+  struct black_holes_part_data black_holes_data;
+
   /*! Time-step length */
   timebin_t time_bin;
 
diff --git a/src/hydro/Default/hydro.h b/src/hydro/Default/hydro.h
index 694441c8b02f1071119d41df9b70999efd75be1f..558143f88d07225f5e6bfe5666d345a7891a7534 100644
--- a/src/hydro/Default/hydro.h
+++ b/src/hydro/Default/hydro.h
@@ -423,13 +423,13 @@ __attribute__((always_inline)) INLINE static void hydro_set_viscosity_alpha(
 }
 
 /**
- * @brief Update the value of the viscosity alpha to the
+ * @brief Update the value of the diffusive coefficients to the
  *        feedback reset value for the scheme.
  *
  * @param p the particle of interest
  */
 __attribute__((always_inline)) INLINE static void
-hydro_set_viscosity_alpha_max_feedback(struct part *restrict p) {
+hydro_diffusive_feedback_reset(struct part *restrict p) {
   hydro_set_viscosity_alpha(p,
                             hydro_props_default_viscosity_alpha_feedback_reset);
 }
diff --git a/src/hydro/Default/hydro_part.h b/src/hydro/Default/hydro_part.h
index e5c8aab4e88dc4f2a6ffe38ea8931e8f7849e5de..a3fa2545aa58d54e02c54af02103d047a5c8cf17 100644
--- a/src/hydro/Default/hydro_part.h
+++ b/src/hydro/Default/hydro_part.h
@@ -27,6 +27,7 @@
  *        Price 2017 (PHANTOM) diffusion) (particle definition)
  */
 
+#include "black_holes_struct.h"
 #include "chemistry_struct.h"
 #include "cooling_struct.h"
 #include "star_formation_struct.h"
@@ -190,6 +191,9 @@ struct part {
   /* Chemistry information */
   struct chemistry_part_data chemistry_data;
 
+  /*! Black holes information (e.g. swallowing ID) */
+  struct black_holes_part_data black_holes_data;
+
   /*! Time-step length */
   timebin_t time_bin;
 
diff --git a/src/hydro/Gadget2/hydro.h b/src/hydro/Gadget2/hydro.h
index c6367ec7b46076d97f5637d360d946dc056d7bd4..afc9c17d5b6ed691685bf42df2562ecb6f4409d8 100644
--- a/src/hydro/Gadget2/hydro.h
+++ b/src/hydro/Gadget2/hydro.h
@@ -413,13 +413,13 @@ __attribute__((always_inline)) INLINE static void hydro_set_viscosity_alpha(
 }
 
 /**
- * @brief Update the value of the viscosity alpha to the
+ * @brief Update the value of the diffusive coefficients to the
  *        feedback reset value for the scheme.
  *
  * @param p the particle of interest
  */
 __attribute__((always_inline)) INLINE static void
-hydro_set_viscosity_alpha_max_feedback(struct part *restrict p) {
+hydro_diffusive_feedback_reset(struct part *restrict p) {
   /* This scheme has fixed alpha */
 }
 
diff --git a/src/hydro/Gadget2/hydro_part.h b/src/hydro/Gadget2/hydro_part.h
index 3001700395b8584981c0087c8ff402a953461213..7a8844d560561dae80c676a0b5bb72b34416d080 100644
--- a/src/hydro/Gadget2/hydro_part.h
+++ b/src/hydro/Gadget2/hydro_part.h
@@ -31,6 +31,7 @@
  * Gadget-2 tree-code neighbours search.
  */
 
+#include "black_holes_struct.h"
 #include "chemistry_struct.h"
 #include "cooling_struct.h"
 #include "logger.h"
@@ -151,6 +152,9 @@ struct part {
   /* Chemistry information */
   struct chemistry_part_data chemistry_data;
 
+  /*! Black holes information (e.g. swallowing ID) */
+  struct black_holes_part_data black_holes_data;
+
   /* Time-step length */
   timebin_t time_bin;
 
diff --git a/src/hydro/GizmoMFM/hydro.h b/src/hydro/GizmoMFM/hydro.h
index 8c47d529dd4696331b9afbb3e402639116bd06f1..eb3d9044baecadeaba9165061fae33c816446042 100644
--- a/src/hydro/GizmoMFM/hydro.h
+++ b/src/hydro/GizmoMFM/hydro.h
@@ -1048,7 +1048,7 @@ __attribute__((always_inline)) INLINE static void hydro_set_viscosity_alpha(
  * @param p the particle of interest
  */
 __attribute__((always_inline)) INLINE static void
-hydro_set_viscosity_alpha_max_feedback(struct part* restrict p) {
+hydro_diffusive_feedback_reset(struct part* restrict p) {
   /* Purposefully left empty */
 }
 
diff --git a/src/hydro/GizmoMFM/hydro_part.h b/src/hydro/GizmoMFM/hydro_part.h
index 8097b1b2560f24f78636bbb855700054524fe0bb..2ca752dac714d5fca02e2552601c18174f98a8ab 100644
--- a/src/hydro/GizmoMFM/hydro_part.h
+++ b/src/hydro/GizmoMFM/hydro_part.h
@@ -19,6 +19,7 @@
 #ifndef SWIFT_GIZMO_MFM_HYDRO_PART_H
 #define SWIFT_GIZMO_MFM_HYDRO_PART_H
 
+#include "black_holes_struct.h"
 #include "chemistry_struct.h"
 #include "cooling_struct.h"
 #include "star_formation_struct.h"
@@ -192,6 +193,9 @@ struct part {
   /* Chemistry information */
   struct chemistry_part_data chemistry_data;
 
+  /*! Black holes information (e.g. swallowing ID) */
+  struct black_holes_part_data black_holes_data;
+
   /* Time-step length */
   timebin_t time_bin;
 
diff --git a/src/hydro/GizmoMFV/hydro.h b/src/hydro/GizmoMFV/hydro.h
index dc11bcece94bbd0622f4eb48ca2b14d1fcd9d235..e6a87b9a86fc7ec03a4bfe055e151ee8c7c580e4 100644
--- a/src/hydro/GizmoMFV/hydro.h
+++ b/src/hydro/GizmoMFV/hydro.h
@@ -1119,7 +1119,7 @@ __attribute__((always_inline)) INLINE static void hydro_set_viscosity_alpha(
  * @param p the particle of interest
  */
 __attribute__((always_inline)) INLINE static void
-hydro_set_viscosity_alpha_max_feedback(struct part* restrict p) {
+hydro_diffusive_feedback_reset(struct part* restrict p) {
   /* Purposefully left empty */
 }
 
diff --git a/src/hydro/GizmoMFV/hydro_part.h b/src/hydro/GizmoMFV/hydro_part.h
index af83dbd92590a30aa401e1cc5626554633058b1f..db43c03cba3c23a4d4d8ab81cc62ebc5951299f3 100644
--- a/src/hydro/GizmoMFV/hydro_part.h
+++ b/src/hydro/GizmoMFV/hydro_part.h
@@ -19,6 +19,7 @@
 #ifndef SWIFT_GIZMO_MFV_HYDRO_PART_H
 #define SWIFT_GIZMO_MFV_HYDRO_PART_H
 
+#include "black_holes_struct.h"
 #include "chemistry_struct.h"
 #include "cooling_struct.h"
 #include "star_formation_struct.h"
@@ -203,6 +204,9 @@ struct part {
   /* Chemistry information */
   struct chemistry_part_data chemistry_data;
 
+  /*! Black holes information (e.g. swallowing ID) */
+  struct black_holes_part_data black_holes_data;
+
   /* Time-step length */
   timebin_t time_bin;
 
diff --git a/src/hydro/Minimal/hydro.h b/src/hydro/Minimal/hydro.h
index 33e01a783c4bf854707d7cf72a67c437e4f3ba0e..0cac1de40f258ff9921ea98ddd59a848d4bb7a67 100644
--- a/src/hydro/Minimal/hydro.h
+++ b/src/hydro/Minimal/hydro.h
@@ -408,13 +408,13 @@ __attribute__((always_inline)) INLINE static void hydro_set_viscosity_alpha(
 }
 
 /**
- * @brief Update the value of the viscosity alpha to the
+ * @brief Update the value of the diffusive coefficients to the
  *        feedback reset value for the scheme.
  *
  * @param p the particle of interest
  */
 __attribute__((always_inline)) INLINE static void
-hydro_set_viscosity_alpha_max_feedback(struct part *restrict p) {
+hydro_diffusive_feedback_reset(struct part *restrict p) {
   /* This scheme has fixed alpha */
 }
 
diff --git a/src/hydro/Minimal/hydro_part.h b/src/hydro/Minimal/hydro_part.h
index 7697f36ca723875a8b77705eefcf2e2af4605583..0e61b95cc164690b178806c9906a195cb86a01ac 100644
--- a/src/hydro/Minimal/hydro_part.h
+++ b/src/hydro/Minimal/hydro_part.h
@@ -32,6 +32,7 @@
  * Physics, 2012, Volume 231, Issue 3, pp. 759-794.
  */
 
+#include "black_holes_struct.h"
 #include "chemistry_struct.h"
 #include "cooling_struct.h"
 #include "star_formation_struct.h"
@@ -170,9 +171,12 @@ struct part {
     } force;
   };
 
-  /* Chemistry information */
+  /*! Chemistry information */
   struct chemistry_part_data chemistry_data;
 
+  /*! Black holes information (e.g. swallowing ID) */
+  struct black_holes_part_data black_holes_data;
+
   /*! Time-step length */
   timebin_t time_bin;
 
diff --git a/src/hydro/Planetary/hydro.h b/src/hydro/Planetary/hydro.h
index 7a6d0861cbe0b22f8e687f9781db0a4585c8b569..7139f811a6e868d9eafdbbf4628b03c156aa459e 100644
--- a/src/hydro/Planetary/hydro.h
+++ b/src/hydro/Planetary/hydro.h
@@ -432,13 +432,13 @@ __attribute__((always_inline)) INLINE static void hydro_set_viscosity_alpha(
 }
 
 /**
- * @brief Update the value of the viscosity alpha to the
+ * @brief Update the value of the diffusive coefficients to the
  *        feedback reset value for the scheme.
  *
  * @param p the particle of interest
  */
 __attribute__((always_inline)) INLINE static void
-hydro_set_viscosity_alpha_max_feedback(struct part *restrict p) {
+hydro_diffusive_feedback_reset(struct part *restrict p) {
   /* This scheme has fixed alpha */
 }
 
diff --git a/src/hydro/Planetary/hydro_part.h b/src/hydro/Planetary/hydro_part.h
index b2725ca1fceddb196a6b2be42b768eb3f88f1101..db0396f7af06335e1e17b1255941781db1300a78 100644
--- a/src/hydro/Planetary/hydro_part.h
+++ b/src/hydro/Planetary/hydro_part.h
@@ -33,6 +33,7 @@
  * Physics, 2012, Volume 231, Issue 3, pp. 759-794.
  */
 
+#include "black_holes_struct.h"
 #include "chemistry_struct.h"
 #include "cooling_struct.h"
 #include "equation_of_state.h"  // For enum material_id
@@ -175,6 +176,9 @@ struct part {
   /* Chemistry information */
   struct chemistry_part_data chemistry_data;
 
+  /*! Black holes information (e.g. swallowing ID) */
+  struct black_holes_part_data black_holes_data;
+
   /*! Material identifier flag */
   enum eos_planetary_material_id mat_id;
 
diff --git a/src/hydro/PressureEnergy/hydro.h b/src/hydro/PressureEnergy/hydro.h
index 1691781a12e3c4c2299a08a4aac08bc83c1d123d..e3a4f98160150717e50651760d8af68af2a70662 100644
--- a/src/hydro/PressureEnergy/hydro.h
+++ b/src/hydro/PressureEnergy/hydro.h
@@ -433,13 +433,13 @@ __attribute__((always_inline)) INLINE static void hydro_set_viscosity_alpha(
 }
 
 /**
- * @brief Update the value of the viscosity alpha to the
+ * @brief Update the value of the diffusive coefficients to the
  *        feedback reset value for the scheme.
  *
  * @param p the particle of interest
  */
 __attribute__((always_inline)) INLINE static void
-hydro_set_viscosity_alpha_max_feedback(struct part *restrict p) {
+hydro_diffusive_feedback_reset(struct part *restrict p) {
   /* This scheme has fixed alpha */
 }
 
diff --git a/src/hydro/PressureEnergy/hydro_part.h b/src/hydro/PressureEnergy/hydro_part.h
index 20c326da443e4acd1c3bdc0ebd01cce81bb6bad7..ded83d70176409332b669517ed203fdf95ecfd5e 100644
--- a/src/hydro/PressureEnergy/hydro_part.h
+++ b/src/hydro/PressureEnergy/hydro_part.h
@@ -31,6 +31,7 @@
  * See PressureEnergy/hydro.h for references.
  */
 
+#include "black_holes_struct.h"
 #include "chemistry_struct.h"
 #include "cooling_struct.h"
 #include "star_formation_struct.h"
@@ -173,6 +174,9 @@ struct part {
   /* Chemistry information */
   struct chemistry_part_data chemistry_data;
 
+  /*! Black holes information (e.g. swallowing ID) */
+  struct black_holes_part_data black_holes_data;
+
   /*! Time-step length */
   timebin_t time_bin;
 
diff --git a/src/hydro/PressureEnergyMorrisMonaghanAV/hydro.h b/src/hydro/PressureEnergyMorrisMonaghanAV/hydro.h
index e477c9f96b2daaf1d786dfc58070da8fd89a7f88..c23b3467fd0dee575aacf92c108508be7d44ad32 100644
--- a/src/hydro/PressureEnergyMorrisMonaghanAV/hydro.h
+++ b/src/hydro/PressureEnergyMorrisMonaghanAV/hydro.h
@@ -430,13 +430,13 @@ __attribute__((always_inline)) INLINE static void hydro_set_viscosity_alpha(
 }
 
 /**
- * @brief Update the value of the viscosity alpha to the
+ * @brief Update the value of the diffusive coefficients to the
  *        feedback reset value for the scheme.
  *
  * @param p the particle of interest
  */
 __attribute__((always_inline)) INLINE static void
-hydro_set_viscosity_alpha_max_feedback(struct part *restrict p) {
+hydro_diffusive_feedback_reset(struct part *restrict p) {
   hydro_set_viscosity_alpha(p,
                             hydro_props_default_viscosity_alpha_feedback_reset);
 }
diff --git a/src/hydro/PressureEnergyMorrisMonaghanAV/hydro_part.h b/src/hydro/PressureEnergyMorrisMonaghanAV/hydro_part.h
index 5a520f522a85ac87376d7db8cc78f58c66e3550b..644b30234ad40d0571249726f96358453e06956a 100644
--- a/src/hydro/PressureEnergyMorrisMonaghanAV/hydro_part.h
+++ b/src/hydro/PressureEnergyMorrisMonaghanAV/hydro_part.h
@@ -32,6 +32,7 @@
  * See PressureEnergy/hydro.h for references.
  */
 
+#include "black_holes_struct.h"
 #include "chemistry_struct.h"
 #include "cooling_struct.h"
 #include "star_formation_struct.h"
@@ -177,6 +178,9 @@ struct part {
   /* Chemistry information */
   struct chemistry_part_data chemistry_data;
 
+  /*! Black holes information (e.g. swallowing ID) */
+  struct black_holes_part_data black_holes_data;
+
   /*! Time-step length */
   timebin_t time_bin;
 
diff --git a/src/hydro/PressureEntropy/hydro.h b/src/hydro/PressureEntropy/hydro.h
index a381011069c9a5f0717f1a26ffef1c4bd7eeb817..66b8e186c82ab1646ffb862ff2d401f0362886c3 100644
--- a/src/hydro/PressureEntropy/hydro.h
+++ b/src/hydro/PressureEntropy/hydro.h
@@ -195,13 +195,13 @@ __attribute__((always_inline)) INLINE static void hydro_set_viscosity_alpha(
 }
 
 /**
- * @brief Update the value of the viscosity alpha to the
+ * @brief Update the value of the diffusive coefficients to the
  *        feedback reset value for the scheme.
  *
  * @param p the particle of interest
  */
 __attribute__((always_inline)) INLINE static void
-hydro_set_viscosity_alpha_max_feedback(struct part *restrict p) {
+hydro_diffusive_feedback_reset(struct part *restrict p) {
   /* This scheme has fixed alpha */
 }
 
diff --git a/src/hydro/PressureEntropy/hydro_part.h b/src/hydro/PressureEntropy/hydro_part.h
index 6cf5a88a167c529ad81737f1e206ba475f6bbc0e..cf7204ba1e157ce69f523414ab0321f61d110a94 100644
--- a/src/hydro/PressureEntropy/hydro_part.h
+++ b/src/hydro/PressureEntropy/hydro_part.h
@@ -30,6 +30,7 @@
  * Volume 428, Issue 4, pp. 2840-2856 with a simple Balsara viscosity term.
  */
 
+#include "black_holes_struct.h"
 #include "chemistry_struct.h"
 #include "cooling_struct.h"
 #include "star_formation_struct.h"
@@ -153,6 +154,9 @@ struct part {
   /* Chemistry information */
   struct chemistry_part_data chemistry_data;
 
+  /*! Black holes information (e.g. swallowing ID) */
+  struct black_holes_part_data black_holes_data;
+
   /* Time-step length */
   timebin_t time_bin;
 
diff --git a/src/hydro/Shadowswift/hydro.h b/src/hydro/Shadowswift/hydro.h
index a6b3b41c9c08e6f83ed4d989bd57176507221797..955a3db1d4480c43dd2fd57c6d976689d36a5652 100644
--- a/src/hydro/Shadowswift/hydro.h
+++ b/src/hydro/Shadowswift/hydro.h
@@ -892,7 +892,7 @@ __attribute__((always_inline)) INLINE static void hydro_set_viscosity_alpha(
  * @param p the particle of interest
  */
 __attribute__((always_inline)) INLINE static void
-hydro_set_viscosity_alpha_max_feedback(struct part* restrict p) {
+hydro_diffusive_feedback_reset(struct part* restrict p) {
   /* Purposefully left empty */
 }
 
diff --git a/src/hydro/Shadowswift/hydro_part.h b/src/hydro/Shadowswift/hydro_part.h
index d229b5c9d63ecfa855397f74f8a52a4117cefc03..4d8c44a3637b6ca67ba00d3c6f654692504207f6 100644
--- a/src/hydro/Shadowswift/hydro_part.h
+++ b/src/hydro/Shadowswift/hydro_part.h
@@ -18,7 +18,10 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  *
  ******************************************************************************/
+#ifndef SWIFT_SHADOWSWIFT_HYDRO_PART_H
+#define SWIFT_SHADOWSWIFT_HYDRO_PART_H
 
+#include "black_holes_struct.h"
 #include "chemistry_struct.h"
 #include "cooling_struct.h"
 #include "tracers_struct.h"
@@ -180,6 +183,9 @@ struct part {
   /* Chemistry information */
   struct chemistry_part_data chemistry_data;
 
+  /*! Black holes information (e.g. swallowing ID) */
+  struct black_holes_part_data black_holes_data;
+
   /* Time-step length */
   timebin_t time_bin;
 
@@ -200,3 +206,5 @@ struct part {
   struct voronoi_cell cell;
 
 } SWIFT_STRUCT_ALIGN;
+
+#endif /* SWIFT_SHADOWSWIFT_HYDRO_PART_H */
diff --git a/src/memuse.c b/src/memuse.c
index 9b29b377554e724337f19caa35baa5f0dd2f8f47..77344cc6cec56544a6e19cc6a71ca2375aa1916c 100644
--- a/src/memuse.c
+++ b/src/memuse.c
@@ -42,17 +42,22 @@
 
 #ifdef SWIFT_MEMUSE_REPORTS
 
+/* The initial size and increment of the log entries buffer. */
+#define MEMUSE_INITLOG 1000000
+
+/* A megabyte for conversions. */
+#define MEGABYTE 1048576.0
+
+/* Maximum length of label in log entry. */
+#define MEMUSE_MAXLABLEN 32
+
 /* Also recorded in logger. */
 extern int engine_rank;
 extern int engine_current_step;
 
 /* Entry for logger of memory allocations and deallocations in a step. */
-#define MEMUSE_MAXLAB 64
 struct memuse_log_entry {
 
-  /* Rank in action. */
-  int rank;
-
   /* Step of action. */
   int step;
 
@@ -62,23 +67,291 @@ struct memuse_log_entry {
   /* Memory allocated in bytes. */
   size_t size;
 
-  /* Address of memory. */
-  void *ptr;
+  /* Address of memory. Use a union as an easy way to convert into an array
+   * of bytes. */
+  union {
+    void *ptr;
+    uint8_t vptr[sizeof(uintptr_t)];
+  };
 
   /* Relative time of this action. */
   ticks dtic;
 
   /* Label associated with the memory. */
-  char label[MEMUSE_MAXLAB + 1];
+  char label[MEMUSE_MAXLABLEN + 1];
+
+  /* Whether log is still active, i.e. not matched with a free or allocation. */
+  int active;
 };
 
-/* The log of allocations and frees. */
+/* The log of allocations and frees. All volatile as they are accessed from
+ * threads that use the values to synchronise. */
 static struct memuse_log_entry *volatile memuse_log = NULL;
 static volatile size_t memuse_log_size = 0;
 static volatile size_t memuse_log_count = 0;
+static volatile size_t memuse_old_count = 0;
 static volatile size_t memuse_log_done = 0;
 
-#define MEMUSE_INITLOG 1000000
+/* Current sum of memory in use. Only used in dumping. */
+static size_t memuse_current = 0;
+
+/* Label usage gathering struct. Only used in dumping. */
+struct memuse_labelled_item {
+  size_t sum;
+  size_t count;
+};
+
+/* A radix node: this has a single-byte key and a pointer to some related
+ * resource. It also holds a sorted list of children, if any. */
+struct memuse_rnode {
+
+  /* Byte key of this node. */
+  uint8_t keypart;
+
+  /* Value of this node, if set. */
+  void *ptr;
+
+  /* Sorted pointers to children of this node. */
+  struct memuse_rnode **children;
+  unsigned int count;
+};
+
+/* Persistent radix trie root node. Holds active logs between dumps. */
+static struct memuse_rnode *memuse_rnode_root;
+static int memuse_rnode_root_init = 1;
+
+#ifdef MEMUSE_RNODE_DUMP
+/**
+ * @brief Dump a representation of the radix tree rooted at a node to stdout.
+ *
+ * @param depth the depth of the node in the tree, root is 0.
+ * @param node the node at which to start dumping.
+ * @param full if not zero then nodes that are not storing a value
+ *             are also reported.
+ */
+static void memuse_rnode_dump(int depth, struct memuse_rnode *node, int full) {
+
+  /* Value of the full key, to this depth. Assumes full key is a pointer,
+   * so uncomment when using strings. */
+  static union {
+    // uint8_t key[MEMUSE_MAXLABLEN];
+    // char ptr[MEMUSE_MAXLABLEN];
+    uint8_t key[sizeof(uintptr_t)];
+    void *ptr;
+  } keyparts = {0};
+
+  /* Record keypart at this depth. Root has no keypart. */
+  if (depth != 0) keyparts.key[depth - 1] = node->keypart;
+
+  // if (node->ptr != NULL || full) {
+  //  keyparts.key[depth] = '\0';
+  //
+  //    /* Gather children's keys if full. */
+  //    char fullkey[MEMUSE_MAXLABLEN];
+  //    if (full) {
+  //      for (size_t k = 0; k < node->count; k++) {
+  //        fullkey[k] = node->children[k]->keypart;
+  //      }
+  //      fullkey[node->count] = '\0';
+  //      printf("dump @ depth: %d keypart: %d key: %s value: %p fullkey: %s\n",
+  //             depth, node->keypart, keyparts.ptr, node->ptr, fullkey);
+  //    } else {
+  //      printf("dump @ depth: %d keypart: %d key: %s value: %p\n", depth,
+  //             node->keypart, keyparts.ptr, node->ptr);
+  //    }
+  //}
+
+  if (node->ptr != NULL || full) {
+    printf("dump @ depth: %d keypart: %d key: %p value: %p\n", depth,
+           node->keypart, keyparts.ptr, node->ptr);
+  }
+
+  /* Recurse to all children. */
+  for (size_t k = 0; k < node->count; k++) {
+    memuse_rnode_dump(depth + 1, node->children[k], full);
+  }
+}
+#endif
+
+/**
+ * @brief Return the position of a keypart for a list of children.
+ *        If not found returns where it would be inserted.
+ *
+ * @param keypart the keypart to locate.
+ * @param children the list of sorted children.
+ * @param count the number of children.
+ *
+ * @return the index of the key, or where it should be inserted.
+ */
+static unsigned int memuse_rnode_bsearch(uint8_t keypart,
+                                         struct memuse_rnode **children,
+                                         unsigned int count) {
+
+  /* Search for lower bound. */
+  unsigned int lower = 0;
+  unsigned int upper = count;
+  while (lower < upper) {
+    unsigned int middle = (upper + lower) / 2;
+    if (keypart > children[middle]->keypart)
+      lower = middle + 1;
+    else
+      upper = middle;
+  }
+  return lower;
+}
+
+/**
+ * @brief Insert a child, if needed, into a list of children. Assumes
+ *        we have sufficient room.
+ *
+ * @param child the child to insert, if needed.
+ * @param children the list of sorted children.
+ * @param count the number of children.
+ */
+static void memuse_rnode_binsert_child(struct memuse_rnode *child,
+                                       struct memuse_rnode **children,
+                                       unsigned int *count) {
+  unsigned int pos = 0;
+  if (*count > 0) {
+
+    /* Find the child or insertion point. */
+    pos = memuse_rnode_bsearch(child->keypart, children, *count);
+
+    /* If not found move all children to make a space, unless we're inserting
+     * after the end. */
+    if (pos < *count && children[pos]->keypart != child->keypart) {
+      memmove(&children[pos + 1], &children[pos],
+              (*count - pos) * sizeof(struct memuse_rnode *));
+    }
+  }
+
+  /* Insert new child */
+  children[pos] = child;
+  *count += 1;
+}
+
+/**
+ * @brief Add a child rnode to an rnode, making sure we have room and keeping
+ *        the sort order.
+ *
+ * @param node the parent node.
+ * @param child the node to add to the parent.
+ */
+static void memuse_rnode_add_child(struct memuse_rnode *node,
+                                   struct memuse_rnode *child) {
+
+  /* Extend the children list to include a new entry. */
+  void *mem = realloc(node->children,
+                      (node->count + 1) * sizeof(struct memuse_rnode *));
+  if (mem == NULL) error("Failed to reallocate rnodes\n");
+  node->children = mem;
+
+  /* Insert the new child. */
+  memuse_rnode_binsert_child(child, node->children, &node->count);
+}
+
+/**
+ * @brief Find a child of a node with the given key part.
+ *
+ * @param node the node to search.
+ * @param keypart the key part of the child.
+ * @return the child node, or NULL if not found.
+ */
+static struct memuse_rnode *memuse_rnode_lookup(const struct memuse_rnode *node,
+                                                uint8_t keypart) {
+
+  /* Locate the key, or where it would be inserted. */
+  if (node->count > 0) {
+    unsigned int index =
+        memuse_rnode_bsearch(keypart, node->children, node->count);
+    if (index < node->count && keypart == node->children[index]->keypart) {
+      return node->children[index];
+    }
+  }
+  return NULL;
+}
+
+/**
+ * @brief Insert a child into a node's children list and add a pointer, iff
+ *        this is the destination node for the given key.
+ *
+ * @param node the parent node.
+ * @param depth the depth of the parent node.
+ * @param key the full key of the eventual leaf node.
+ * @param keylen the numbers of bytes in the full key.
+ * @param value pointer that will be stored as the value of the leaf node.
+ */
+static void memuse_rnode_insert_child(struct memuse_rnode *node, uint8_t depth,
+                                      uint8_t *key, uint8_t keylen,
+                                      void *value) {
+
+  /* Check if this keypart already exists at this level and add a new child
+   * if not. */
+  uint8_t keypart = key[depth];
+  struct memuse_rnode *child = memuse_rnode_lookup(node, keypart);
+  if (child == NULL) {
+    child = calloc(1, sizeof(struct memuse_rnode));
+    child->keypart = keypart;
+    memuse_rnode_add_child(node, child);
+  }
+
+  /* Are we at the lowest level yet? */
+  depth++;
+  if (depth == keylen) {
+    /* Our destination node. */
+
+#if SWIFT_DEBUG_CHECKS
+    if (child->ptr != NULL)
+      message("Overwriting rnode value: %p with %p", child->ptr, value);
+#endif
+    child->ptr = value;
+    return;
+  }
+
+  /* Down we go to the next level. */
+  memuse_rnode_insert_child(child, depth, key, keylen, value);
+  return;
+}
+
+/**
+ * @brief Find a child node for the given full key.
+ *
+ * @param node the current parent node.
+ * @param depth the depth of the parent node, 0 for first call.
+ * @param key the full key of the expected child node.
+ * @param keylen the number of bytes in the key.
+ */
+static struct memuse_rnode *memuse_rnode_find_child(struct memuse_rnode *node,
+                                                    uint8_t depth, uint8_t *key,
+                                                    uint8_t keylen) {
+  uint8_t keypart = key[depth];
+  struct memuse_rnode *child = NULL;
+  if (node->count > 0) child = memuse_rnode_lookup(node, keypart);
+  if (child != NULL && (depth + 1) < keylen) {
+    return memuse_rnode_find_child(child, depth + 1, key, keylen);
+  }
+  return child;
+}
+
+/**
+ * @brief Free all resources associated with a node.
+ *
+ * @param node the rnode.
+ */
+static void memuse_rnode_cleanup(struct memuse_rnode *node) {
+
+  if (!node) return;
+
+  for (size_t k = 0; k < node->count; k++) {
+    memuse_rnode_cleanup(node->children[k]);
+    free(node->children[k]);
+  }
+  if (node->count > 0) free(node->children);
+}
+
+/**
+ * @brief Reallocate the entries log if more space is needed.
+ */
 static void memuse_log_reallocate(size_t ind) {
 
   if (ind == 0) {
@@ -94,7 +367,8 @@ static void memuse_log_reallocate(size_t ind) {
   } else {
     struct memuse_log_entry *new_log;
     if ((new_log = (struct memuse_log_entry *)malloc(
-             sizeof(struct memuse_log_entry) * memuse_log_size * 2)) == NULL)
+             sizeof(struct memuse_log_entry) *
+             (memuse_log_size + MEMUSE_INITLOG))) == NULL)
       error("Failed to re-allocate memuse log.");
 
     /* Wait for all writes to the old buffer to complete. */
@@ -107,8 +381,8 @@ static void memuse_log_reallocate(size_t ind) {
     free(memuse_log);
     memuse_log = new_log;
 
-    /* Last action. */
-    memuse_log_size *= 2;
+    /* Last action, releases waiting threads. */
+    atomic_add(&memuse_log_size, MEMUSE_INITLOG);
   }
 }
 
@@ -123,6 +397,7 @@ static void memuse_log_reallocate(size_t ind) {
  */
 void memuse_log_allocation(const char *label, void *ptr, int allocated,
                            size_t size) {
+
   size_t ind = atomic_inc(&memuse_log_count);
 
   /* If we are at the current size we need more space. */
@@ -133,14 +408,14 @@ void memuse_log_allocation(const char *label, void *ptr, int allocated,
     ;
 
   /* Record the log. */
-  memuse_log[ind].rank = engine_rank;
   memuse_log[ind].step = engine_current_step;
   memuse_log[ind].allocated = allocated;
   memuse_log[ind].size = size;
   memuse_log[ind].ptr = ptr;
-  strncpy(memuse_log[ind].label, label, MEMUSE_MAXLAB);
-  memuse_log[ind].label[MEMUSE_MAXLAB] = '\0';
+  strncpy(memuse_log[ind].label, label, MEMUSE_MAXLABLEN);
+  memuse_log[ind].label[MEMUSE_MAXLABLEN] = '\0';
   memuse_log[ind].dtic = getticks() - clocks_start_ticks;
+  memuse_log[ind].active = 1;
   atomic_inc(&memuse_log_done);
 }
 
@@ -152,30 +427,209 @@ void memuse_log_allocation(const char *label, void *ptr, int allocated,
 void memuse_log_dump(const char *filename) {
 
   /* Skip if nothing allocated this step. */
-  if (memuse_log_count == 0) return;
+  if (memuse_log_count == memuse_old_count) return;
+
+  // ticks tic = getticks();
+
+  /* Create the radix tree. If not already done. */
+  if (memuse_rnode_root_init) {
+    memuse_rnode_root =
+        (struct memuse_rnode *)calloc(1, sizeof(struct memuse_rnode));
+    memuse_rnode_root_init = 0;
+  }
+
+  /* Stop any new logs from being processed while we are dumping.
+   * Remember not to abort with error() in this section; that would recurse
+   * via the exit handler. */
+  size_t log_count = memuse_log_count;
+  size_t old_count = memuse_old_count;
 
   /* Open the output file. */
   FILE *fd;
-  if ((fd = fopen(filename, "w")) == NULL)
-    error("Failed to create memuse log file '%s'.", filename);
+  if ((fd = fopen(filename, "w")) == NULL) {
+    message("Failed to create memuse log file '%s', logs not dumped.",
+            filename);
+    return;
+  }
 
   /* Write a header. */
-  fprintf(fd, "# Current use: %s\n", memuse_process(1));
-  fprintf(fd, "# cpufreq: %lld\n", clocks_get_cpufreq());
-  fprintf(fd, "# dtic adr rank step allocated label size\n");
+  fprintf(fd, "# dtic step label size sum\n");
+
+  size_t memuse_maxmem = memuse_current;
+  for (size_t k = old_count; k < log_count; k++) {
+
+    /* Check if this address has already been recorded. */
+    struct memuse_rnode *child = memuse_rnode_find_child(
+        memuse_rnode_root, 0, memuse_log[k].vptr, sizeof(uintptr_t));
+
+    if (child != NULL && child->ptr != NULL) {
+
+      /* Found the allocation, this should be the free. */
+      if (memuse_log[k].allocated) {
+
+      /* Allocated twice, this is an error, but we cannot abort as that will
+       * attempt another memory dump, so just complain. */
+#ifdef SWIFT_DEBUG_CHECKS
+        message("Allocated the same address twice (%s: %zd)",
+                memuse_log[k].label, memuse_log[k].size);
+#endif
+        continue;
+      }
+
+      /* Free, update the size to remove the allocation. */
+      struct memuse_log_entry *oldlog = (struct memuse_log_entry *)child->ptr;
+      memuse_log[k].size = -oldlog->size;
+
+      /* And deactivate this key. */
+      child->ptr = NULL;
+
+      /* And mark this as matched. */
+      memuse_log[k].active = 0;
+      oldlog->active = 0;
+
+    } else if (child == NULL && memuse_log[k].allocated) {
+
+      /* Not found, so this is a new allocation; store the log entry against
+       * the address. */
+      memuse_rnode_insert_child(memuse_rnode_root, 0, memuse_log[k].vptr,
+                                sizeof(uintptr_t), &memuse_log[k]);
 
-  for (size_t k = 0; k < memuse_log_count; k++) {
-    fprintf(fd, "%lld %p %d %d %d %s %zd\n", memuse_log[k].dtic,
-            memuse_log[k].ptr, memuse_log[k].rank, memuse_log[k].step,
-            memuse_log[k].allocated, memuse_log[k].label, memuse_log[k].size);
+    } else if (child == NULL && !memuse_log[k].allocated) {
+
+    /* Unmatched free, OK if NULL. */
+#ifdef SWIFT_DEBUG_CHECKS
+      if (memuse_log[k].ptr != NULL) {
+        message("Unmatched non-NULL free: %s", memuse_log[k].label);
+      }
+#endif
+      continue;
+    } else if (memuse_log[k].allocated) {
+
+      /* Must be a previously released allocation reusing the same address,
+       * so store it. */
+      memuse_rnode_insert_child(memuse_rnode_root, 0, memuse_log[k].vptr,
+                                sizeof(uintptr_t), &memuse_log[k]);
+
+    } else {
+      /* Should not happen ... */
+      message("weird memory log record for label '%s' skipped",
+              memuse_log[k].label);
+      continue;
+    }
+
+    /* Keep maximum and rolling sum. */
+    memuse_current += memuse_log[k].size;
+    if (memuse_current > memuse_maxmem) memuse_maxmem = memuse_current;
+
+    /* And output. */
+    fprintf(fd, "%lld %d %s %zd %zd\n", memuse_log[k].dtic, memuse_log[k].step,
+            memuse_log[k].label, memuse_log[k].size, memuse_current);
   }
 
-  /* Clear the log. */
-  memuse_log_count = 0;
+#ifdef MEMUSE_RNODE_DUMP
+  /* Debug dump of tree. */
+  // memuse_rnode_dump(0, memuse_rnode_root, 0);
+#endif
+
+  /* Now find all the still-active allocations and gather their sizes
+   * against the labels. */
+  struct memuse_rnode *activernodes =
+      (struct memuse_rnode *)calloc(1, sizeof(struct memuse_rnode));
+  size_t newcount = 0;
+  struct memuse_rnode *labellednodes =
+      (struct memuse_rnode *)calloc(1, sizeof(struct memuse_rnode));
+  size_t *lindices = (size_t *)calloc(log_count, sizeof(size_t));
+  size_t lcount = 0;
+  for (size_t k = 0; k < log_count; k++) {
+
+    /* Only allocations are stored; is this one still active? */
+    if (memuse_log[k].allocated && memuse_log[k].active) {
+
+      /* Look for this label in our tree. */
+      struct memuse_rnode *labelchild = memuse_rnode_find_child(
+          labellednodes, 0, (uint8_t *)memuse_log[k].label,
+          strlen(memuse_log[k].label));
+      struct memuse_labelled_item *item = NULL;
+      if (labelchild == NULL || labelchild->ptr == NULL) {
+
+        /* New, so create an instance to keep the count. */
+        item = (struct memuse_labelled_item *)calloc(
+            1, sizeof(struct memuse_labelled_item));
+        item->sum = 0;
+        item->count = 0;
+        memuse_rnode_insert_child(labellednodes, 0,
+                                  (uint8_t *)memuse_log[k].label,
+                                  strlen(memuse_log[k].label), item);
+
+        /* Keep for indexing next time. */
+        lindices[lcount] = newcount;
+        lcount++;
+      } else {
+        item = (struct memuse_labelled_item *)labelchild->ptr;
+      }
+
+      /* And increment sum. */
+      item->sum += memuse_log[k].size;
+      item->count++;
+
+      /* Keep this entry in the new log tree, compacted to the head. */
+      memcpy(&memuse_log[newcount], &memuse_log[k],
+             sizeof(struct memuse_log_entry));
+      memuse_rnode_insert_child(activernodes, 0, memuse_log[newcount].vptr,
+                                sizeof(uintptr_t), &memuse_log[newcount]);
+      newcount++;
+    }
+  }
+
+  /* And move all active logs to a clean new tree for next time. */
+  memuse_log_count = newcount;
+  memuse_old_count = newcount;
+  memuse_rnode_cleanup(memuse_rnode_root);
+  free(memuse_rnode_root);
+  memuse_rnode_root = activernodes;
+
+  /* Now dump the labelled counts. */
+  fprintf(fd, "# Memory use by label:\n");
+  fprintf(fd, "##  %30s %16s %16s\n", "label", "MB", "numactive");
+  fprintf(fd, "##\n");
+
+  size_t total_mem = 0;
+  for (size_t k = 0; k < lcount; k++) {
+    size_t ind = lindices[k];
+
+    /* Find this entry. */
+    struct memuse_rnode *labelchild = memuse_rnode_find_child(
+        labellednodes, 0, (uint8_t *)memuse_log[ind].label,
+        strlen(memuse_log[ind].label));
+    struct memuse_labelled_item *item =
+        (struct memuse_labelled_item *)labelchild->ptr;
+    fprintf(fd, "## %30s %16.3f %16zd\n", memuse_log[ind].label,
+            item->sum / MEGABYTE, item->count);
+    total_mem += item->sum;
+
+    /* Don't need this again. */
+    free(item);
+  }
+  fprintf(fd, "##\n");
+  fprintf(fd, "# Total memory still in use : %.3f (MB)\n",
+          total_mem / MEGABYTE);
+  fprintf(fd, "# Peak memory usage         : %.3f (MB)\n",
+          memuse_maxmem / MEGABYTE);
+  fprintf(fd, "#\n");
+  fprintf(fd, "# Memory use by process (all/system): %s\n", memuse_process(1));
+  fprintf(fd, "# cpufreq: %lld\n", clocks_get_cpufreq());
+
+  /* Clean up tree. */
+  memuse_rnode_cleanup(labellednodes);
+  free(labellednodes);
+  free(lindices);
 
   /* Close the file. */
   fflush(fd);
   fclose(fd);
+
+  // message("took %.3f %s.", clocks_from_ticks(getticks() - tic),
+  //        clocks_getunit());
 }
 
 /**
diff --git a/src/parallel_io.c b/src/parallel_io.c
index 7ba28d6f506338ea4e9a34343203bae4e794637d..db1fbe2426a3e858dd429aa495641a4e0d4ed631 100644
--- a/src/parallel_io.c
+++ b/src/parallel_io.c
@@ -1232,6 +1232,7 @@ void prepare_file(struct engine* e, const char* baseName, long long N_total[6],
 
       case swift_type_black_hole:
         black_holes_write_particles(bparts, list, &num_fields);
+        num_fields += chemistry_write_bparticles(bparts, list + num_fields);
         if (with_stf) {
           num_fields += velociraptor_write_bparts(bparts, list + num_fields);
         }
@@ -1552,11 +1553,11 @@ void write_output_parallel(struct engine* e, const char* baseName,
           if (swift_memalign("parts_written", (void**)&parts_written,
                              part_align,
                              Ngas_written * sizeof(struct part)) != 0)
-            error("Error while allocating temporart memory for parts");
+            error("Error while allocating temporary memory for parts");
           if (swift_memalign("xparts_written", (void**)&xparts_written,
                              xpart_align,
                              Ngas_written * sizeof(struct xpart)) != 0)
-            error("Error while allocating temporart memory for xparts");
+            error("Error while allocating temporary memory for xparts");
 
           /* Collect the particles we want to write */
           io_collect_parts_to_write(parts, xparts, parts_written,
@@ -1602,7 +1603,7 @@ void write_output_parallel(struct engine* e, const char* baseName,
           if (swift_memalign("gparts_written", (void**)&gparts_written,
                              gpart_align,
                              Ndm_written * sizeof(struct gpart)) != 0)
-            error("Error while allocating temporart memory for gparts");
+            error("Error while allocating temporary memory for gparts");
 
           if (with_stf) {
             if (swift_memalign(
@@ -1610,7 +1611,7 @@ void write_output_parallel(struct engine* e, const char* baseName,
                     gpart_align,
                     Ndm_written * sizeof(struct velociraptor_gpart_data)) != 0)
               error(
-                  "Error while allocating temporart memory for gparts STF "
+                  "Error while allocating temporary memory for gparts STF "
                   "data");
           }
 
@@ -1651,7 +1652,7 @@ void write_output_parallel(struct engine* e, const char* baseName,
           if (swift_memalign("sparts_written", (void**)&sparts_written,
                              spart_align,
                              Nstars_written * sizeof(struct spart)) != 0)
-            error("Error while allocating temporart memory for sparts");
+            error("Error while allocating temporary memory for sparts");
 
           /* Collect the particles we want to write */
           io_collect_sparts_to_write(sparts, sparts_written, Nstars,
@@ -1675,6 +1676,8 @@ void write_output_parallel(struct engine* e, const char* baseName,
           /* No inhibted particles: easy case */
           Nparticles = Nblackholes;
           black_holes_write_particles(bparts, list, &num_fields);
+          num_fields += chemistry_write_bparticles(bparts, list + num_fields);
+
           if (with_stf) {
             num_fields += velociraptor_write_bparts(bparts, list + num_fields);
           }
@@ -1687,7 +1690,7 @@ void write_output_parallel(struct engine* e, const char* baseName,
           if (swift_memalign("bparts_written", (void**)&bparts_written,
                              bpart_align,
                              Nblackholes_written * sizeof(struct bpart)) != 0)
-            error("Error while allocating temporart memory for bparts");
+            error("Error while allocating temporary memory for bparts");
 
           /* Collect the particles we want to write */
           io_collect_bparts_to_write(bparts, bparts_written, Nblackholes,
@@ -1695,6 +1698,8 @@ void write_output_parallel(struct engine* e, const char* baseName,
 
           /* Select the fields to write */
           black_holes_write_particles(bparts_written, list, &num_fields);
+          num_fields += chemistry_write_bparticles(bparts, list + num_fields);
+
           if (with_stf) {
             num_fields +=
                 velociraptor_write_bparts(bparts_written, list + num_fields);
diff --git a/src/part.c b/src/part.c
index 9394ab314b6c67cea832b2b5b96a8df76863ec8c..7d967caad18caf0c45d2a422fb4766d34f559d00 100644
--- a/src/part.c
+++ b/src/part.c
@@ -30,6 +30,7 @@
 
 /* Local headers */
 #include "error.h"
+#include "hydro.h"
 #include "part.h"
 
 /**
@@ -210,6 +211,13 @@ void part_verify_links(struct part *parts, struct gpart *gparts,
             part->x[1], part->x[2], gparts[k].x[0] - part->x[0],
             gparts[k].x[1] - part->x[1], gparts[k].x[2] - part->x[2]);
 
+      /* Check that the particles have the same mass */
+      if (gparts[k].mass != hydro_get_mass(part))
+        error(
+            "Linked particles do not have the same mass!\n"
+            "gp->m=%e p->m=%e",
+            gparts[k].mass, hydro_get_mass(part));
+
       /* Check that the particles are at the same time */
       if (gparts[k].time_bin != part->time_bin)
         error("Linked particles are not at the same time !");
@@ -237,6 +245,13 @@ void part_verify_links(struct part *parts, struct gpart *gparts,
             spart->x[1], spart->x[2], gparts[k].x[0] - spart->x[0],
             gparts[k].x[1] - spart->x[1], gparts[k].x[2] - spart->x[2]);
 
+      /* Check that the particles have the same mass */
+      if (gparts[k].mass != spart->mass)
+        error(
+            "Linked particles do not have the same mass!\n"
+            "gp->m=%e sp->m=%e",
+            gparts[k].mass, spart->mass);
+
       /* Check that the particles are at the same time */
       if (gparts[k].time_bin != spart->time_bin)
         error("Linked particles are not at the same time !");
@@ -264,6 +279,13 @@ void part_verify_links(struct part *parts, struct gpart *gparts,
             bpart->x[1], bpart->x[2], gparts[k].x[0] - bpart->x[0],
             gparts[k].x[1] - bpart->x[1], gparts[k].x[2] - bpart->x[2]);
 
+      /* Check that the particles have the same mass */
+      if (gparts[k].mass != bpart->mass)
+        error(
+            "Linked particles do not have the same mass!\n"
+            "gp->m=%e bp->m=%e",
+            gparts[k].mass, bpart->mass);
+
       /* Check that the particles are at the same time */
       if (gparts[k].time_bin != bpart->time_bin)
         error("Linked particles are not at the same time !");
@@ -287,6 +309,10 @@ void part_verify_links(struct part *parts, struct gpart *gparts,
           parts[k].x[2] != parts[k].gpart->x[2])
         error("Linked particles are not at the same position !");
 
+      /* Check that the particles have the same mass */
+      if (hydro_get_mass(&parts[k]) != parts[k].gpart->mass)
+        error("Linked particles do not have the same mass!\n");
+
       /* Check that the particles are at the same time */
       if (parts[k].time_bin != parts[k].gpart->time_bin)
         error("Linked particles are not at the same time !");
@@ -309,6 +335,10 @@ void part_verify_links(struct part *parts, struct gpart *gparts,
             sparts[k].x[2] != sparts[k].gpart->x[2])
           error("Linked particles are not at the same position !");
 
+        /* Check that the particles have the same mass */
+        if (sparts[k].mass != sparts[k].gpart->mass)
+          error("Linked particles do not have the same mass!\n");
+
         /* Check that the particles are at the same time */
         if (sparts[k].time_bin != sparts[k].gpart->time_bin)
           error("Linked particles are not at the same time !");
@@ -332,6 +362,10 @@ void part_verify_links(struct part *parts, struct gpart *gparts,
             bparts[k].x[2] != bparts[k].gpart->x[2])
           error("Linked particles are not at the same position !");
 
+        /* Check that the particles have the same mass */
+        if (bparts[k].mass != bparts[k].gpart->mass)
+          error("Linked particles do not have the same mass!\n");
+
         /* Check that the particles are at the same time */
         if (bparts[k].time_bin != bparts[k].gpart->time_bin)
           error("Linked particles are not at the same time !");
diff --git a/src/runner.c b/src/runner.c
index c1ac634eb752798005d551994a9306e3075052bc..622bd337170ed9939be168ec63d125bf67b9bf69 100644
--- a/src/runner.c
+++ b/src/runner.c
@@ -75,11 +75,13 @@
 #include "timestep_limiter.h"
 #include "tracers.h"
 
+/* Unique identifier of loop types */
 #define TASK_LOOP_DENSITY 0
 #define TASK_LOOP_GRADIENT 1
 #define TASK_LOOP_FORCE 2
 #define TASK_LOOP_LIMITER 3
 #define TASK_LOOP_FEEDBACK 4
+#define TASK_LOOP_SWALLOW 5
 
 /* Import the density loop functions. */
 #define FUNCTION density
@@ -135,6 +137,13 @@
 #undef FUNCTION_TASK_LOOP
 #undef FUNCTION
 
+/* Import the black hole feedback loop functions. */
+#define FUNCTION swallow
+#define FUNCTION_TASK_LOOP TASK_LOOP_SWALLOW
+#include "runner_doiact_black_holes.h"
+#undef FUNCTION_TASK_LOOP
+#undef FUNCTION
+
 /* Import the black hole feedback loop functions. */
 #define FUNCTION feedback
 #define FUNCTION_TASK_LOOP TASK_LOOP_FEEDBACK
@@ -559,11 +568,11 @@ void runner_do_stars_ghost(struct runner *r, struct cell *c, int timer) {
  * @param c The cell.
  * @param timer Are we timing this ?
  */
-void runner_do_black_holes_ghost(struct runner *r, struct cell *c, int timer) {
+void runner_do_black_holes_density_ghost(struct runner *r, struct cell *c,
+                                         int timer) {
 
   struct bpart *restrict bparts = c->black_holes.parts;
   const struct engine *e = r->e;
-  const int with_cosmology = e->policy & engine_policy_cosmology;
   const struct cosmology *cosmo = e->cosmology;
   const float black_holes_h_max = e->hydro_properties->h_max;
   const float black_holes_h_min = e->hydro_properties->h_min;
@@ -586,7 +595,7 @@ void runner_do_black_holes_ghost(struct runner *r, struct cell *c, int timer) {
   if (c->split) {
     for (int k = 0; k < 8; k++) {
       if (c->progeny[k] != NULL) {
-        runner_do_black_holes_ghost(r, c->progeny[k], 0);
+        runner_do_black_holes_density_ghost(r, c->progeny[k], 0);
 
         /* Update h_max */
         h_max = max(h_max, c->progeny[k]->black_holes.h_max);
@@ -682,25 +691,6 @@ void runner_do_black_holes_ghost(struct runner *r, struct cell *c, int timer) {
           if (((bp->h >= black_holes_h_max) && (f < 0.f)) ||
               ((bp->h <= black_holes_h_min) && (f > 0.f))) {
 
-            /* Get particle time-step */
-            double dt;
-            if (with_cosmology) {
-              const integertime_t ti_step = get_integer_timestep(bp->time_bin);
-              const integertime_t ti_begin =
-                  get_integer_time_begin(e->ti_current - 1, bp->time_bin);
-
-              dt = cosmology_get_delta_time(e->cosmology, ti_begin,
-                                            ti_begin + ti_step);
-            } else {
-              dt = get_timestep(bp->time_bin, e->time_base);
-            }
-
-            /* Compute variables required for the feedback loop */
-            black_holes_prepare_feedback(bp, e->black_holes_properties,
-                                         e->physical_constants, e->cosmology,
-                                         dt);
-
-            /* Reset quantities computed by the feedback loop */
             black_holes_reset_feedback(bp);
 
             /* Ok, we are done with this particle */
@@ -791,28 +781,10 @@ void runner_do_black_holes_ghost(struct runner *r, struct cell *c, int timer) {
 
         /* We now have a particle whose smoothing length has converged */
 
+        black_holes_reset_feedback(bp);
+
         /* Check if h_max has increased */
         h_max = max(h_max, bp->h);
-
-        /* Get particle time-step */
-        double dt;
-        if (with_cosmology) {
-          const integertime_t ti_step = get_integer_timestep(bp->time_bin);
-          const integertime_t ti_begin =
-              get_integer_time_begin(e->ti_current - 1, bp->time_bin);
-
-          dt = cosmology_get_delta_time(e->cosmology, ti_begin,
-                                        ti_begin + ti_step);
-        } else {
-          dt = get_timestep(bp->time_bin, e->time_base);
-        }
-
-        /* Compute variables required for the feedback loop */
-        black_holes_prepare_feedback(bp, e->black_holes_properties,
-                                     e->physical_constants, e->cosmology, dt);
-
-        /* Reset quantities computed by the feedback loop */
-        black_holes_reset_feedback(bp);
       }
 
       /* We now need to treat the particles whose smoothing length had not
@@ -888,7 +860,7 @@ void runner_do_black_holes_ghost(struct runner *r, struct cell *c, int timer) {
 
   /* The ghost may not always be at the top level.
    * Therefore we need to update h_max between the super- and top-levels */
-  if (c->black_holes.ghost) {
+  if (c->black_holes.density_ghost) {
     for (struct cell *tmp = c->parent; tmp != NULL; tmp = tmp->parent) {
       atomic_max_d(&tmp->black_holes.h_max, h_max);
     }
@@ -897,6 +869,65 @@ void runner_do_black_holes_ghost(struct runner *r, struct cell *c, int timer) {
   if (timer) TIMER_TOC(timer_do_black_holes_ghost);
 }
 
+/**
+ * @brief Intermediate task after the BHs have done their swallowing step.
+ * This is used to update the BH quantities if necessary.
+ *
+ * @param r The runner thread.
+ * @param c The cell.
+ * @param timer Are we timing this ?
+ */
+void runner_do_black_holes_swallow_ghost(struct runner *r, struct cell *c,
+                                         int timer) {
+
+  struct bpart *restrict bparts = c->black_holes.parts;
+  const int count = c->black_holes.count;
+  const struct engine *e = r->e;
+  const int with_cosmology = e->policy & engine_policy_cosmology;
+
+  TIMER_TIC;
+
+  /* Anything to do here? */
+  if (!cell_is_active_hydro(c, e)) return;
+
+  /* Recurse? */
+  if (c->split) {
+    for (int k = 0; k < 8; k++)
+      if (c->progeny[k] != NULL)
+        runner_do_black_holes_swallow_ghost(r, c->progeny[k], 0);
+  } else {
+
+    /* Loop over the parts in this cell. */
+    for (int i = 0; i < count; i++) {
+
+      /* Get a direct pointer on the part. */
+      struct bpart *bp = &bparts[i];
+
+      if (bpart_is_active(bp, e)) {
+
+        /* Get particle time-step */
+        double dt;
+        if (with_cosmology) {
+          const integertime_t ti_step = get_integer_timestep(bp->time_bin);
+          const integertime_t ti_begin =
+              get_integer_time_begin(e->ti_current - 1, bp->time_bin);
+
+          dt = cosmology_get_delta_time(e->cosmology, ti_begin,
+                                        ti_begin + ti_step);
+        } else {
+          dt = get_timestep(bp->time_bin, e->time_base);
+        }
+
+        /* Compute variables required for the feedback loop */
+        black_holes_prepare_feedback(bp, e->black_holes_properties,
+                                     e->physical_constants, e->cosmology, dt);
+      }
+    }
+  }
+
+  if (timer) TIMER_TOC(timer_do_black_holes_ghost);
+}
+
 /**
  * @brief Calculate gravity acceleration from external potential
  *
@@ -1194,13 +1225,13 @@ void runner_do_star_formation(struct runner *r, struct cell *c, int timer) {
  * @param sort The entries
  * @param N The number of entries.
  */
-void runner_do_sort_ascending(struct entry *sort, int N) {
+void runner_do_sort_ascending(struct sort_entry *sort, int N) {
 
   struct {
     short int lo, hi;
   } qstack[10];
   int qpos, i, j, lo, hi, imin;
-  struct entry temp;
+  struct sort_entry temp;
   float pivot;
 
   /* Sort parts in cell_i in decreasing order with quicksort */
@@ -1305,7 +1336,7 @@ RUNNER_CHECK_SORTS(stars)
 void runner_do_hydro_sort(struct runner *r, struct cell *c, int flags,
                           int cleanup, int clock) {
 
-  struct entry *fingers[8];
+  struct sort_entry *fingers[8];
   const int count = c->hydro.count;
   const struct part *parts = c->hydro.parts;
   struct xpart *xparts = c->hydro.xparts;
@@ -1417,7 +1448,7 @@ void runner_do_hydro_sort(struct runner *r, struct cell *c, int flags,
           }
 
       /* For each entry in the new sort list. */
-      struct entry *finger = c->hydro.sort[j];
+      struct sort_entry *finger = c->hydro.sort[j];
       for (int ind = 0; ind < count; ind++) {
 
         /* Copy the minimum into the new sort array. */
@@ -1496,7 +1527,7 @@ void runner_do_hydro_sort(struct runner *r, struct cell *c, int flags,
   /* Verify the sorting. */
   for (int j = 0; j < 13; j++) {
     if (!(flags & (1 << j))) continue;
-    struct entry *finger = c->hydro.sort[j];
+    struct sort_entry *finger = c->hydro.sort[j];
     for (int k = 1; k < count; k++) {
       if (finger[k].d < finger[k - 1].d)
         error("Sorting failed, ascending array.");
@@ -1538,7 +1569,7 @@ void runner_do_hydro_sort(struct runner *r, struct cell *c, int flags,
 void runner_do_stars_sort(struct runner *r, struct cell *c, int flags,
                           int cleanup, int clock) {
 
-  struct entry *fingers[8];
+  struct sort_entry *fingers[8];
   const int count = c->stars.count;
   struct spart *sparts = c->stars.parts;
   float buff[8];
@@ -1649,7 +1680,7 @@ void runner_do_stars_sort(struct runner *r, struct cell *c, int flags,
           }
 
       /* For each entry in the new sort list. */
-      struct entry *finger = c->stars.sort[j];
+      struct sort_entry *finger = c->stars.sort[j];
       for (int ind = 0; ind < count; ind++) {
 
         /* Copy the minimum into the new sort array. */
@@ -1722,7 +1753,7 @@ void runner_do_stars_sort(struct runner *r, struct cell *c, int flags,
   /* Verify the sorting. */
   for (int j = 0; j < 13; j++) {
     if (!(flags & (1 << j))) continue;
-    struct entry *finger = c->stars.sort[j];
+    struct sort_entry *finger = c->stars.sort[j];
     for (int k = 1; k < count; k++) {
       if (finger[k].d < finger[k - 1].d)
         error("Sorting failed, ascending array.");
@@ -3323,11 +3354,6 @@ void runner_do_timestep(struct runner *r, struct cell *c, int timer) {
         s_updated += cp->stars.updated;
         b_updated += cp->black_holes.updated;
 
-        inhibited += cp->hydro.inhibited;
-        g_inhibited += cp->grav.inhibited;
-        s_inhibited += cp->stars.inhibited;
-        b_inhibited += cp->black_holes.inhibited;
-
         ti_hydro_end_min = min(cp->hydro.ti_end_min, ti_hydro_end_min);
         ti_hydro_end_max = max(cp->hydro.ti_end_max, ti_hydro_end_max);
         ti_hydro_beg_max = max(cp->hydro.ti_beg_max, ti_hydro_beg_max);
@@ -3356,11 +3382,6 @@ void runner_do_timestep(struct runner *r, struct cell *c, int timer) {
   c->stars.updated = s_updated;
   c->black_holes.updated = b_updated;
 
-  c->hydro.inhibited = inhibited;
-  c->grav.inhibited = g_inhibited;
-  c->stars.inhibited = s_inhibited;
-  c->black_holes.inhibited = b_inhibited;
-
   c->hydro.ti_end_min = ti_hydro_end_min;
   c->hydro.ti_end_max = ti_hydro_end_max;
   c->hydro.ti_beg_max = ti_hydro_beg_max;
@@ -3655,7 +3676,8 @@ void runner_do_end_grav_force(struct runner *r, struct cell *c, int timer) {
 #endif
 
 #ifdef SWIFT_DEBUG_CHECKS
-        if (e->policy & engine_policy_self_gravity) {
+        if ((e->policy & engine_policy_self_gravity) &&
+            !(e->policy & engine_policy_black_holes)) {
 
           /* Let's add a self interaction to simplify the count */
           gp->num_interacted++;
@@ -3692,6 +3714,212 @@ void runner_do_end_grav_force(struct runner *r, struct cell *c, int timer) {
   if (timer) TIMER_TOC(timer_end_grav_force);
 }
 
+/**
+ * @brief Process all the gas particles in a cell that have been flagged for
+ * swallowing by a black hole.
+ *
+ * This is done by recursing down to the leaf-level and skipping the sub-cells
+ * that have not been drifted as they would not have any particles with
+ * swallowing flag. We then loop over the particles with a flag and look into
+ * the space-wide list of black holes for the particle with the corresponding
+ * ID. If found, the BH swallows the gas particle and the gas particle is
+ * removed. If the cell is local, we may be looking for a foreign BH, in which
+ * case, we do not update the BH (that will be done on its node) but just remove
+ * the gas particle.
+ *
+ * @param r The thread #runner.
+ * @param c The #cell.
+ * @param timer Are we timing this?
+ */
+void runner_do_swallow(struct runner *r, struct cell *c, int timer) {
+
+  struct engine *e = r->e;
+  struct space *s = e->s;
+  struct bpart *bparts = s->bparts;
+  const size_t nr_bpart = s->nr_bparts;
+#ifdef WITH_MPI
+  struct bpart *bparts_foreign = s->bparts_foreign;
+  const size_t nr_bparts_foreign = s->nr_bparts_foreign;
+#endif
+
+  struct part *parts = c->hydro.parts;
+  struct xpart *xparts = c->hydro.xparts;
+
+  /* Early abort?
+   * (We only want cells for which we drifted the gas as these are
+   * the only ones that could have gas particles that have been flagged
+   * for swallowing) */
+  if (c->hydro.count == 0 || c->hydro.ti_old_part != e->ti_current) {
+    return;
+  }
+
+  /* Loop over the progeny ? */
+  if (c->split) {
+    for (int k = 0; k < 8; k++) {
+      if (c->progeny[k] != NULL) {
+        struct cell *restrict cp = c->progeny[k];
+
+        runner_do_swallow(r, cp, 0);
+      }
+    }
+  } else {
+
+    /* Loop over all the gas particles in the cell
+     * Note that the cell (and hence the parts) may be local or foreign. */
+    const size_t nr_parts = c->hydro.count;
+    for (size_t k = 0; k < nr_parts; k++) {
+
+      /* Get a handle on the part. */
+      struct part *const p = &parts[k];
+      struct xpart *const xp = &xparts[k];
+
+      /* Ignore inhibited particles (they have already been removed!) */
+      if (part_is_inhibited(p, e)) continue;
+
+      /* Get the ID of the black hole that will swallow this part */
+      const long long swallow_id =
+          black_holes_get_swallow_id(&p->black_holes_data);
+
+      /* Has this particle been flagged for swallowing? */
+      if (swallow_id >= 0) {
+
+#ifdef SWIFT_DEBUG_CHECKS
+        if (p->ti_drift != e->ti_current)
+          error("Trying to swallow an un-drifted particle.");
+#endif
+
+        /* ID of the BH swallowing this particle */
+        const long long BH_id = swallow_id;
+
+        /* Have we found this particle's BH already? */
+        int found = 0;
+
+        /* Let's look for the hungry black hole in the local list */
+        for (size_t i = 0; i < nr_bpart; ++i) {
+
+          /* Get a handle on the bpart. */
+          struct bpart *bp = &bparts[i];
+
+          if (bp->id == BH_id) {
+
+            /* Lock the space as we are going to work directly on the bpart
+             * list */
+            if (lock_lock(&s->lock) != 0) error("Failed to lock the space.");
+
+            /* Swallow the gas particle (i.e. update the BH properties) */
+            black_holes_swallow_part(bp, p, xp, e->cosmology);
+
+            /* Release the space as we are done updating the bpart */
+            if (lock_unlock(&s->lock) != 0)
+              error("Failed to unlock the space.");
+
+            message("BH %lld swallowing particle %lld", bp->id, p->id);
+
+            /* If the gas particle is local, remove it */
+            if (c->nodeID == e->nodeID) {
+
+              message("BH %lld removing particle %lld", bp->id, p->id);
+
+              /* Finally, remove the gas particle from the system */
+              struct gpart *gp = p->gpart;
+              cell_remove_part(e, c, p, xp);
+              cell_remove_gpart(e, c, gp);
+            }
+
+            /* In any case, prevent the particle from being re-swallowed */
+            black_holes_mark_as_swallowed(&p->black_holes_data);
+
+            found = 1;
+            break;
+          }
+
+        } /* Loop over local BHs */
+
+#ifdef WITH_MPI
+
+        /* We could also be in the case of a local gas particle being
+         * swallowed by a foreign BH. In this case, we won't update the
+         * BH but just remove the particle from the local list. */
+        if (c->nodeID == e->nodeID && !found) {
+
+          /* Let's look for the foreign hungry black hole */
+          for (size_t i = 0; i < nr_bparts_foreign; ++i) {
+
+            /* Get a handle on the bpart. */
+            struct bpart *bp = &bparts_foreign[i];
+
+            if (bp->id == BH_id) {
+
+              message("BH %lld removing particle %lld (foreign BH case)",
+                      bp->id, p->id);
+
+              /* Finally, remove the gas particle from the system */
+              struct gpart *gp = p->gpart;
+              cell_remove_part(e, c, p, xp);
+              cell_remove_gpart(e, c, gp);
+
+              found = 1;
+              break;
+            }
+          } /* Loop over foreign BHs */
+        }   /* Is the cell local? */
+#endif
+
+        /* If we have a local particle, we must have found the BH in one
+         * of our lists of black holes. */
+        if (c->nodeID == e->nodeID && !found) {
+          error("Gas particle %lld could not find BH %lld to be swallowed",
+                p->id, swallow_id);
+        }
+      } /* Part was flagged for swallowing */
+    }   /* Loop over the parts */
+  }     /* Cell is not split */
+}
+
+/**
+ * @brief Processing of gas particles to swallow - self task case.
+ *
+ * @param r The thread #runner.
+ * @param c The #cell.
+ * @param timer Are we timing this?
+ */
+void runner_do_swallow_self(struct runner *r, struct cell *c, int timer) {
+
+#ifdef SWIFT_DEBUG_CHECKS
+  if (c->nodeID != r->e->nodeID) error("Running self task on foreign node");
+  if (!cell_is_active_black_holes(c, r->e))
+    error("Running self task on inactive cell");
+#endif
+
+  runner_do_swallow(r, c, timer);
+}
+
+/**
+ * @brief Processing of gas particles to swallow - pair task case.
+ *
+ * @param r The thread #runner.
+ * @param ci First #cell.
+ * @param cj Second #cell.
+ * @param timer Are we timing this?
+ */
+void runner_do_swallow_pair(struct runner *r, struct cell *ci, struct cell *cj,
+                            int timer) {
+
+  const struct engine *e = r->e;
+
+#ifdef SWIFT_DEBUG_CHECKS
+  if (ci->nodeID != e->nodeID && cj->nodeID != e->nodeID)
+    error("Running pair task on foreign node");
+  if (!cell_is_active_black_holes(ci, e) && !cell_is_active_black_holes(cj, e))
+    error("Running pair task with two inactive cells");
+#endif
+
+  /* Run the swallowing loop only in the cell that is the neighbour of the
+   * active BH */
+  if (cell_is_active_black_holes(cj, e)) runner_do_swallow(r, ci, timer);
+  if (cell_is_active_black_holes(ci, e)) runner_do_swallow(r, cj, timer);
+}
+
 /**
  * @brief Construct the cell properties from the received #part.
  *
@@ -4080,9 +4308,11 @@ void *runner_main(void *data) {
       }
 #endif
 
-/* Check that we haven't scheduled an inactive task */
 #ifdef SWIFT_DEBUG_CHECKS
+      /* Check that we haven't scheduled an inactive task */
       t->ti_run = e->ti_current;
+      /* Store the task that will be running (for debugging only) */
+      r->t = t;
 #endif
 
       /* Different types of tasks... */
@@ -4108,6 +4338,10 @@ void *runner_main(void *data) {
             runner_doself_branch_stars_feedback(r, ci);
           else if (t->subtype == task_subtype_bh_density)
             runner_doself_branch_bh_density(r, ci);
+          else if (t->subtype == task_subtype_bh_swallow)
+            runner_doself_branch_bh_swallow(r, ci);
+          else if (t->subtype == task_subtype_do_swallow)
+            runner_do_swallow_self(r, ci, 1);
           else if (t->subtype == task_subtype_bh_feedback)
             runner_doself_branch_bh_feedback(r, ci);
           else
@@ -4133,7 +4367,11 @@ void *runner_main(void *data) {
             runner_dopair_branch_stars_feedback(r, ci, cj);
           else if (t->subtype == task_subtype_bh_density)
             runner_dopair_branch_bh_density(r, ci, cj);
-          else if (t->subtype == task_subtype_bh_feedback)
+          else if (t->subtype == task_subtype_bh_swallow)
+            runner_dopair_branch_bh_swallow(r, ci, cj);
+          else if (t->subtype == task_subtype_do_swallow) {
+            runner_do_swallow_pair(r, ci, cj, 1);
+          } else if (t->subtype == task_subtype_bh_feedback)
             runner_dopair_branch_bh_feedback(r, ci, cj);
           else
             error("Unknown/invalid task subtype (%d).", t->subtype);
@@ -4156,6 +4394,10 @@ void *runner_main(void *data) {
             runner_dosub_self_stars_feedback(r, ci, 1);
           else if (t->subtype == task_subtype_bh_density)
             runner_dosub_self_bh_density(r, ci, 1);
+          else if (t->subtype == task_subtype_bh_swallow)
+            runner_dosub_self_bh_swallow(r, ci, 1);
+          else if (t->subtype == task_subtype_do_swallow)
+            runner_do_swallow_self(r, ci, 1);
           else if (t->subtype == task_subtype_bh_feedback)
             runner_dosub_self_bh_feedback(r, ci, 1);
           else
@@ -4179,7 +4421,11 @@ void *runner_main(void *data) {
             runner_dosub_pair_stars_feedback(r, ci, cj, 1);
           else if (t->subtype == task_subtype_bh_density)
             runner_dosub_pair_bh_density(r, ci, cj, 1);
-          else if (t->subtype == task_subtype_bh_feedback)
+          else if (t->subtype == task_subtype_bh_swallow)
+            runner_dosub_pair_bh_swallow(r, ci, cj, 1);
+          else if (t->subtype == task_subtype_do_swallow) {
+            runner_do_swallow_pair(r, ci, cj, 1);
+          } else if (t->subtype == task_subtype_bh_feedback)
             runner_dosub_pair_bh_feedback(r, ci, cj, 1);
           else
             error("Unknown/invalid task subtype (%d).", t->subtype);
@@ -4215,8 +4461,11 @@ void *runner_main(void *data) {
         case task_type_stars_ghost:
           runner_do_stars_ghost(r, ci, 1);
           break;
-        case task_type_bh_ghost:
-          runner_do_black_holes_ghost(r, ci, 1);
+        case task_type_bh_density_ghost:
+          runner_do_black_holes_density_ghost(r, ci, 1);
+          break;
+        case task_type_bh_swallow_ghost2:
+          runner_do_black_holes_swallow_ghost(r, ci, 1);
           break;
         case task_type_drift_part:
           runner_do_drift_part(r, ci, 1);
@@ -4263,6 +4512,8 @@ void *runner_main(void *data) {
             free(t->buff);
           } else if (t->subtype == task_subtype_sf_counts) {
             free(t->buff);
+          } else if (t->subtype == task_subtype_part_swallow) {
+            free(t->buff);
           }
           break;
         case task_type_recv:
@@ -4289,14 +4540,22 @@ void *runner_main(void *data) {
             runner_do_recv_part(r, ci, 0, 1);
           } else if (t->subtype == task_subtype_gradient) {
             runner_do_recv_part(r, ci, 0, 1);
+          } else if (t->subtype == task_subtype_part_swallow) {
+            cell_unpack_part_swallow(ci,
+                                     (struct black_holes_part_data *)t->buff);
+            free(t->buff);
           } else if (t->subtype == task_subtype_limiter) {
             runner_do_recv_part(r, ci, 0, 1);
           } else if (t->subtype == task_subtype_gpart) {
             runner_do_recv_gpart(r, ci, 1);
           } else if (t->subtype == task_subtype_spart) {
             runner_do_recv_spart(r, ci, 1, 1);
-          } else if (t->subtype == task_subtype_bpart) {
+          } else if (t->subtype == task_subtype_bpart_rho) {
             runner_do_recv_bpart(r, ci, 1, 1);
+          } else if (t->subtype == task_subtype_bpart_swallow) {
+            runner_do_recv_bpart(r, ci, 0, 1);
+          } else if (t->subtype == task_subtype_bpart_feedback) {
+            runner_do_recv_bpart(r, ci, 0, 1);
           } else if (t->subtype == task_subtype_multipole) {
             cell_unpack_multipoles(ci, (struct gravity_tensors *)t->buff);
             free(t->buff);
@@ -4343,6 +4602,9 @@ void *runner_main(void *data) {
         cj->tasks_executed[t->type]++;
         cj->subtasks_executed[t->subtype]++;
       }
+
+      /* This runner is not doing a task anymore */
+      r->t = NULL;
 #endif
 
       /* We're done with this task, see if we get a next one. */
diff --git a/src/runner.h b/src/runner.h
index c7b923ed262965e9dd7566dd093084c6f0948079..1dc62ad6f5dc1c92851cf841a4ab55836d084bac 100644
--- a/src/runner.h
+++ b/src/runner.h
@@ -29,6 +29,7 @@
 /* Includes. */
 #include "cache.h"
 #include "gravity_cache.h"
+#include "task.h"
 
 struct cell;
 struct engine;
@@ -64,6 +65,11 @@ struct runner {
   /*! The particle cache of cell cj. */
   struct cache cj_cache;
 #endif
+
+#ifdef SWIFT_DEBUG_CHECKS
+  /*! Pointer to the task this runner is currently performing */
+  const struct task *t;
+#endif
 };
 
 /* Function prototypes. */
diff --git a/src/runner_doiact.h b/src/runner_doiact.h
index 635d41a95d320dd99eb806c9aec61127e8c7e42d..6caa287cf726f85778ef5abdc184acaf759e8b0e 100644
--- a/src/runner_doiact.h
+++ b/src/runner_doiact.h
@@ -681,7 +681,7 @@ void DOPAIR_SUBSET(struct runner *r, struct cell *restrict ci,
   const float H = cosmo->H;
 
   /* Pick-out the sorted lists. */
-  const struct entry *restrict sort_j = cj->hydro.sort[sid];
+  const struct sort_entry *restrict sort_j = cj->hydro.sort[sid];
   const float dxj = cj->hydro.dx_max_sort;
 
   /* Parts are on the left? */
@@ -984,8 +984,8 @@ void DOPAIR1(struct runner *r, struct cell *ci, struct cell *cj, const int sid,
   for (int k = 0; k < 3; k++) rshift += shift[k] * runner_shift[sid][k];
 
   /* Pick-out the sorted lists. */
-  const struct entry *restrict sort_i = ci->hydro.sort[sid];
-  const struct entry *restrict sort_j = cj->hydro.sort[sid];
+  const struct sort_entry *restrict sort_i = ci->hydro.sort[sid];
+  const struct sort_entry *restrict sort_j = cj->hydro.sort[sid];
 
 #ifdef SWIFT_DEBUG_CHECKS
   /* Some constants used to checks that the parts are in the right frame */
@@ -1231,8 +1231,8 @@ void DOPAIR1_BRANCH(struct runner *r, struct cell *ci, struct cell *cj) {
 
 #ifdef SWIFT_DEBUG_CHECKS
   /* Pick-out the sorted lists. */
-  const struct entry *restrict sort_i = ci->hydro.sort[sid];
-  const struct entry *restrict sort_j = cj->hydro.sort[sid];
+  const struct sort_entry *restrict sort_i = ci->hydro.sort[sid];
+  const struct sort_entry *restrict sort_j = cj->hydro.sort[sid];
 
   /* Check that the dx_max_sort values in the cell are indeed an upper
      bound on particle movement. */
@@ -1309,8 +1309,8 @@ void DOPAIR2(struct runner *r, struct cell *ci, struct cell *cj, const int sid,
   for (int k = 0; k < 3; k++) rshift += shift[k] * runner_shift[sid][k];
 
   /* Pick-out the sorted lists. */
-  struct entry *restrict sort_i = ci->hydro.sort[sid];
-  struct entry *restrict sort_j = cj->hydro.sort[sid];
+  struct sort_entry *restrict sort_i = ci->hydro.sort[sid];
+  struct sort_entry *restrict sort_j = cj->hydro.sort[sid];
 
 #ifdef SWIFT_DEBUG_CHECKS
   /* Some constants used to checks that the parts are in the right frame */
@@ -1350,7 +1350,8 @@ void DOPAIR2(struct runner *r, struct cell *ci, struct cell *cj, const int sid,
   const double shift_j[3] = {cj->loc[0], cj->loc[1], cj->loc[2]};
 
   int count_active_i = 0, count_active_j = 0;
-  struct entry *restrict sort_active_i = NULL, *restrict sort_active_j = NULL;
+  struct sort_entry *restrict sort_active_i = NULL,
+                              *restrict sort_active_j = NULL;
 
   if (cell_is_all_active_hydro(ci, e)) {
     /* If everybody is active don't bother copying */
@@ -1358,7 +1359,7 @@ void DOPAIR2(struct runner *r, struct cell *ci, struct cell *cj, const int sid,
     count_active_i = count_i;
   } else if (cell_is_active_hydro(ci, e)) {
     if (posix_memalign((void **)&sort_active_i, SWIFT_CACHE_ALIGNMENT,
-                       sizeof(struct entry) * count_i) != 0)
+                       sizeof(struct sort_entry) * count_i) != 0)
       error("Failed to allocate active sortlists.");
 
     /* Collect the active particles in ci */
@@ -1376,7 +1377,7 @@ void DOPAIR2(struct runner *r, struct cell *ci, struct cell *cj, const int sid,
     count_active_j = count_j;
   } else if (cell_is_active_hydro(cj, e)) {
     if (posix_memalign((void **)&sort_active_j, SWIFT_CACHE_ALIGNMENT,
-                       sizeof(struct entry) * count_j) != 0)
+                       sizeof(struct sort_entry) * count_j) != 0)
       error("Failed to allocate active sortlists.");
 
     /* Collect the active particles in cj */
@@ -1423,6 +1424,14 @@ void DOPAIR2(struct runner *r, struct cell *ci, struct cell *cj, const int sid,
 
         /* Recover pj */
         struct part *pj = &parts_j[sort_active_j[pjd].i];
+
+        /* Skip inhibited particles.
+         * Note that we are looping over active particles, but in the case
+         * where the cell thinks all its particles are active (because of
+         * ti_end_max), particles may nevertheless have been inhibited by BH
+         * swallowing in the meantime. */
+        if (part_is_inhibited(pj, e)) continue;
+
         const float hj = pj->h;
 
         /* Get the position of pj in the right frame */
@@ -1464,6 +1473,7 @@ void DOPAIR2(struct runner *r, struct cell *ci, struct cell *cj, const int sid,
         /* Check that particles have been drifted to the current time */
         if (pi->ti_drift != e->ti_current)
           error("Particle pi not drifted to current time");
+
         if (pj->ti_drift != e->ti_current)
           error("Particle pj not drifted to current time");
 #endif
@@ -1532,6 +1542,7 @@ void DOPAIR2(struct runner *r, struct cell *ci, struct cell *cj, const int sid,
         /* Check that particles have been drifted to the current time */
         if (pi->ti_drift != e->ti_current)
           error("Particle pi not drifted to current time");
+
         if (pj->ti_drift != e->ti_current)
           error("Particle pj not drifted to current time");
 #endif
@@ -1593,6 +1604,14 @@ void DOPAIR2(struct runner *r, struct cell *ci, struct cell *cj, const int sid,
 
         /* Recover pi */
         struct part *pi = &parts_i[sort_active_i[pid].i];
+
+        /* Skip inhibited particles.
+         * Note that we are looping over active particles, but in the case
+         * where the cell thinks all its particles are active (because of
+         * ti_end_max), particles may nevertheless have been inhibited by BH
+         * swallowing in the meantime. */
+        if (part_is_inhibited(pi, e)) continue;
+
         const float hi = pi->h;
         const float hig2 = hi * hi * kernel_gamma2;
 
@@ -1778,8 +1797,8 @@ void DOPAIR2_BRANCH(struct runner *r, struct cell *ci, struct cell *cj) {
 
 #ifdef SWIFT_DEBUG_CHECKS
   /* Pick-out the sorted lists. */
-  const struct entry *restrict sort_i = ci->hydro.sort[sid];
-  const struct entry *restrict sort_j = cj->hydro.sort[sid];
+  const struct sort_entry *restrict sort_i = ci->hydro.sort[sid];
+  const struct sort_entry *restrict sort_j = cj->hydro.sort[sid];
 
   /* Check that the dx_max_sort values in the cell are indeed an upper
      bound on particle movement. */
diff --git a/src/runner_doiact_black_holes.h b/src/runner_doiact_black_holes.h
index efd2d02b3aca3c530b4e1af13e38328368c99f7c..b2232481a027b6d5e51fc061114005a2bf08c527 100644
--- a/src/runner_doiact_black_holes.h
+++ b/src/runner_doiact_black_holes.h
@@ -112,19 +112,19 @@ void DOSELF1_BH(struct runner *r, struct cell *c, int timer) {
   struct xpart *restrict xparts = c->hydro.xparts;
 
   /* Loop over the bparts in ci. */
-  for (int sid = 0; sid < bcount; sid++) {
+  for (int bid = 0; bid < bcount; bid++) {
 
     /* Get a hold of the ith bpart in ci. */
-    struct bpart *restrict si = &bparts[sid];
+    struct bpart *restrict bi = &bparts[bid];
 
     /* Skip inactive particles */
-    if (!bpart_is_active(si, e)) continue;
+    if (!bpart_is_active(bi, e)) continue;
 
-    const float hi = si->h;
+    const float hi = bi->h;
     const float hig2 = hi * hi * kernel_gamma2;
-    const float six[3] = {(float)(si->x[0] - c->loc[0]),
-                          (float)(si->x[1] - c->loc[1]),
-                          (float)(si->x[2] - c->loc[2])};
+    const float bix[3] = {(float)(bi->x[0] - c->loc[0]),
+                          (float)(bi->x[1] - c->loc[1]),
+                          (float)(bi->x[2] - c->loc[2])};
 
     /* Loop over the parts in cj. */
     for (int pjd = 0; pjd < count; pjd++) {
@@ -141,7 +141,7 @@ void DOSELF1_BH(struct runner *r, struct cell *c, int timer) {
       const float pjx[3] = {(float)(pj->x[0] - c->loc[0]),
                             (float)(pj->x[1] - c->loc[1]),
                             (float)(pj->x[2] - c->loc[2])};
-      float dx[3] = {six[0] - pjx[0], six[1] - pjx[1], six[2] - pjx[2]};
+      float dx[3] = {bix[0] - pjx[0], bix[1] - pjx[1], bix[2] - pjx[2]};
       const float r2 = dx[0] * dx[0] + dx[1] * dx[1] + dx[2] * dx[2];
 
 #ifdef SWIFT_DEBUG_CHECKS
@@ -151,7 +151,7 @@ void DOSELF1_BH(struct runner *r, struct cell *c, int timer) {
 #endif
 
       if (r2 < hig2) {
-        IACT_BH(r2, dx, hi, hj, si, pj, xpj, cosmo, ti_current);
+        IACT_BH(r2, dx, hi, hj, bi, pj, xpj, cosmo, ti_current);
       }
     } /* loop over the parts in ci. */
   }   /* loop over the bparts in ci. */
@@ -172,7 +172,7 @@ void DO_NONSYM_PAIR1_BH_NAIVE(struct runner *r, struct cell *restrict ci,
 #ifdef SWIFT_DEBUG_CHECKS
 #if (FUNCTION_TASK_LOOP == TASK_LOOP_DENSITY)
   if (ci->nodeID != engine_rank) error("Should be run on a different node");
-#else
+#elif (FUNCTION_TASK_LOOP == TASK_LOOP_FEEDBACK)
   if (cj->nodeID != engine_rank) error("Should be run on a different node");
 #endif
 #endif
@@ -201,19 +201,19 @@ void DO_NONSYM_PAIR1_BH_NAIVE(struct runner *r, struct cell *restrict ci,
   }
 
   /* Loop over the bparts in ci. */
-  for (int sid = 0; sid < bcount_i; sid++) {
+  for (int bid = 0; bid < bcount_i; bid++) {
 
     /* Get a hold of the ith bpart in ci. */
-    struct bpart *restrict si = &bparts_i[sid];
+    struct bpart *restrict bi = &bparts_i[bid];
 
     /* Skip inactive particles */
-    if (!bpart_is_active(si, e)) continue;
+    if (!bpart_is_active(bi, e)) continue;
 
-    const float hi = si->h;
+    const float hi = bi->h;
     const float hig2 = hi * hi * kernel_gamma2;
-    const float six[3] = {(float)(si->x[0] - (cj->loc[0] + shift[0])),
-                          (float)(si->x[1] - (cj->loc[1] + shift[1])),
-                          (float)(si->x[2] - (cj->loc[2] + shift[2]))};
+    const float bix[3] = {(float)(bi->x[0] - (cj->loc[0] + shift[0])),
+                          (float)(bi->x[1] - (cj->loc[1] + shift[1])),
+                          (float)(bi->x[2] - (cj->loc[2] + shift[2]))};
 
     /* Loop over the parts in cj. */
     for (int pjd = 0; pjd < count_j; pjd++) {
@@ -230,7 +230,7 @@ void DO_NONSYM_PAIR1_BH_NAIVE(struct runner *r, struct cell *restrict ci,
       const float pjx[3] = {(float)(pj->x[0] - cj->loc[0]),
                             (float)(pj->x[1] - cj->loc[1]),
                             (float)(pj->x[2] - cj->loc[2])};
-      float dx[3] = {six[0] - pjx[0], six[1] - pjx[1], six[2] - pjx[2]};
+      float dx[3] = {bix[0] - pjx[0], bix[1] - pjx[1], bix[2] - pjx[2]};
       const float r2 = dx[0] * dx[0] + dx[1] * dx[1] + dx[2] * dx[2];
 
 #ifdef SWIFT_DEBUG_CHECKS
@@ -240,7 +240,7 @@ void DO_NONSYM_PAIR1_BH_NAIVE(struct runner *r, struct cell *restrict ci,
 #endif
 
       if (r2 < hig2) {
-        IACT_BH(r2, dx, hi, hj, si, pj, xpj, cosmo, ti_current);
+        IACT_BH(r2, dx, hi, hj, bi, pj, xpj, cosmo, ti_current);
       }
     } /* loop over the parts in cj. */
   }   /* loop over the parts in ci. */
@@ -254,11 +254,16 @@ void DOPAIR1_BH_NAIVE(struct runner *r, struct cell *restrict ci,
 #if (FUNCTION_TASK_LOOP == TASK_LOOP_DENSITY)
   const int do_ci_bh = ci->nodeID == r->e->nodeID;
   const int do_cj_bh = cj->nodeID == r->e->nodeID;
-#else
+#elif (FUNCTION_TASK_LOOP == TASK_LOOP_FEEDBACK)
   /* here we are updating the hydro -> switch ci, cj */
   const int do_ci_bh = cj->nodeID == r->e->nodeID;
   const int do_cj_bh = ci->nodeID == r->e->nodeID;
+#else
+  /* The swallow task is executed on both sides */
+  const int do_ci_bh = 1;
+  const int do_cj_bh = 1;
 #endif
+
   if (do_ci_bh && ci->black_holes.count != 0 && cj->hydro.count != 0)
     DO_NONSYM_PAIR1_BH_NAIVE(r, ci, cj);
   if (do_cj_bh && cj->black_holes.count != 0 && ci->hydro.count != 0)
@@ -302,19 +307,19 @@ void DOPAIR1_SUBSET_BH_NAIVE(struct runner *r, struct cell *restrict ci,
   if (count_j == 0) return;
 
   /* Loop over the parts_i. */
-  for (int pid = 0; pid < bcount; pid++) {
+  for (int bid = 0; bid < bcount; bid++) {
 
     /* Get a hold of the ith part in ci. */
-    struct bpart *restrict bpi = &bparts_i[ind[pid]];
+    struct bpart *restrict bi = &bparts_i[ind[bid]];
 
-    const double pix = bpi->x[0] - (shift[0]);
-    const double piy = bpi->x[1] - (shift[1]);
-    const double piz = bpi->x[2] - (shift[2]);
-    const float hi = bpi->h;
+    const double bix = bi->x[0] - (shift[0]);
+    const double biy = bi->x[1] - (shift[1]);
+    const double biz = bi->x[2] - (shift[2]);
+    const float hi = bi->h;
     const float hig2 = hi * hi * kernel_gamma2;
 
 #ifdef SWIFT_DEBUG_CHECKS
-    if (!bpart_is_active(bpi, e))
+    if (!bpart_is_active(bi, e))
       error("Trying to correct smoothing length of inactive particle !");
 #endif
 
@@ -334,8 +339,8 @@ void DOPAIR1_SUBSET_BH_NAIVE(struct runner *r, struct cell *restrict ci,
       const float hj = pj->h;
 
       /* Compute the pairwise distance. */
-      float dx[3] = {(float)(pix - pjx), (float)(piy - pjy),
-                     (float)(piz - pjz)};
+      float dx[3] = {(float)(bix - pjx), (float)(biy - pjy),
+                     (float)(biz - pjz)};
       const float r2 = dx[0] * dx[0] + dx[1] * dx[1] + dx[2] * dx[2];
 
 #ifdef SWIFT_DEBUG_CHECKS
@@ -345,7 +350,7 @@ void DOPAIR1_SUBSET_BH_NAIVE(struct runner *r, struct cell *restrict ci,
 #endif
       /* Hit or miss? */
       if (r2 < hig2) {
-        IACT_BH(r2, dx, hi, hj, bpi, pj, xpj, cosmo, ti_current);
+        IACT_BH(r2, dx, hi, hj, bi, pj, xpj, cosmo, ti_current);
       }
     } /* loop over the parts in cj. */
   }   /* loop over the parts in ci. */
@@ -381,19 +386,18 @@ void DOSELF1_SUBSET_BH(struct runner *r, struct cell *restrict ci,
   if (count_i == 0) return;
 
   /* Loop over the parts in ci. */
-  for (int bpid = 0; bpid < bcount; bpid++) {
+  for (int bid = 0; bid < bcount; bid++) {
 
     /* Get a hold of the ith part in ci. */
-    struct bpart *bpi = &bparts[ind[bpid]];
-    const float bpix[3] = {(float)(bpi->x[0] - ci->loc[0]),
-                           (float)(bpi->x[1] - ci->loc[1]),
-                           (float)(bpi->x[2] - ci->loc[2])};
-    const float hi = bpi->h;
+    struct bpart *bi = &bparts[ind[bid]];
+    const float bix[3] = {(float)(bi->x[0] - ci->loc[0]),
+                          (float)(bi->x[1] - ci->loc[1]),
+                          (float)(bi->x[2] - ci->loc[2])};
+    const float hi = bi->h;
     const float hig2 = hi * hi * kernel_gamma2;
 
 #ifdef SWIFT_DEBUG_CHECKS
-    if (!bpart_is_active(bpi, e))
-      error("Inactive particle in subset function!");
+    if (!bpart_is_active(bi, e)) error("Inactive particle in subset function!");
 #endif
 
     /* Loop over the parts in cj. */
@@ -410,7 +414,7 @@ void DOSELF1_SUBSET_BH(struct runner *r, struct cell *restrict ci,
       const float pjx[3] = {(float)(pj->x[0] - ci->loc[0]),
                             (float)(pj->x[1] - ci->loc[1]),
                             (float)(pj->x[2] - ci->loc[2])};
-      float dx[3] = {bpix[0] - pjx[0], bpix[1] - pjx[1], bpix[2] - pjx[2]};
+      float dx[3] = {bix[0] - pjx[0], bix[1] - pjx[1], bix[2] - pjx[2]};
       const float r2 = dx[0] * dx[0] + dx[1] * dx[1] + dx[2] * dx[2];
 
 #ifdef SWIFT_DEBUG_CHECKS
@@ -421,7 +425,7 @@ void DOSELF1_SUBSET_BH(struct runner *r, struct cell *restrict ci,
 
       /* Hit or miss? */
       if (r2 < hig2) {
-        IACT_BH(r2, dx, hi, pj->h, bpi, pj, xpj, cosmo, ti_current);
+        IACT_BH(r2, dx, hi, pj->h, bi, pj, xpj, cosmo, ti_current);
       }
     } /* loop over the parts in cj. */
   }   /* loop over the parts in ci. */
@@ -608,11 +612,16 @@ void DOPAIR1_BRANCH_BH(struct runner *r, struct cell *ci, struct cell *cj) {
 #if (FUNCTION_TASK_LOOP == TASK_LOOP_DENSITY)
   const int do_ci_bh = ci->nodeID == e->nodeID;
   const int do_cj_bh = cj->nodeID == e->nodeID;
-#else
+#elif (FUNCTION_TASK_LOOP == TASK_LOOP_FEEDBACK)
   /* here we are updating the hydro -> switch ci, cj */
   const int do_ci_bh = cj->nodeID == e->nodeID;
   const int do_cj_bh = ci->nodeID == e->nodeID;
+#else
+  /* The swallow task is executed on both sides */
+  const int do_ci_bh = 1;
+  const int do_cj_bh = 1;
 #endif
+
   const int do_ci = (ci->black_holes.count != 0 && cj->hydro.count != 0 &&
                      ci_active && do_ci_bh);
   const int do_cj = (cj->black_holes.count != 0 && ci->hydro.count != 0 &&
@@ -682,11 +691,15 @@ void DOSUB_PAIR1_BH(struct runner *r, struct cell *ci, struct cell *cj,
 #if (FUNCTION_TASK_LOOP == TASK_LOOP_DENSITY)
     const int do_ci_bh = ci->nodeID == e->nodeID;
     const int do_cj_bh = cj->nodeID == e->nodeID;
-#else
+#elif (FUNCTION_TASK_LOOP == TASK_LOOP_FEEDBACK)
     /* here we are updating the hydro -> switch ci, cj */
     const int do_ci_bh = cj->nodeID == e->nodeID;
     const int do_cj_bh = ci->nodeID == e->nodeID;
+#else
+    const int do_ci_bh = 1;
+    const int do_cj_bh = 1;
 #endif
+
     const int do_ci = ci->black_holes.count != 0 && cj->hydro.count != 0 &&
                       cell_is_active_black_holes(ci, e) && do_ci_bh;
     const int do_cj = cj->black_holes.count != 0 && ci->hydro.count != 0 &&
diff --git a/src/runner_doiact_stars.h b/src/runner_doiact_stars.h
index 4fca3be9463e5bc65d1b46967daa87fddad42959..7e9780def83bbdbab83a431a757a52f3ba51d2e4 100644
--- a/src/runner_doiact_stars.h
+++ b/src/runner_doiact_stars.h
@@ -324,8 +324,8 @@ void DO_SYM_PAIR1_STARS(struct runner *r, struct cell *ci, struct cell *cj,
   if (do_ci_stars) {
 
     /* Pick-out the sorted lists. */
-    const struct entry *restrict sort_j = cj->hydro.sort[sid];
-    const struct entry *restrict sort_i = ci->stars.sort[sid];
+    const struct sort_entry *restrict sort_j = cj->hydro.sort[sid];
+    const struct sort_entry *restrict sort_i = ci->stars.sort[sid];
 
 #ifdef SWIFT_DEBUG_CHECKS
     /* Some constants used to checks that the parts are in the right frame */
@@ -452,8 +452,8 @@ void DO_SYM_PAIR1_STARS(struct runner *r, struct cell *ci, struct cell *cj,
 
   if (do_cj_stars) {
     /* Pick-out the sorted lists. */
-    const struct entry *restrict sort_i = ci->hydro.sort[sid];
-    const struct entry *restrict sort_j = cj->stars.sort[sid];
+    const struct sort_entry *restrict sort_i = ci->hydro.sort[sid];
+    const struct sort_entry *restrict sort_j = cj->stars.sort[sid];
 
 #ifdef SWIFT_DEBUG_CHECKS
     /* Some constants used to checks that the parts are in the right frame */
@@ -640,7 +640,7 @@ void DOPAIR1_SUBSET_STARS(struct runner *r, struct cell *restrict ci,
   if (count_j == 0) return;
 
   /* Pick-out the sorted lists. */
-  const struct entry *restrict sort_j = cj->hydro.sort[sid];
+  const struct sort_entry *restrict sort_j = cj->hydro.sort[sid];
   const float dxj = cj->hydro.dx_max_sort;
 
   /* Sparts are on the left? */
@@ -1129,7 +1129,7 @@ void DOSELF1_BRANCH_STARS(struct runner *r, struct cell *c) {
 
 #define RUNNER_CHECK_SORT(TYPE, PART, cj, ci, sid)                          \
   ({                                                                        \
-    const struct entry *restrict sort_j = cj->TYPE.sort[sid];               \
+    const struct sort_entry *restrict sort_j = cj->TYPE.sort[sid];          \
                                                                             \
     for (int pjd = 0; pjd < cj->TYPE.count; pjd++) {                        \
       const struct PART *p = &cj->TYPE.parts[sort_j[pjd].i];                \
diff --git a/src/runner_doiact_vec.c b/src/runner_doiact_vec.c
index 182e81e99c442cf5e27405ea71321e22e7f374e3..68f34b0d3b8fc9c79097522f8a1618f86957612e 100644
--- a/src/runner_doiact_vec.c
+++ b/src/runner_doiact_vec.c
@@ -240,8 +240,8 @@ __attribute__((always_inline)) INLINE static void storeInteractions(
  *
  * @param ci #cell pointer to ci
  * @param cj #cell pointer to cj
- * @param sort_i #entry array for particle distance in ci
- * @param sort_j #entry array for particle distance in cj
+ * @param sort_i #sort_entry array for particle distance in ci
+ * @param sort_j #sort_entry array for particle distance in cj
  * @param dx_max maximum particle movement allowed in cell
  * @param rshift cutoff shift
  * @param hi_max Maximal smoothing length in cell ci
@@ -260,10 +260,11 @@ __attribute__((always_inline)) INLINE static void storeInteractions(
  */
 __attribute__((always_inline)) INLINE static void populate_max_index_density(
     const struct cell *ci, const struct cell *cj,
-    const struct entry *restrict sort_i, const struct entry *restrict sort_j,
-    const float dx_max, const float rshift, const double hi_max,
-    const double hj_max, const double di_max, const double dj_min,
-    int *max_index_i, int *max_index_j, int *init_pi, int *init_pj,
+    const struct sort_entry *restrict sort_i,
+    const struct sort_entry *restrict sort_j, const float dx_max,
+    const float rshift, const double hi_max, const double hj_max,
+    const double di_max, const double dj_min, int *max_index_i,
+    int *max_index_j, int *init_pi, int *init_pj,
     const timebin_t max_active_bin, const int active_ci, const int active_cj) {
 
   const struct part *restrict parts_i = ci->hydro.parts;
@@ -398,8 +399,8 @@ __attribute__((always_inline)) INLINE static void populate_max_index_density(
  *
  * @param ci #cell pointer to ci
  * @param cj #cell pointer to cj
- * @param sort_i #entry array for particle distance in ci
- * @param sort_j #entry array for particle distance in cj
+ * @param sort_i #sort_entry array for particle distance in ci
+ * @param sort_j #sort_entry array for particle distance in cj
  * @param dx_max maximum particle movement allowed in cell
  * @param rshift cutoff shift
  * @param hi_max_raw Maximal smoothing length in cell ci
@@ -419,12 +420,12 @@ __attribute__((always_inline)) INLINE static void populate_max_index_density(
  */
 __attribute__((always_inline)) INLINE static void populate_max_index_force(
     const struct cell *ci, const struct cell *cj,
-    const struct entry *restrict sort_i, const struct entry *restrict sort_j,
-    const float dx_max, const float rshift, const double hi_max_raw,
-    const double hj_max_raw, const double h_max, const double di_max,
-    const double dj_min, int *max_index_i, int *max_index_j, int *init_pi,
-    int *init_pj, const timebin_t max_active_bin, const int active_ci,
-    const int active_cj) {
+    const struct sort_entry *restrict sort_i,
+    const struct sort_entry *restrict sort_j, const float dx_max,
+    const float rshift, const double hi_max_raw, const double hj_max_raw,
+    const double h_max, const double di_max, const double dj_min,
+    int *max_index_i, int *max_index_j, int *init_pi, int *init_pj,
+    const timebin_t max_active_bin, const int active_ci, const int active_cj) {
 
   const struct part *restrict parts_i = ci->hydro.parts;
   const struct part *restrict parts_j = cj->hydro.parts;
@@ -570,7 +571,7 @@ __attribute__((always_inline)) INLINE static void populate_max_index_force(
  * @param runner_shift_x The runner_shift in the x direction.
  * @param runner_shift_y The runner_shift in the y direction.
  * @param runner_shift_z The runner_shift in the z direction.
- * @param sort_j #entry array for particle distance in cj
+ * @param sort_j #sort_entry array for particle distance in cj
  * @param max_index_i array to hold the maximum distances of pi particles into
  * #cell cj
  * @param flipped Flag to check whether the cells have been flipped or not.
@@ -582,7 +583,8 @@ __attribute__((always_inline)) INLINE static int populate_max_index_subset(
     int *restrict ind, const double *total_ci_shift, const float dxj,
     const double di_shift_correction, const double runner_shift_x,
     const double runner_shift_y, const double runner_shift_z,
-    const struct entry *restrict sort_j, int *max_index_i, const int flipped) {
+    const struct sort_entry *restrict sort_j, int *max_index_i,
+    const int flipped) {
 
   /* The cell is on the right so read the particles
    * into the cache from the start of the cell. */
@@ -1320,8 +1322,8 @@ void runner_dopair1_density_vec(struct runner *r, struct cell *ci,
   for (int k = 0; k < 3; k++) rshift += shift[k] * runner_shift[sid][k];
 
   /* Pick-out the sorted lists. */
-  const struct entry *restrict sort_i = ci->hydro.sort[sid];
-  const struct entry *restrict sort_j = cj->hydro.sort[sid];
+  const struct sort_entry *restrict sort_i = ci->hydro.sort[sid];
+  const struct sort_entry *restrict sort_j = cj->hydro.sort[sid];
 
   /* Get some other useful values. */
   const int count_i = ci->hydro.count;
@@ -1726,7 +1728,7 @@ void runner_dopair_subset_density_vec(struct runner *r,
   const int count_j = cj->hydro.count;
 
   /* Pick-out the sorted lists. */
-  const struct entry *restrict sort_j = cj->hydro.sort[sid];
+  const struct sort_entry *restrict sort_j = cj->hydro.sort[sid];
   const float dxj = cj->hydro.dx_max_sort;
 
   /* Get both particle caches from the runner and re-allocate
@@ -2075,8 +2077,8 @@ void runner_dopair2_force_vec(struct runner *r, struct cell *ci,
   for (int k = 0; k < 3; k++) rshift += shift[k] * runner_shift[sid][k];
 
   /* Pick-out the sorted lists. */
-  const struct entry *restrict sort_i = ci->hydro.sort[sid];
-  const struct entry *restrict sort_j = cj->hydro.sort[sid];
+  const struct sort_entry *restrict sort_i = ci->hydro.sort[sid];
+  const struct sort_entry *restrict sort_j = cj->hydro.sort[sid];
 
   /* Get some other useful values. */
   const int count_i = ci->hydro.count;
diff --git a/src/scheduler.c b/src/scheduler.c
index 6367fe2cb57734b6d73e3737c992958d3a1d3ed0..367fbe599a14e86fd592569c9e2b1de8a45154e0 100644
--- a/src/scheduler.c
+++ b/src/scheduler.c
@@ -1408,7 +1408,7 @@ void scheduler_reweight(struct scheduler *s, int verbose) {
       case task_type_stars_ghost:
         if (t->ci == t->ci->hydro.super) cost = wscale * scount_i;
         break;
-      case task_type_bh_ghost:
+      case task_type_bh_density_ghost:
         if (t->ci == t->ci->hydro.super) cost = wscale * bcount_i;
         break;
       case task_type_drift_part:
@@ -1672,6 +1672,14 @@ void scheduler_enqueue(struct scheduler *s, struct task *t) {
               t->ci->mpi.pcell_size * sizeof(struct pcell_step_black_holes),
               MPI_BYTE, t->ci->nodeID, t->flags, subtaskMPI_comms[t->subtype],
               &t->req);
+        } else if (t->subtype == task_subtype_part_swallow) {
+          t->buff = (struct black_holes_part_data *)malloc(
+              sizeof(struct black_holes_part_data) * t->ci->hydro.count);
+          err = MPI_Irecv(
+              t->buff,
+              t->ci->hydro.count * sizeof(struct black_holes_part_data),
+              MPI_BYTE, t->ci->nodeID, t->flags, subtaskMPI_comms[t->subtype],
+              &t->req);
         } else if (t->subtype == task_subtype_xv ||
                    t->subtype == task_subtype_rho ||
                    t->subtype == task_subtype_gradient) {
@@ -1686,7 +1694,9 @@ void scheduler_enqueue(struct scheduler *s, struct task *t) {
           err = MPI_Irecv(t->ci->stars.parts, t->ci->stars.count,
                           spart_mpi_type, t->ci->nodeID, t->flags,
                           subtaskMPI_comms[t->subtype], &t->req);
-        } else if (t->subtype == task_subtype_bpart) {
+        } else if (t->subtype == task_subtype_bpart_rho ||
+                   t->subtype == task_subtype_bpart_swallow ||
+                   t->subtype == task_subtype_bpart_feedback) {
           err = MPI_Irecv(t->ci->black_holes.parts, t->ci->black_holes.count,
                           bpart_mpi_type, t->ci->nodeID, t->flags,
                           subtaskMPI_comms[t->subtype], &t->req);
@@ -1791,6 +1801,27 @@ void scheduler_enqueue(struct scheduler *s, struct task *t) {
                 MPI_BYTE, t->cj->nodeID, t->flags, subtaskMPI_comms[t->subtype],
                 &t->req);
           }
+        } else if (t->subtype == task_subtype_part_swallow) {
+          t->buff = (struct black_holes_part_data *)malloc(
+              sizeof(struct black_holes_part_data) * t->ci->hydro.count);
+          cell_pack_part_swallow(t->ci,
+                                 (struct black_holes_part_data *)t->buff);
+
+          if (t->ci->hydro.count * sizeof(struct black_holes_part_data) >
+              s->mpi_message_limit) {
+            err = MPI_Isend(
+                t->buff,
+                t->ci->hydro.count * sizeof(struct black_holes_part_data),
+                MPI_BYTE, t->cj->nodeID, t->flags, subtaskMPI_comms[t->subtype],
+                &t->req);
+          } else {
+            err = MPI_Issend(
+                t->buff,
+                t->ci->hydro.count * sizeof(struct black_holes_part_data),
+                MPI_BYTE, t->cj->nodeID, t->flags, subtaskMPI_comms[t->subtype],
+                &t->req);
+          }
+
         } else if (t->subtype == task_subtype_xv ||
                    t->subtype == task_subtype_rho ||
                    t->subtype == task_subtype_gradient) {
@@ -1821,7 +1852,9 @@ void scheduler_enqueue(struct scheduler *s, struct task *t) {
             err = MPI_Issend(t->ci->stars.parts, t->ci->stars.count,
                              spart_mpi_type, t->cj->nodeID, t->flags,
                              subtaskMPI_comms[t->subtype], &t->req);
-        } else if (t->subtype == task_subtype_bpart) {
+        } else if (t->subtype == task_subtype_bpart_rho ||
+                   t->subtype == task_subtype_bpart_swallow ||
+                   t->subtype == task_subtype_bpart_feedback) {
           if ((t->ci->black_holes.count * sizeof(struct bpart)) >
               s->mpi_message_limit)
             err = MPI_Isend(t->ci->black_holes.parts, t->ci->black_holes.count,
diff --git a/src/serial_io.c b/src/serial_io.c
index e16a3351d75e176bf71d6dc1c791a37574de96f9..c05c69e1e86b737e1e7f06b995f89f6bb548901a 100644
--- a/src/serial_io.c
+++ b/src/serial_io.c
@@ -1241,7 +1241,7 @@ void write_output_serial(struct engine* e, const char* baseName,
               if (swift_memalign("gparts_written", (void**)&gparts_written,
                                  gpart_align,
                                  Ndm_written * sizeof(struct gpart)) != 0)
-                error("Error while allocating temporart memory for gparts");
+                error("Error while allocating temporary memory for gparts");
 
               if (with_stf) {
                 if (swift_memalign(
@@ -1250,7 +1250,7 @@ void write_output_serial(struct engine* e, const char* baseName,
                         Ndm_written * sizeof(struct velociraptor_gpart_data)) !=
                     0)
                   error(
-                      "Error while allocating temporart memory for gparts STF "
+                      "Error while allocating temporary memory for gparts STF "
                       "data");
               }
 
@@ -1291,7 +1291,7 @@ void write_output_serial(struct engine* e, const char* baseName,
               if (swift_memalign("sparts_written", (void**)&sparts_written,
                                  spart_align,
                                  Nstars_written * sizeof(struct spart)) != 0)
-                error("Error while allocating temporart memory for sparts");
+                error("Error while allocating temporary memory for sparts");
 
               /* Collect the particles we want to write */
               io_collect_sparts_to_write(sparts, sparts_written, Nstars,
@@ -1316,6 +1316,9 @@ void write_output_serial(struct engine* e, const char* baseName,
              /* No inhibited particles: easy case */
               Nparticles = Nblackholes;
               black_holes_write_particles(bparts, list, &num_fields);
+              num_fields +=
+                  chemistry_write_bparticles(bparts, list + num_fields);
+
               if (with_stf) {
                 num_fields +=
                     velociraptor_write_bparts(bparts, list + num_fields);
@@ -1329,7 +1332,7 @@ void write_output_serial(struct engine* e, const char* baseName,
               if (swift_memalign(
                       "bparts_written", (void**)&bparts_written, bpart_align,
                       Nblackholes_written * sizeof(struct bpart)) != 0)
-                error("Error while allocating temporart memory for bparts");
+                error("Error while allocating temporary memory for bparts");
 
               /* Collect the particles we want to write */
               io_collect_bparts_to_write(bparts, bparts_written, Nblackholes,
@@ -1337,6 +1340,9 @@ void write_output_serial(struct engine* e, const char* baseName,
 
               /* Select the fields to write */
               black_holes_write_particles(bparts_written, list, &num_fields);
+              num_fields +=
+                  chemistry_write_bparticles(bparts, list + num_fields);
+
               if (with_stf) {
                 num_fields += velociraptor_write_bparts(bparts_written,
                                                         list + num_fields);
diff --git a/src/single_io.c b/src/single_io.c
index afce1e1e6ebb6a8101ccfa109efd8df9e10fbd33..7a7856d29ce35435f750f2f71d66e0269d114261 100644
--- a/src/single_io.c
+++ b/src/single_io.c
@@ -989,11 +989,11 @@ void write_output_single(struct engine* e, const char* baseName,
           if (swift_memalign("parts_written", (void**)&parts_written,
                              part_align,
                              Ngas_written * sizeof(struct part)) != 0)
-            error("Error while allocating temporart memory for parts");
+            error("Error while allocating temporary memory for parts");
           if (swift_memalign("xparts_written", (void**)&xparts_written,
                              xpart_align,
                              Ngas_written * sizeof(struct xpart)) != 0)
-            error("Error while allocating temporart memory for xparts");
+            error("Error while allocating temporary memory for xparts");
 
           /* Collect the particles we want to write */
           io_collect_parts_to_write(parts, xparts, parts_written,
@@ -1039,7 +1039,7 @@ void write_output_single(struct engine* e, const char* baseName,
           if (swift_memalign("gparts_written", (void**)&gparts_written,
                              gpart_align,
                              Ndm_written * sizeof(struct gpart)) != 0)
-            error("Error while allocating temporart memory for gparts");
+            error("Error while allocating temporary memory for gparts");
 
           if (with_stf) {
             if (swift_memalign(
@@ -1047,7 +1047,7 @@ void write_output_single(struct engine* e, const char* baseName,
                     gpart_align,
                     Ndm_written * sizeof(struct velociraptor_gpart_data)) != 0)
               error(
-                  "Error while allocating temporart memory for gparts STF "
+                  "Error while allocating temporary memory for gparts STF "
                   "data");
           }
 
@@ -1086,7 +1086,7 @@ void write_output_single(struct engine* e, const char* baseName,
           if (swift_memalign("sparts_written", (void**)&sparts_written,
                              spart_align,
                              Nstars_written * sizeof(struct spart)) != 0)
-            error("Error while allocating temporart memory for sparts");
+            error("Error while allocating temporary memory for sparts");
 
           /* Collect the particles we want to write */
           io_collect_sparts_to_write(sparts, sparts_written, Nstars,
@@ -1111,6 +1111,8 @@ void write_output_single(struct engine* e, const char* baseName,
          /* No inhibited particles: easy case */
           N = Nblackholes;
           black_holes_write_particles(bparts, list, &num_fields);
+          num_fields += chemistry_write_bparticles(bparts, list + num_fields);
+
           if (with_stf) {
             num_fields += velociraptor_write_bparts(bparts, list + num_fields);
           }
@@ -1123,7 +1125,7 @@ void write_output_single(struct engine* e, const char* baseName,
           if (swift_memalign("bparts_written", (void**)&bparts_written,
                              bpart_align,
                              Nblackholes_written * sizeof(struct bpart)) != 0)
-            error("Error while allocating temporart memory for bparts");
+            error("Error while allocating temporary memory for bparts");
 
           /* Collect the particles we want to write */
           io_collect_bparts_to_write(bparts, bparts_written, Nblackholes,
@@ -1131,6 +1133,8 @@ void write_output_single(struct engine* e, const char* baseName,
 
           /* Select the fields to write */
           black_holes_write_particles(bparts_written, list, &num_fields);
+          num_fields +=
+              chemistry_write_bparticles(bparts_written, list + num_fields);
           if (with_stf) {
             num_fields +=
                 velociraptor_write_bparts(bparts_written, list + num_fields);
diff --git a/src/sort_part.h b/src/sort_part.h
index 74116d7a8cada31c0663d5c5b70cfa978b11af8b..4da81215fcadccf33c10922b8f291b9452231fa0 100644
--- a/src/sort_part.h
+++ b/src/sort_part.h
@@ -23,7 +23,7 @@
 /**
  * @brief Entry in a list of sorted indices.
  */
-struct entry {
+struct sort_entry {
 
   /*! Distance on the axis */
   float d;
diff --git a/src/space.c b/src/space.c
index 3ff86f439d20e0e63d2ab61314e5599aedb75dfc..8b76b2d7ee7ffb3469378e20b35931ecf4e09ed8 100644
--- a/src/space.c
+++ b/src/space.c
@@ -213,19 +213,15 @@ void space_rebuild_recycle_mapper(void *map_data, int num_elements,
     c->hydro.count = 0;
     c->hydro.count_total = 0;
     c->hydro.updated = 0;
-    c->hydro.inhibited = 0;
     c->grav.count = 0;
     c->grav.count_total = 0;
     c->grav.updated = 0;
-    c->grav.inhibited = 0;
     c->stars.count = 0;
     c->stars.count_total = 0;
     c->stars.updated = 0;
-    c->stars.inhibited = 0;
     c->black_holes.count = 0;
     c->black_holes.count_total = 0;
     c->black_holes.updated = 0;
-    c->black_holes.inhibited = 0;
     c->grav.init = NULL;
     c->grav.init_out = NULL;
     c->hydro.extra_ghost = NULL;
@@ -235,8 +231,12 @@ void space_rebuild_recycle_mapper(void *map_data, int num_elements,
     c->stars.ghost = NULL;
     c->stars.density = NULL;
     c->stars.feedback = NULL;
-    c->black_holes.ghost = NULL;
+    c->black_holes.density_ghost = NULL;
+    c->black_holes.swallow_ghost[0] = NULL;
+    c->black_holes.swallow_ghost[1] = NULL;
     c->black_holes.density = NULL;
+    c->black_holes.swallow = NULL;
+    c->black_holes.do_swallow = NULL;
     c->black_holes.feedback = NULL;
     c->kick1 = NULL;
     c->kick2 = NULL;
@@ -3934,7 +3934,7 @@ void space_synchronize_particle_positions_mapper(void *map_data, int nr_gparts,
   for (int k = 0; k < nr_gparts; k++) {
 
     /* Get the particle */
-    const struct gpart *restrict gp = &gparts[k];
+    struct gpart *restrict gp = &gparts[k];
 
     if (gp->type == swift_type_dark_matter)
       continue;
@@ -3953,6 +3953,8 @@ void space_synchronize_particle_positions_mapper(void *map_data, int nr_gparts,
       xp->v_full[0] = gp->v_full[0];
       xp->v_full[1] = gp->v_full[1];
       xp->v_full[2] = gp->v_full[2];
+
+      gp->mass = p->mass;
     }
 
     else if (gp->type == swift_type_stars) {
@@ -3964,6 +3966,8 @@ void space_synchronize_particle_positions_mapper(void *map_data, int nr_gparts,
       sp->x[0] = gp->x[0];
       sp->x[1] = gp->x[1];
       sp->x[2] = gp->x[2];
+
+      gp->mass = sp->mass;
     }
 
     else if (gp->type == swift_type_black_hole) {
@@ -3975,6 +3979,8 @@ void space_synchronize_particle_positions_mapper(void *map_data, int nr_gparts,
       bp->x[0] = gp->x[0];
       bp->x[1] = gp->x[1];
       bp->x[2] = gp->x[2];
+
+      gp->mass = bp->mass;
     }
   }
 }
@@ -4080,6 +4086,9 @@ void space_first_init_parts_mapper(void *restrict map_data, int count,
     tracers_first_init_xpart(&p[k], &xp[k], us, phys_const, cosmo, hydro_props,
                              cool_func);
 
+    /* And the black hole markers */
+    black_holes_mark_as_not_swallowed(&p[k].black_holes_data);
+
 #ifdef SWIFT_DEBUG_CHECKS
    /* Check part->gpart->part linkage. */
     if (p[k].gpart && p[k].gpart->id_or_neg_offset != -(k + delta))
@@ -5209,6 +5218,49 @@ void space_check_limiter(struct space *s) {
 #endif
 }
 
+/**
+ * @brief #threadpool mapper function for the swallow debugging check
+ */
+void space_check_swallow_mapper(void *map_data, int nr_parts,
+                                void *extra_data) {
+#ifdef SWIFT_DEBUG_CHECKS
+  /* Unpack the data */
+  struct part *restrict parts = (struct part *)map_data;
+
+  /* Verify that all particles have been swallowed or are untouched */
+  for (int k = 0; k < nr_parts; k++) {
+
+    if (parts[k].time_bin == time_bin_inhibited) continue;
+
+    const long long swallow_id =
+        black_holes_get_swallow_id(&parts[k].black_holes_data);
+
+    if (swallow_id != -1)
+      error("Particle has not been swallowed! id=%lld", parts[k].id);
+  }
+#else
+  error("Calling debugging code without debugging flag activated.");
+#endif
+}
+
+/**
+ * @brief Checks that all particles have their swallow flag in a "no swallow"
+ * state.
+ *
+ * Should only be used for debugging purposes.
+ *
+ * @param s The #space to check.
+ */
+void space_check_swallow(struct space *s) {
+#ifdef SWIFT_DEBUG_CHECKS
+
+  threadpool_map(&s->e->threadpool, space_check_swallow_mapper, s->parts,
+                 s->nr_parts, sizeof(struct part), 1000, NULL);
+#else
+  error("Calling debugging code without debugging flag activated.");
+#endif
+}
+
 void space_check_sort_flags_mapper(void *map_data, int nr_cells,
                                    void *extra_data) {
 
diff --git a/src/space.h b/src/space.h
index 88f38209e7d8d82b021cd518dfa9806e4ec844eb..21d36b7e880c3b8943e03bfbbc16bfefe6fff0d1 100644
--- a/src/space.h
+++ b/src/space.h
@@ -354,6 +354,7 @@ void space_check_top_multipoles_drift_point(struct space *s,
                                             integertime_t ti_drift);
 void space_check_timesteps(struct space *s);
 void space_check_limiter(struct space *s);
+void space_check_swallow(struct space *s);
 void space_check_sort_flags(struct space *s);
 void space_replicate(struct space *s, int replicate, int verbose);
 void space_generate_gas(struct space *s, const struct cosmology *cosmo,
diff --git a/src/task.c b/src/task.c
index 77731ae02f8c84597565717319241997a77fadf7..9d7167e0b82cf8459676f5216aabb1c0f6ce7fb8 100644
--- a/src/task.c
+++ b/src/task.c
@@ -91,18 +91,21 @@ const char *taskID_names[task_type_count] = {"none",
                                              "stars_sort",
                                              "bh_in",
                                              "bh_out",
-                                             "bh_ghost",
+                                             "bh_density_ghost",
+                                             "bh_swallow_ghost1",
+                                             "bh_swallow_ghost2",
                                              "fof_self",
                                              "fof_pair"};
 
 /* Sub-task type names. */
 const char *subtaskID_names[task_subtype_count] = {
-    "none",          "density",        "gradient",      "force",
-    "limiter",       "grav",           "external_grav", "tend_part",
-    "tend_gpart",    "tend_spart",     "tend_bpart",    "xv",
-    "rho",           "gpart",          "multipole",     "spart",
-    "stars_density", "stars_feedback", "sf_count",      "bpart",
-    "bh_density",    "bh_feedback"};
+    "none",       "density",       "gradient",       "force",
+    "limiter",    "grav",          "external_grav",  "tend_part",
+    "tend_gpart", "tend_spart",    "tend_bpart",     "xv",
+    "rho",        "part_swallow",  "gpart",          "multipole",
+    "spart",      "stars_density", "stars_feedback", "sf_count",
+    "bpart_rho",  "bpart_swallow", "bpart_feedback", "bh_density",
+    "bh_swallow", "do_swallow",    "bh_feedback"};
 
 #ifdef WITH_MPI
 /* MPI communicators for the subtypes. */
@@ -137,6 +140,7 @@ MPI_Comm subtaskMPI_comms[task_subtype_count];
 TASK_CELL_OVERLAP(part, hydro.parts, hydro.count);
 TASK_CELL_OVERLAP(gpart, grav.parts, grav.count);
 TASK_CELL_OVERLAP(spart, stars.parts, stars.count);
+TASK_CELL_OVERLAP(bpart, black_holes.parts, black_holes.count);
 
 /**
  * @brief Returns the #task_actions for a given task.
@@ -172,7 +176,8 @@ __attribute__((always_inline)) INLINE static enum task_actions task_acts_on(
       break;
 
     case task_type_drift_bpart:
-    case task_type_bh_ghost:
+    case task_type_bh_density_ghost:
+    case task_type_bh_swallow_ghost2:
       return task_action_bpart;
       break;
 
@@ -196,6 +201,8 @@ __attribute__((always_inline)) INLINE static enum task_actions task_acts_on(
 
         case task_subtype_bh_density:
         case task_subtype_bh_feedback:
+        case task_subtype_bh_swallow:
+        case task_subtype_do_swallow:
           return task_action_all;
           break;
 
@@ -289,11 +296,15 @@ float task_overlap(const struct task *restrict ta,
       (ta_act == task_action_gpart || ta_act == task_action_all);
   const int ta_spart =
       (ta_act == task_action_spart || ta_act == task_action_all);
+  const int ta_bpart =
+      (ta_act == task_action_bpart || ta_act == task_action_all);
   const int tb_part = (tb_act == task_action_part || tb_act == task_action_all);
   const int tb_gpart =
       (tb_act == task_action_gpart || tb_act == task_action_all);
   const int tb_spart =
       (tb_act == task_action_spart || tb_act == task_action_all);
+  const int tb_bpart =
+      (tb_act == task_action_bpart || tb_act == task_action_all);
 
   /* In the case where both tasks act on parts */
   if (ta_part && tb_part) {
@@ -358,6 +369,27 @@ float task_overlap(const struct task *restrict ta,
     return ((float)size_intersect) / (size_union - size_intersect);
   }
 
+  /* In the case where both tasks act on bparts */
+  else if (ta_bpart && tb_bpart) {
+
+    /* Compute the union of the cell data. */
+    size_t size_union = 0;
+    if (ta->ci != NULL) size_union += ta->ci->black_holes.count;
+    if (ta->cj != NULL) size_union += ta->cj->black_holes.count;
+    if (tb->ci != NULL) size_union += tb->ci->black_holes.count;
+    if (tb->cj != NULL) size_union += tb->cj->black_holes.count;
+
+    if (size_union == 0) return 0.f;
+
+    /* Compute the intersection of the cell data. */
+    const size_t size_intersect = task_cell_overlap_bpart(ta->ci, tb->ci) +
+                                  task_cell_overlap_bpart(ta->ci, tb->cj) +
+                                  task_cell_overlap_bpart(ta->cj, tb->ci) +
+                                  task_cell_overlap_bpart(ta->cj, tb->cj);
+
+    return ((float)size_intersect) / (size_union - size_intersect);
+  }
+
   /* Else, no overlap */
   return 0.f;
 }
@@ -411,6 +443,12 @@ void task_unlock(struct task *t) {
                  (subtype == task_subtype_stars_feedback)) {
         cell_sunlocktree(ci);
         cell_unlocktree(ci);
+      } else if ((subtype == task_subtype_bh_density) ||
+                 (subtype == task_subtype_bh_feedback) ||
+                 (subtype == task_subtype_bh_swallow) ||
+                 (subtype == task_subtype_do_swallow)) {
+        cell_bunlocktree(ci);
+        cell_unlocktree(ci);
       } else {
         cell_unlocktree(ci);
       }
@@ -429,6 +467,14 @@ void task_unlock(struct task *t) {
         cell_sunlocktree(cj);
         cell_unlocktree(ci);
         cell_unlocktree(cj);
+      } else if ((subtype == task_subtype_bh_density) ||
+                 (subtype == task_subtype_bh_feedback) ||
+                 (subtype == task_subtype_bh_swallow) ||
+                 (subtype == task_subtype_do_swallow)) {
+        cell_bunlocktree(ci);
+        cell_bunlocktree(cj);
+        cell_unlocktree(ci);
+        cell_unlocktree(cj);
       } else {
         cell_unlocktree(ci);
         cell_unlocktree(cj);
@@ -550,6 +596,17 @@ int task_lock(struct task *t) {
           cell_sunlocktree(ci);
           return 0;
         }
+      } else if ((subtype == task_subtype_bh_density) ||
+                 (subtype == task_subtype_bh_feedback) ||
+                 (subtype == task_subtype_bh_swallow) ||
+                 (subtype == task_subtype_do_swallow)) {
+        if (ci->black_holes.hold) return 0;
+        if (ci->hydro.hold) return 0;
+        if (cell_blocktree(ci) != 0) return 0;
+        if (cell_locktree(ci) != 0) {
+          cell_bunlocktree(ci);
+          return 0;
+        }
       } else { /* subtype == hydro */
         if (ci->hydro.hold) return 0;
         if (cell_locktree(ci) != 0) return 0;
@@ -596,6 +653,29 @@ int task_lock(struct task *t) {
           cell_unlocktree(ci);
           return 0;
         }
+      } else if ((subtype == task_subtype_bh_density) ||
+                 (subtype == task_subtype_bh_feedback) ||
+                 (subtype == task_subtype_bh_swallow) ||
+                 (subtype == task_subtype_do_swallow)) {
+        /* Lock the BHs and the gas particles in both cells */
+        if (ci->black_holes.hold || cj->black_holes.hold) return 0;
+        if (ci->hydro.hold || cj->hydro.hold) return 0;
+        if (cell_blocktree(ci) != 0) return 0;
+        if (cell_blocktree(cj) != 0) {
+          cell_bunlocktree(ci);
+          return 0;
+        }
+        if (cell_locktree(ci) != 0) {
+          cell_bunlocktree(ci);
+          cell_bunlocktree(cj);
+          return 0;
+        }
+        if (cell_locktree(cj) != 0) {
+          cell_bunlocktree(ci);
+          cell_bunlocktree(cj);
+          cell_unlocktree(ci);
+          return 0;
+        }
       } else { /* subtype == hydro */
         /* Lock the parts in both cells */
         if (ci->hydro.hold || cj->hydro.hold) return 0;
@@ -716,6 +796,12 @@ void task_get_group_name(int type, int subtype, char *cluster) {
     case task_subtype_bh_density:
       strcpy(cluster, "BHDensity");
       break;
+    case task_subtype_bh_swallow:
+      strcpy(cluster, "BHSwallow");
+      break;
+    case task_subtype_do_swallow:
+      strcpy(cluster, "DoSwallow");
+      break;
     case task_subtype_bh_feedback:
       strcpy(cluster, "BHFeedback");
       break;
diff --git a/src/task.h b/src/task.h
index eef0aca5fb075725fb1da51ccaf4033a3608e078..995f39357f810e068b3fe4e4304bfab8f9734080 100644
--- a/src/task.h
+++ b/src/task.h
@@ -86,7 +86,9 @@ enum task_types {
   task_type_stars_sort,
   task_type_bh_in,  /* Implicit */
   task_type_bh_out, /* Implicit */
-  task_type_bh_ghost,
+  task_type_bh_density_ghost,
+  task_type_bh_swallow_ghost1,
+  task_type_bh_swallow_ghost2,
   task_type_fof_self,
   task_type_fof_pair,
   task_type_count
@@ -109,14 +111,19 @@ enum task_subtypes {
   task_subtype_tend_bpart,
   task_subtype_xv,
   task_subtype_rho,
+  task_subtype_part_swallow,
   task_subtype_gpart,
   task_subtype_multipole,
   task_subtype_spart,
   task_subtype_stars_density,
   task_subtype_stars_feedback,
   task_subtype_sf_counts,
-  task_subtype_bpart,
+  task_subtype_bpart_rho,
+  task_subtype_bpart_swallow,
+  task_subtype_bpart_feedback,
   task_subtype_bh_density,
+  task_subtype_bh_swallow,
+  task_subtype_do_swallow,
   task_subtype_bh_feedback,
   task_subtype_count
 } __attribute__((packed));
diff --git a/tools/analyse_memuse_logs.py b/tools/analyse_memuse_logs.py
deleted file mode 100755
index 4026c73d1e13a4484975fe3902e508493dc838cb..0000000000000000000000000000000000000000
--- a/tools/analyse_memuse_logs.py
+++ /dev/null
@@ -1,131 +0,0 @@
-#!/usr/bin/env python
-"""
-Usage:
-    process_memuse.py [options] memuse_report1.dat [memuse_report2.dat] ...
-
-Parse the output of a run of SWIFT to convert the memuse output dumps into a
-timeseries of memory use. Also outputs use in memory per labelled type.
-
-This file is part of SWIFT.
-Copyright (c) 2019 Peter W. Draper (p.w.draper@durham.ac.uk)
-
-This program is free software: you can redistribute it and/or modify
-it under the terms of the GNU Lesser General Public License as published
-by the Free Software Foundation, either version 3 of the License, or
-(at your option) any later version.
-
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-GNU General Public License for more details.
-
-You should have received a copy of the GNU Lesser General Public License
-along with this program.  If not, see <http://www.gnu.org/licenses/>.
-"""
-
-from collections import OrderedDict
-import argparse
-import sys
-
-#  Command-line arguments.
-parser = argparse.ArgumentParser(description="Analyse memory usage reports")
-
-parser.add_argument("memuse_report", nargs='+',
-                    help="Memory usage reports (order by step if using more than one)")
-parser.add_argument(
-    "-b",
-    "--blacklist",
-    dest="blacklist",
-    help="substring of allocations to ignore (maybe be repeated)",
-    default=None,
-    action='append'
-)
-args = parser.parse_args()
-
-memuse = OrderedDict()
-labels = {}
-totalmem = 0
-process_use = ""
-peak = 0.0
-
-for filename in args.memuse_report:
-    sys.stderr.write("## Processing: " + filename + "\n")
-    with open(filename) as infile:
-        print '# {:<18s} {:>30s} {:>9s} {:>9s} {:s}'.format("tic", "label", "allocated", "step", "MB")
-        for line in infile:
-            if line[0] == "#":
-                if "# Current use:" in line:
-                    process_use = line[14:-1]
-            else:
-                tic, adr, rank, step, allocated, label, size = line.split()
-
-                #  Skip blacklisted allocations, these can swamp the signal...
-                if args.blacklist != None:
-                    skip = False
-                    for item in args.blacklist:
-                        if item in label:
-                            skip = True
-                            break
-                    if skip:
-                        continue
-
-                rank = int(rank)
-                step = int(step)
-                allocated = int(allocated)
-                size = int(size)
-
-                doprint = True
-                if allocated == 1:
-                    #  Allocation.
-                    totalmem = totalmem + size
-                    if not adr in memuse:
-                        memuse[adr] = [size]
-                        labels[adr] = label
-                    else:
-                        memuse[adr].append(size)
-                else:
-                    #  Free, locate allocation.
-                    if adr in memuse:
-                        allocs = memuse[adr]
-                        totalmem = totalmem - allocs[0]
-                        if len(allocs) > 1:
-                            memuse[adr] = allocs[1:]
-                        else:
-                            del memuse[adr]
-                    else:
-                        #  Unmatched free, complain and skip.
-                        #print "### unmatched free: ", label, adr
-                        doprint = False
-                if doprint:
-                    if totalmem > peak:
-                        peak = totalmem
-                    print '{:<20s} {:>30s} {:9d} {:9d} {:.3f}'.format(tic, label, allocated, step, totalmem/(1048576.0))
-    sys.stderr.write("## Finished ingestion of: " + filename + "\n")
-
-totals = {}
-numactive = {}
-for adr in labels:
-    #  If any remaining allocations.
-    if adr in memuse:
-        if labels[adr] in totals:
-            totals[labels[adr]] = totals[labels[adr]] + memuse[adr][0]
-            numactive[labels[adr]] = numactive[labels[adr]] + 1
-        else:
-            totals[labels[adr]] = memuse[adr][0]
-            numactive[labels[adr]] = 1
-
-print "# Memory use by label:"
-print "## ", '{:<30s} {:>16s} {:>16s}'.format("label", "MB", "numactive")
-print "## "
-total = 0.0
-for label in sorted(totals):
-    mem = totals[label]/(1048576.0)
-    total = total + mem
-    print "## ", '{:<30s} {:16.3f} {:16d}'.format(label, mem, numactive[label])
-print "## "
-print "# Total memory still in use : ", '{:.3f}'.format(total), " (MB)"
-print "# Peak memory usage         : ", '{:.3f}'.format(peak/1048576.0), " (MB)"
-if process_use != "":
-    print "#"
-    print "# Memory use by process (all/system):", process_use
-sys.exit(0)
diff --git a/tools/plot_task_dependencies.py b/tools/plot_task_dependencies.py
index b835b04261cf54b361f347f88bbb27f230a6be16..dbb39144fc1497d405c368ce3b59063ee8c97983 100644
--- a/tools/plot_task_dependencies.py
+++ b/tools/plot_task_dependencies.py
@@ -146,7 +146,7 @@ def taskIsBlackHoles(name):
     name: str
         Task name
     """
-    if "bh" in name or "bpart" in name:
+    if "bh" in name or "bpart" in name or "swallow" in name:
         return True
     return False
 
@@ -181,13 +181,13 @@ def taskIsHydro(name):
         return True
     if "density" in name and "stars" not in name and "bh" not in name:
         return True
-    if "rho" in name:
+    if "rho" in name and "bpart" not in name:
         return True
     if "gradient" in name:
         return True
     if "force" in name:
         return True
-    if "xv" in name:
+    if "xv" in name and "bpart" not in name:
         return True
 
     task_name = [
diff --git a/tools/process_memuse_logs.sh b/tools/process_memuse_logs.sh
deleted file mode 100755
index c86efe9ceca388afcb8d7236c0a2a4e403a66083..0000000000000000000000000000000000000000
--- a/tools/process_memuse_logs.sh
+++ /dev/null
@@ -1,73 +0,0 @@
-#!/bin/bash
-#
-# Usage:
-#  process_memuse_logs nprocess
-#
-# Description:
-#  Process all the memuse report files in the current directory.
-#  Creating an analysis for step step and one for all the steps.
-#
-#  The input files are created by a run configured for memuse reporting
-#  (--enable-memuse-reports) should be named "memuse_report-step<n>.dat"
-#  in the current directory.
-#
-#  All located files will be processed using "nprocess" concurrent
-#  processes. The output for each step will be named memuse_report_step<n>.log
-#  and the overall analysis will be called memuse_report_all.log.
-#
-# This file is part of SWIFT:
-#
-#  Copyright (C) 2019 Peter W. Draper (p.w.draper@durham.ac.uk)
-#  All Rights Reserved.
-#
-#  This program is free software: you can redistribute it and/or modify
-#  it under the terms of the GNU Lesser General Public License as published
-#  by the Free Software Foundation, either version 3 of the License, or
-#  (at your option) any later version.
-#
-#  This program is distributed in the hope that it will be useful,
-#  but WITHOUT ANY WARRANTY; without even the implied warranty of
-#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-#  GNU General Public License for more details.
-#
-#  You should have received a copy of the GNU Lesser General Public License
-#  along with this program.  If not, see <http://www.gnu.org/licenses/>.
-
-#  Handle command-line
-if test "$1" == ""; then
-    echo "Usage: $0 nprocess"
-    exit 1
-fi
-NPROCS=$1
-
-#  Locate script.
-SCRIPTHOME=$(dirname "$0")
-
-#  Find all report files. Use version sort to get into correct order.
-files=$(ls -v memuse_report-step*.dat)
-if test $? != 0; then
-    echo "Failed to find any memuse report files"
-    exit 1
-fi
-
-#  Construct list of input and output names.
-list=""
-for f in $files; do
-    output=$(echo $f| sed 's,.dat,.log,')
-    list="$list $f $output"
-done
-
-#  And process them.
-echo "Processing memuse report files..."
-echo $list | xargs -P $NPROCS -n 2 /bin/bash -c "${SCRIPTHOME}/analyse_memuse_logs.py \$0 > \$1"
-
-#  Now process the overall file, if more than one file given. 
-n=$(echo $list| wc -w)
-if test $n -gt 2; then
-    echo "Processing _all_ memuse report files..."
-    ${SCRIPTHOME}/analyse_memuse_logs.py $files > memuse_report-all.log
-fi
-
-echo "Finished"
-
-exit
diff --git a/tools/process_memuse_logs_MPI.sh b/tools/process_memuse_logs_MPI.sh
deleted file mode 100755
index 77a949d18432690fcb93f883eca5edff2ea19d92..0000000000000000000000000000000000000000
--- a/tools/process_memuse_logs_MPI.sh
+++ /dev/null
@@ -1,75 +0,0 @@
-#!/bin/bash
-#
-# Usage:
-#  process_memuse_logs_MPI rank nprocess
-#
-# Description:
-#  Process all the memuse report files in the current directory that
-#  are output from the given rank.
-#  Creating an analysis for each step and one for all the steps.
-#
-#  The input files are created by a run configured for memuse reporting
-#  (--enable-memuse-reports) should be named "memuse_report-rank<n>-step<m>.dat"
-#  in the current directory.
-#
-#  All located files will be processed using "nprocess" concurrent
-#  processes. The output for each step will be named memuse_report-rank<n>-step<m>.log
-#  and the overall analysis will be called memuse_report-all-rank<n>.log.
-#
-# This file is part of SWIFT:
-#
-#  Copyright (C) 2019 Peter W. Draper (p.w.draper@durham.ac.uk)
-#  All Rights Reserved.
-#
-#  This program is free software: you can redistribute it and/or modify
-#  it under the terms of the GNU Lesser General Public License as published
-#  by the Free Software Foundation, either version 3 of the License, or
-#  (at your option) any later version.
-#
-#  This program is distributed in the hope that it will be useful,
-#  but WITHOUT ANY WARRANTY; without even the implied warranty of
-#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-#  GNU General Public License for more details.
-#
-#  You should have received a copy of the GNU Lesser General Public License
-#  along with this program.  If not, see <http://www.gnu.org/licenses/>.
-
-#  Handle command-line
-if test "$2" == ""; then
-    echo "Usage: $0 rank nprocess"
-    exit 1
-fi
-RANK=$1
-NPROCS=$2
-
-#  Locate script.
-SCRIPTHOME=$(dirname "$0")
-
-#  Find all report files. Use version sort to get into correct order.
-files=$(ls -v memuse_report-rank${RANK}-step*.dat)
-if test $? != 0; then
-    echo "Failed to find any memuse report files"
-    exit 1
-fi
-
-#  Construct list of input and output names.
-list=""
-for f in $files; do
-    output=$(echo $f| sed 's,.dat,.log,')
-    list="$list $f $output"
-done
-
-#  And process them.
-echo "Processing memuse report files..."
-echo $list | xargs -P $NPROCS -n 2 /bin/bash -c "${SCRIPTHOME}/analyse_memuse_logs.py \$0 > \$1"
-
-#  Now process the overall file, if more than one file given. 
-n=$(echo $list| wc -w)
-if test $n -gt 2; then
-    echo "Processing _all_ memuse report files..."
-    ${SCRIPTHOME}/analyse_memuse_logs.py $files > memuse_report-all-rank${RANK}.log
-fi
-
-echo "Finished"
-
-exit
diff --git a/tools/task_plots/analyse_tasks.py b/tools/task_plots/analyse_tasks.py
index aad447148dd35af434003c0f14f773179b320b03..6dcba21841db61748beb06203441016101619b8f 100755
--- a/tools/task_plots/analyse_tasks.py
+++ b/tools/task_plots/analyse_tasks.py
@@ -134,6 +134,8 @@ SUBTYPES = [
-    "sf_counts"
+    "sf_counts",
     "bpart",
     "bh_density",
+    "bh_swallow",
+    "do_swallow",
     "bh_feedback",
     "count",
 ]
diff --git a/tools/task_plots/plot_tasks.py b/tools/task_plots/plot_tasks.py
index 070ae2f3c509d6ae8fef8bad9ffec6c316060a7c..e7621b321434ddb97df4b2bb1f31ff9609d71c5e 100755
--- a/tools/task_plots/plot_tasks.py
+++ b/tools/task_plots/plot_tasks.py
@@ -219,6 +219,8 @@ SUBTYPES = [
     "sf_counts",
     "bpart",
     "bh_density",
+    "bh_swallow",
+    "do_swallow",
     "bh_feedback",
     "count",
 ]
@@ -275,6 +277,14 @@ FULLTYPES = [
     "pair/bh_density",
     "sub_self/bh_density",
     "sub_pair/bh_density",
+    "self/bh_swallow",
+    "pair/bh_swallow",
+    "sub_self/bh_swallow",
+    "sub_pair/bh_swallow",
+    "self/do_swallow",
+    "pair/do_swallow",
+    "sub_self/do_swallow",
+    "sub_pair/do_swallow",
     "self/bh_feedback",
     "pair/bh_feedback",
     "sub_self/bh_feedback",