Logs the sends, recvs and successful tests of the MPI requests made in the engine.
Shares infrastructure with the memuse logger, so includes some refactoring of that.
To use this you need to enable it at configure time with the appropriate configure option.
A log is created for each rank and step. Usually these requests will all match, so the logs may not sound interesting, but they do show the use of memory in MPI as the step progresses, along with other useful quantities such as the maximum, mean and sum of the sizes of the packets sent (reported in a post-amble section). They also allow inspection of how efficient we, or the MPI library, are, since we include the handoff time.
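As a rough illustration of the post-amble statistics, the maximum, mean and sum of the packet sizes for a step could be computed along these lines (the record layout and field names here are hypothetical, not the actual log format):

```python
# Hypothetical sketch: aggregate packet-size statistics for a post-amble
# section, assuming each send record carries a payload size in bytes.
from dataclasses import dataclass

@dataclass
class SendRecord:
    tag: int   # MPI tag of the request (illustrative field)
    size: int  # payload size in bytes (illustrative field)

def postamble_stats(records):
    """Return (max, mean, sum) of the sizes of packets sent this step."""
    sizes = [r.size for r in records]
    if not sizes:
        return (0, 0.0, 0)
    return (max(sizes), sum(sizes) / len(sizes), sum(sizes))

# Example: three sends of 100, 200 and 300 bytes.
records = [SendRecord(1, 100), SendRecord(2, 200), SendRecord(3, 300)]
print(postamble_stats(records))  # → (300, 200.0, 600)
```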
Matching between ranks can be done using the match_mpireports.py script, which is useful when diagnosing MPI library and fabric performance. Note, however, that ticks are not necessarily synchronized between ranks, although they should be close.
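The matching idea can be sketched as pairing each logged send with the corresponding recv on the peer rank, keyed on sender, receiver, tag and size. This is only an illustration of the concept; the real match_mpireports.py may use different keys and log fields, and the field names below are assumptions:

```python
# Illustrative sketch of matching sends to recvs across per-rank logs.
# Keys and field names ('rank', 'otherrank', 'tag', 'size', 'tic') are
# hypothetical stand-ins for whatever the actual logs record.
from collections import defaultdict

def match_requests(sends, recvs):
    """Pair each logged send with the matching recv; return leftovers too."""
    pending = defaultdict(list)
    for s in sends:
        key = (s["rank"], s["otherrank"], s["tag"], s["size"])
        pending[key].append(s)
    matched, unmatched = [], []
    for r in recvs:
        # A recv on rank B from rank A matches a send on rank A to rank B.
        key = (r["otherrank"], r["rank"], r["tag"], r["size"])
        if pending[key]:
            matched.append((pending[key].pop(), r))
        else:
            unmatched.append(r)
    return matched, unmatched

sends = [{"rank": 0, "otherrank": 1, "tag": 7, "size": 64, "tic": 10}]
recvs = [{"rank": 1, "otherrank": 0, "tag": 7, "size": 64, "tic": 12}]
pairs, leftover = match_requests(sends, recvs)
print(len(pairs), len(leftover))  # → 1 0
```

Since the ticks are only approximately synchronized between ranks, any timing comparison on matched pairs should treat small offsets as expected.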
Also included is a cleanup of the MPI section of scheduler.c. It should now be far more obvious what is happening, but as this is a critical section of code it clearly needs careful checking.