Tag only supercells

Merged Pedro Gonnet requested to merge tag_only_supercells into master

Reduce the number of tags needed by tagging only cells that actually have send/recv tasks attached. This is now the bare minimum number of tags needed.

This involves adding an additional communication step to exchange tags when rebuilding the space, but this is relatively small compared to all the other stuff going on, so it shouldn't hurt.
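
The scheme can be sketched in a few lines of C. This is only an illustration of the idea described above; the names (has_send_recv_tasks, mpi_tag, assign_tags) are placeholders, not the actual SWIFT identifiers:

    /* Hand out consecutive tags only to cells that carry send/recv tasks;
     * everything else keeps an invalid tag. Placeholder names only. */
    #include <stddef.h>

    struct cell {
      int has_send_recv_tasks; /* set when a send or recv task is attached */
      int mpi_tag;             /* -1 if the cell never communicates */
    };

    /* Returns the number of tags used, i.e. the bare minimum needed. */
    int assign_tags(struct cell *cells, size_t nr_cells) {
      int next_tag = 0;
      for (size_t k = 0; k < nr_cells; k++)
        cells[k].mpi_tag = cells[k].has_send_recv_tasks ? next_tag++ : -1;
      return next_tag;
    }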

Activity

  • Pedro Gonnet added 321 commits

    • d19f9068...c49a9670 - 320 commits from branch master
    • aa6b7c9a - Merge remote-tracking branch 'origin/master' into tag_only_supercells

  • It looks like some of the changes here are not needed anymore if we use one communicator per communication sub-type.

  • Yes, we'll need to remove the engine_task_tags and associated changes.

    In the meantime, when I run the test:

    mpirun -np 4 ../swift_mpi -a -t 4 -s eagle_6.yml 

    we die in the proxy_cells_exchange function when calling cell_pack_tags, with a memory overrun.

    Looking at the code, it seems to me that the offset array needs to be split into in and out versions; otherwise these get mixed up. A quick hack doing this is now working.
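
    To illustrate the kind of fix meant here: keep separate running offsets for the incoming and outgoing tag buffers so that packing and unpacking cannot step on each other. This is a sketch only; count_in/count_out and offset_in/offset_out are illustrative names, not the exact SWIFT variables.

        /* Build separate pack-buffer offsets for incoming and outgoing tags,
         * one pair per proxy. Illustrative names only. */
        void build_tag_offsets(const int *count_in, const int *count_out,
                               int nr_proxies, int *offset_in, int *offset_out) {
          int off_in = 0, off_out = 0;
          for (int k = 0; k < nr_proxies; k++) {
            offset_in[k] = off_in;   /* start of this proxy's incoming tags */
            offset_out[k] = off_out; /* start of this proxy's outgoing tags */
            off_in += count_in[k];
            off_out += count_out[k];
          }
        }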

  • Looks like we also have problems with multipole exchanges, probably an ordering issue, but not sure whether that is worth looking at, since gravity and MPI are not supported in master at present.

  • BTW, see 5e5a260e for the offsets hack.

  • Peter W. Draper added 17 commits

    • aa6b7c9a...0c583dda - 14 commits from branch master
    • c1d900f4 - Need to split up offset array to handle in/out
    • 72d06cba - Merges in changes to use one MPI communicator per subtask type.
    • f2319741 - No longer need engine_task_tags so remove

  • I've merged in the changes for using one communicator per subtype and the offsets fix. Still running other tests.

  • All seems to be working, except I cannot check self-gravity: all the examples I have are periodic (I think we need to move engine_exchange_proxy_multipoles to after engine_maketasks). Do we have a test for this?

  • You can explicitly set periodic to 0 in the main after reading in the ICs; that will run the code in non-periodic mode.

    I don't think we can move the exchange of multipoles to after engine_maketasks, since making the tasks relies on the multipoles to decide how to create and split the gravity tasks.

  • Sounds like you should look at this to make sure we don't have a show stopper here. We don't have any tags before splitting the tasks.

  • That is actually quite related to the problems I am facing with the split gravity tasks over MPI as well as the crashes I get running out of memory with some runs.

    The original plan was to split tasks to reduce the amount of data that needs to be shipped over MPI. But at the moment, I don't think that gain can be realized. We create the proxies including the allocation of the foreign data before the splitting. We would likely have to re-order the way we do things to benefit from lower memory usage. But we may also be reconstructing tasks based on old information which then leads to the crashes. In other words I still need to think about this.

  • Author Developer

    BTW, see 5e5a260e for the offsets hack.

    Wow, yeah, that was dumb of me. Thanks for fixing it!

    Thinking about the different tag types, do we really even need them, e.g. for sending the x and then later the rho? Distinct tags are only useful if the sends/recvs ever overlap, but from the dependency graph, at least the hydro-related ones do not. So could we just use the same tag for sending/recving x, rho, and ti?

  • x and rho are identical. ti can be triggered by gpart separately.

    But note that with Peter's multi-communicator trick it does not matter any more.
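
    For reference, the multi-communicator trick amounts to duplicating the world communicator once per communication sub-type, so the same cell tag can be reused for different message kinds without collisions. A minimal sketch, assuming a sub-type count of task_subtype_count (an illustrative constant, not necessarily SWIFT's):

        #include <mpi.h>

        enum { task_subtype_count = 8 }; /* assumed number of sub-types */
        static MPI_Comm subtask_comm[task_subtype_count];

        /* One duplicated communicator per sub-type: messages with the same
         * tag but different sub-types can no longer match each other. */
        void init_subtask_comms(void) {
          for (int i = 0; i < task_subtype_count; i++)
            MPI_Comm_dup(MPI_COMM_WORLD, &subtask_comm[i]);
        }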

  • Author Developer

    Ack.

    Do we see any other advantages of using separate communicators, e.g. less time spent in MPI_Test since the list of incoming/outgoing messages is smaller per communicator?

  • We haven't tested that yet. Not sure we actually trust any of the tools telling us this anyway.

  • Author Developer

    Why bother with tools if we can instrument that ourselves? :)

    I think once we've tested that everything works, that would be a good idea since I have a hunch we may have hidden gains there, and it would be interesting to report them back to Intel.
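
    A simple way to do that instrumentation ourselves would be to wrap MPI_Test and accumulate the wall-clock time per rank. This is only a sketch of the idea, not code from the SWIFT tree:

        #include <mpi.h>

        static double mpi_test_time = 0.0; /* seconds spent in MPI_Test */

        /* Drop-in wrapper: same arguments and return value as MPI_Test,
         * but records how long the call took on this rank. */
        int timed_mpi_test(MPI_Request *req, int *flag, MPI_Status *status) {
          const double tic = MPI_Wtime();
          const int err = MPI_Test(req, flag, status);
          mpi_test_time += MPI_Wtime() - tic;
          return err;
        }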

  • I did a few runs with and without the communicators fix, just to make sure we weren't suffering any slowdowns. The upshot of that was that I couldn't see any difference at all, so much so that I ran them again to make sure I hadn't made a mistake. That was EAGLE_50 on 8 nodes of COSMA7. Obviously that was just testing runtime per step, not looking at the task timing plots.

  • In other words I still need to think about this.

    So, do you think we need to hold this work? Or is it OK to accept and fix up master?

  • If it works for all the hydro cases we can currently run, I think we are safe to go here.

    My reflection above ties in with the larger MPI picture.

  • Tried to get this working for self-gravity in master, but that doesn't seem to work. I think the problem is down to the difference between top-level cells and super-cells. The multipole exchange seems to require that all the top-level cells have tags, but that is no longer always true.
