SWIFTsim merge requests
https://gitlab.cosma.dur.ac.uk/swift/swiftsim/-/merge_requests

Parallel rebuild (!659)
https://gitlab.cosma.dur.ac.uk/swift/swiftsim/-/merge_requests/659
Matthieu Schaller · 2018-11-16T14:06:12Z · Assignee: Peter W. Draper

Fixes #458. Also renders the discussion in #483 void.
Repartition threadpool (!663)
https://gitlab.cosma.dur.ac.uk/swift/swiftsim/-/merge_requests/663
Peter W. Draper · 2018-11-06T17:35:34Z · Assignee: Matthieu Schaller

Use the threadpool to generate the graph weights from all the associated tasks during repartitioning.

Fixes #462, part of #458.
Speed-up for large top-level grids (!651)
https://gitlab.cosma.dur.ac.uk/swift/swiftsim/-/merge_requests/651
Matthieu Schaller · 2018-11-01T13:48:49Z · Assignee: Peter W. Draper

These are some improvements to the space structure to speed up MPI calculations and the planetary things:
- We make a list of non-empty top-level cells,
- We make a list of local non-empty top-level cells,
- `engine_drift_all()` only loops over the local cells with tasks (instead of the whole grid),
- `space_split()` only loops over the local non-empty cells,
- `space_regrid()` only loops over the local non-empty cells to figure out `h_max`.
- The top-level gravity task only loops over the non-empty cells.

Split stars (!635)
https://gitlab.cosma.dur.ac.uk/swift/swiftsim/-/merge_requests/635
Loic Hausammann · 2018-10-17T12:40:50Z · Assignee: Matthieu Schaller · Label: Stellar physics

I have implemented the functions to recurse/split the stellar density tasks.

You will also find two new examples (feel free to remove them from the branch if you do not want them). The first one is a typical zoom-in cosmological simulation done with GEAR, and the second one is a galaxy extracted from the zoom-in simulation. As the current star density task is very slow, I need a smaller example to run a few steps.

I have also added a new function to the scheduler that writes the depth of each task to a file. I used it to generate the graphs in the tasking channel on Slack. Again, feel free to remove the commit if you do not want it.
![dependency_graph](/uploads/431b36351e14e10d1686d004467cbb5f/dependency_graph.png)

Add ParMETIS support (!506)
https://gitlab.cosma.dur.ac.uk/swift/swiftsim/-/merge_requests/506
Peter W. Draper · 2018-09-14T20:15:25Z · Assignee: Matthieu Schaller · Label: Improved MPI scaling

Adds ParMETIS repartitioning to calculate the new cell graph across all the MPI nodes. ParMETIS also has methods that refine an existing solution, not just create a new one, which can be used to reduce the amount of particle movement.
As part of this work we are reducing the number of repartitioning techniques offered to "costs/costs", "costs/none", "none/costs" and "costs/time" (the removed ones tend to give marginally worse solutions in tests); that is: balanced, vertex only, edge only, and edges weighted by the expected time of the next updates.
Initially this work intended to remove support for METIS, but testing actual runtimes shows that METIS produces the best balance, so we are keeping it as a continuing option, just enhanced by the ParMETIS options. ParMETIS may prove to be a better choice at scales larger than our current simulations.
Other significant updates in this request:
- Initial partitioning schemes have been given more obvious names: "grid", "region", "memory" or "vectorized". "region" balances by volume and "memory" by particle distribution; the others are only interesting when (Par)METIS is not available.
- Weights are now calculated using floats and the sum is scaled into the range of `idx_t`, which should avoid integer-overflow issues.
- Weights are no longer used from any MPI tasks; the position of these is strictly a free parameter of the solution.
- The balance of weights between vertices and timebins is defined as an equipartition.
Previously the limits were matched.
- New clocks function `clocks_random_seed()` returns "random" seeds based on the remainder of the current number of nanoseconds.

Improvements to GIZMO implementation (!524)
https://gitlab.cosma.dur.ac.uk/swift/swiftsim/-/merge_requests/524
Matthieu Schaller · 2018-04-11T10:00:07Z · Assignee: Bert Vandenbroucke

Mixture of small improvements:
- Do not call external functions for min and max.
- Do not use double-precision constants.
- Reduce the number of divisions.
I believe this speeds up the Evrard collapse by 15-20%. @jborrow, could you confirm this on your accuracy vs. time plot?

@bvandenbroucke I am likely to have made a mistake somewhere. Could you cross-check that it is all fine?