...to 16x16x16 (so up to 12384 particles per cell from 1878) by requiring `SPH:max_...`

That also looks like a good improvement. There seems to be an odd break at around 200 particles.

Note this is using `h_max=1` on the drift+skip branch, not master.
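
For reference, a minimal sketch of how such a cap might look in a SWIFT YAML parameter file; the parameter name is an assumption based on current SWIFT, where the maximal smoothing length is set as `h_max` under the `SPH` block, and may differ on this branch:

```yaml
# Hypothetical excerpt from the run's parameter file: cap the SPH smoothing
# length at 1 internal length unit so it cannot grow without bound.
SPH:
  h_max:  1.   # maximal allowed smoothing length (assumed parameter name)
```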
On further investigation it seems that the 200 particle break occurs between time step 1400 and the end of the run at step 2000, so it is probably a geometric issue, such as more active cells for the same number of particle updates.

Looking for further clues, and wondering what a single-thread result will look like, we can check how the actual scaling affects the graphs by running with different numbers of cores.

![eagle_25-12cores-fig9](/uploads/ff392bd82cd142a0cc3bb44987b2e295/eagle_25-12cores-fig9.png)

Note these runs now use `h_max=1`. The horizontal line at 70ms is made up of all the steps that run `scheduler_reweight`.

If we assume that these times scale with the number of cores (which is somewhat untrue, since the node used had turbo boost enabled, so runs on fewer cores will clock faster), we get, for a select few:

![eagle_25-12cores-fig10](/uploads/2469551b8628bf9e580bfcd887feb172/eagle_25-12cores-fig10.png)
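
For reference, one way to do the rescaling assumed here is sketched below; this is a hypothetical Python illustration with made-up per-step times, not the script used to produce the plots:

```python
# Hypothetical sketch of the rescaling assumed above: multiply each per-step
# wall-clock time by the number of cores the run used, so that runs on
# different core counts can be compared as total core-time per step.
# This assumes perfect scaling and ignores turbo boost, which lets the
# low-core-count runs clock faster.

def rescale_by_cores(wallclock_ms, ncores):
    """Convert per-step wall-clock times (ms) into core-time (core-ms)."""
    return [t * ncores for t in wallclock_ms]

# Made-up per-step times for two of the runs, in milliseconds.
steps_1core = [840.0, 120.0, 70.0]
steps_12core = [70.0, 11.0, 6.0]

print(rescale_by_cores(steps_1core, ncores=1))    # [840.0, 120.0, 70.0]
print(rescale_by_cores(steps_12core, ncores=12))  # [840.0, 132.0, 72.0]
```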
And just to clear things up, here are the 1 and 12 core runs alone:

![eagle_25-12cores-fig11](/uploads/04ebe832eaf79c5d7ce05bbaa852ff60/eagle_25-12cores-fig11.png)