of particles... So a plot of active tasks per step against updated particles:

![eagle_25-12cores-fig3](/uploads/34c0c78e99bc68bb6bd45f7ee46b1d75/eagle_25-12cores-fig3.png)

Looks quite linear down to ~500 particles, where we lose good scaling; that is still 40% of all steps, but we are still somewhat linear. So that idea is unproven. Need to count cells, or be selective about which tasks?

The next plot is the same as above, but now compares the 12-core run with a 1-core run:

![eagle_25-12cores-fig4](/uploads/6079b9f40e8371a74bce7da85c069ee4/eagle_25-12cores-fig4.png)

So we see the same effect. The scaling in the flat section is x2.

Hah, but this is the master branch, and the threadpool typically chunks tasks at the 1000-10000 scale, so maybe that is the issue and we're not using all the available threads.
