# Using DDT with Swift on Cosma
## Running on the compute nodes
DDT can submit batch jobs itself, but if Cosma is busy this means you go back into the queue whenever you exit the debugger. An alternative method:
* Reserve some nodes with SLURM:

```
salloc --ntasks=2 --ntasks-per-node=2 -p cosma7 -A dp004 -t 8:00:00
```

This starts a new shell on the login node. Any MPI programs run in this shell will be started on the allocated compute nodes.
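Before attaching the debugger, it can be worth confirming that the allocation is active and that commands really land on the compute nodes. A quick sanity check, assuming the salloc shell above (the exact node names printed will depend on your allocation):

```shell
# Run these inside the shell started by salloc.
squeue -u $USER           # the interactive job should be listed as running
echo $SLURM_JOB_NODELIST  # names of the allocated compute nodes
mpirun -np 2 hostname     # each rank should print a compute-node name,
                          # not the login node's
```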
* Start DDT:

```
ddt ./swiftsim/build/examples/swift_mpi --verbose=1 --self-gravity --cosmology --velociraptor --pin --threads=14 ./eagle_12.yml
```
Make sure that DDT does NOT submit a batch job; the program should run directly within the existing allocation. Note that the SLURM allocation will be cancelled if you exit the shell started by salloc.
## Memory profiling
DDT memory profiling of a default Swift build made with the Intel compiler results in segfaults. It seems to work if you set

```
CFLAGS="-fno-inline -shared-intel"
```

when running configure.
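For example, a full configure-and-build sequence might look like the following; the compiler choice and build steps are illustrative, and only the CFLAGS setting is the fix described above:

```shell
# Hypothetical Swift build with the Intel compiler; only the CFLAGS
# setting is what makes DDT memory profiling work.
cd swiftsim
./autogen.sh                                           # if building from a git checkout
./configure CC=icc CFLAGS="-fno-inline -shared-intel"
make -j 8
```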