To run SWIFT on Hamilton with Intel MPI on multiple nodes, you can use a batch script like the following:

```
#!/bin/bash
#SBATCH -N 2
#SBATCH -o out_file.o%j
#SBATCH -e err_file.e%j
#SBATCH --exclusive
#SBATCH --tasks-per-node=2

## Load any modules required here
module purge
module load slurm/current
module load hdf5/gcc/1.8.5
module load intelmpi/intel/2017.2

## Uncomment to select the communication fabrics explicitly
#export I_MPI_FABRICS="shm:dapl"

## Execute the MPI program
mpirun -bootstrap srun ../swift_mpi -s -a -t 22 eagle_25.yml -n 4096
```
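
A script like this is submitted to the queue with `sbatch`. A minimal sketch, assuming the script above is saved as `submit_swift.sh` (a placeholder name):

```
sbatch submit_swift.sh    ## submit the job; Slurm prints the job ID
squeue -u $USER           ## check the state of your queued and running jobs
```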
This script runs SWIFT on 2 nodes with 2 MPI ranks on each node. Hamilton has an Intel Omni-Path interconnect, which Intel MPI should select by default, but the environment variable `I_MPI_FABRICS` can be used to experiment with the fabric selection.

With `I_MPI_FABRICS` you can specify the fabrics used for communication within a node and between nodes. The first option, `shm`, selects shared memory for intra-node communication; the second, `dapl`, is the fabric used for inter-node communication. See page 18 of the Intel MPI Library developer guide for more details: https://software.intel.com/sites/default/files/managed/74/c6/intelmpi-2017-developer-guide-linux.pdf
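
For example, to experiment with the fabric selection you might set the variable before the `mpirun` line in the script. This is a sketch of some options, assuming Intel MPI 2017, where `shm`, `dapl`, `tcp`, `tmi`, `ofa` and `ofi` are the recognised fabric values; check the developer guide linked above for the combinations actually supported on Hamilton:

```
## Shared memory inside a node, DAPL between nodes (as in the
## commented-out line in the script above):
export I_MPI_FABRICS="shm:dapl"

## Alternatively, TMI is the tag-matching interface typically
## used with Omni-Path hardware:
#export I_MPI_FABRICS="shm:tmi"

## Make Intel MPI report the fabric it actually selected at start-up:
export I_MPI_DEBUG=2
```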