To run SWIFT on Hamilton with Intel MPI on multiple nodes you can use:

```
## Load any modules required here
module purge
module load slurm/current
module load hdf5/impi/intel/1.8.14
module load intelmpi/intel/2018.2

#export I_MPI_FABRICS="shm:dapl"

## Execute the MPI program
mpirun -bootstrap srun ../swift_mpi -s -a -t 24 eagle_25.yml -n 4096
```
This script runs SWIFT on 2 nodes with 2 MPI ranks on each node. Hamilton has an Intel Omni-Path interconnect, which Intel MPI should select by default, but the environment variable `I_MPI_FABRICS` can be used to experiment with this choice.
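The snippet above is only the body of the batch job; a minimal SLURM preamble matching the 2-node, 2-ranks-per-node layout described here might look like the following sketch (the partition name and time limit are assumptions, not taken from this page):

```shell
#!/bin/bash
# Hypothetical SLURM header for 2 nodes x 2 MPI ranks each;
# adjust the partition (-p) and wall-clock limit for your
# Hamilton allocation before submitting with sbatch.
#SBATCH -N 2                 # number of nodes
#SBATCH --ntasks-per-node=2  # MPI ranks per node
#SBATCH -t 01:00:00          # wall-clock limit (example value)
#SBATCH -p par7.q            # partition name is an assumption
```

With `--ntasks-per-node=2`, `mpirun -bootstrap srun` launches 2 ranks per node, and `-t 24` then gives each rank its own set of threads.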
With `I_MPI_FABRICS` you can specify the fabrics used for communication within a node and between nodes. In the commented-out example above, the first option, `shm`, selects shared memory for intra-node communication, and the second, `dapl`, selects DAPL for inter-node communication. See the Intel MPI Library documentation for more details: https://software.intel.com/en-us/intel-mpi-library/documentation.
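To experiment, export the variable before the `mpirun` line in the job script. For example, `shm:tmi` is one candidate to try for an Omni-Path machine (TMI is the PSM2-based path in this generation of Intel MPI); whether a given fabric is available depends on the Intel MPI build installed on Hamilton:

```shell
# Select shared memory within a node and TMI between nodes.
# "shm:tmi" is an assumption to experiment with, not a value
# taken from this page; fall back to the default if unavailable.
export I_MPI_FABRICS="shm:tmi"
echo "I_MPI_FABRICS=$I_MPI_FABRICS"   # prints I_MPI_FABRICS=shm:tmi
```

If a run fails with an unsupported fabric, simply unset `I_MPI_FABRICS` to return to Intel MPI's default selection.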