Building SWIFT on Hamilton
- Load the compiler, MPI, and library modules:
module load intel/xe_2018.2 intelmpi/intel/2018.2 hdf5/impi/intel/1.8.14 gsl/intel/1.15
- The METIS library available on Hamilton is out of date, so use this local build of METIS 5.1.0 instead:
/ddn/data/rsrd54/metis-5.1.0/build/metis-install/
- Configure and build with:
./configure CC=mpicc --with-gsl --with-metis=/ddn/data/rsrd54/metis-5.1.0/build/metis-install/ CPPFLAGS=-I/ddn/data/rsrd54/metis-5.1.0/build/metis-install/include/
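Once configure has completed, build SWIFT with make in the usual way; a minimal sketch (the -j value is just an example) that produces the swift_mpi binary used in the batch script below:
make -j 8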
Running SWIFT on Hamilton
#!/bin/bash
#SBATCH -N 2
#SBATCH -o out_file.o%j
#SBATCH -e err_file.e%j
#SBATCH --exclusive
#SBATCH --ntasks-per-node=2
## Load any modules required here
module purge
module load slurm/current
module load hdf5/impi/intel/1.8.14
module load intelmpi/intel/2018.2
module load gsl/intel/1.15
#export I_MPI_FABRICS="shm:dapl"
## Execute the MPI program
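## 2 nodes x 2 ranks per node gives 4 MPI ranks in total ($SLURM_NTASKS);
## each rank runs SWIFT with 24 threads (-t 24)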
mpirun -np $SLURM_NTASKS ../swift_mpi -s -t 24 eagle_25.yml -n 4096
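The script can then be submitted and monitored with the usual SLURM commands; the filename here is just an illustration:
sbatch swift_hamilton.sh
squeue -u $USER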
This script runs SWIFT on 2 nodes with 2 MPI ranks on each node. Hamilton has an Intel Omni-Path interconnect, which Intel MPI should select by default, but the I_MPI_FABRICS environment variable can be used to experiment with this. I_MPI_FABRICS specifies the fabrics used for communication within a node and between nodes: in the commented-out line above, the first value, shm, selects shared memory for intra-node communication, and the second value, dapl, selects the fabric used for inter-node communication. See page 18 of the Intel MPI library manual for more details: https://software.intel.com/en-us/intel-mpi-library/documentation.
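For example, to keep shared memory inside a node but try the TMI fabric (which drives Omni-Path through PSM2) between nodes, the commented-out line could be changed along these lines; this is only an illustration, so check which fabrics are actually supported on Hamilton:
export I_MPI_FABRICS="shm:tmi"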