This script runs SWIFT on 2 nodes with 2 MPI ranks on each node. Hamilton has an Intel Omni-Path interconnect, which Intel MPI should select by default, but the environment variable I_MPI_FABRICS can be used to experiment with this.
With I_MPI_FABRICS you can specify the fabrics used for communication within a node and between nodes. The first option, shm, selects shared memory for intra-node communication; the second option, dapl, selects the fabric used for inter-node communication. See page 18 of the Intel MPI library manual for more details: https://software.intel.com/en-us/intel-mpi-library/documentation.
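As a rough illustration, a job script of this kind might look like the sketch below. This assumes a SLURM scheduler, an Intel MPI module called intelmpi, a SWIFT MPI binary named swift_mpi and a parameter file named parameters.yml in the working directory; the actual module names, paths and SWIFT options on Hamilton will differ.

    #!/bin/bash
    #SBATCH --nodes=2              # 2 nodes
    #SBATCH --ntasks-per-node=2    # 2 MPI ranks per node
    #SBATCH --time=01:00:00

    # Module name is an assumption; use the Intel MPI module provided on Hamilton.
    module load intelmpi

    # Fabric selection: shared memory (shm) within a node, dapl between nodes.
    # Change the value after the colon to experiment with other inter-node fabrics.
    export I_MPI_FABRICS=shm:dapl

    # Launch SWIFT with 4 MPI ranks in total (2 nodes x 2 ranks per node).
    # Binary name, parameter file and any extra SWIFT options are placeholders.
    mpirun -np 4 ./swift_mpi parameters.yml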