MPI over InfiniBand
Author: Ohio State University
Help: user guide
From the MVAPICH2 README:
MVAPICH2 is a high performance MPI-2 implementation (with initial support for MPI-3) for InfiniBand, 10GigE/iWARP and RoCE. MVAPICH2 delivers best performance, scalability and fault tolerance for high-end computing systems and servers. MVAPICH2 provides underlying support for several interfaces (such as OFA-IB, OFA-iWARP, OFA-RoCE, PSM, Shared Memory, and TCP) for portability across multiple networks.
To use this build, load the module:

module load mpi/mvapich2/2.0a-ics-slurm

You can still run an executable that was compiled with "mpi/mvapich2/2.0a-ics"; it does not need to be re-compiled, but you do need to load "mpi/mvapich2/2.0a-ics-slurm" to run it.
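To confirm which build is active after loading the module, you can query the compiler wrapper and MVAPICH2's version utility (mpiname is shipped with MVAPICH2; this is just a quick sanity check):

which mpicc
mpiname -a    # prints the MVAPICH2 version and build configuration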
Using MVAPICH2 with Slurm
To run MVAPICH2 under Slurm, your software should be linked against the PMI library ("-lpmi"). To do so:
mpicc your_software.c -o your_software -L/gpfs/gpfs1/slurm/lib -lpmi
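For reference, here is a minimal MPI program that the command above could build; the file name your_software.c is hypothetical and stands in for your own source:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    /* Initialize MPI, then report this task's rank and the world size. */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}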
To run the software with sbatch, please use the following script:
#!/bin/bash
#SBATCH -n 2    # number of tasks
#SBATCH -N 1    # number of nodes

srun --mpi=none your_software    # use the "none" MPI plugin for srun
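Save the script under a name of your choice (job.sh below is just an example) and submit it; you can then watch the queue to see when the job runs:

sbatch job.sh
squeue -u $USER    # check that the job is pending or running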
Using MVAPICH2 without recompiling
If you did not link your software with "-lpmi", run your job with mpiexec instead and omit the number of cores; the launcher picks up the task count from the Slurm allocation:
#!/bin/bash
#SBATCH -n 2    # number of tasks
#SBATCH -N 1    # number of nodes

mpiexec your_software
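The same approach should also work interactively: request an allocation with salloc and launch inside it (a sketch; the task and node counts mirror the script above):

salloc -n 2 -N 1
mpiexec your_software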