| MPIEngine | |
| --- | --- |
| Author | University of Connecticut |
| Website | MPIEngine |
| Source | MPIEngine Github |
| Category | MPI |
| Help | documentation |
Introduction
Motivation
Some users want to run the same script (or executable) multiple times, with a different set of arguments each time. The simplest approach is to list these command lines sequentially inside one Slurm script; the downside is that each command starts only after all of the preceding commands have finished. Another simple approach is to create multiple sbatch scripts, each running one command line. The problem with this method is that at most 8 jobs can run simultaneously, so the remaining submitted jobs may spend a long time pending in the queue.
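For illustration, a minimal sketch of the first (sequential) approach described above might look like the script below; the program name and arguments are hypothetical and not part of the MPIEngine distribution.

#!/bin/bash
#SBATCH -p general
#SBATCH -N 1
#SBATCH --ntasks-per-node=1

# Each command starts only after the previous one has finished,
# so the total runtime is the sum of all individual runtimes.
python myprogram.py Arg11 Arg12 > 1.txt
python myprogram.py Arg21 Arg22 > 2.txt
python myprogram.py Arg31 Arg32 > 3.txt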
Solution
MPIEngine can run any script (or executable) as multiple processes, using a different set of arguments for each process, and you only need to submit one Slurm job.
Module required
module load gcc/5.4.0-alt java/1.8.0_162 mpi/openmpi/3.1.3 mpiengine/1.0
Application Usage
- mpirun -n <number of cores> MPIEngine your_program config.ini
- Fill the parameters into "config.ini". Each line in "config.ini" corresponds to all of the arguments required by one process (see the sketch after the three cases below).
- Example: there are three cases, and each uses 8 cores to run the program with a different set of parameters per process.
Case 1: mpirun -n 8 MPIEngine config.ini
Case 2: mpirun -n 8 MPIEngine myprogram.exe config.ini
Case 3: mpirun -n 8 MPIEngine python mySourceCode.py config.ini
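As a rough illustration of how the pieces fit together, the sketch below shows Case 3 with a hypothetical two-line config.ini; the file names and arguments are made up, and the exact launch behavior is defined by MPIEngine itself.

# Hypothetical config.ini with two lines, one per process:
#   ArgA1 ArgA2 > a.txt
#   ArgB1 ArgB2 > b.txt

# Launch two processes; each process takes one line of config.ini as its arguments:
mpirun -n 2 MPIEngine python mySourceCode.py config.ini

# Conceptually, this corresponds to running the two commands below at the same time:
#   python mySourceCode.py ArgA1 ArgA2 > a.txt
#   python mySourceCode.py ArgB1 ArgB2 > b.txt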
Example
(1): Executable script: /apps2/mpiengine/1.0/test/test.py
(2): config.ini (contains 8 groups of parameters, one line for each process)
Arg11 Arg12 Arg13 Arg14 > 1.txt
Arg21 Arg22 Arg23 Arg24 > 2.txt
Arg31 Arg32 Arg33 Arg34 > 3.txt
Arg41 Arg42 Arg43 Arg44 > 4.txt
Arg51 Arg52 Arg53 Arg54 > 5.txt
Arg61 Arg62 Arg63 Arg64 > 6.txt
Arg71 Arg72 Arg73 Arg74 > 7.txt
Arg81 Arg82 Arg83 Arg84 > 8.txt
(3): Slurm script (/apps2/mpiengine/1.0/test/job.sh). In this Slurm script, we request 2 nodes with 4 cores per node (8 cores in total) from the general partition, then create 8 processes with MPI. Each process handles one line of parameters recorded in config.ini.
#!/bin/bash
#SBATCH -p general
#SBATCH -N 2
#SBATCH --ntasks-per-node=4

module purge
module load gcc/5.4.0-alt zlib/1.2.11 java/1.8.0_162 mpi/openmpi/3.1.3 python/2.7.6 mpiengine/1.0

RootPath=/apps2/mpiengine/1.0/test
mpirun -n 8 MPIEngine python $RootPath/test.py $RootPath/config.ini
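To try the example, you can submit the job script with sbatch and check the per-process output files afterwards. A minimal sketch is shown below; the working directory is hypothetical, and the output file names follow the redirections listed in config.ini.

# Work from a writable directory (the path here is hypothetical)
mkdir -p ~/mpiengine_test && cd ~/mpiengine_test
cp /apps2/mpiengine/1.0/test/job.sh .

# Submit the job and monitor it
sbatch job.sh
squeue -u $USER

# After the job finishes, each of the 8 processes should have written its own
# output file (1.txt ... 8.txt), assuming the redirections in config.ini are
# applied relative to the submission directory.
ls *.txt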