Fluent Guide
Module File
First, check the available Fluent modules:
module avail fluent
To load Fluent version 16.2:
module load fluent/16.2
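To confirm that the module loaded correctly, you can list your loaded modules and check which fluent binary is now on your PATH (a quick, optional sanity check using standard environment-modules commands):
module list            # fluent/16.2 should appear in the list of loaded modules
which fluent           # prints the full path of the fluent executable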
Run Fluent with Slurm
SSH into the HORNET cluster to use SLURM.
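If you are not already connected, log in with an SSH client; a typical session looks like this (both the NetID and the host name below are placeholders, and the actual login address is given on the Getting Started page):
ssh <NetID>@<cluster-login-address>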
Serial Job
Create a batch script, say, fluent.sh, with something similar to:
#!/bin/bash
#SBATCH -n 1              # only allocate 1 task
#SBATCH -J fluent1        # sensible name for the job
#SBATCH -o fluent_%J.out  # the file to write the stdout of the fluent job
#SBATCH -e fluent_%J.err  # the file to write the stderr of the fluent job

export FLUENT_GUI=off
fluent 2d -g -i foo.txt
Then, submit the job as:
sbatch < fluent.sh
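Once the job is submitted, sbatch prints its job ID, and ordinary Slurm commands can be used to follow it (a minimal sketch; <jobid> below is whatever ID sbatch reported, and the output file name follows the %J pattern set in the script above):
squeue -u $USER           # see whether the job is pending or running
cat fluent_<jobid>.out    # read Fluent's standard output after the job finishes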
Parallel Job
To run several tasks in parallel on one or more nodes, use a batch script such as the following example, fluentP.sh:
#!/bin/bash
#SBATCH -N 2               # allocate 2 nodes for the job
#SBATCH -n 40              # total number of tasks. Or you can specify the number of tasks per node as "#SBATCH --ntasks-per-node=20"
#SBATCH --exclusive        # no other jobs on the nodes while the job is running
#SBATCH -J fluentP1        # sensible name for the job
#SBATCH -o fluentP_%J.out  # the file to write the stdout of the fluent job
#SBATCH -e fluentP_%J.err  # the file to write the stderr of the fluent job

export FLUENT_GUI=off

# Work out the total number of tasks allocated by Slurm
if [ -z "$SLURM_NPROCS" ]; then
    N=$(( $(echo $SLURM_TASKS_PER_NODE | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
else
    N=$SLURM_NPROCS
fi
echo -e "N: $N\n"

# run fluent in batch mode on the allocated node(s)
fluent 2ddp -g -slurm -t$N -pnmpi -ssh -i foo.txt
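The if block above is only a fallback for the case where SLURM_NPROCS is not set: the sed expression rewrites Slurm's compact SLURM_TASKS_PER_NODE format, for example 20(x2) meaning 20 tasks on each of 2 nodes, into an arithmetic expression that the shell then evaluates. You can check the conversion by hand outside a job (the value assigned below is just an example):
SLURM_TASKS_PER_NODE="20(x2)"   # example value: 20 tasks on each of 2 nodes
N=$(( $(echo $SLURM_TASKS_PER_NODE | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
echo $N                         # prints 40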
Then, submit your job as:
sbatch < fluentP.sh
Interactive Job
For an interactive run of Fluent, you can use this simple script, fluent-srun.sh:
#!/bin/bash
HOSTSFILE=.hostlist-job$SLURM_JOB_ID
# Only the first task (rank 0) writes the host list and launches Fluent
if [ "$SLURM_PROCID" == "0" ]; then
    srun hostname -f > $HOSTSFILE
    fluent -t $SLURM_NTASKS -cnf=$HOSTSFILE -ssh 3d
    rm -f $HOSTSFILE
fi
exit 0
To run an interactive session, use srun like this:
$ srun -n <#procs> ./fluent-srun.sh
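For example, to start the interactive session with 8 processes (8 is only an illustrative count; pick whatever fits your allocation):
chmod +x fluent-srun.sh      # make the script executable the first time
srun -n 8 ./fluent-srun.sh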