Fluent Guide

Module File

First, load the Fluent module:

module load fluent/version

For example, to load Fluent v14.0:

module load fluent/14.0
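
If you are not sure which versions are installed, the standard module commands will tell you (the exact output will vary with the cluster's software stack):

module avail fluent    # list the Fluent modules installed on the cluster
module list            # show the modules currently loaded in your session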

Run Fluent with Slurm

Please ssh cn65 from within the Hornet cluster, or ssh hornet-login3.engr.uconn.edu from an outside client, in order to use SLURM.

Serial Job

You need a submission script, fluent.sh, like the following:

#!/bin/bash
#SBATCH -n 1                # allocate only 1 task
#SBATCH -J fluent1          # sensible name for the job
#SBATCH -o fluent_%j.out    # file to write the job's stdout
#SBATCH -e fluent_%j.err    # file to write the job's stderr

export FLUENT_GUI=off       # batch run, no GUI

fluent 2d -g -i foo.txt     # 2D solver, no graphics (-g), read input from foo.txt

Then, submit the job as:

sbatch < fluent.sh
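
After submission, you can follow the job with the usual SLURM tools; the job ID below (123456) is only an example:

$ sbatch < fluent.sh
Submitted batch job 123456
$ squeue -u $USER          # check whether the job is pending or running
$ cat fluent_123456.out    # read the job's stdout once it has finished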

Parallel Job

To run several tasks in parallel on one or more nodes, the submission file, fluentP.sh, could be as follows:

#!/bin/bash
#SBATCH -N 2                # allocate 2 nodes for the job
#SBATCH -n 40               # total number of tasks. Or specify tasks per node with "#SBATCH --ntasks-per-node=20"
#SBATCH --exclusive         # no other jobs on the nodes while the job is running
#SBATCH -J fluentP1         # sensible name for the job
#SBATCH -o fluentP_%j.out   # file to write the job's stdout
#SBATCH -e fluentP_%j.err   # file to write the job's stderr

export FLUENT_GUI=off

# Determine the total task count N: SLURM_NPROCS is set when -n is given;
# otherwise derive it from SLURM_TASKS_PER_NODE, e.g. "20(x2)" -> 20 * 2.
if [ -z "$SLURM_NPROCS" ]; then
  N=$(( $(echo $SLURM_TASKS_PER_NODE | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
else
  N=$SLURM_NPROCS
fi

echo -e "N: $N\n"

# run fluent in batch on the allocated node(s)
fluent 2ddp -g -slurm -t$N -pnmpi -ssh -i foo.txt
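
As a sanity check of the arithmetic in the if-block above, here is what the sed substitution produces for a uniform allocation of 2 nodes with 20 tasks each. Note that the pattern only handles the uniform "N(xM)" form, not mixed lists such as "10,8(x2)":

$ echo "20(x2)" | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/'
20 * 2
$ echo $(( 20 * 2 ))
40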

Then, submit your job as:

sbatch < fluentP.sh

Interactive Job

For an interactive run of Fluent, you can use this simple script, fluent-srun.sh:

#!/bin/bash
HOSTSFILE=.hostlist-job$SLURM_JOB_ID
# Only the first task builds the host list and launches Fluent across all tasks
if [ "$SLURM_PROCID" == "0" ]; then
   srun hostname -f > $HOSTSFILE
   fluent -t $SLURM_NTASKS -cnf=$HOSTSFILE -ssh 3d
   rm -f $HOSTSFILE
fi
exit 0

To run an interactive session, use srun like this, replacing <ntasks> with the number of tasks to start:

$ srun -n <ntasks> ./fluent-srun.sh
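
For example, a four-task run (the task count here is arbitrary; pick what your model needs):

$ srun -n 4 ./fluent-srun.sh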

Run Fluent with LSF

Serial Job

After loading the Fluent module as above, run Fluent with an input script foo.txt:

bsub -o fluent_%J.out -e fluent_%J.err fluent 3d -g -i foo.txt

foo.txt must refer to a .cas case file. Here 3d is the solver version, one of:

2d    2ddp  2d_host  2d_node  2ddp_host  2ddp_node
3d    3ddp  3d_host  3d_node  3ddp_host  3ddp_node
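
The dp variants are the double-precision solvers, and the _host and _node builds are normally started internally by parallel runs rather than invoked directly, so in practice you pick one of 2d, 2ddp, 3d, or 3ddp. For example, a double-precision 2D serial run uses the same flags with a different version token:

bsub -o fluent_%J.out -e fluent_%J.err fluent 2ddp -g -i foo.txt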

Parallel Job

To run in parallel with MPI, try:

bsub -n $np -o fluent_%J.out -e fluent_%J.err fluent 2ddp -t$np -pnmpi -ssh -g -i foo.txt -lsf

Replace $np with the number of cores you need.
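
For instance, an 8-core run (8 is only an example) expands to:

np=8
bsub -n $np -o fluent_%J.out -e fluent_%J.err fluent 2ddp -t$np -pnmpi -ssh -g -i foo.txt -lsf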