Fluent Guide


Module File

First, load the Fluent module:

module load fluent/version

For example, to load Fluent 14.0:

module load fluent/14.0
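
To see which Fluent versions are installed, you can list the available module files (the exact output depends on the cluster):

module avail fluent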

Run Fluent with Slurm

Serial Job

Create a submission script, fluent.sh, like the following:

#!/bin/bash
#SBATCH -n 1 # only allocate 1 task
#SBATCH -J fluent1 # sensible name for the job
#SBATCH -o fluent_%J.out # the file to write the stdout of the fluent job
#SBATCH -e fluent_%J.err # the file to write the stderr of the fluent job

export FLUENT_GUI=off

# run the 2-D solver in batch mode (-g, no GUI) with journal file foo.txt
fluent 2d -g -i foo.txt
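
The journal file foo.txt contains the Fluent text-interface (TUI) commands to execute. As a rough sketch only, assuming a case file named foo.cas in the working directory, it might look like:

; read the case, initialize, iterate 100 steps, save the data, and exit
/file/read-case foo.cas
/solve/initialize/initialize-flow
/solve/iterate 100
/file/write-data foo.dat
/exit yes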

Then, submit the job as:

sbatch fluent.sh
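
Once submitted, the job can be checked with the usual Slurm commands (the job ID 123456 below is just a placeholder):

squeue -u $USER   # list your pending and running jobs
scancel 123456    # cancel a job by its job ID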

Parallel Job

To run several tasks in parallel on one or more nodes, the submission file, fluentP.sh, could be as follows:

#!/bin/bash
#SBATCH -N 2 # allocate 2 nodes for the job
#SBATCH -n 40 # total number of tasks; alternatively, specify the number of tasks per node with "#SBATCH --ntasks-per-node=20"
#SBATCH --exclusive # no other jobs on the nodes while job is running
#SBATCH -J fluentP1 # sensible name for the job
#SBATCH -o fluentP_%J.out # the file to write the stdout of the fluent job
#SBATCH -e fluentP_%J.err # the file to write the stderr of the fluent job

export FLUENT_GUI=off
# determine the total number of tasks; fall back to parsing SLURM_TASKS_PER_NODE
if [ -z "$SLURM_NPROCS" ]; then
  N=$(( $(echo $SLURM_TASKS_PER_NODE | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
else
  N=$SLURM_NPROCS
fi

echo -e "N: $N\n";

# run fluent in batch on the allocated node(s)
fluent 2ddp -g -slurm -t$N -pnmpi -ssh -i foo.txt
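
The sed expression above rewrites Slurm's compact task layout, e.g. "20(x2)" for 20 tasks on each of 2 nodes, into an arithmetic expression that the shell then evaluates; note that it only handles this uniform form. You can see the effect on an example value like this:

echo "20(x2)" | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/'                  # prints: 20 * 2
echo $(( $(echo "20(x2)" | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))   # prints: 40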

Then, submit your job as:

sbatch fluentP.sh

Interactive Job

For an interactive run of Fluent, you can use this simple wrapper script, fluent-srun.sh:

#!/bin/bash
# file to hold the list of allocated hosts for this job
HOSTSFILE=.hostlist-job$SLURM_JOB_ID
if [ "$SLURM_PROCID" == "0" ]; then
   # write the allocated hostnames, launch the 3-D solver across them, then clean up
   srun hostname -f > $HOSTSFILE
   fluent -t $SLURM_NTASKS -cnf=$HOSTSFILE -ssh 3d
   rm -f $HOSTSFILE
fi
exit 0

To run an interactive session, use srun like this:

$ srun -n <number_of_tasks> ./fluent-srun.sh
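
For example, to start an 8-task session (the count of 8 is only illustrative), first make the wrapper executable and then launch it:

$ chmod +x fluent-srun.sh
$ srun -n 8 ./fluent-srun.sh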

Run Fluent with LSF

Serial Job

First load the Fluent module as described above. Then, with an input script foo.txt, run

bsub -o fluent_%J.out -e fluent_%J.err fluent 3d -g -i foo.txt

foo.txt must refer to a .cas input file. 3d selects the solver version and is one of

2d    2ddp_host  2d_host  3d    3ddp_host  3d_host
2ddp  2ddp_node  2d_node  3ddp  3ddp_node  3d_node

Here 2d/3d choose the 2-D or 3-D solver, the dp variants use double precision, and the _host/_node versions are the host and compute-node processes used internally by parallel runs.
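
Job state can be checked with the standard LSF commands (job ID 123456 is a placeholder):

bjobs -u $USER   # list your jobs
bkill 123456     # kill a job by its job ID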

Parallel Job

To run in parallel with MPI, try

bsub -n $np -o fluent_%J.out -e fluent_%J.err fluent 2ddp -t$np -pnmpi -ssh -g -i foo.txt -lsf

Replace $np with the number of cores you need.
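
For example, you could wrap the command in a small helper script (the name run_fluent_lsf.sh and the 16-core count are only illustrative):

#!/bin/bash
# run_fluent_lsf.sh -- submit a 2-D double-precision Fluent run to LSF
np=16   # number of cores; adjust to your needs
bsub -n $np -o fluent_%J.out -e fluent_%J.err \
     fluent 2ddp -t$np -pnmpi -ssh -g -i foo.txt -lsf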