Fluent Guide

From Storrs HPC Wiki
Latest revision as of 06:01, 13 September 2019

==Module File==

First check the available fluent modules:

 module avail fluent

To load fluent version 19.1:

 module load fluent/19.1

To load fluent version 2019R3:

 module load fluent/2019R3

==Parallel Job==

To run several tasks in parallel on one or more nodes with an older version of Fluent, use a batch script such as this example, fluentP.sh:

 #!/bin/bash
 #SBATCH -N 2 # allocate 2 nodes for the job
 #SBATCH --ntasks-per-node=20
 #SBATCH --exclusive # no other jobs on the nodes while job is running
 #SBATCH -o fluentP_%J.out # the file to write stdout for fluent job
 #SBATCH -e fluentP_%J.err # the file to write stderr for fluent job
 
 source /etc/profile.d/modules.sh
 module load fluent/$VERSION # replace $VERSION with the version you want, e.g. 19.1
 
 # derive the total task count N if Slurm did not set SLURM_NPROCS
 if [ -z "$SLURM_NPROCS" ]; then
   N=$(( $(echo $SLURM_TASKS_PER_NODE | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
 else
   N=$SLURM_NPROCS
 fi
 
 echo -e "N: $N\n"
 
 # run fluent in batch on the allocated node(s)
 fluent 2ddp -g -slurm -t$N -pnmpi -ssh -i foo.txt
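The if-block above covers the case where SLURM_NPROCS is not set: Slurm then reports the allocation only through SLURM_TASKS_PER_NODE, in a compact form such as 20(x2) for "20 tasks on each of 2 nodes", and the sed call rewrites that form into an arithmetic product. A standalone sketch of the conversion (note it assumes the uniform form; a heterogeneous allocation yields a comma-separated list that this regex does not handle):

```shell
# Derive the total task count N from Slurm's compact notation.
# "20(x2)" means 20 tasks on each of 2 nodes, so N should be 40.
SLURM_TASKS_PER_NODE="20(x2)"   # example value; normally set by Slurm
# sed turns "20(x2)" into "20 * 2", which bash arithmetic then evaluates
N=$(( $(echo "$SLURM_TASKS_PER_NODE" | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
echo "N: $N"   # N: 40
```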

Make sure to replace foo.txt in the last line with the name of your Fluent input file. Then, submit your job with:

 sbatch fluentP.sh

==Parallel Job for new fluent versions==

Here is an example fluentP.sh batch script for newer versions of Fluent:

 #!/bin/bash
 #SBATCH -N 2 # allocate 2 nodes for the job
 #SBATCH --ntasks-per-node=20
 #SBATCH --exclusive # no other jobs on the nodes while job is running
 #SBATCH -o fluentP_%J.out # the file to write stdout for fluent job
 #SBATCH -e fluentP_%J.err # the file to write stderr for fluent job
 
 module purge # start from a clean module environment for the submission
 module load gcc/5.4.0-alt zlib/1.2.11 java/1.8.0_162 mpi/openmpi/3.1.3 fluent/2019R3 # load the most current fluent module along with the modules it needs
 
 # Run fluent in batch mode on the allocated node(s) with one of the
 # fluent solvers: 2d, 2ddp, 3d, or 3ddp.
 #   -g      run without GUI or graphics
 #   -slurm  hand the job over to the Slurm scheduler
 #   -t      number of processors ("cores") for the fluent model to use
 #   -i      input file for fluent to execute; it must be in the SAME
 #           directory as this submission script
 # Stdout and stderr go to the files named in the SBATCH headers above;
 # rename them to match the project currently being worked on.
 # "fluent -h" or "fluent -help" lists the available fluent options.
 fluent 3ddp -g -slurm -t$SLURM_NTASKS -pdefault -i foo.txt

Make sure to replace foo.txt in the last line with the name of your job input file. Then, submit your job with:

 sbatch fluentP.sh
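Because sbatch forwards any arguments given after the script name to the script itself, the hard-coded input file can be made a parameter instead. This is a hypothetical variant (the INPUT variable and its default are illustrative, not part of the original script):

```shell
# Hypothetical last lines of fluentP.sh: take the journal file from the
# script's first argument, falling back to foo.txt when none is given,
# so the same script can be submitted as: sbatch fluentP.sh myjob.txt
INPUT=${1:-foo.txt}
echo "Fluent input file: $INPUT"
# the fluent invocation then becomes:
#   fluent 3ddp -g -slurm -t$SLURM_NTASKS -pdefault -i "$INPUT"
```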

==Interactive Job==

Start an interactive session on 1 node with 12 CPUs with fisbatch:

 fisbatch -N 1 -n 12

=== Fluent ===

The first parameter to fluent is the solver mode (2ddp below). It can be 2d, 3d, 2ddp, or 3ddp; change it to whatever you need. The "dp" stands for "double precision".
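The four mode names are simply the dimensionality plus an optional precision suffix. A small illustration (the DIM and PRECISION variables are only for demonstration, not used by Fluent itself):

```shell
# Fluent's solver mode name is the dimensionality (2 or 3) followed by
# "d", plus "dp" when double precision is wanted: 2d, 3d, 2ddp, 3ddp.
DIM=2            # 2 or 3
PRECISION=dp     # "dp" for double precision, "" for single
MODE="${DIM}d${PRECISION}"
echo "$MODE"     # 2ddp
```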

Run fluent with:

 fluent 2ddp -g -t $SLURM_NTASKS

Type "exit" to end the session.

=== Ansys Electromagnetics Suite ===

Run the graphical launcher for Ansys Electromagnetics Suite with:

 ansysedt

{{ambox| issue = OSX users should not use XQuartz 2.7.9 because of an OpenGL bug and should instead downgrade to version 2.7.8}}

== User Guide ==

The command-line help provides only limited information:

 fluent -help

The only further documentation Ansys supplies is the "Help" system inside the Fluent GUI. Open a terminal in [[X|X2Go]], load the fluent module, and then run `fluent` from the command line to launch the GUI. Then click Help > Help on Starting and Executing Fluent.