Fluent Guide

From Storrs HPC Wiki

Latest revision as of 06:01, 13 September 2019

==Module File==

First check the available Fluent modules:

 module avail fluent

To load Fluent version 19.1:

 module load fluent/19.1

To load Fluent version 2019R3:

 module load fluent/2019R3

==Parallel Job==

To run several tasks in parallel on one or more nodes, here is an example fluentP.sh batch script for older versions of Fluent:

 #!/bin/bash
 #SBATCH -N 2 # allocate 2 nodes for the job
 #SBATCH --ntasks-per-node=20
 #SBATCH --exclusive # no other jobs on the nodes while job is running
 #SBATCH -o fluentP_%J.out # the file to write stdout for the fluent job
 #SBATCH -e fluentP_%J.err # the file to write stderr for the fluent job
 
 source /etc/profile.d/modules.sh
 module load fluent/$VERSION # set VERSION to the version you need, e.g. 19.1
 
 # Compute the total task count N.
 # If Slurm does not export SLURM_NPROCS, derive N from SLURM_TASKS_PER_NODE:
 # sed rewrites e.g. "20(x2)" (20 tasks on each of 2 nodes) into "20 * 2",
 # which the arithmetic expansion evaluates to 40.
 if [ -z "$SLURM_NPROCS" ]; then
   N=$(( $(echo $SLURM_TASKS_PER_NODE | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
 else
   N=$SLURM_NPROCS
 fi
 
 echo -e "N: $N\n"
 
 # run fluent in batch mode on the allocated node(s)
 fluent 2ddp -g -slurm -t$N -pnmpi -ssh -i foo.txt

Make sure to replace foo.txt in the last line with the name of your input file. Then, submit your job with:

 sbatch fluentP.sh
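The task-count logic in fluentP.sh can be exercised on its own. In a real job, Slurm sets SLURM_TASKS_PER_NODE to a string such as "20(x2)" (20 tasks on each of 2 nodes), and the sed call rewrites it into an arithmetic expression that the shell then evaluates. A stand-alone sketch with sample values of our own:

```shell
#!/bin/bash
# Stand-alone sketch of the task-count fallback used in fluentP.sh.
# Slurm normally exports SLURM_NPROCS; when it is absent, the script
# parses SLURM_TASKS_PER_NODE instead.
compute_n() {
    if [ -z "$SLURM_NPROCS" ]; then
        # sed turns "20(x2)" into "20 * 2"; $(( ... )) evaluates it to 40.
        # A plain single-node value like "20" passes through sed unchanged.
        echo $(( $(echo "$SLURM_TASKS_PER_NODE" | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
    else
        echo "$SLURM_NPROCS"
    fi
}

unset SLURM_NPROCS
SLURM_TASKS_PER_NODE="20(x2)"   # sample value; in a real job Slurm sets this
compute_n                        # prints 40

SLURM_NPROCS=8                   # when Slurm provides the count directly
compute_n                        # prints 8
```

The fallback matters because SLURM_NPROCS is not guaranteed to be set in every Slurm configuration, while SLURM_TASKS_PER_NODE is.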

==Parallel Job for new Fluent versions==

Here is an example fluentP.sh batch script for newer versions of Fluent:

 #!/bin/bash
 #SBATCH -N 2 # allocate 2 nodes for the job
 #SBATCH --ntasks-per-node=20
 #SBATCH --exclusive # no other jobs on the nodes while job is running
 #SBATCH -o fluentP_%J.out # the file to write stdout for the fluent job
 #SBATCH -e fluentP_%J.err # the file to write stderr for the fluent job
 
 module purge # start from a clean module environment for the submission
 module load gcc/5.4.0-alt zlib/1.2.11 java/1.8.0_162 mpi/openmpi/3.1.3 fluent/2019R3 # load the most current fluent module along with the modules it depends on
 
 # Run fluent in batch mode on the allocated node(s) using one of the
 # fluent solvers: 2d, 2ddp, 3d, or 3ddp.
 #   -g      run without the GUI or graphics
 #   -slurm  submit through the Slurm scheduler
 #   -t      number of processors ("cores") for the fluent model to use
 #   -i      input file for fluent to execute; it must be in the SAME
 #           directory as this submission script
 # stdout and stderr are written to the files named in the #SBATCH -o/-e
 # headers above; rename them to match your project if you wish.
 # "fluent -h" or "fluent -help" lists the available fluent options.
 fluent 3ddp -g -slurm -t$SLURM_NTASKS -pdefault -i foo.txt

Make sure to replace foo.txt in the last line with the name of your job input file. Then, submit your job with:

 sbatch fluentP.sh
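The file passed with -i is a Fluent journal file: a plain-text list of TUI commands that Fluent executes in order. A minimal sketch of what foo.txt might contain, reading a case, iterating, and saving the data; the file names mycase.cas and mycase.dat are placeholders, and exact TUI syntax can vary between Fluent versions, so check the TUI reference for yours:

```text
/file/read-case mycase.cas
/solve/initialize/initialize-flow
/solve/iterate 100
/file/write-data mycase.dat
/exit
yes
```

The trailing "yes" answers Fluent's exit confirmation prompt so the batch job terminates cleanly.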

==Interactive Job==

Start an interactive session on 1 node with 12 CPUs with fisbatch:

 fisbatch -N 1 -n 12

===Fluent===

The first parameter to fluent is the version (namely, 2ddp below). The version can be either 2d, 3d, 2ddp, or 3ddp; change it to whatever you need. The "dp" stands for "double precision".

Run fluent with:

 fluent 2ddp -g -t $SLURM_NTASKS

Type "exit" to end the session.
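Inside the fisbatch session, Slurm exports SLURM_NTASKS (12 for the allocation above), so -t $SLURM_NTASKS matches your allocation automatically. If you want the command to also work outside a job, a shell default expansion can fall back to one core; the fallback is our suggestion, not part of the original command:

```shell
# ${VAR:-default} substitutes "default" when VAR is unset or empty.
unset SLURM_NTASKS                             # simulate running outside a job
echo "fluent 2ddp -g -t ${SLURM_NTASKS:-1}"    # prints: fluent 2ddp -g -t 1

SLURM_NTASKS=12                                # inside the fisbatch session above
echo "fluent 2ddp -g -t ${SLURM_NTASKS:-1}"    # prints: fluent 2ddp -g -t 12
```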

===Ansys Electromagnetics Suite===

Run the graphical launcher for Ansys Electromagnetics Suite with:

 ansysedt

Note: OS X users should not use XQuartz 2.7.9 because of an OpenGL bug; downgrade to version 2.7.8 instead.

==User Guide==

The command-line help provides only limited information:

 fluent -help

The only other documentation Ansys provides is the Help system inside the Fluent GUI. Open a terminal in [[X|X2Go]], load the fluent module, and run fluent from the command line to launch the GUI. Then click Help > Help on Starting and Executing Fluent.

[[Category:Software]]