Comsol Guide

COMSOL with MPI

To run COMSOL with SLURM, create the following comsol.sh script:

#!/bin/bash

# For general partition
#SBATCH --nodes 8-16
#SBATCH --cpus-per-task 12
#SBATCH --ntasks-per-node 1
#SBATCH --output comsol.log

# Clear the output log
echo -n > comsol.log

# Add debugging information.
scontrol show job $SLURM_JOBID

# Details of input and output files.
INPUTFILE=/path/to/input_model.mph
OUTPUTFILE=/path/to/output_model.mph

# Load our comsol module.
module purge
module load intelics/2013.1.039-compiler zlib/1.2.8-ics mpi/mvapich2/2.0a-ics-slurm comsol/5.3a

# Run comsol.
comsol batch \
    -clustersimple \
    -np $SLURM_CPUS_PER_TASK \
    -mpifabrics shm:ofa \
    -inputfile $INPUTFILE \
    -outputfile $OUTPUTFILE \
    -tmpdir temp

Then submit the script by issuing:

$ sbatch comsol.sh
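
Once the job is submitted, you can check on it with standard SLURM commands. This is an optional sketch; comsol.log is the file named by the --output line in the script above.

$ squeue -u $USER       # confirm the job is pending or running
$ tail -f comsol.log    # follow the COMSOL batch output as it is written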

COMSOL with GPU

$ ssh <NetID>@login.storrs.hpc.uconn.edu
$ module load intelics/2013.1.039-compiler zlib/1.2.8-ics mpi/mvapich2/2.0a-ics-slurm comsol/5.2
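
If you are unsure which COMSOL versions are installed, you can list the available modules before loading one (an optional check; only the versions already named in this guide are known to be present):

$ module avail comsol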

To run COMSOL with SLURM, create the following comsol.sh script:

#!/bin/bash

# Ask for a number of compute nodes
#SBATCH -p gpu
#SBATCH --gres=gpu:1
#SBATCH -N 1
#SBATCH -n 12
# Set your email address to be notified of job updates
#SBATCH --mail-type=ALL
#SBATCH --mail-user=your@email.address

# Details of your input and output files
INPUTFILE=/path/to/input_model.mph
OUTPUTFILE=/path/to/output_model.mph
TMPDIR=

# Load our comsol module
source /etc/profile.d/modules.sh
module purge
module load comsol/5.2a

######## DO NOT EDIT BELOW THIS LINE ########
# check if tmpdir exists
if [ ! -z "$TMPDIR" ]; then
  TMPDIR="-tmpdir $TMPDIR"
fi
## Now, run COMSOL in batch mode with the input and output detailed above.
comsol batch -3drend ogl -np $SLURM_NTASKS -inputfile $INPUTFILE -outputfile $OUTPUTFILE $TMPDIR

Then submit the script by issuing:

$ sbatch comsol.sh
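
Before submitting you can check that GPU nodes are available, and after submitting you can confirm the allocation. This is an optional sketch using standard SLURM commands; replace <jobid> with the job ID printed by sbatch.

$ sinfo -p gpu                 # show the state of the gpu partition
$ scontrol show job <jobid>    # the output lists the allocated resources, including the requested gpu:1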

COMSOL on a single node

Some COMSOL solvers, such as PARDISO, do not make use of MPI. To run these on a single node, specify -np instead of -clustersimple, as follows:

#!/bin/bash

# Ask for a number of compute nodes
#SBATCH -N 1
#SBATCH -n 24

# Details of your input and output files
INPUTFILE=input_model.mph
OUTPUTFILE=output_model.mph
TMPDIR=temp

######## DO NOT EDIT BELOW THIS LINE ########

# Load our comsol module
source /etc/profile.d/modules.sh
module purge
module load comsol/5.2

## Now, run COMSOL in batch mode with the input and output detailed above.
comsol batch -np $SLURM_NTASKS -inputfile $INPUTFILE -outputfile $OUTPUTFILE -tmpdir $TMPDIR

Then submit the script by issuing:

$ sbatch comsol.sh
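
After any of these jobs finishes, sacct can show how long it ran and whether it completed successfully (an optional check; replace <jobid> with the job ID printed by sbatch):

$ sacct -j <jobid> --format=JobID,JobName,AllocCPUS,Elapsed,State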