Comsol Guide
NOTE REGARDING COMSOL 5.5
Starting with version 5.5, COMSOL is built against a newer GLIBC (2.14) and is no longer compatible with RHEL 6.x. To use COMSOL 5.5, submit jobs to the GeneralSky partition or any GPU partition.
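For example, a job script can request the GeneralSky partition with a standard SBATCH directive (a minimal sketch; run sinfo to confirm the exact partition names available on the cluster):

#SBATCH --partition=GeneralSky    # or the short form: #SBATCH -p GeneralSky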
comsol with MPI
To run COMSOL with SLURM, please create the following comsol.sh:
#!/bin/bash
# For general partition
#SBATCH --nodes 8-16
#SBATCH --cpus-per-task 12
#SBATCH --ntasks-per-node 1
#SBATCH --output comsol.log

# Clear the output log
echo -n > comsol.log

# Add debugging information.
scontrol show job $SLURM_JOBID

# Details of input and output files.
INPUTFILE=/path/to/input_model.mph
OUTPUTFILE=/path/to/output_model.mph

# Load our comsol module.
module purge
module load intelics/2013.1.039-compiler zlib/1.2.8-ics mpi/mvapich2/2.0a-ics-slurm comsol/5.3a

# Run comsol.
comsol batch \
    -clustersimple \
    -np $SLURM_CPUS_PER_TASK \
    -mpifabrics shm:ofa \
    -inputfile $INPUTFILE \
    -outputfile $OUTPUTFILE \
    -tmpdir temp
Then submit the script by issuing:
$ sbatch comsol.sh
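Once the job is submitted, you can check its state and follow the solver output with standard SLURM and shell commands, for example:

$ squeue -u $USER       # list your queued and running jobs
$ tail -f comsol.log    # follow COMSOL's output while the job runs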
comsol with GPU
$ ssh <NetID>@login.storrs.hpc.uconn.edu
$ module load intelics/2013.1.039-compiler zlib/1.2.8-ics mpi/mvapich2/2.0a-ics-slurm comsol/5.2
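If you are unsure which COMSOL versions are installed, the standard Environment Modules command lists them:

$ module avail comsol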
To run COMSOL with SLURM, please create the following comsol.sh:
#!/bin/bash
# Ask for a number of compute nodes
#SBATCH -p gpu
#SBATCH --gres=gpu:1
#SBATCH -N 1
#SBATCH -n 12

# Set your email address to be notified of job updates
#SBATCH --mail-type=ALL
#SBATCH --mail-user=your@email.address

# Details of your input and output files
INPUTFILE=/path/to/input_model.mph
OUTPUTFILE=/path/to/output_model.mph
TMPDIR=

# Load our comsol module
source /etc/profile.d/modules.sh
module purge
module load comsol/5.2a

######## DO NOT EDIT BELOW THIS LINE ########
# If a tmpdir was set above, turn it into a -tmpdir option
if [ ! -z "$TMPDIR" ]; then
    TMPDIR="-tmpdir $TMPDIR"
fi

## Now, run COMSOL in batch mode with the input and output detailed above.
comsol batch -3drend ogl -np $SLURM_NTASKS -inputfile $INPUTFILE -outputfile $OUTPUTFILE $TMPDIR
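If you want COMSOL to use a specific temporary directory, set TMPDIR near the top of the script; the block after the DO NOT EDIT line then turns it into a -tmpdir option. For example (the path here is only an illustration; use any directory you can write to):

TMPDIR=/scratch/$USER/comsol_tmp    # illustrative path only; must be writable by your job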
Then submit the script by issuing:
$ sbatch comsol.sh
comsol on single node
Some COMSOL algorithms, such as the PARDISO solver, do not make use of MPI. To run these, you can specify -np instead of -clustersimple, as follows:
#!/bin/bash
# Ask for a number of compute nodes
#SBATCH -N 1
#SBATCH -n 24

# Details of your input and output files
INPUTFILE=input_model.mph
OUTPUTFILE=output_model.mph
TMPDIR=temp

######## DO NOT EDIT BELOW THIS LINE ########
# Load our comsol module
source /etc/profile.d/modules.sh
module purge
module load comsol/5.2

## Now, run COMSOL in batch mode with the input and output detailed above.
comsol batch -np $SLURM_NTASKS -inputfile $INPUTFILE -outputfile $OUTPUTFILE -tmpdir $TMPDIR
Then submit the script by issuing:
$ sbatch comsol.sh