HPC Software

Software Guides

Abaqus Guide - How to use Abaqus FEA 6.12
CAMx Guide - How to use CAMx with MPI and OMP
Comsol Guide - How to use COMSOL on the HPC Cluster
DXA Guide - How to use Dislocation Extraction Algorithm
Fluent Guide - How to use ANSYS Fluent
GPU Mathematica Guide - How to run Wolfram Mathematica with GPU acceleration
Grace Guide - How to run grace / xmgrace with GUI
Hadoop Guide - How to use Hadoop software
Intel SDK Guide - How to use Intel Cluster Studio XE 2013 (in progress)
LAMMPS Guide - How to use LAMMPS
LS-Dyna Guide - How to use LS-DYNA
MAPLE Guide - How to use MAPLE on the HPC Cluster
MATLAB Guide - How to submit MATLAB jobs
Modules Guide - How to manage and load environment modules
Motif Guide - How to use Motif on the HPC Cluster
MPI Guide - A Quick Guide to Programming with MPI
MPJ Guide - MPJ (Java MPI)
MPICH2 Guide - How to use MPICH2 on the HPC Cluster
MVAPICH2 Guide - How to use MVAPICH2 on the HPC Cluster
NAMD Guide - How to use NAMD on the HPC Cluster
OpenACC Guide - How to use OpenACC on the HPC Cluster
OpenMp Usage - How to use OpenMP on the HPC Cluster
OpenMPI Guide - How to use OpenMPI on the HPC Cluster
phpbb - How to use phpBB
Python virtualenv Guide - How to use virtualenv for Python
Qiime Guide - How to use Qiime on the HPC Cluster (in progress)
R Guide - How to use GNU R on the HPC Cluster
R-LINE Guide - How to use R-LINE on the HPC Cluster
Screenie Guide - How to use Screenie / GNU Screen
StarCCM Guide - How to use StarCCM (in progress)
Trinity Guide - How to use the Trinity RNA Sequence Assembler (in progress)
VASP Guide - How to use VASP
WINE Guide - How to use WINE on the HPC Cluster
Tensorflow Guide - How to use TensorFlow on the HPC Cluster
Globus(Linux) Guide - How to use the Globus command line on the HPC Cluster

Other Software Guidelines

BWA

BWA has been compiled to run on the Westmere nodes. When submitting with sbatch, specify --partition=Westmere in your submission script (see the sketch below).

$ module load bwa/0.7.5a
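
A minimal submission sketch, assuming a placeholder reference (ref.fa) and read file (reads.fq) that you supply yourself:

#!/bin/bash
#SBATCH --partition=Westmere        # BWA is built for the Westmere nodes
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4

module load bwa/0.7.5a

# Index the reference once, then align the reads using the requested CPUs.
bwa index ref.fa
bwa mem -t 4 ref.fa reads.fq > aln.sam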

GCC

On Westmere nodes (cn01 to cn64):

$ module load gcc/4.7.1

On Sandy Bridge nodes (cn65 to cn104):

$ module load gcc/4.8.2
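
For example, a quick check that the right compiler is active after loading the module that matches the node type (hello.c is a placeholder source file):

$ module load gcc/4.8.2          # use gcc/4.7.1 instead on Westmere nodes
$ gcc --version                  # confirm the module's compiler is now in your PATH
$ gcc -O2 -o hello hello.c
$ ./hello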

GEOS-Chem

$ module load intelics/2012.0.032 zlib/1.2.3-ics hdf5/1.8.9-ics netcdf/4.2-ics geos-chem/v9-02
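
A hedged submission sketch for a GEOS-Chem classic run; the run directory path and the executable name (geos) are assumptions that depend on how your copy was built:

#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8        # GEOS-Chem classic parallelizes with OpenMP threads

module load intelics/2012.0.032 zlib/1.2.3-ics hdf5/1.8.9-ics netcdf/4.2-ics geos-chem/v9-02

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
cd /path/to/your/run/directory   # placeholder: your GEOS-Chem run directory
./geos > geos.log                # executable name assumed from a typical build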

Gromacs

For 5.1.4-plumed-gsl (the default "gromacs" module), load the modulefiles in the order given by this command:

$ module load gcc/5.4.0-alt plumed/2-gnu boost/1.61.0-gcc-mpi mpi/openmpi/2.1.0 zlib/1.2.8 fftw/3.3.6-gcc540a gsl gromacs

Note that Gromacs only runs on our Haswell and Skylake nodes. Please add one of the following lines to your Slurm submission script.

To submit to the Haswell node architecture:

#SBATCH --exclude=cn[65-136,325-343,345-353,355-358,360-364,369-398,400-401],gpu[07-10]

To submit to the Skylake node architecture:

#SBATCH --exclude=cn[65-136,153-256,265-320,325-328]

When the gromacs module above is loaded, the gmx command is replaced by gmx_mpi, so your submission script must call gmx_mpi instead of gmx (see the sketch below).
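
Putting the pieces together, a sketch of a Haswell submission script; the task count and the run name (topol, referenced through -deffnm) are placeholders:

#!/bin/bash
#SBATCH --ntasks=24
#SBATCH --exclude=cn[65-136,325-343,345-353,355-358,360-364,369-398,400-401],gpu[07-10]

module purge                     # start from a clean environment so the module order is respected
module load gcc/5.4.0-alt plumed/2-gnu boost/1.61.0-gcc-mpi mpi/openmpi/2.1.0 zlib/1.2.8 fftw/3.3.6-gcc540a gsl gromacs

# With this build the MPI-enabled binary is gmx_mpi, not gmx.
srun gmx_mpi mdrun -deffnm topol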


Gaussian

$ module load gaussian/g09d01
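
A minimal sketch of a Gaussian 09 batch job, with input.com as a placeholder input file:

#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4        # match %NProcShared in your input file

module load gaussian/g09d01

# Gaussian reads the input file and the redirected output becomes the log.
g09 < input.com > output.log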

igraph

$ module load igraph/0.6.5

TrinityRNASeq

$ module load gcc/4.7.1 trinityrnaseq/2013.08.14
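
A hedged example of a Trinity invocation after loading the modules; the read file names are placeholders, and the memory option shown (--JM, the Jellyfish memory limit used by releases of this era) and the other flags should be checked against Trinity --help for this particular build:

$ module load gcc/4.7.1 trinityrnaseq/2013.08.14
$ Trinity --seqType fq --JM 20G --CPU 8 --left reads_1.fq --right reads_2.fq --output trinity_out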