HPC Software

From Storrs HPC Wiki
 
 
== Software Guides ==

:[[Abaqus Guide]] - How to use Abaqus FEA 6.12
:[[CAMx Guide]] - How to use CAMx with MPI and OpenMP
:[[Comsol Guide]] - How to use COMSOL on the HPC Cluster
:[[DXA Guide]] - How to use the Dislocation Extraction Algorithm
:[[Fluent Guide]] - How to use ANSYS Fluent
:[[GPU Mathematica Guide]] - How to run Wolfram Mathematica with GPU acceleration
:[[Grace Guide]] - How to run grace / xmgrace with a GUI
:[[Hadoop Guide]] - How to use Hadoop
:[[Intel SDK Guide]] - How to use Intel Cluster Studio XE 2013 (in progress)
:[[LAMMPS Guide]] - How to use LAMMPS
:[[LS-Dyna Guide]] - How to use LS-DYNA
:[[MAPLE Guide]] - How to use MAPLE on the HPC Cluster
:[[MATLAB Guide]] - How to submit MATLAB jobs
:[[Modules Guide]] - How to manage and load environment modules
:[[Motif Guide]] - How to use Motif on the HPC Cluster
:[[MPI Guide]] - A quick guide to programming with MPI
:[[MPJ Guide]] - How to use MPJ (Java MPI)
:[[MPICH2 Guide]] - How to use MPICH2 on the HPC Cluster
:[[MVAPICH2 Guide]] - How to use MVAPICH2 on the HPC Cluster
:[[NAMD Guide]] - How to use NAMD on the HPC Cluster
:[[OpenACC Guide]] - How to use OpenACC on the HPC Cluster
:[[OpenMp Usage]] - How to use OpenMP on the HPC Cluster
:[[OpenMPI Guide]] - How to use OpenMPI on the HPC Cluster
:[[phpbb]] - How to use phpBB
:[[Python virtualenv Guide]] - How to use virtualenv for Python
:[[Qiime Guide]] - How to use QIIME on the HPC Cluster (in progress)
:[[R Guide]] - How to use GNU R on the HPC Cluster
:[[R-LINE Guide]] - How to use R-LINE on the HPC Cluster
:[[Screenie | Screenie Guide]] - How to use Screenie / GNU Screen
:[[StarCCM Guide]] - How to use STAR-CCM+ (in progress)
:[[Trinity Guide]] - How to use the Trinity RNA sequence assembler (in progress)
:[[VASP Guide]] - How to use VASP
:[[WINE Guide]] - How to use WINE on the HPC Cluster
:[[Tensorflow Guide]] - How to use TensorFlow on the HPC Cluster
:[[Globus(Linux) Guide]] - How to use the Globus command line on the HPC Cluster
:[[MPIEngine]] - Run any executable as multiple processes, passing a different argument to each process
 
== Other Software Guidelines ==
 
===BWA===

BWA has been compiled to run on the Westmere nodes. When using sbatch, specify --partition=Westmere; a sample submission script is sketched below.

  $ module load bwa/0.7.5a
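
A minimal sbatch script might look like the following. The job name, thread count, and the input files ref.fa and reads.fastq are placeholders for illustration; only the partition, the module name, and the bwa executable come from this page.

  #!/bin/bash
  #SBATCH --partition=Westmere          # BWA was built for the Westmere nodes
  #SBATCH --job-name=bwa_aln
  #SBATCH --ntasks=1
  #SBATCH --cpus-per-task=4             # example thread count
  # Load the BWA module listed above
  module load bwa/0.7.5a
  # Hypothetical inputs: replace ref.fa and reads.fastq with your own files
  bwa index ref.fa
  bwa mem -t $SLURM_CPUS_PER_TASK ref.fa reads.fastq > aln.sam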
 
===GCC===

On the Westmere nodes (cn01 to cn64):

  $ module load gcc/4.7.1

On the Sandy Bridge nodes (cn65 to cn104):

  $ module load gcc/4.8.2
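
As a quick sanity check after loading one of these modules, compiling a throwaway source file confirms which compiler is active (hello.c is a hypothetical file name):

  $ module load gcc/4.8.2        # on a Sandy Bridge node
  $ gcc --version                # should report 4.8.2
  $ gcc -O2 -o hello hello.c     # build a small test program
  $ ./hello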
 
=== GEOS-Chem ===

  $ module load intelics/2012.0.032 zlib/1.2.3-ics hdf5/1.8.9-ics netcdf/4.2-ics geos-chem/v9-02

In some cases, compiling GEOS-Chem requires the Intel Fortran (ifort) interface to netCDF (e.g. ticket 46382). The following modules are then required:

  $ module load intelics/2013.1.039-full zlib/1.2.8-ics hdf5/1.8.12-ics netcdf/4.2-ics-2013.1.039
 
=== Gromacs ===

For 5.1.4-plumed-gsl (the default "gromacs" module), load the modulefiles in the order given by this command:

  $ module load gcc/5.4.0-alt zlib/1.2.8 mpi/openmpi/2.1.0 plumed/2-gnu boost/1.61.0-gcc-mpi fftw/3.3.6-gcc540a gsl gromacs

Note that '''Gromacs only runs on our Haswell and Skylake nodes'''. Please add one of the following lines to your Slurm submission script.

To submit to the Haswell node architecture:

  #SBATCH --exclude=cn[65-69,71-136,325-343,345-353,355-358,360-364,369-398,400-401],gpu[07-10]

To submit to the Skylake node architecture:

  #SBATCH --exclude=cn[65-69,71-136,153-256,265-320,325-328]

When this gromacs module is loaded, the Gromacs executable is '''gmx_mpi''' rather than '''gmx''', so call '''gmx_mpi''' in your submission script (see the example below).
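
As an illustration of how these pieces fit together, a Haswell submission script might look like the sketch below. The job name, rank count, mdrun options, and the input file topol.tpr are placeholders, and mpirun is only one common way to launch the MPI build.

  #!/bin/bash
  #SBATCH --job-name=gmx_md
  #SBATCH --ntasks=24                   # example MPI rank count
  #SBATCH --exclude=cn[65-69,71-136,325-343,345-353,355-358,360-364,369-398,400-401],gpu[07-10]
  # Load the modulefiles in the order given above
  module load gcc/5.4.0-alt zlib/1.2.8 mpi/openmpi/2.1.0 plumed/2-gnu boost/1.61.0-gcc-mpi fftw/3.3.6-gcc540a gsl gromacs
  # The module provides gmx_mpi rather than gmx; topol.tpr is a hypothetical input file
  mpirun -np $SLURM_NTASKS gmx_mpi mdrun -s topol.tpr -deffnm md_run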
 
 
=== Gaussian ===

  $ module load gaussian/g09d01
 
===igraph===

  $ module load igraph/0.6.5

=== TrinityRNASeq ===

  $ module load gcc/4.7.1 trinityrnaseq/2013.08.14

=== motif ===

  $ module load motif/2.3.4
 
 
 
=== R-LINE ===

  $ module load rline/1.2

Then run:

  $ RLINE
 
== Troubleshooting ==

=== Module Load ===

If the 'module load' command returns errors like the following:

  $ module load <Module1>
  <Module1>(4):ERROR:150: Module '<Module1>' conflicts with the currently loaded module(s) '<Module2>'
  <Module1>(4):ERROR:102: Tcl command execution failed: conflict <Module_Group>

this means that the module you want to load, <Module1>, conflicts with the currently loaded module <Module2>. To fix it, unload <Module2> and then load <Module1> again:

  $ module unload <Module2>
  $ module load <Module1>

or switch them in a single step:

  $ module switch <Module2> <Module1>
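
For example, assuming the two GCC builds listed earlier conflict with each other (only one gcc module can be loaded at a time), switching from gcc/4.7.1 to gcc/4.8.2 would look like this:

  $ module list                          # shows gcc/4.7.1 among the loaded modules
  $ module switch gcc/4.7.1 gcc/4.8.2    # replace gcc/4.7.1 with gcc/4.8.2
  $ module list                          # now shows gcc/4.8.2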
 
=== Intel SDK and MPIs ===

If you get the following error while using both the Intel SDK and one of the MPI modules:

  /apps/intelics/2013.1.039/composer_xe_2013_sp1.0.080/mpirt/bin/intel64/mpirun: line 96:
  /apps/intelics/2013.1.039/composer_xe_2013_sp1.0.080/mpirt/bin/intel64/mpivars.sh: No such file or directory

unload the intelics and mpi modules, then reload them so that the intelics module comes before the mpi module:

  $ module list
  1) modules                                      3) intelics/<version>
  2) mpi/<software>/<version>

  $ module unload intelics mpi
  $ module load intelics/<version> mpi/<software>/<version>

  $ module list
  1) modules                                      3) mpi/<software>/<version>
  2) intelics/<version>
 
