NAMD Guide

You need the following modules:

 module load namd/2.10-ibverbs mpi/openmpi/1.6.5-gcc
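
If you are unsure which NAMD builds are installed, the standard module commands will show you (a quick sketch, assuming the usual Environment Modules tooling behind the module command above):

 module avail namd   # list the NAMD builds installed on the cluster
 module list         # confirm the namd and openmpi modules are loaded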

The submission script will be:

 #!/bin/tcsh -f
 #SBATCH -J fhvtest2          # job name
 #SBATCH -o fhvtest.o%j       # output and error file name (%j expands to jobID)
 #SBATCH -n 36                # total number of MPI tasks requested
 #SBATCH --ntasks-per-node=36 # cores per node
 #SBATCH -p generalsky        # queue (partition)
 
 set workdir = $SLURM_SUBMIT_DIR  # assumed here: the NAMD config lives in the directory the job was submitted from
 
 charmrun +p$SLURM_NTASKS ++mpiexec ++remote-shell \
          "srun --mpi=pmi2 --resv-port" /apps/namd/2.10-ibverbs/namd2 \
          $workdir/namd_pentamer_equil.conf > quil.out # launch namd2 via srun instead of mpirun
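
Assuming the script is saved as namd_equil.slurm (the file name here is illustrative), submit and monitor it with the standard SLURM commands:

 sbatch namd_equil.slurm   # submit; SLURM prints "Submitted batch job <jobID>"
 squeue -u $USER           # check the job's state in the queue
 tail -f quil.out          # follow the NAMD log once the job is running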

[[Category:Software]]