Schrödinger Guide

From Storrs HPC Wiki

The Schrödinger Suite is a collection of software for chemical and biochemical research. It offers various tools that facilitate the investigation of the structure, reactivity, and properties of chemical systems. A campus site license for this software is supported by UITS.

Load Modules

$ module load schrodinger/2019-2

You can then see a list of executable programs:

$ find /apps2/schrodinger/2019-2 -maxdepth 1 -executable -type f -printf "%f\n" | sort | pr -tT -8 | column -t
autots    covalent  glide     ligand_s  mcpro_fe  phase_fq  qikfit    sitemap
biolumin  desmond   hppmap    ligprep   mcpro_lr  phase_hy  qikprop   ska
blast     diagnost  ifd       lsbd      mxmd      phase_qs  qiksim    ssp
bmin      elements  impact    machid    para_tes  phase_sc  qpld      sta
canvas    epik      installa  macromod  pfam      pipeline  qsite     strike
combgen   fep_bind  jaguar    maestro   phase_bu  prime     run       structur
confgen   fep_plus  jobcontr  material  phase_da  prime_mm  shape_sc  testapp
confgenx  fep_solu  knime     mcpro     phase_fi  primex    shape_sc  vsw
consensu  gfxinfo   licadmin  mcpro_dd

Host Configuration

The Schrödinger Suite is configured to submit jobs directly to the SLURM job scheduler, so you do not need to wrap your commands in a submission script; you can execute Schrödinger commands directly from a login node. Schrödinger jobs are submitted to hosts. We have created the following hosts: slurm-parallel-24, slurm-parallel-48, slurm-parallel-96, slurm-parallel-192, and slurm-parallel-384. Each of these hosts submits a job to SLURM's parallel partition, requesting the number of cores given by the number at the end of its name.

Example Application Usage


$ qsite -SAVE -PARALLEL 24 -HOST slurm-parallel-24 
Launching JAGUAR under jobcontrol.
Exec: /apps2/schrodinger/2016-2/jaguar-v9.2/bin/Linux-x86_64
JobId: cn01-0-57b33646

Note that the numeric value of -PARALLEL should match the number at the end of the -HOST name that you specified.
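Since the host name encodes the core count, the matching -PARALLEL value can be derived mechanically. The snippet below is a hypothetical helper, not part of the Schrödinger Suite; it only relies on the slurm-parallel-N naming convention described above:

```shell
# Hypothetical helper: derive the -PARALLEL value from a
# slurm-parallel-N host name by stripping everything up to
# the last hyphen.
host=slurm-parallel-96
cores=${host##*-}
# Prints: qsite -SAVE -PARALLEL 96 -HOST slurm-parallel-96
echo "qsite -SAVE -PARALLEL $cores -HOST $host"
```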

You can then view the status of your running job with sacct.

$ sacct
       JobID    JobName  Partition    Account  AllocCPUS      State ExitCode 
------------ ---------- ---------- ---------- ---------- ---------- -------- 
39148       j3IIS_Per1   parallel   abc12345         24    RUNNING      0:0 
39148.0        hostname              abc12345         24  COMPLETED      0:0
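If you only need a single field, the sacct output can be filtered with awk. This is a sketch: the sample line is hardcoded from the table above for illustration, and in practice you would pipe sacct itself:

```shell
# Sketch: pull the State column (6th whitespace-separated field)
# out of an sacct output line. The line is hardcoded here; in real
# use you would run: sacct | awk '{print $6}'
line='39148       j3IIS_Per1   parallel   abc12345         24    RUNNING      0:0'
echo "$line" | awk '{print $6}'   # prints RUNNING
```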

Run Test Suite

$ testapp -HOST slurm-parallel-24 -DEBUG
$ para_testapp -HOST slurm-parallel-48 -DEBUG

Installation Oddities

Schrödinger comes pre-packaged with an outdated version of Open MPI (< 1.8.1), which means an old bug in the MPI-to-SLURM interface must be patched manually by appending the following line to the default configuration file of Schrödinger's bundled MPI:

plm_slurm_args = --cpu_bind=boards

This causes CPU affinities to be set sub-optimally, but still better than how they are set without the cpu_bind line. To append this line to the 2017-1 install, an admin would run:

echo 'plm_slurm_args = --cpu_bind=boards' >> /apps2/schrodinger/2017-1/mmshare-v3.7/lib/Linux-x86_64/openmpi/etc/openmpi-mca-params.conf
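The append-and-verify step can be rehearsed against a scratch file before touching the real install. The mktemp path below is only a stand-in for the openmpi-mca-params.conf path above:

```shell
# Sketch: append the patch line to a scratch file and confirm it landed.
# Substitute the real openmpi-mca-params.conf path for "$conf" when
# patching an actual install.
conf=$(mktemp)
echo 'plm_slurm_args = --cpu_bind=boards' >> "$conf"
grep -c 'plm_slurm_args' "$conf"   # prints 1
rm -f "$conf"
```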