Schrödinger Guide

From Storrs HPC Wiki
Latest revision as of 14:45, 25 September 2019

The Schrödinger Suite is a collection of software for chemical and biochemical modeling. It offers tools that facilitate the investigation of the structures, reactivity, and properties of chemical systems. There is a campus site license for this software, supported by UITS. More information is available at http://software.uconn.edu/schrodinger/.

Load Modules

$ module load schrodinger/2019-2

You can then see a list of executable programs:

$ find /apps2/schrodinger/2019-2 -maxdepth 1 -executable -type f -printf "%f\n" | sort | pr -tT -8 | column -t
autots    covalent  glide     ligand_s  mcpro_fe  phase_fq  qikfit    sitemap
biolumin  desmond   hppmap    ligprep   mcpro_lr  phase_hy  qikprop   ska
blast     diagnost  ifd       lsbd      mxmd      phase_qs  qiksim    ssp
bmin      elements  impact    machid    para_tes  phase_sc  qpld      sta
canvas    epik      installa  macromod  pfam      pipeline  qsite     strike
combgen   fep_bind  jaguar    maestro   phase_bu  prime     run       structur
confgen   fep_plus  jobcontr  material  phase_da  prime_mm  shape_sc  testapp
confgenx  fep_solu  knime     mcpro     phase_fi  primex    shape_sc  vsw
consensu  gfxinfo   licadmin  mcpro_dd
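The `pr -tT -8 | column -t` stage at the end of that pipeline only reshapes find's one-name-per-line output into eight aligned columns; it works on any line-oriented list. A minimal sketch with placeholder names instead of the program list:

```shell
# Sketch: pr -tT -8 folds a one-per-line list into 8 columns (filled
# down each column) and column -t aligns the result; shown here on
# placeholder names so it can be tried on any machine.
printf '%s\n' alpha beta gamma delta epsilon zeta eta theta iota \
    | pr -tT -8 | column -t
```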

Host Configuration

The Schrödinger Suite is configured to submit jobs directly to the SLURM job scheduler. Therefore, you do not need to wrap your commands in a submission script; you can execute Schrödinger commands directly from a login node. When you submit Schrödinger jobs, you submit them to hosts. We have created the following hosts: slurm-parallel-24, slurm-parallel-48, slurm-parallel-96, slurm-parallel-192, slurm-parallel-384. Each of these hosts will submit a job to SLURM's parallel partition, requesting the number of cores given by the number at the end of its name.
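A small sketch of choosing one of those hosts from a desired core count; the host names are the ones listed above, and the validation itself is just an illustration, not part of the suite:

```shell
# Sketch: map a requested core count onto one of the hosts created on
# this cluster, failing early when no matching host exists.
NCORES=96
case "$NCORES" in
    24|48|96|192|384) HOST="slurm-parallel-${NCORES}" ;;
    *) echo "no slurm-parallel host for ${NCORES} cores" >&2; exit 1 ;;
esac
echo "$HOST"
```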

Example Application Usage

$ qsite -SAVE -PARALLEL 24 -HOST slurm-parallel-24 3IIS_Per1.in 
Launching JAGUAR under jobcontrol.
Exec: /apps2/schrodinger/2016-2/jaguar-v9.2/bin/Linux-x86_64
JobId: cn01-0-57b33646

Note that the numeric value of -PARALLEL should match the numeric value of the -HOST that you specified.
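One way to guarantee the two values match is to derive both from a single variable. This is just a shell convention, not a feature of the suite; the sketch prints the command rather than running it, so it can be checked before submission:

```shell
# Sketch: build the qsite command from one core-count variable so that
# -PARALLEL and -HOST cannot drift apart; 3IIS_Per1.in is the input
# file from the example above.
NCORES=24
CMD="qsite -SAVE -PARALLEL ${NCORES} -HOST slurm-parallel-${NCORES} 3IIS_Per1.in"
echo "$CMD"    # inspect, then run with: eval "$CMD"
```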

You can then view the status of your running job with sacct.

$ sacct
       JobID    JobName  Partition    Account  AllocCPUS      State ExitCode 
------------ ---------- ---------- ---------- ---------- ---------- -------- 
39148       j3IIS_Per1   parallel   abc12345         24    RUNNING      0:0 
39148.0        hostname              abc12345         24  COMPLETED      0:0
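To watch a single job rather than your whole recent history, sacct can be restricted by job ID and column list; -j and --format are standard sacct options, and 39148 is the job ID from the output above:

```shell
# Command fragment (requires a SLURM cluster): show only the example
# job, with a trimmed set of columns.
sacct -j 39148 --format=JobID,JobName,Partition,State,ExitCode
```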

Run Test Suite

$ testapp -HOST slurm-parallel-24 -DEBUG
$ para_testapp -HOST slurm-parallel-48 -DEBUG

Installation Oddities

Schrödinger comes pre-packaged with an outdated version of Open MPI (< 1.8.1), so an old bug in the MPI-to-SLURM interface needs to be patched manually by appending the following line to the default configuration file of Schrödinger's bundled MPI:

plm_slurm_args = --cpu_bind=boards

This causes CPU affinities to be set sub-optimally, but still better than they would be without the cpu_bind line. To append this line to the 2017-1 install, an admin would run:

echo 'plm_slurm_args = --cpu_bind=boards' >> /apps2/schrodinger/2017-1/mmshare-v3.7/lib/Linux-x86_64/openmpi/etc/openmpi-mca-params.conf
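Because this file is patched by hand on each install, an idempotent variant avoids appending the line twice. A sketch using a temporary file in place of the real openmpi-mca-params.conf path, so it can be tried anywhere:

```shell
# Sketch: append the MCA parameter only if it is not already present,
# then verify. CONF stands in for the real
# .../openmpi/etc/openmpi-mca-params.conf path shown above.
CONF=$(mktemp)
grep -qxF 'plm_slurm_args = --cpu_bind=boards' "$CONF" \
    || echo 'plm_slurm_args = --cpu_bind=boards' >> "$CONF"
grep 'plm_slurm_args' "$CONF"
rm -f "$CONF"
```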