Schrödinger Guide

From Storrs HPC Wiki
Revision as of 15:57, 16 August 2017 by Lwm14001 (Added section describing an oddity with the MPI setup in Schrödinger)

The Schrödinger Suite is a collection of software for chemical and biochemical use. It offers various tools that facilitate the investigation of the structures, reactivity and properties of chemical systems. There is a campus site license for this software, supported by UITS. More information is available here: http://software.uconn.edu/schrodinger/.

Load Modules

$ module load schrodinger/2016-2

You can then see a list of executable programs:

$ find /apps2/schrodinger/2016-2 -maxdepth 1 -executable -type f -printf "%f\n" | sort | column
blast			epik			jobcontrol		mcpro_fep		phase_inactive		primex			sta
bmin			fep_plus		knime			mcpro_lrm		phase_multiPartition	qikfit			strike
canvas		gfxinfo			licadmin		membrane_permeability	phase_multiQsar		qikprop			structurebased_adme
combgen		glide			ligand_strain		para_testapp		phase_partition		qiksim			testapp
combiglide		hppmap			ligprep			pfam			phase_qsar		qpld			vsw
confgen		hunt			lsbd			phase_build_qsar	phase_scoring		qsite
confgenx		ifd			machid			phase_database		phase_screen		run
consensus_homology	impact			macromodel		phase_feature		pipeline		shape_screen
covalent_docking	installation_check	maestro			phase_find_common	platform		sitemap
desmond		jagscript		mcpro			phase_fqsar		prime			ska
diagnostics		jaguar			mcpro_ddg		phase_hypoCluster	prime_mmgbsa		sip

Host Configuration

The Schrödinger Suite is configured to submit jobs directly to the SLURM job scheduler, so you do not need to wrap your commands in a submission script; you can execute Schrödinger commands directly from a login node. Schrödinger jobs are submitted to hosts. We have created the following hosts: slurm-parallel-24, slurm-parallel-48, slurm-parallel-96, slurm-parallel-192, and slurm-parallel-384. Each of these hosts submits a job to SLURM's parallel partition, requesting the number of cores given by the number at the end of its name.
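Behind each of these host names is an entry in the installation's schrodinger.hosts file. As an illustration only (the exact entries on this cluster may differ, and the qargs shown here are hypothetical), an entry for slurm-parallel-24 might look like:

```
# Illustrative schrodinger.hosts entry; the real entries on this
# cluster may use different queue names and qargs.
name:       slurm-parallel-24
host:       localhost
queue:      SLURM2.1
qargs:      --partition=parallel --ntasks=24
processors: 24
```

The processors field is what allows a matching -PARALLEL value to be scheduled within a single host definition.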

Example Application Usage

qsite

$ qsite -SAVE -PARALLEL 24 -HOST slurm-parallel-24 3IIS_Per1.in 
Launching JAGUAR under jobcontrol.
Exec: /apps2/schrodinger/2016-2/jaguar-v9.2/bin/Linux-x86_64
JobId: cn01-0-57b33646

Note that the numeric value of -PARALLEL should match the numeric value of the -HOST that you specified.
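Since the host names follow the slurm-parallel-&lt;N&gt; convention, one way to keep the two values from drifting apart is to derive the -PARALLEL count from the host name. This is a small helper sketch, not part of the Schrödinger tooling:

```shell
#!/bin/sh
# Derive the core count from a slurm-parallel-<N> host name so that
# -PARALLEL always matches -HOST. The host name and input file are
# examples from the transcript above.
HOST=slurm-parallel-24
NPROC=${HOST##*-}   # strip everything up to the last dash, leaving 24
qsite -SAVE -PARALLEL "$NPROC" -HOST "$HOST" 3IIS_Per1.in
```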

You can then view the status of your running job with sacct.

$ sacct
       JobID    JobName  Partition    Account  AllocCPUS      State ExitCode 
------------ ---------- ---------- ---------- ---------- ---------- -------- 
39148       j3IIS_Per1   parallel   abc12345         24    RUNNING      0:0 
39148.0        hostname              abc12345         24  COMPLETED      0:0
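Once a job spawns several steps, the sacct listing gets noisy. A small filter (a sketch that assumes the default sacct layout shown above, where step IDs contain a dot) keeps only the header lines and the parent job rows:

```shell
# Keep the two header lines plus any row whose JobID contains no dot,
# i.e. parent jobs such as 39148 but not steps such as 39148.0.
sacct | awk 'NR <= 2 || $1 !~ /\./'
```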

Run Test Suite

$ testapp -HOST slurm-parallel-24 -DEBUG
$ para_testapp -HOST slurm-parallel-48 -DEBUG

Installation Oddities

Schrödinger comes pre-packaged with an outdated version of Open MPI (older than 1.8.1), so a known bug in the MPI-to-SLURM interface must be patched manually by appending the following line to the bundled Open MPI's default configuration file:

plm_slurm_args = --cpu_bind=boards

This causes CPU affinities to be set sub-optimally, but still better than how they are set without the cpu_bind line. To append this line to the 2017-1 install, an admin would run:

echo 'plm_slurm_args = --cpu_bind=boards' >> /apps2/schrodinger/2017-1/mmshare-v3.7/lib/Linux-x86_64/openmpi/etc/openmpi-mca-params.conf
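A plain echo >> appends a duplicate line every time the fix is re-applied. A slightly more careful sketch of the same admin step only appends when the parameter is not already present:

```shell
# Append the MCA parameter only if the exact line is not already in the
# file, so re-running this after an upgrade check is harmless.
CONF=/apps2/schrodinger/2017-1/mmshare-v3.7/lib/Linux-x86_64/openmpi/etc/openmpi-mca-params.conf
LINE='plm_slurm_args = --cpu_bind=boards'
grep -qxF "$LINE" "$CONF" || echo "$LINE" >> "$CONF"
```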