LAMMPS Guide
| LAMMPS Molecular Dynamics Simulator | |
|---|---|
| Author | Sandia National Labs and Temple University |
| Website | http://lammps.sandia.gov |
| Source | Git |
| Category | Command-line utility |
| Help | manual, mailing list, workshops |
From the LAMMPS README file:
LAMMPS is a classical molecular dynamics simulation code designed to run efficiently on parallel computers. It was developed at Sandia National Laboratories, a US Department of Energy facility, with funding from the DOE. It is an open-source code, distributed freely under the terms of the GNU General Public License (GPL).
Loading the LAMMPS module
Best practice is to specify the module with its version number, so that your programs run consistently. To see the versions of LAMMPS modules available on the cluster, use the module avail command:
module avail lammps
Now we can load the latest available LAMMPS module, which at the time of writing is 28Jun14:
module load lammps/28Jun14
You can avoid loading the module at each login by using module initadd:
$ module initadd lammps/28Jun14
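To verify that the module is active in your current session, you can use the standard module list command (a quick sanity check; the exact output format varies by system):
$ module list   # lammps/28Jun14 should appear among the currently loaded modules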
Running LAMMPS
If possible, you should always first run your code on your local machine to ensure that it is correct. Do this with a small dataset and a small configuration (single processor, etc.). This way you can catch any errors unrelated to the cluster before even submitting your job.
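For example, assuming your local LAMMPS build provides a serial binary named lmp_serial (the executable name varies by installation), a quick local smoke test of the flow example might look like this:
$ cd ~/Downloads/lammps-10Aug15/examples/flow
$ lmp_serial < in.flow.couette   # single-processor test run on the local machine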
Below is a step-by-step example of how to run a simple LAMMPS simulation on the cluster. We use one of the examples bundled with the LAMMPS distribution, namely flow.
Copy your code and data to the cluster
We assume that you are using the terminal to copy your data. If you are using a GUI file-transfer client, you should be able to do the same steps visually.
Open a terminal to connect to the cluster and create a directory for the experiment.
$ mkdir ~/lammpstest
Our code and data are located in the directory ~/Downloads/lammps-10Aug15/examples/flow on the local machine.
$ ls ~/Downloads/lammps-10Aug15/examples/flow
in.flow.couette  log.15May15.flow.couette.g++.1  log.15May15.flow.pois.g++.1
in.flow.pois     log.15May15.flow.couette.g++.4  log.15May15.flow.pois.g++.4
Let us copy everything in this folder to the cluster using the scp command. Because scp uses a secure protocol, you will be asked for the password of your cluster account. In the snippet below, remember to replace user with your actual account name.
$ cd ~/Downloads/lammps-10Aug15/examples
$ ls | grep flow
flow
$ scp -r flow user@login3.storrs.hpc.uconn.edu:~/lammpstest/flow
in.flow.pois                    100% 1503   1.5KB/s   00:00
in.flow.couette                 100% 1505   1.5KB/s   00:00
log.15May15.flow.couette.g++.1  100% 4559   4.5KB/s   00:00
log.15May15.flow.pois.g++.4     100% 4561   4.5KB/s   00:00
log.15May15.flow.pois.g++.1     100% 4559   4.5KB/s   00:00
log.15May15.flow.couette.g++.4  100% 4560   4.5KB/s   00:00
The -r switch tells scp to copy everything recursively. You can of course copy files selectively by omitting this switch and naming the files you need. See the scp manual page for details.
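For example, to copy only the two input files rather than the whole directory (this assumes the flow directory already exists under ~/lammpstest on the cluster):
$ scp flow/in.flow.couette flow/in.flow.pois user@login3.storrs.hpc.uconn.edu:~/lammpstest/flow/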
Now let's make sure our files were copied to the cluster. Switch back to the cluster's terminal and do the following:
$ ls ~/lammpstest
flow
$ cd ~/lammpstest/flow && ls
in.flow.couette  log.15May15.flow.couette.g++.1  log.15May15.flow.pois.g++.1
in.flow.pois     log.15May15.flow.couette.g++.4  log.15May15.flow.pois.g++.4
SLURM script
SLURM is the scheduler program for our cluster. On the cluster we need to create a simple script that tells SLURM how to run your job. For details, see the SLURM Guide.
You can either create this script in the terminal using an editor such as nano, or create it on your local machine and use the scp command to copy it to the cluster. We put this script in the lammpstest directory, and it contains the following lines:
$ cd ~/lammpstest
$ cat lammps_job.sh
#!/bin/bash
#SBATCH -n 48
#SBATCH -o lammps_sim_out-%j.txt
#SBATCH -e lammps_sim_out-%j.txt
#SBATCH --mail-type=ALL
#SBATCH --mail-user=user@engr.uconn.edu
mpiexec lammps < flow/in.flow.couette
This script tells SLURM how many processors we need as well as which files the output (and errors) should be written to. The lines starting with #SBATCH provide the switches for the sbatch command, which submits a job to SLURM. Note that we have told SLURM to email us on every event for this job (begin, end, failure, etc.).
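The same switches can also be given directly on the sbatch command line, where they override the corresponding #SBATCH lines in the script. For example, to resubmit the job with a smaller processor count without editing the script:
$ sbatch -n 24 lammps_job.sh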
The last line is the command that is run as the job. It launches the lammps executable under MPI with the input file ~/lammpstest/flow/in.flow.couette.
Extra note: If you are using mpirun in your submission script, it is recommended to use the following command syntax:
mpirun lammps -in inputfile
The LAMMPS documentation includes this note about combining mpirun with the < operator:
"The redirection operator “<” will not always work when running in parallel with mpirun; for those systems the -in form is required."
Submitting your job
Before you submit your job, make sure that the LAMMPS module is loaded, as described in the first part of this guide. When you are ready, simply do the following:
$ sbatch lammps_job.sh
Submitted batch job 24703
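You can confirm that the job is queued or running with SLURM's standard squeue command:
$ squeue -u $USER   # job 24703 should be listed until it completes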
Checking output
When the job is done, we will get email notifications. You can also check your job's status using the sjobs command. We can follow the LAMMPS output itself using tail (note that the output file name includes the job ID, per the %j pattern in the script):
$ tail -f lammps_sim_out-24703.txt
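After the job completes, SLURM's sacct command reports its final state and exit code, which is useful if the run ends unexpectedly:
$ sacct -j 24703 --format=JobID,JobName,State,ExitCode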