LAMMPS Guide


LAMMPS Molecular Dynamics Simulator
Author: Sandia National Labs and Temple University
Website: http://lammps.sandia.gov
Source: Git
Category: Command-line utility
Help: manual, mailing list, workshops


LAMMPS is a program for simulating large-scale atomic / molecular dynamics, leveraging parallel hardware.

Loading / Unloading LAMMPS module

Check Module Availability

To see which LAMMPS modules are available to the system, use the following command:

$ module avail lammps

You should see all available modules in lammps/<version> format:

----------------------- /apps2/Modules/3.2.6/modulefiles -----------------------
lammps/1Feb14      lammps/23Sep13     lammps/28Jun14     lammps/28Jun14MoS2

If the module list is empty please contact your administrators.


Loading a LAMMPS Module

You can load the default LAMMPS module as follows:

$ module load lammps

To see which version is being used, do the following:

$ which lammps
/apps2/lammps/28Jun14MoS2/bin/lammps


If your program needs a specific version, e.g. lammps/28Jun14, do the following:

$ module load lammps/28Jun14

After you load a module you can use it until you log out; you will need to load it again the next time you log in and run your program.


Automatically Loading a LAMMPS Module

If you frequently use a particular module (which is common), you can have it loaded automatically as follows:

$ module initadd lammps

Or, if you need a specific version,

$ module initadd lammps/28Jun14

Note that this auto-loading takes effect from your next login session. If you need the module in the current session, you still need to load it with the module load command.
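
If you later want to review or undo the automatic loading, the module system that provides initadd also has initlist and initrm subcommands (a minimal sketch, assuming the classic environment-modules commands are available on the cluster):

$ module initlist
$ module initrm lammps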


Unloading a LAMMPS Module

You can unload an already-loaded module as follows:

$ module unload lammps

Or, to unload a specific version,

$ module unload lammps/28Jun14

To make sure that a module is unloaded, use the command which lammps which should say /usr/bin/which: no lammps in ...


Running a LAMMPS job on the Hornet Cluster

If possible, always run your code on your local machine first to make sure it is correct. Use a small dataset and a small configuration (a single processor, etc.). This way you can catch errors unrelated to the cluster before you submit your job.
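
For example, a quick serial test run on the local machine might look like the following sketch (assuming the LAMMPS binary on your machine is called lammps; depending on how it was built it may instead be named lmp_serial or lmp_mpi):

$ cd ~/Downloads/lammps-10Aug15/examples/flow
$ lammps < in.flow.couette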

Below is a step-by-step example of how to run a simple LAMMPS simulation on the cluster. We use one of the examples bundled with the LAMMPS distribution, namely flow.


Copy your code and data into the cluster

We are assuming that you are using the terminal to copy your data. If you are using a GUI client such as FileZilla (https://filezilla-project.org/), you can do the same steps visually.

Open a terminal to connect to the cluster and create a directory for the experiment.

mkdir lammpstest && ls
lammpstest

Our code / data is located in the directory ~/Downloads/lammps-10Aug15/examples/flow on the local machine.

$ ls ~/Downloads/lammps-10Aug15/examples/flow
in.flow.couette  log.15May15.flow.couette.g++.1  log.15May15.flow.pois.g++.1
in.flow.pois     log.15May15.flow.couette.g++.4  log.15May15.flow.pois.g++.4

Let us copy everything in this folder to the cluster using the scp command (http://linux.die.net/man/1/scp). Because scp uses a secure protocol, you will be asked for the password of your cluster account. In the snippet below, remember to replace the word hornetuser with your actual account name.

$ cd ~/Downloads/lammps-10Aug15/examples
$ ls | grep flow
flow
$ scp -r flow hornetuser@hornet-login3.engr.uconn.edu:~/lammpstest/flow
in.flow.pois                                  100% 1503     1.5KB/s   00:00    
in.flow.couette                               100% 1505     1.5KB/s   00:00    
log.15May15.flow.couette.g++.1                100% 4559     4.5KB/s   00:00    
log.15May15.flow.pois.g++.4                   100% 4561     4.5KB/s   00:00    
log.15May15.flow.pois.g++.1                   100% 4559     4.5KB/s   00:00    
log.15May15.flow.couette.g++.4                100% 4560     4.5KB/s   00:00 

The -r switch tells the scp command to copy everything recursively. You can of course copy only the files you need by omitting this switch and naming the files explicitly. See the scp manual for details.
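
For example, to copy only the Couette input file, you could run something like the following (again, hornetuser is a placeholder for your actual account name):

$ scp flow/in.flow.couette hornetuser@hornet-login3.engr.uconn.edu:~/lammpstest/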

Now let's make sure the files were copied to the cluster. Switch back to the cluster's terminal and do the following:

$ ls ~/lammpstest
flow
$ cd ~/lammpstest/flow && ls
in.flow.couette  log.15May15.flow.couette.g++.1  log.15May15.flow.pois.g++.1
in.flow.pois     log.15May15.flow.couette.g++.4  log.15May15.flow.pois.g++.4 


Create a SLURM script to run your job

SLURM is the scheduler for our cluster. On the cluster we need to create a simple script that tells SLURM how to run the job. For details see the SLURM Guide.

You can either create this script on the cluster using a terminal editor such as nano, or create it on your local machine and copy it over with scp. We put the script in the lammpstest directory; it contains the following lines:

$ cd ~/lammpstest
$ cat lammps_job.sh
#!/bin/bash
#SBATCH -n2
#SBATCH -o ~/lammpstest/lammps_sim_out.txt
#SBATCH -e ~/lammpstest/lammps_sim_out.txt
#SBATCH --mail-type=ALL
#SBATCH --mail-user=user@engr.uconn.edu
lammps < ~/lammpstest/flow/in.flow.couette

This script tells SLURM how many processors we need and which files the output (and errors) should be written to. The lines starting with #SBATCH provide the switches for the sbatch command (http://slurm.schedmd.com/sbatch.html), which submits a job to SLURM. Note that we have asked SLURM to email us on every event for this job, such as begin, end, and failure.
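
The same switches can also be passed directly on the sbatch command line instead of (or in addition to) the #SBATCH lines; command-line options take precedence over the directives in the script. For example:

$ sbatch -n 2 -o ~/lammpstest/lammps_sim_out.txt -e ~/lammpstest/lammps_sim_out.txt --mail-type=ALL --mail-user=user@engr.uconn.edu ~/lammpstest/lammps_job.sh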

The last line is the command that is run as the job. It invokes the lammps executable (provided by the module we loaded) with the input file ~/lammpstest/flow/in.flow.couette.
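
Note that, as written, this line starts a single LAMMPS process even though the script requests two tasks. To actually use both processors the command would normally be launched through MPI; a minimal sketch, assuming an mpirun launcher is available alongside the loaded module:

mpirun -np 2 lammps < ~/lammpstest/flow/in.flow.couette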


Submitting your job

Before you submit your job, make sure that the LAMMPS module is loaded, as described in the first part of this guide. When you are ready, simply do the following:

$ sbatch < ~/lammpstest/lammps_job.sh
Submitted batch job 24703


Checking output

When the job is done we will get email notifications. You can also check the job status while it runs using the squeue command. If we check the directory where we expect the output, we see the following:
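
For example, to list only your own jobs (replacing hornetuser with your cluster account name):

$ squeue -u hornetuser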

$ ls -l ~/lammpstest
total 64
drwxr-xr-x 2 hpc-saad hpc-saad  512 Sep 10 15:43 flow
-rw-rw-r-- 1 hpc-saad hpc-saad  186 Sep 14 09:11 lammps_job.sh
-rw-rw-r-- 1 hpc-saad hpc-saad 3064 Sep 14 09:16 lammps_sim_out.txt
-rw-rw-r-- 1 hpc-saad hpc-saad 4419 Sep 14 09:16 log.lammps