LS-Dyna Guide


LS-DYNA

LS-DYNA is an advanced general-purpose multiphysics simulation software package developed by the Livermore Software Technology Corporation (LSTC). While the package continues to add capabilities for calculating many complex, real-world problems, its origins and core competency lie in highly nonlinear transient dynamic finite element analysis (FEA) using explicit time integration. LS-DYNA is used by the automobile, aerospace, construction, military, manufacturing, and bioengineering industries. The LS-DYNA SMP version runs on a single node with multiple cores, while the MPP version runs across multiple nodes.

Group Permission

LS-DYNA on the cluster is owned by Arash Esmaili Zaghi, and the license restricts the usage of LS-DYNA to a specific user group. If you want to be granted permission, please contact us through email (hpc@uconn.edu) for more information. We will liaise with the software's technical owner, Arash Esmaili Zaghi, and if permission is granted, we will add you to the user group.

Loading LS-DYNA Modules

There are two versions of LS-DYNA currently available on the cluster. To load the newest, issue the command:

module load ls-dyna/R7.1.1

or

module load ls-dyna/R7.1.1_mpp

To make it load automatically upon login, issue:

module initadd ls-dyna/<version>
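
You can confirm which modules are currently loaded in your session with the standard module command:

module list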

Running LS-DYNA

Running an interactive serial job

To run LS-DYNA in interactive mode, please read the interactive guide first.

$ fisbatch -c 12 -p Westmere --exclusive
FISBATCH -- the maximum time for the interactive screen is limited to 6 hours. You can add QoS to overwrite it.
FISBATCH -- waiting for JOBID 52319 to start on cluster=cluster and partition=Westmere
!
FISBATCH -- Connecting to head node (cn04)


[hpc-pei@cn04 ~]$ ls-dyna_smp_d_r7_1_1_x64_redhat59_ifort131
License option : check network license only
     Date: 12/14/2015      Time: 10:32:23  

    ___________________________________________________
    |                                                 |
    |  Livermore  Software  Technology  Corporation   |
    |                                                 |
    |  7374 Las Positas Road                          |
    |  Livermore, CA 94551                            |
    |  Tel: (925) 449-2500  Fax: (925) 449-2507       |
    |  www.lstc.com                                   |
    |_________________________________________________|
    |                                                 |
    |  LS-DYNA, A Program for Nonlinear Dynamic       |
    |  Analysis of Structures in Three Dimensions     |
    |  Version : smp d R7.1.1    Date: 04/04/2014     |
    |  Revision: 88541           Time: 09:35:54       |
    |                                                 |
    |  Features enabled in this version:              |
    |    Shared Memory Parallel                       |
    |    CESE CHEMISTRY EM ICFD STOCHASTIC_PARTICLES  |
    |    FFTW (multi-dimensional FFTW Library)        |
    |    Interactive Graphics                         |
    |    ANSYS Database format                        |
    |    ANSYS License (ANSYS150)                     |
    |                                                 |
    |  Licensed to: University of Connecticut - 1953  |
    |  Issued by  : trent_12112015                    |
    |                                                 |
    |  Platform   : Xeon64 System                     |
    |  OS Level   : Linux 2.6.18 uo                   |
    |  Compiler   : Intel Fortran Compiler 13.1 SSE2  |
    |  Hostname   : cn65                              |
    |  Precision  : Double precision (I8R8)           |
    |  SVN Version: 88833                             |
    |                                                 |
    |  Unauthorized use infringes LSTC copyrights     |
    |_________________________________________________|


 please define input file names or change defaults :
>i=full_input_file_name

[hpc-pei@cn04 ~]$ exit
Connection to cn04 closed.
FISBATCH -- exiting job

Running a parallel job

First, you need a submission script, ls-dyna.sh:

#!/bin/bash
#SBATCH -c 12 # number of CPUs assigned to the job
#SBATCH -p Westmere # specify a partition

ls-dyna_smp_d_r7_1_1_x64_redhat59_ifort131 i=inputfile

Then submit the job:

 sbatch ls-dyna.sh
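
Note that the SMP executable typically defaults to a single core unless told otherwise. A sketch of the same script using the standard LS-DYNA ncpu option, with the thread count taken from the CPUs SLURM assigned to the job:

#!/bin/bash
#SBATCH -c 12 # number of CPUs assigned to the job
#SBATCH -p Westmere # specify a partition

# ncpu= sets the SMP thread count; SLURM_CPUS_PER_TASK matches the -c value above
ls-dyna_smp_d_r7_1_1_x64_redhat59_ifort131 i=inputfile ncpu=$SLURM_CPUS_PER_TASK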

Running LS-DYNA with LSOPT

Running LS-DYNA on single node

To run LS-DYNA in interactive mode, please read the interactive guide first.

$ ssh -X <netid>@login3.storrs.hpc.uconn.edu
$ fisbatch -c 12 -p Westmere --exclusive
$ module load ls-dyna/R7.1.1 lsopt
$ cd <directory of *.lsopt>
$ lsoptui *.lsopt

Running LS-DYNA on multiple nodes

To run LS-DYNA in interactive mode, please read the interactive guide first. Then follow the steps below.

$ ssh -X <netid>@login3.storrs.hpc.uconn.edu
$ fisbatch -N4 --ntasks-per-node=12 -p Westmere   ##(Do not use -c12 here; otherwise, it will only run one process on one node)
$ module load intelics/ifort/11.0.084 mpi/openmpi/1.6.5-ifort11 ls-dyna/R7.1.1_mpp lsopt

To check whether the executable can find all of its shared libraries, you can use the command:

$ ldd /apps2/ls-dyna/R7.1.1_mpp/ls-dyna_mpp_d_r7_1_1_88920_x64_redhat54_ifort131_sse2_openmpi165

If any library shows as 'not found' on the screen, the executable will not work properly; you will need to unload all the modules and load them again.
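
For example, to reset the environment and reload everything in one step (module purge removes all currently loaded modules):

$ module purge
$ module load intelics/ifort/11.0.084 mpi/openmpi/1.6.5-ifort11 ls-dyna/R7.1.1_mpp lsopt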

$ cd <directory of *.lsopt>

Create a script named submit_slurm as follows:

#!/bin/csh -f
#
# Run jobs on a remote processor, remote disk
set newdir=`pwd | sed -n 's/.*\/\(.*\)\/\(.*\)/\1\/\2/p'`
# Run jobs on a remote processor, local disk (no transmission)
# set newdir=`pwd`
echo $newdir
setenv LSDYNA971_MPP "/apps2/ls-dyna/R7.1.1_mpp/ls-dyna_mpp_d_r7_1_1_88920_x64_redhat54_ifort131_sse2_openmpi165"
setenv LSOPT_WRAPPER "/apps2/lsopt/5.1.1/LSOPT_EXE/wrapper"
cat > dynscr << EOF
#!/bin/csh -f
#
# Define LSDYNA971_MPP environment variables in lsopt input
# or shell command ("setenv").
# $1 represents i=DynaOpt.inp and is automatically
# tagged on as the last argument of the lsopt "solver command".
#
setenv EXE "$LSDYNA971_MPP $1"

setenv LSOPT_HOST $LSOPT_HOST
setenv LSOPT_PORT $LSOPT_PORT
# Run jobs on a remote processor, remote disk
mkdir -p lsopt/$newdir
cd lsopt/$newdir
#
# This actually executes the job
#
$LSOPT_WRAPPER srun --mpi=openmpi \$EXE
EOF
# ============== E N D O F S C R I P T ===================
/bin/csh dynscr

Make sure the submit_slurm script is executable; otherwise, use the command 'chmod +x submit_slurm' to make it runnable. This script can be adapted to your own needs.

$ lsoptui <path to *.lsopt file>

In the LS-OPT window, double-click the 'Stage' box. Under 'Setup'->'Command', browse to <path to>/submit_slurm; under 'Input File', browse to your *.k file; set 'units per job' and 'global limit' as you need; select 'Use Queuing'->'Slurm'->'OK'. Then click 'Normal Run'.

LS-DYNA license check

If your LS-DYNA process has been pending for a long time, you can check the license usage of LS-DYNA. Licenses might still be occupied by your previous jobs even though those jobs are no longer running on the nodes; this sometimes happens when previous processes did not end normally. To check which processes are using licenses:

$ lstc_qrun

It will show info like this:

                    Running Programs

   User             Host          Program              Started       # procs
-----------------------------------------------------------------------------
hpc-pei    23507@cn19             MPPDYNA          Thu Feb  4 12:53    48

If you want to release the 48 occupied licenses above, you can use:

$ lstc_qkill 23507@cn19

Note: The lstc_qrun command is only available for version R7.1.1 and will not work for version R10.

LS-DYNA R10

Starting an interactive job

To run LS-DYNA R10 in interactive mode, a fisbatch session needs to be set up; see the interactive guide referenced in the sections above for how to start one.

The fisbatch command accepts parameters that control how resources are allocated to your job.

Version R10 needs to run on the Haswell architecture or newer, e.g., Skylake.

The following fisbatch command will allocate 2 nodes with 24 cores in total and run the job on the Haswell nodes:

fisbatch --exclude=cn[01-136,325-328] -N 2 -n 24 
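
Once the session starts, you can confirm which nodes were allocated; SLURM sets SLURM_JOB_NODELIST inside the job:

echo $SLURM_JOB_NODELIST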

If you have access to priority resources, feel free to specify the partition as a fisbatch parameter, e.g., -p prioritypartitionname

Running ls-dyna R10 in an interactive job

Once your interactive fisbatch session starts, a couple of modules need to be loaded before running LS-DYNA.

module load intelics/2017 ls-dyna/R10

Once the modules have been loaded, you can call LS-DYNA R10 using one of four executables:

ls-dyna_mpp_d - MPP (MPI) version with double precision
ls-dyna_mpp_s - MPP (MPI) version with single precision
ls-dyna_smp_d - shared memory version with double precision
ls-dyna_smp_s - shared memory version with single precision

The shared memory executables do not need to be launched through an MPI command; a minimal SMP run is shown below.
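
This sketch assumes an input deck named inputfile; ncpu= is the standard LS-DYNA option that sets the SMP thread count:

ls-dyna_smp_d i=inputfile ncpu=24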

For the two MPP executables, an MPI launcher must be used when running them.

The following command runs the double precision MPP executable and spawns 24 LS-DYNA processes:

mpiexec -np 24 ls-dyna_mpp_d

If the command is entered without the -n or -np option, mpiexec will use all of the cores allocated on each node to run LS-DYNA:

mpiexec ls-dyna_mpp_d

Licenses for the software are limited to a certain number of cores. If multiple jobs are running and using up the licenses, you might need to wait for licenses to be freed up or specify a lower number of cores in your MPI command above.

Memory allocation for ls-dyna R10

Depending on which LS-DYNA executable is used, the memory allocation options change.

The double precision executables accept a memory2 parameter alongside memory1 for allocating memory to the LS-DYNA job.

memory1 controls the memory used for the model decomposition; the value it needs depends on the size of your model.

memory2 (MPP only) controls the memory used by the remaining processes, i.e., those solving the decomposed subproblems rather than performing the decomposition. It depends on the number of processes: the more processes there are, the smaller each decomposed piece and the less memory each one requires. A good starting point is 20-40% of the total memory available on the nodes; LS-DYNA will dynamically allocate more memory if required. Note that the memory declared on the command line is allocated on every node assigned to the job, so avoid setting it unnecessarily high.

It is recommended to allocate memory for the decomposition with the memory1 parameter (this value will change depending on the model) and let the job allocate what it needs for memory2 after the decomposition completes.

The command syntax would be:

mpiexec ls-dyna_mpp_d i=inputfile memory1=24000m

Here 24000m means 24,000 megawords. In double precision one word is 8 bytes, so 24,000 megawords * 8 bytes/word = 192,000 MB = 192 GB.

Feel free to change the memory1 value as needed to complete the decomposition of your model.
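
If you prefer to size memory1 from a RAM budget, a small sketch of the conversion (double precision, 8 bytes per word; inputfile and the 192 GB budget are just the example values from above):

#!/bin/bash
# Double precision: 1 word = 8 bytes, so 1 megaword = 8 MB.
# megawords = (GB * 1000 MB/GB) / (8 MB/megaword)
TARGET_GB=192
MEGAWORDS=$(( TARGET_GB * 1000 / 8 ))   # 192 GB -> 24000 megawords
mpiexec ls-dyna_mpp_d i=inputfile memory1=${MEGAWORDS}m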