Using Intel Cluster Studio XE 2013 on the HPC Cluster


What is Intel® Cluster Studio XE 2013

Intel® Cluster Studio XE 2013 meets the challenges facing HPC developers by providing, for the first time, a comprehensive suite of tools that enables developers to boost HPC application performance and reliability. It combines Intel’s proven cluster tools with Intel’s advanced threading/memory correctness analysis and performance profiling tools to enable scaling application development for today’s and tomorrow’s HPC cluster systems.

What's Included

Intel® Composer XE compilers and libraries

Intel® MPI Library

Intel® VTune™ Amplifier XE

Intel® Inspector XE

Intel® Advisor XE

Load the Module on the HPC

To load all the components in Intel® Cluster Studio XE 2013, please type the following command:

module load intelics/2013.1.039-full
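
To confirm that the tools are now on your path, a quick sanity check such as the following should report the Intel compiler version and the location of the MPI launcher (the exact output depends on the installed build):

icc --version
which mpirun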

Note: This modulefile conflicts with the other MPI modulefiles because it includes Intel MPI (impi). If you want to use a different MPI implementation instead of impi, please load only the Intel® Composer XE compilers and libraries module described below.

Links

Intel Cluster Studio XE 2013

Intel® Composer XE compilers and libraries

The Intel® Composer XE suites are available in several configurations that combine industry leading:

C, C++ and Fortran compilers
Intel® Cilk™ Plus and OpenMP
Intel® Math Kernel Library (Intel® MKL)
Intel® Integrated Performance Primitives (Intel® IPP)
Intel® Threading Building Blocks (Intel® TBB)

for leadership application performance on systems using Intel® Core™ and Xeon® processors, Intel® Xeon Phi™ coprocessors and compatible processors.

Load the Module on the HPC

If you only need the compilers and libraries to compile your source code for good performance on the cluster architecture, load the compiler module as follows:

module load intelics/2013.1.039-compiler

Invoking the Intel® C++ Compiler

The command is either icc or icpc.

When you invoke the compiler with icc, the compiler builds C source files using C libraries and C include files. If you use icc with a C++ source file, it is compiled as a C++ file. Use icc to link C object files.
When you invoke the compiler with icpc the compiler builds C++ source files using C++ libraries and C++ include files. If you use icpc with a C source file, it is compiled as a C++ file. Use icpc to link C++ object files.

The icc or icpc command does the following:

Compiles and links the input source file(s).
Produces one executable file, a.out, in the current directory.

Command-line Syntax

{icc|icpc} [options] file1 [file2 . . .]
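
For example, a minimal compile-and-link invocation looks like the following; the source file names and the -o output names are illustrative:

icc -O2 -o my_c_app my_c_app.c

icpc -O2 -o my_app my_app.cpp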

Links

For more details of C++ compiler, please read:

Invoking the Intel® C++ Compiler

Invoking the Intel® Fortran Compiler

The command to invoke the compiler is ifort.

The ifort command can compile and link projects in one step or compile them then link them as a separate step.

In most cases, a single ifort command will invoke the compiler and linker.

The ifort command invokes a driver program that is the user interface to the compiler and linker. It accepts a list of command options and file names and directs processing for each file.

The driver program does the following:

Calls the Intel(R) Fortran Compiler to process Fortran files.
Passes the linker options to the linker.
Passes object files created by the compiler to the linker.
Passes libraries to the linker.
Calls the linker or librarian to create the executable or library file.

You can also use ld to build libraries of object modules. These commands provide syntax instructions at the command line if you request them with the -help (Linux* OS and OS X*) option.

The ifort command automatically references the appropriate Intel® Fortran Run-Time Libraries when it invokes the linker. To link one or more object files created by the Intel® Fortran compiler, you should use the ifort command instead of the ld command.

Using the ifort Command from the Command Line

ifort [options] input_file(s)
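
For example, the following illustrative commands compile and link in one step, or compile only (producing an object file for a separate link step):

ifort -O2 -o my_app my_app.f90

ifort -c my_app.f90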

Links

For more details of Fortran Compiler, please read:

Invoking the Intel® Fortran Compiler

Using Intel® MPI with Slurm

The following batch script runs a job with 40 MPI tasks.

#!/bin/bash

# set the job name to impi
#SBATCH --job-name=impi

# request 40 MPI tasks
#SBATCH -n 40
# set I_MPI_PMI_LIBRARY to link with SLURM
export I_MPI_PMI_LIBRARY=/gpfs/gpfs1/slurm/lib/libpmi.so

# Launch the job with srun. Notice that -n is not required here; srun
# determines the number of processes to start from the Slurm options above.
srun ./your-jobs
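
The executable referenced above (./your-jobs) would typically be built with the Intel MPI wrapper compilers, mpiicc for C or mpiifort for Fortran; the file names below are illustrative. Submit the script with sbatch:

# build the MPI executable (source file name is illustrative)
mpiicc -O2 -o your-jobs your_mpi_code.c

# submit the batch script (saved here as impi.slurm)
sbatch impi.slurm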

Using Intel® mpi + OpenMP with Slurm

The following batch script runs a hybrid job on 4 nodes, with 1 MPI task per node and 20 OpenMP threads (cores) per task.

#!/bin/bash

# set the job name to hybrid
#SBATCH --job-name=hybrid

# this job requests 4 nodes
#SBATCH --nodes=4

# this job requests exclusive access to the nodes it is given
# this means it will be the only job running on those nodes
#SBATCH --exclusive

# only request 1 MPI task per node
#SBATCH --ntasks-per-node=1

# and request 20 cpus per task for OpenMP threads
#SBATCH --cpus-per-task=20

# set I_MPI_PMI_LIBRARY to link with SLURM
export I_MPI_PMI_LIBRARY=/gpfs/gpfs1/slurm/lib/libpmi.so

# use shared memory within a node and OFA (InfiniBand verbs) between nodes
export I_MPI_FABRICS=shm:ofa

# set OMP_NUM_THREADS to the number of --cpus-per-task we asked for
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Launch the job with srun. Notice that -n is not required here; srun
# determines the number of MPI processes to start from the Slurm options above.
srun ./your-jobs
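
To build the hybrid executable, add the OpenMP flag to the Intel MPI wrapper compiler (for the 2013 compilers this is -openmp; the file names below are illustrative), then submit the script with sbatch:

# build the hybrid MPI/OpenMP executable (source file name is illustrative)
mpiicc -O2 -openmp -o your-jobs your_hybrid_code.c

# submit the batch script (saved here as hybrid.slurm)
sbatch hybrid.slurm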