OpenMP Usage

From Storrs HPC Wiki
Revision as of 14:56, 1 July 2019 by Jar02014 (talk | contribs) (Thread creation)

OpenMP (Open Multiprocessing) is an API that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran, on most processor architectures and operating systems, including Solaris, AIX, HP-UX, GNU/Linux, Mac OS X, and Windows platforms. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.

OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the standard desktop computer to the supercomputer.

An application built with the hybrid model of parallel programming can run on a computer cluster using both OpenMP and Message Passing Interface (MPI), or more transparently through the use of OpenMP extensions for non-shared memory systems.


An illustration of multithreading where the master thread forks off a number of threads which execute blocks of code in parallel before joining the master thread again.

OpenMP is an implementation of multithreading, a method of parallelizing whereby a master thread (a series of instructions executed consecutively) forks a specified number of slave threads and a task is divided among them. The threads then run concurrently, with the runtime environment allocating threads to different processors.

The section of code that is meant to run in parallel is marked accordingly, with a preprocessor directive that will cause the threads to form before the section is executed. Each thread has an id attached to it which can be obtained using a function (called omp_get_thread_num()). The thread id is an integer, and the master thread has an id of 0. After the execution of the parallelized code, the threads join back into the master thread, which continues onward to the end of the program.

By default, each thread executes the parallelized section of code independently. Work-sharing constructs can be used to divide a task among the threads so that each thread executes its allocated part of the code. Both task parallelism and data parallelism can be achieved using OpenMP in this way.

The runtime environment allocates threads to processors depending on usage, machine load and other factors. The number of threads can be assigned by the runtime environment based on environment variables or in code using functions. The OpenMP functions are included in a header file labelled omp.h in C/C++.

Parallel using C/C++

In C/C++, OpenMP uses #pragma directives to mark the code that should run in parallel. The OpenMP-specific pragmas are shown below.

Thread Creation Example

The pragma omp parallel is used to fork additional threads to carry out the work enclosed in the construct in parallel. The original process is denoted the master thread, with thread ID 0.

Example (C program): Display "Hello, world" using multiple threads.

#include <stdio.h>
#include <omp.h>

int main(void)
{
  #pragma omp parallel
  {
    int tid = omp_get_thread_num();
    printf("Hello, world. %d\n", tid);
    printf("Hi again %d\n", tid);
  }
  return 0;
}

The OpenMP-specific lines above are the include, the pragma, and the function call. The include statement loads the OpenMP function declarations. The pragma statement tells the compiler that the statement, or block, that follows is to be parallelized. The function omp_get_thread_num returns the integer ID of each thread that executes the block.

To compile with GCC, use the flag -fopenmp:

$ gcc -fopenmp hello.c -o hello

Output on a computer with three cores, running three threads:

Hello, world. 1
Hi again 1
Hello, world. 0
Hi again 0
Hello, world. 2
Hi again 2

The 0, 1 and 2 above identify which of the three threads printed each line. Note that the threads did not execute in the order 0, 1, 2. The execution order varies from run to run and depends on the state of the machine.

Work-sharing constructs

Work-sharing constructs specify how to assign independent work to one or all of the threads.

  • omp for or omp do: split loop iterations among the threads; also called loop constructs.
  • sections: assign consecutive but independent code blocks to different threads.
  • single: a code block executed by only one thread; a barrier is implied at the end.
  • master: similar to single, but the block is executed by the master thread only, and no barrier is implied at the end.

Example: initialize the value of a large array in parallel, using each thread to do part of the work

#include <omp.h>

int main(int argc, char *argv[]) {
   const int N = 100000;
   int i, a[N];
   #pragma omp parallel for
   for (i = 0; i < N; i++)
       a[i] = 2 * i;
   return 0;
}

Other Resources about OpenMP

Using OpenMP under Slurm

To run a hybrid OpenMP/MPI job across multiple hosts, specify the number of nodes and the number of processes per node. For example, to run 32 MPI processes with 4 processes per machine, write a submission script and submit it via sbatch:

$ cat 
#!/bin/bash
#SBATCH -N 8                  # request 8 nodes
#SBATCH -c 4                  # request 4 CPUs (OpenMP threads) per process
#SBATCH --ntasks-per-node=4   # run 4 MPI processes per node; each process is multi-threaded

export OMP_NUM_THREADS=4      # match the -c value

srun ./myOpenMPJob            # 8 nodes x 4 processes per node x 4 threads per process

$ sbatch

myOpenMPJob runs across 8 machines (32 processes / 4 per machine = 8) and Slurm starts 4 MPI processes per machine.

To run a pure OpenMP job on a single host, specify one task and the number of CPUs for it:

$ cat 
#!/bin/bash
#SBATCH -n 1   # request 1 process
#SBATCH -c 4   # request 4 CPUs (OpenMP threads) for that process

export OMP_NUM_THREADS=4

srun ./myOpenMPJob

$ sbatch