OpenMp Usage

OpenMP (Open Multiprocessing) is an API that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran, on most processor architectures and operating systems, including Solaris, AIX, HP-UX, GNU/Linux, Mac OS X, and Windows platforms. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.

OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the standard desktop computer to the supercomputer.

An application built with the hybrid model of parallel programming can run on a computer cluster using both OpenMP and Message Passing Interface (MPI), or more transparently through the use of OpenMP extensions for non-shared memory systems.

Introduction

Figure: An illustration of multithreading where the master thread forks off a number of threads which execute blocks of code in parallel before joining the master thread again.

OpenMP is an implementation of multithreading, a method of parallelizing whereby a master thread (a series of instructions executed consecutively) forks a specified number of slave threads and a task is divided among them. The threads then run concurrently, with the runtime environment allocating threads to different processors.

The section of code that is meant to run in parallel is marked with a preprocessor directive that causes the threads to form before the section is executed. Each thread has an id, an integer that can be obtained with the function omp_get_thread_num(); the master thread has an id of 0. After the parallelized code has executed, the threads join back into the master thread, which continues on to the end of the program.
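As a minimal sketch of this fork/join behavior (the messages are just illustrative), each thread can report its own id before the team joins back into the master thread:

#include <stdio.h>
#include <omp.h>

int main(void)
{
  #pragma omp parallel
  {
    /* Each thread queries its own id; the master thread has id 0. */
    int id = omp_get_thread_num();
    printf("Hello from thread %d\n", id);
  }
  /* The threads have joined; only the master thread continues. */
  printf("Back in the master thread.\n");
  return 0;
}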

By default, each thread executes the parallelized section of code independently. Work-sharing constructs can be used to divide a task among the threads so that each thread executes its allocated part of the code. Both task parallelism and data parallelism can be achieved using OpenMP in this way.

The runtime environment allocates threads to processors depending on usage, machine load, and other factors. The number of threads can be set through environment variables or in code via library functions. The OpenMP functions are declared in the header file omp.h in C/C++.
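For instance, the thread count can be set from the environment (export OMP_NUM_THREADS=4) or in code. A short sketch using the omp.h routines omp_set_num_threads() and omp_get_num_threads():

#include <stdio.h>
#include <omp.h>

int main(void)
{
  omp_set_num_threads(4);  /* request 4 threads in code, overriding OMP_NUM_THREADS */
  #pragma omp parallel
  {
    #pragma omp single     /* one thread reports the team size */
    printf("Running with %d threads\n", omp_get_num_threads());
  }
  return 0;
}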

Parallel programming in C/C++

In C/C++, OpenMP uses #pragma directives to mark the sections of code to run in parallel. The OpenMP-specific pragmas are listed below.


Thread creation

The pragma omp parallel is used to fork additional threads to carry out the work enclosed in the construct in parallel. The original thread is denoted the master thread, with thread ID 0.

Example (C program): Display "Hello, world" using multiple threads.

#include <stdio.h>
int main(void)
{
  /* Fork a team of threads; each executes the printf, then all join. */
  #pragma omp parallel
    printf("Hello, world.\n");
  return 0;
}

Compile with GCC using the -fopenmp flag:

$ gcc -fopenmp hello.c -o hello


Output on a computer with two cores, and thus two threads:


Hello, world.
Hello, world.


However, the output may also be garbled because of a race condition: the two threads share the standard output stream.

Hello, wHello, woorld.
rld.
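One way to avoid the interleaving (a sketch, not the only option) is to serialize the printf calls with a critical section, at the cost of the threads taking turns:

#include <stdio.h>

int main(void)
{
  #pragma omp parallel
  {
    #pragma omp critical  /* only one thread writes to stdout at a time */
    printf("Hello, world.\n");
  }
  return 0;
}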

Work-sharing constructs

Used to specify how to assign independent work to one or all of the threads.

  • omp for or omp do: split loop iterations among the threads; also called loop constructs.
  • sections: assign consecutive but independent code blocks to different threads (a sketch follows the loop example below).
  • single: a code block executed by only one thread; a barrier is implied at the end.
  • master: similar to single, but the block is executed by the master thread only, and no barrier is implied at the end.

Example: initialize the values of a large array in parallel, using each thread to do part of the work:

int main(int argc, char *argv[]) {
   const int N = 100000;
   int i, a[N];
   /* Split the loop iterations among the threads; each writes its own chunk of a[]. */
   #pragma omp parallel for
   for (i = 0; i < N; i++)
       a[i] = 2 * i;
   return 0;
}
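As a further sketch, this time of the sections construct, two independent code blocks run on different threads (the functions prepare_input() and prepare_output() are hypothetical, used only for illustration):

#include <stdio.h>
#include <omp.h>

/* Hypothetical independent tasks used only for illustration. */
void prepare_input(void)  { printf("input ready (thread %d)\n",  omp_get_thread_num()); }
void prepare_output(void) { printf("output ready (thread %d)\n", omp_get_thread_num()); }

int main(void)
{
  #pragma omp parallel sections
  {
    #pragma omp section
    prepare_input();     /* may run on one thread... */
    #pragma omp section
    prepare_output();    /* ...while this runs on another */
  }
  return 0;
}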


Using OpenMP under Slurm

To run a hybrid OpenMP/MPI job across multiple hosts, specify the number of nodes, the number of processes per node, and the number of CPUs (threads) per process. For example, to reserve 32 processors and run one 4-thread MPI process on each of 8 machines, write a submission script and submit it via sbatch:

$ cat slurm.sh 
#!/bin/bash
#SBATCH -N 8 # request 8 nodes
#SBATCH -c 4 # request 4 CPUs (threads) per task
#SBATCH --ntasks-per-node=1 # run 1 process per node; each process will be multi-threaded

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./myOpenMPJob # run on 8 nodes with 4 threads per node

$ sbatch slurm.sh

myOpenMPJob runs across 8 machines (32 processors / 4 threads per process = 8 nodes) and srun starts 1 MPI process per machine.
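For reference, a minimal hybrid program could look like the sketch below (assuming an MPI installation is available; compiled with something like mpicc -fopenmp). Each MPI rank started by srun forks its own team of OpenMP threads:

#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char *argv[])
{
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* one rank per node in the script above */

  #pragma omp parallel                   /* each rank forks OMP_NUM_THREADS threads */
  printf("rank %d, thread %d\n", rank, omp_get_thread_num());

  MPI_Finalize();
  return 0;
}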

To run a parallel OpenMP job on a single host, specify the number of CPUs for the task:

$ cat slurm.sh 
#!/bin/bash
#SBATCH -n 1 # request 1 task (process)
#SBATCH -c 4 # request 4 CPUs (threads) for the task

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./myOpenMPJob 

$ sbatch slurm.sh