OpenMP Usage


OpenMP (Open Multiprocessing) is an API that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran, on most processor architectures and operating systems, including Solaris, AIX, HP-UX, GNU/Linux, Mac OS X, and Windows platforms. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.

OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the standard desktop computer to the supercomputer.

An application built with the hybrid model of parallel programming can run on a computer cluster using both OpenMP and Message Passing Interface (MPI), or more transparently through the use of OpenMP extensions for non-shared memory systems.

Introduction

Figure: an illustration of multithreading, in which the master thread forks off a number of threads that execute blocks of code in parallel before joining the master thread again.

OpenMP is an implementation of multithreading, a method of parallelizing whereby a master thread (a series of instructions executed consecutively) forks a specified number of slave threads and a task is divided among them. The threads then run concurrently, with the runtime environment allocating threads to different processors.

The section of code that is meant to run in parallel is marked with a preprocessor directive that causes the threads to form before the section is executed. Each thread has an ID, which can be obtained with the function omp_get_thread_num(). The thread ID is an integer, and the master thread has an ID of 0. After the execution of the parallelized code, the threads join back into the master thread, which continues onward to the end of the program.

By default, each thread executes the parallelized section of code independently. Work-sharing constructs can be used to divide a task among the threads so that each thread executes its allocated part of the code. Both task parallelism and data parallelism can be achieved using OpenMP in this way.

The runtime environment allocates threads to processors depending on usage, machine load and other factors. The number of threads can be assigned by the runtime environment based on environment variables or in code using functions. The OpenMP functions are included in a header file labelled omp.h in C/C++.
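
For instance, the thread count can be set with the OMP_NUM_THREADS environment variable before the program starts (e.g. export OMP_NUM_THREADS=4), or from inside the program with omp_set_num_threads(). The short program below is a minimal sketch of the in-code approach; the count of 4 is only an illustration.

#include <stdio.h>
#include <omp.h>

int main(void)
{
    omp_set_num_threads(4);   /* request 4 threads for subsequent parallel regions */
    printf("up to %d threads may be used\n", omp_get_max_threads());
    #pragma omp parallel
    {
        printf("thread %d of %d\n", omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}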

Parallel using C/C++

In C/C++, OpenMP uses #pragma directives to mark the code that should be run in parallel. The OpenMP-specific pragmas are shown in the examples below.


Thread Creation Example

The pragma omp parallel is used to fork additional threads to carry out the work enclosed in the construct in parallel. The original thread is the master thread, with thread ID 0.

Example (C program): Display "Hello, world" using multiple threads.

#include <stdio.h>
#include <omp.h>
int main(void)
{
  #pragma omp parallel
  {
    int tid = omp_get_thread_num();
    printf("Hello, world. %d\n", tid);
    printf("Hi again %d\n", tid);
  }
  return 0;
}

The OpenMP-specific parts of this program are the #include <omp.h> line, the #pragma line, and the call to omp_get_thread_num(). The include statement loads the OpenMP function declarations. The pragma tells the compiler that the statement, or block, that follows is to be parallelized. The function omp_get_thread_num() returns the integer ID of each thread that executes the block.

To compile with GCC, use the flag -fopenmp:

$gcc -fopenmp hello.c -o hello


Output on a computer with 3 cores, running 3 threads:


Hello, world. 1
Hi again 1
Hello, world. 0
Hi again 0
Hello, world. 2
Hi again 2

The 0, 1 and 2 above identify which of the three threads printed each line. Note that the threads did not execute in the order 0, 1, 2. The execution order varies from run to run and depends on the state of the computer.

Another Thread Example

This example performs an integration of the function f(x) = 4/(1 + x*x) for x from 0 to 1. The expected value is pi. The integration works by calculating and summing the values of f(x) at the midpoints of a fine grid of steps between 0 and 1, then scaling the sum by the step width. This is easy to parallelize, because each thread can work on a subset of the x values. Once the threads complete their local sums, the program combines the local sums into a final sum.

Here is the program:

#include <stdio.h>
#include <stdlib.h>
#include <omp.h>
#include <math.h>

#define CORRECT 3.141592653589793

void Usage() {
    printf("\n   Usage: ex2 NSTEPS NUMTHREADS\n");
    printf("\n     Examples:\n");
    printf("\n         1)  ./ex2  100000000  4\n");
    printf("\n         2)  for n in 1 2 3 4 5 6 7 8; do ./ex2  100000000  $n;  done\n");
    printf("\n         3)  for n in $(seq 12); do ./ex2  100000000  $n;  done\n");
    exit(0);
}

double get_function(double x) { return 4.0/(1.0 + x*x); }

int main(int argc, char *argv[]) {
    if (argc<3) Usage();
    //  Get command-line parameters
    int num_steps   = atoi(argv[1]);
    int num_threads_requested = atoi(argv[2]);
    //  Set number of threads
    omp_set_num_threads(num_threads_requested);
    //  Width of step
    double width = 1.0 / num_steps;
    //  Sum
    double shared_sum = 0.0;
    //  Make shared location to save actual number of threads
    int num_threads_actual;
    //  Start timer
    double start_sec = omp_get_wtime();
    #pragma omp parallel
    {
        int i;
        double x, local_sum = 0;
        int tid = omp_get_thread_num();
        num_threads_actual = omp_get_num_threads();
        for (i=tid;i<num_steps;i+=num_threads_actual)  {
            x = (i+0.5)*width;
            local_sum += get_function(x);
        }
        #pragma omp critical
        shared_sum += local_sum;
    }
    double final_result = shared_sum / num_steps;
    double elapsed_sec = omp_get_wtime() - start_sec;
    double error = 100 * (final_result - CORRECT) / CORRECT;
    //  Print results
    printf("elapsed, steps, threads, result, error:  %10.5fs  %8d  %5d  %8.5f  %8.4e%%\n", 
        elapsed_sec, num_steps, num_threads_actual, final_result, error);
    return 0;
}
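
The critical section above is one way to combine the per-thread sums. OpenMP can also perform this combination automatically with a reduction clause. The function below is a simplified sketch of the same integration written that way; it is not part of the original example.

double integrate(int num_steps)
{
    double width = 1.0 / num_steps;
    double sum = 0.0;
    int i;
    /* each thread accumulates its own private copy of sum;
       OpenMP adds the copies together when the loop ends */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < num_steps; i++) {
        double x = (i + 0.5) * width;
        sum += 4.0 / (1.0 + x * x);
    }
    return sum / num_steps;
}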


This program compiles in the same fashion as our previous example.

 $gcc -fopenmp -o ex2 ex2.c

We run a series of tests on the compiled program as follows:

 for n in 1 2 3 4 5 6 7 8; do ./ex2 100000000 $n; done

Here is the output on a computer with 8 cores, using 1 through 8 threads:

elapsed, steps, threads, result, error:     1.29372s  100000000      1   3.14159  2.0158e-11%
elapsed, steps, threads, result, error:     0.65491s  100000000      2   3.14159  7.2941e-12%
elapsed, steps, threads, result, error:     0.43708s  100000000      3   3.14159  1.1492e-11%
elapsed, steps, threads, result, error:     0.33271s  100000000      4   3.14159  1.3486e-11%
elapsed, steps, threads, result, error:     0.27220s  100000000      5   3.14159  -4.7638e-12%
elapsed, steps, threads, result, error:     0.22989s  100000000      6   3.14159  -3.8449e-12%
elapsed, steps, threads, result, error:     0.19740s  100000000      7   3.14159  -4.1135e-12%
elapsed, steps, threads, result, error:     0.18071s  100000000      8   3.14159  -5.7391e-12%


The first and third columns are the time spent on the calculation and the number of threads. The time spent shrinks as the number of threads increases, which verifies that the program is successfully using threads to speed up the calculation.

Work-sharing constructs

Work-sharing constructs specify how to assign independent work to one or all of the threads. The loop construct is used in the example below, and a sketch of the other constructs follows it.

  • omp for or omp do: splits up loop iterations among the threads; these are also called loop constructs.
  • sections: assigns consecutive but independent code blocks to different threads.
  • single: specifies a code block that is executed by only one thread; a barrier is implied at the end.
  • master: similar to single, but the code block is executed by the master thread only, and no barrier is implied at the end.

Example: initialize the values of a large array in parallel, using each thread to do part of the work:

int main(int argc, char *argv[]) {
   const int N = 100000;
   int i, a[N];
   #pragma omp parallel for
   for (i = 0; i < N; i++)
       a[i] = 2 * i;
   return 0;
}
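
The example above uses the loop construct. The short program below is a sketch, written for this page, that illustrates sections, single, and master in one parallel region:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel
    {
        #pragma omp sections   /* each section is executed once, by some thread */
        {
            #pragma omp section
            printf("section A, thread %d\n", omp_get_thread_num());
            #pragma omp section
            printf("section B, thread %d\n", omp_get_thread_num());
        }                      /* implicit barrier at the end of the sections */

        #pragma omp single     /* executed by exactly one thread; barrier at the end */
        printf("single, thread %d\n", omp_get_thread_num());

        #pragma omp master     /* executed by the master thread only; no barrier */
        printf("master, thread %d\n", omp_get_thread_num());
    }
    return 0;
}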


Using OpenMP under Slurm

You can run the second example above with the following sbatch script:

$cat openmp.sh
#!/bin/bash
#SBATCH -N  1  #  One node
#SBATCH -n 12  #  Twelve cores
#SBATCH --time=00:05:00  #  Only want 5 minutes

./ex2 100000000  1
./ex2 100000000  2
./ex2 100000000  3
./ex2 100000000  4
./ex2 100000000  5
./ex2 100000000  6
./ex2 100000000  7
./ex2 100000000  8
./ex2 100000000  9
./ex2 100000000 10
./ex2 100000000 11
./ex2 100000000 12

Run the script like this:

$sbatch openmp.sh

The output, which will be written to a Slurm output file named slurm-NNNNNNN.out (where NNNNNNN is the job ID), will look like this:

elapsed, steps, threads, result, error:     1.30298s  100000000      1   3.14159  2.0158e-11%
elapsed, steps, threads, result, error:     0.65196s  100000000      2   3.14159  3.1833e-07%
elapsed, steps, threads, result, error:     0.43493s  100000000      3   3.14159  1.9099e-06%
elapsed, steps, threads, result, error:     0.32603s  100000000      4   3.14159  9.5494e-07%
elapsed, steps, threads, result, error:     0.26087s  100000000      5   3.14159  1.2733e-06%
elapsed, steps, threads, result, error:     0.21743s  100000000      6   3.14159  2.8648e-06%
elapsed, steps, threads, result, error:     0.18636s  100000000      7   3.14159  5.0929e-06%
elapsed, steps, threads, result, error:     0.16331s  100000000      8   3.14159  2.2282e-06%
elapsed, steps, threads, result, error:     0.14539s  100000000      9   3.14159  7.6394e-06%
elapsed, steps, threads, result, error:     0.13060s  100000000     10   3.14159  2.8648e-06%
elapsed, steps, threads, result, error:     0.11872s  100000000     11   3.14159  9.5493e-06%
elapsed, steps, threads, result, error:     0.10884s  100000000     12   3.14159  8.5944e-06%

Using OpenMP with MPI on multiple hosts

To run an OpenMP job with MPI on multiple hosts, specify the number of nodes, the number of MPI processes per node, and the number of threads per process. For example, to reserve 32 processors across 8 nodes and run one multi-threaded MPI process per node, each with 4 OpenMP threads, create a submission script and submit it via sbatch:

$ cat slurm.sh
#!/bin/bash
#SBATCH -N 8 # request 8 nodes
#SBATCH -c 4 # request 4 CPUs (OpenMP threads) per task
#SBATCH --ntasks-per-node=1 # one MPI process per node; each process is multi-threaded

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./myOpenMPJob # run on 8 nodes with 4 threads per node

$ sbatch slurm.sh

myOpenMPJob runs across 8 machines (32 / 4 = 8) and srun starts 1 MPI process per machine, each using 4 OpenMP threads.
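
The source of myOpenMPJob is not shown on this page. A minimal hybrid MPI + OpenMP program, compiled with something like mpicc -fopenmp, could look like the following sketch; the names and the output format are illustrative only.

#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's MPI rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of MPI processes */

    /* each MPI process forks its own team of OpenMP threads */
    #pragma omp parallel
    printf("rank %d of %d, thread %d of %d\n",
           rank, size, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}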
