#pragma omp parallel for


In this article, we have explained the concept behind #pragma omp parallel for and presented the idea with C++ code examples. We have also explained how it differs from pragma omp parallel and pragma omp for.

Table of contents:

  1. Introduction to pragma omp
  2. pragma omp parallel
  3. pragma omp for
  4. pragma omp parallel for


Introduction to pragma omp

The syntax of using pragma omp is as follows:

#pragma omp <list of directives>

For work sharing across threads, there are 4 directives:

  • parallel: defines a code segment that will be executed by multiple threads
  • for: divides the iterations of a for loop (within a parallel region) to be distributed across different threads
  • sections: marks code sections to be executed by available threads
  • single: forces a code segment to be executed by only one thread

One can use a single directive or multiple directives together. The pragma omp we will explore in this article at OpenGenus is:

#pragma omp parallel for

The key terms are:

  • Master thread: By default, a program runs on a single thread, known as the master thread.
  • Team: A group of threads executing a given program or code snippet. By default, the team contains only the master thread.

pragma omp parallel

The first omp directive is as follows:

#pragma omp parallel

By using omp parallel, a new team of threads is created; the team has multiple threads, one of which is the master thread. In effect, the master thread is forked into multiple threads.

The directive omp parallel encloses a particular code section, and this new team of threads executes that code snippet in parallel. This is the simplest way to parallelize code.

pragma omp for

The above directive "omp parallel" makes every thread execute the entire enclosed block, so each thread would redundantly run all iterations of a for loop, and concurrent access to shared data risks a race condition, which slows down execution. To fix this for the for loop, the solution is to use the directive "omp for".

Note: omp for should come after omp parallel, because:

  • omp parallel creates a new team of threads.
  • omp for assigns each iteration to a single thread, so different iterations run on different threads, avoiding a potential race condition within the for loop.
  • If omp parallel is used without omp for, all threads will try to execute the entire for loop together, which slows down execution.
  • If omp for is used without omp parallel, the team contains only the master thread, so omp for has no choice but to run the loop sequentially.

#pragma omp for

The correct way to use it is:

#pragma omp parallel
{
    // ... code section
    #pragma omp for
    for(...)
        ...
}

pragma omp parallel for

The directive "pragma omp parallel for" is a combination of "pragma omp parallel" and "pragma omp for".

#pragma omp parallel for

This will:

  • First, a new team of threads is created just for the for loop.
  • Second, each iteration of the for loop is assigned to a different thread.
  • Once all threads complete execution, they merge back into the original team, that is, the master thread.
  • This directive enables parallel execution of the for loop only.

Note:

omp parallel for = omp parallel + omp for

Following two code snippets are equivalent based on this idea:

#pragma omp parallel
{ 
    #pragma omp for
    for(int i = 1; i < 100; ++i)
    {
        ...
    }
}

#pragma omp parallel for
for(int i = 1; i < 100; ++i)
{
   ...
}

With this article at OpenGenus, you must have the complete idea of omp parallel for.

Geoffrey Ziskovin

Geoffrey Ziskovin is an American Software Developer and Author with an experience of over 30 years. He started his career with Haskell and has interviewed over 700 candidates for Fortune 500 companies.

Improved & Reviewed by: OpenGenus Foundation