[[Category:Tutorials|Worksharing (OpenMP)]]<nowiki />
{{DISPLAYTITLE:Worksharing (OpenMP)}}<nowiki />
{{Syllabus OpenMP in Small Bites}}<nowiki />
__TOC__
{{Infobox OpenMP in Small Bites}}

This video introduces the concept of OpenMP worksharing, loop scheduling and synchronization mechanisms. After this tutorial session the programmer is familiar with the most commonly used OpenMP constructs and API functions. How the scoping of data is controlled is introduced in the part on [[OpenMP_in_Small_Bites/Scoping |Data Scoping]].
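
The following minimal sketch is not part of the video material, but it previews the constructs covered there: a worksharing loop with an explicit <code>schedule</code> clause, the implicit barrier at the end of a worksharing construct, and a <code>critical</code> construct as a simple synchronization mechanism. The array length and the chunk size of 4 are arbitrary illustration values; compile with OpenMP support enabled (e.g. <code>gcc -fopenmp</code>).

<syntaxhighlight lang="c">
#include <stdio.h>

#define N 100

int main(void) {
    double a[N], b[N], c[N], sum = 0.0;

    for (int i = 0; i < N; i++) {   /* arbitrary input data */
        b[i] = i;
        c[i] = 2 * i;
    }

    #pragma omp parallel
    {
        /* Worksharing: the iterations of this loop are distributed among
           the threads of the parallel region, here in chunks of 4
           (static schedule). The loop variable is private per thread. */
        #pragma omp for schedule(static, 4)
        for (int i = 0; i < N; i++)
            a[i] = b[i] + c[i];

        /* The implicit barrier at the end of the for construct above
           guarantees that a[] is complete before the next loop starts. */

        /* Synchronization: the critical construct lets only one thread
           at a time update the shared variable sum. */
        #pragma omp for
        for (int i = 0; i < N; i++) {
            #pragma omp critical
            sum += a[i];
        }
    }

    printf("sum = %f\n", sum);
    return 0;
}
</syntaxhighlight>

In production code the summation loop would normally use a <code>reduction(+:sum)</code> clause instead of a critical section; the critical construct is used here only to make the synchronization explicit.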
 
 
  
 
=== Video ===

<youtube width="600" height="340" right>vIX0Zplc2ws</youtube>

([[Media:hpc.nrw_02_Introduction-Worksharing.pdf | Slides as pdf]])

=== Quiz ===

{{hidden begin 
|title = 1. What is the most commonly used worksharing construct in OpenMP to distribute work among loop iterations?
}}
 
<quiz display=simple>
|type="()"}
+ Click and submit to see the answer
|| The loop worksharing construct: <code>#pragma omp for</code> in C/C++ and <code>!$omp do</code> in Fortran.
</quiz>
{{hidden end}}

{{hidden begin 
|title = 2. Give an example for a parallel vector addition using OpenMP worksharing!
}}
<quiz display=simple>
|type="()"}
+ Click and submit to see the answer
|| C/C++:<br /><code>int i; <br />#pragma omp parallel <br />#pragma omp for <br /> for (i = 0; i < 100; i++){</code> <div style="margin-left: 1em;"> <code> a[i] = b[i] + c[i]; </code> </div> <code> } </code> <br /> Fortran:<br /> <code> INTEGER :: i <br />!$omp parallel <br />!$omp do <br /> DO i = 0, 99</code> <div style="margin-left: 1em;"> <code> a(i) = b(i) + c(i) </code> </div><code>END DO <br />!$omp end do <br />!$omp end parallel</code>
 
</quiz>
{{hidden end}}

{{hidden begin 
|title = 3. Can the following code snippet be parallelized with the OpenMP for-construct without breaking the semantics? Justify your answer. <br>
<code> int i, s = 0; <br /> for (i = 1; i < 100; i++){ </code> <div style="margin-left: 1em;"><code> s = a[i-1] + a[i]; </code></div> <code>}  </code> <br />
 
}}
<quiz display=simple>
|type="()"}
+ Click and submit to see the answer
|| {{Note|'''No. Due to the dependency between the loop iterations (every iteration writes to the shared variable <code>s</code>) this would cause a data race.'''}}
 
</quiz>
{{hidden end}}
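
As a follow-up to question 3: once the data-sharing of <code>s</code> is specified explicitly, a loop of this form can be combined with the for-construct without breaking the serial semantics. The following sketch is an illustration only (the array contents are arbitrary demo values): the <code>lastprivate</code> clause gives every thread a private copy of <code>s</code> and copies the value of the sequentially last iteration back after the loop. Data-sharing clauses like this are introduced in the part on [[OpenMP_in_Small_Bites/Scoping |Data Scoping]].

<syntaxhighlight lang="c">
#include <stdio.h>

#define N 100

int main(void) {
    int a[N], s = 0;

    for (int i = 0; i < N; i++)   /* arbitrary demo values */
        a[i] = i;

    /* lastprivate(s): every thread works on its own private copy of s;
       after the loop the value from the sequentially last iteration
       (i == N-1) is copied back, just like in the serial version. */
    #pragma omp parallel for lastprivate(s)
    for (int i = 1; i < N; i++) {
        s = a[i-1] + a[i];
    }

    printf("s = %d\n", s);   /* 98 + 99 = 197, independent of the thread count */
    return 0;
}
</syntaxhighlight>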
