GPU Tutorial/Open MP
| Tutorial | |
|---|---|
| Title: | Introduction to GPU Computing |
| Provider: | HPC.NRW |
| Contact: | tutorials@hpc.nrw |
| Type: | Multi-part video |
| Topic Area: | GPU computing |
| License: | CC-BY-SA |
Syllabus

1. Introduction
2. Several Ways to SAXPY: CUDA C/C++
3. Several Ways to SAXPY: OpenMP
4. Several Ways to SAXPY: Julia
5. Several Ways to SAXPY: NUMBA
This video discusses SAXPY via OpenMP GPU offloading. OpenMP 4.0 and later enables developers to program GPUs in C/C++ and Fortran by means of OpenMP directives. In this tutorial we present the basic OpenMP syntax for GPU offloading and give a step-by-step guide to implementing SAXPY with it.
Video

<youtube width="600" height="340" right>EPflqxk4rfk</youtube>

([[Media:GPU_tutorial_saxpy_openmp.pdf |Slides as pdf]])
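The sketch below gives a rough idea of where the step-by-step guide in the video ends up. It is a minimal, self-contained illustration rather than the exact code from the video: the problem size, initial values, and the compiler invocation mentioned in the comments are assumptions made here for brevity.

```c
#include <stdio.h>
#include <stdlib.h>

/* SAXPY (y = a*x + y) offloaded to the GPU with OpenMP target directives.
 * Building requires a compiler with OpenMP offloading enabled, e.g. roughly:
 *   clang -O2 -fopenmp -fopenmp-targets=nvptx64 saxpy.c
 * (the exact flags depend on the compiler and GPU vendor). */
int main(void) {
    const int n = 1 << 20;   /* illustrative problem size */
    const float a = 2.0f;
    float *x = malloc(n * sizeof(float));
    float *y = malloc(n * sizeof(float));

    for (int i = 0; i < n; ++i) {
        x[i] = 1.0f;
        y[i] = 2.0f;
    }

    /* 'target' opens a device region, 'teams' creates a league of teams, and
     * 'distribute parallel for' spreads the loop iterations over the teams and
     * the threads inside each team. The map clauses copy x to the device and
     * copy the updated y back to the host. */
    #pragma omp target teams distribute parallel for \
            map(to: x[0:n]) map(tofrom: y[0:n])
    for (int i = 0; i < n; ++i) {
        y[i] = a * x[i] + y[i];
    }

    printf("y[0] = %f (expected 4.0)\n", y[0]);
    free(x);
    free(y);
    return 0;
}
```

With the combined directive, the compiler takes care of both the device data transfers declared by the `map` clauses and the distribution of loop iterations over teams and threads.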
Quiz
1. Which one of the following OpenMP directives can create a target region on the GPU?
   - `#pragma omp target gpu` (incorrect: this is a syntax error in OpenMP)
   - `#pragma omp target acc` (incorrect: this is a syntax error in OpenMP)
   - `#pragma omp target` (correct: it defines a target region, a block of computation that is offloaded to and runs on the GPU)
2. The OpenMP `map(to:...)` clause maps variables:
3. Which one of the following OpenMP directives can initialize a league of teams for execution on the GPU?
   - `#pragma omp init teams` (incorrect: this is a syntax error in OpenMP)
   - `#pragma omp teams` (correct: it creates a league of teams for execution on the GPU)
   - `#pragma omp gpu teams` (incorrect: this is a syntax error in OpenMP)
4. Which one of the following OpenMP directives can distribute the iterations of a for-loop across the GPU threads in the teams?
   - `#pragma omp distribute for` (incorrect: this is a syntax error in OpenMP)
   - `#pragma omp parallel for` (incorrect: it only parallelizes the loop iterations among the threads of a single team)
   - `#pragma omp distribute parallel for` (correct: it distributes the loop iterations across two levels of parallelism, the league of teams and the threads within each team)
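To connect the answers above: `#pragma omp target` opens the device region, `#pragma omp teams` creates the league of teams inside it, and `#pragma omp distribute parallel for` spreads the loop iterations over the teams and their threads. The sketch below writes the same SAXPY loop with the directives split out instead of combined into one line; it is an illustration only and assumes it is called with the same `n`, `a`, `x`, and `y` as in the example above.

```c
/* The same SAXPY loop with the offloading directives written separately, to
 * make the two levels of parallelism explicit: a league of teams, and the
 * threads inside each team. */
void saxpy_offload(int n, float a, float *x, float *y) {
    #pragma omp target map(to: x[0:n]) map(tofrom: y[0:n]) /* open a target region on the device */
    #pragma omp teams                                       /* create a league of teams */
    #pragma omp distribute parallel for                     /* distribute iterations over teams and threads */
    for (int i = 0; i < n; ++i) {
        y[i] = a * x[i] + y[i];
    }
}
```

Regarding question 2: `map(to:...)` copies host data to the device when the target region starts, `map(from:...)` copies device data back to the host when it ends, and `map(tofrom:...)` does both, which is why `y` uses `tofrom` here.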