GPU Computing (CUDA C)


Tutorial
Title: Introduction to GPU Computing
Provider: HPC.NRW
Contact: tutorials@hpc.nrw
Type: Multi-part video
Topic Area: GPU computing
License: CC-BY-SA
Syllabus

1. Introduction
2. Several Ways to SAXPY: CUDA C/C++
3. Several Ways to SAXPY: OpenMP
4. Several Ways to SAXPY: Julia
5. Several Ways to SAXPY: NUMBA

This video discusses SAXPY implemented in NVIDIA CUDA C/C++. CUDA is an application programming interface (API) for NVIDIA GPUs. In general, CUDA works with many programming languages, but this tutorial focuses on C/C++. CUDA gives access to a GPU's instruction set, which means we have to go through everything step by step, since many things do not happen automatically.
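
To make this concrete, here is a minimal, self-contained SAXPY sketch in CUDA C (illustrative only, not the exact code from the video; the kernel name saxpy and the device-pointer names d_x and d_y are our own choices):

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Kernel: each GPU thread computes one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    // Combine block and thread indices into one global index per thread.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)  // Guard: the launch rounds threads up to whole blocks.
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Allocate and fill the host arrays.
    float *x = (float *)malloc(bytes);
    float *y = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    // Allocate device memory and copy the inputs over -- nothing
    // happens automatically, as noted above.
    float *d_x, *d_y;
    cudaMalloc(&d_x, bytes);
    cudaMalloc(&d_y, bytes);
    cudaMemcpy(d_x, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, y, bytes, cudaMemcpyHostToDevice);

    // Launch enough blocks of 256 threads to cover all n elements.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    saxpy<<<blocks, threadsPerBlock>>>(n, 2.0f, d_x, d_y);

    // Copy the result back and spot-check one element.
    cudaMemcpy(y, d_y, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expected 4.0)\n", y[0]);

    cudaFree(d_x); cudaFree(d_y);
    free(x); free(y);
    return 0;
}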

Video

(Slides as pdf)


Quiz

1. Which features does CUDA add to C/C++?

New functions
New syntax
GPU support
All of the above


2. What is a kernel?

It's a flag you can set to automatically parallelize any function.
It's the part of your code that is run on the GPU.
It's a new CUDA function that activates the GPU.


3. How do you flag a function to be a kernel?

__host__
__device__
__global__
__GPU__

4. Let's say you wrote a kernel function called "MyKernel". How do you launch it?

MyKernel();
CUDA.run(NoBlocks, NoThreads, MyKernel());
MyKernel<<<NoBlocks, NoThreads>>>();
__global(NoBlocks, NoThreads)__ MyKernel();


5. Inside your kernel function, how do you distribute your data over the GPU threads?

You don't have to, CUDA does that automatically for you.
Each thread has an index attached to it, which is addressed via threadIdx.x.
If you use array-element-wise operations, e.g. y .= a .* x .+ b; this is managed by the NVIDIA preprocessor.
You flag a line to be parallelized via keywords, e.g.: __device__ y=a*x+b
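
(For reference once you have answered question 5: a minimal sketch of that indexing pattern, assuming a one-dimensional launch; the kernel name scale and its arguments are our own choices:)

__global__ void scale(int n, float a, float *x)
{
    // Each thread derives a unique global index from its block and thread IDs.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)  // Threads past the end of the array do nothing.
        x[i] = a * x[i];
}

// Launched, e.g., as: scale<<<NoBlocks, NoThreads>>>(n, 2.0f, d_x);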