MPI in Small Bites
Latest revision as of 10:06, 15 April 2026

Tutorial
Title: MPI in Small Bites
Provider: HPC.NRW

Contact: tutorials@hpc.nrw
Type: Multi-part video
Topic Area: Programming Paradigms
License: CC-BY-SA
Syllabus

1. Introduction and Overview
2. Basic Concepts
3. Blocking Point-to-Point Communication
4. Non-blocking Point-to-Point Communication
5. Basic Derived Datatypes
6. Blocking Collective Communication
7. Communicator Basics

Introduction

Welcome to the HPC.NRW MPI Online Tutorial!

The Message Passing Interface (MPI) is the de-facto standard for parallel distributed-memory programming. It defines an API for portable and scalable process-based parallelism.

How to proceed through this tutorial?

This tutorial is aimed at novice HPC developers as an initial introduction to distributed-memory programming with MPI.

The tutorial is made up of several sections, each covering a separate, stand-alone topic; nevertheless, they are designed to be worked through in order.

Each section consists of a short video, followed by a few quiz questions for self-assessment. Everything in the tutorial is platform-independent and works on any operating system with an MPI implementation available. Although most examples are written in C/C++, the fundamental concepts also apply to Fortran.

If you have any questions or encounter problems, you can contact us via e-mail at tutorials@hpc.nrw.

Who created this tutorial?

This tutorial has been developed within the framework of the HPC.NRW project. It is part of a series of online tutorials on various HPC-related topics, all of which were created by HPC.NRW members.

The initial online course is based on the in-person MPI courses developed at the IT Center of RWTH Aachen University. The speaker is Dr. Marc-André Hermanns from RWTH Aachen University. Marc-André works at the university's IT center and has been an active member of the MPI Forum for many years.


Topics

Introduction and Overview

This session provides a brief introduction to distributed-memory programming and an overview of the tutorial.

Basic Concepts

This session introduces basic concepts and terminology of MPI.

Blocking Point-to-Point Communication

This session provides an introduction to the point-to-point communication paradigm as defined by MPI.

Non-blocking Point-to-Point Communication

This session introduces the concept of non-blocking communication, using point-to-point communication as an example.

Basic Derived Datatypes

This session introduces the general concepts behind MPI derived datatypes and how to use them in MPI communication.

Blocking Collective Communication

This session introduces collective communication as the second major communication paradigm in MPI.

Communicator Basics

This session introduces general concepts behind communicators and process groups in MPI.