MPI

MPI (Message Passing Interface) is a standard for Distributed Memory parallelization. Information on how to run an existing MPI program can be found in the How_to_Use_MPI section.

General

In MPI, the two basic operations are the send

int MPI_Send (void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

and receive

int MPI_Recv (void* buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status* status)
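
As a quick illustration, the following minimal sketch sends a single integer from rank 0 to rank 1 using exactly these two calls; it must be launched with at least two processes:

/* Minimal send/receive sketch: rank 0 sends one integer to rank 1. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value = 42;   /* message payload */
    int tag   = 0;    /* must match on sender and receiver */

    if (rank == 0) {
        /* send one MPI_INT to rank 1 */
        MPI_Send(&value, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        /* receive one MPI_INT from rank 0 */
        MPI_Recv(&value, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
        printf("Rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}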

Besides these, many other functions exist for non-blocking or collective communication. Programs using these functions have to be compiled with a specific compiler (and options) and executed with a special startup program, as detailed in the How_to_Use_MPI section.
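
For example, a collective broadcast replaces a loop of matched sends and receives with a single call issued on every rank. The following minimal sketch distributes one integer from rank 0 to all ranks:

/* Collective sketch: broadcast one integer from rank 0 to all ranks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value = 0;
    if (rank == 0) value = 42;   /* only the root holds the data initially */

    /* after this call, every rank's copy of value is 42 */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Rank %d has value %d\n", rank, value);

    MPI_Finalize();
    return 0;
}

With most MPI implementations, such a program is built with a compiler wrapper and launched with a startup program, roughly as follows (wrapper and launcher names vary between implementations, so treat these commands as an example rather than a fixed recipe):

mpicc -o bcast bcast.c
mpirun -np 4 ./bcast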

Please check the more detailed tutorials in the References.

References

Introduction to MPI from PPCES (@RWTH Aachen) Part 1

Introduction to MPI from PPCES (@RWTH Aachen) Part 2

Introduction to MPI from PPCES (@RWTH Aachen) Part 3