Difference between revisions of "MPI"

From HPC Wiki
MPI is an open standard for Distributed Memory [[Parallel_Programming|parallelization]]. Information on how to run an existing MPI program can be found in the [[How_to_Use_MPI]] Section.

 
== General ==

In MPI the most essential operations are:

* <code>MPI_Send</code> for sending a message
 
<syntaxhighlight lang="c">
int MPI_Send (void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
</syntaxhighlight>

* <code>MPI_Recv</code> for receiving a message
 
<syntaxhighlight lang="c">
int MPI_Recv (void* buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status* status)
</syntaxhighlight>
  
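To illustrate how these two calls fit together, here is a minimal complete program (an illustrative sketch, not part of the original wiki text) in which rank 0 sends one integer to rank 1:

<syntaxhighlight lang="c">
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int value = 42;
        /* send one int to rank 1, using message tag 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int value;
        MPI_Status status;
        /* receive one int from rank 0 with matching tag 0 */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("Rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
</syntaxhighlight>

Both calls are blocking: <code>MPI_Recv</code> returns only after the message has arrived. Note that this example must be started with at least two processes.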
Although more than 100 MPI functions are defined in the standard (e.g. for non-blocking or collective communication; see the [[#References]] for more details), you can write a meaningful MPI application with fewer than 20 of them. Programs written with these functions have to be compiled with a specific [[compiler]] (and compiler options) and executed with a special startup program, as detailed [[How_to_Use_MPI|here]].
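The exact commands depend on the MPI implementation, but with the wrapper scripts most implementations install (names such as <code>mpicc</code> and <code>mpiexec</code> are common, not universal; the source file name below is a placeholder), compiling and launching typically looks like:

<syntaxhighlight lang="bash">
# compile: the wrapper adds the MPI include and library paths automatically
mpicc -o my_mpi_prog my_mpi_prog.c

# launch two processes of the program
mpiexec -n 2 ./my_mpi_prog
</syntaxhighlight>

<code>mpirun</code>, or <code>srun</code> under the Slurm batch system, are common alternatives to <code>mpiexec</code>.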
  
 
Please check the more detailed tutorials in the References.  
 

Revision as of 12:50, 13 April 2018


== References ==

* Introduction to MPI from PPCES (@RWTH Aachen) Part 1
* Introduction to MPI from PPCES (@RWTH Aachen) Part 2
* Introduction to MPI from PPCES (@RWTH Aachen) Part 3