MPI

From HPC Wiki
Revision as of 10:00, 5 April 2018

MPI is an implementation of Distributed Memory parallelization. Information on how to run an existing MPI program can be found in the How_to_Use_MPI section.

== General ==

In MPI, the basic operations are send

<syntaxhighlight lang="c">
int MPI_Send (void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
</syntaxhighlight>

and receive

<syntaxhighlight lang="c">
int MPI_Recv (void* buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status* status)
</syntaxhighlight>
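To illustrate these two calls, a minimal complete program in which rank 0 sends a single integer to rank 1 could look as follows (a sketch; it assumes the job is started with at least two processes, e.g. <code>mpirun -np 2 ./a.out</code>):

<syntaxhighlight lang="c">
/* Minimal MPI send/receive sketch: rank 0 sends one int to rank 1.
   Assumes at least two MPI processes are started. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int value = 42;
        /* send one MPI_INT with tag 0 to destination rank 1 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int value;
        MPI_Status status;
        /* receive one MPI_INT with tag 0 from source rank 0 */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("Rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
</syntaxhighlight>

Note that <code>MPI_Recv</code> blocks until a matching message has arrived, and <code>MPI_Send</code> may block until the receiver is ready, depending on the message size and the MPI implementation.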

Besides these, many other functions exist, for example for non-blocking or collective (e.g. all-to-all) communication.
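As an example of the non-blocking variants, <code>MPI_Isend</code> and <code>MPI_Irecv</code> return immediately and the transfer is completed later with <code>MPI_Wait</code>, which allows communication to overlap with computation (again a sketch assuming at least two processes):

<syntaxhighlight lang="c">
/* Non-blocking send/receive sketch: the I* calls return immediately,
   MPI_Wait completes the transfer. Assumes at least two processes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;
    MPI_Request request;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);
        /* ... do independent work while the message is in flight ... */
        MPI_Wait(&request, MPI_STATUS_IGNORE);  /* must not reuse buf before this */
    } else if (rank == 1) {
        MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);
        /* ... do independent work while the message is in flight ... */
        MPI_Wait(&request, MPI_STATUS_IGNORE);  /* value is valid only after this */
        printf("Rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
</syntaxhighlight>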

Please refer to the more detailed tutorials listed in the References.

== References ==

* Introduction to MPI from PPCES (@RWTH Aachen) Part 1
* Introduction to MPI from PPCES (@RWTH Aachen) Part 2
* Introduction to MPI from PPCES (@RWTH Aachen) Part 3