Parallel Programming
Revision as of 13:42, 28 March 2018
In order to solve a problem faster, the work is executed in parallel, as mentioned in Getting Started. To achieve this, one usually uses either a Shared Memory or a Distributed Memory programming model.
Shared Memory
Shared Memory programming works like communication via a pin board. There is one shared memory (the pin board in the analogy) where everybody can see what everybody else is doing, how far they have gotten, and which results they have obtained (e.g. the bathroom is already clean). As in the physical world, there are logistical limits on how many people can use the memory (pin board) efficiently and on how big it can be.
Distributed Memory
MPI is similar to the way humans cooperate on problems: every process 'works' (cleans) on its own and can communicate with the others by sending messages (talking and listening). Therefore, OpenMP is usually employed for the threads within one node (a computer, corresponding to your house in the example) and MPI to communicate across nodes (similar to talking to the neighbours to see how far their house-cleaning has come). Both can be used simultaneously.