Benchmarking & Scaling Tutorial/Introduction
 
== Scalability ==

Users who start running applications on an HPC system often assume that the more resources (compute nodes / cores) they use, the faster their code will run, i.e. they expect linear behaviour. Unfortunately, this is not the case for the majority of applications. How fast a program runs with different amounts of resources is referred to as its scalability. For parallel programs a limiting factor is described by [https://en.wikipedia.org/wiki/Amdahl%27s_law Amdahl's law]: even though a certain amount of the work in your code is done in parallel, the speedup is ultimately limited by the sequential part of the program.
  
  
=== Speedup and Efficiency ===

We assume that the total execution time of a program is comprised of

* <math>t_s</math>, a part of the code which can only run in serial
* <math>t_p</math>, a part of the code which can be parallelized
* <math>t_o</math>, parallel overheads due to, e.g., communication

The execution time of a serial code would then be

  <math>
  T_s = t_s + t_p
  </math>

The time for a parallel code, where the work would be perfectly divided by <math>P</math> processors, would be given by

  <math>
  T_p = t_s + \frac{t_p}{P} + t_o
  </math>
The term <math>t_p/P</math> reflects the reduction in runtime due to the usage of multiple CPUs. The total '''speedup''' <math>S</math> is defined as the ratio of the sequential to the parallel runtime:
  
 
  <math>
  S = \frac{T_s}{T_p} = \frac{t_s + t_p}{t_s + t_p/P + t_o}
  </math>
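To get a feeling for these formulas, the following minimal sketch evaluates the runtime model and the resulting speedup for a few processor counts. The values chosen for <math>t_s</math>, <math>t_p</math> and <math>t_o</math> are made-up examples, not measurements:

<syntaxhighlight lang="python">
# Evaluate the simple runtime model from above.
# t_s, t_p and t_o are made-up example values (in seconds), not measurements.
t_s = 10.0   # serial part
t_p = 90.0   # parallelizable part
t_o = 0.5    # parallel overhead, assumed constant here

T_serial = t_s + t_p                    # T_s = t_s + t_p

for P in (1, 2, 4, 8, 16, 32):
    T_parallel = t_s + t_p / P + t_o    # T_p = t_s + t_p/P + t_o
    S = T_serial / T_parallel           # speedup
    print(f"P = {P:2d}: T_p = {T_parallel:7.2f} s, S = {S:5.2f}")
</syntaxhighlight>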
  
The '''efficiency''' is the speedup per processor, i.e.
  
 
  <math>
  E = \frac{S}{P}
  </math>
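In practice, speedup and efficiency are computed from measured wall-clock times. A minimal sketch, assuming a purely hypothetical table of runtimes recorded for different core counts:

<syntaxhighlight lang="python">
# Speedup and efficiency from measured wall-clock times.
# The runtimes below are hypothetical placeholders; insert your own measurements.
runtimes = {1: 100.0, 2: 52.0, 4: 27.5, 8: 15.0, 16: 9.5}   # cores -> runtime in seconds

T_1 = runtimes[1]              # serial reference run
for P, T_P in sorted(runtimes.items()):
    S = T_1 / T_P              # speedup relative to the serial run
    E = S / P                  # efficiency: speedup per processor
    print(f"P = {P:2d}: S = {S:5.2f}, E = {E:4.2f}")
</syntaxhighlight>

An efficiency close to 1 means the additional cores are used well; values dropping far below 1 indicate that adding more cores no longer pays off.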
  
  
=== Amdahl's Law ===
  
 
Knowing that <math>t_o > 0</math>, and writing <math>F = \frac{t_s}{t_s + t_p}</math> as the fraction of the serial code, we can rewrite this to
  <math>
  S < \frac{1}{F + (1 - F)/P} \le \frac{1}{F}
  </math>
This places an upper limit on the '''strong scalability''', i.e. how quickly we can solve a problem of fixed size <math>N</math> by increasing <math>P</math>. It is known as ''Amdahl's Law''.
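The bound is easy to evaluate yourself. A small sketch, assuming a serial fraction of 5 % (<math>F = 0.05</math>) purely for illustration:

<syntaxhighlight lang="python">
# Upper limit on the speedup according to Amdahl's Law.
# F = 0.05 is an assumed serial fraction, chosen purely for illustration.
F = 0.05

for P in (1, 2, 4, 8, 16, 64, 256, 1024):
    S_max = 1.0 / (F + (1.0 - F) / P)
    print(f"P = {P:4d}: S_max = {S_max:6.2f}")

print(f"P -> infinity: S_max -> {1.0 / F:.1f}")   # the speedup can never exceed 1/F
</syntaxhighlight>

For <math>F = 0.05</math> the speedup can never exceed 20, no matter how many processors are used.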
----

Knowing about speedup and efficiency we can now try to measure this ourselves.

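As a small preview of the next step, a possible way to collect such runtimes from a script is sketched below; the command <code>mpirun -np ... ./my_app</code> is only a placeholder for however your own application is launched:

<syntaxhighlight lang="python">
# Sketch: measure the wall-clock runtime of an application for several core counts.
# "mpirun -np {P} ./my_app" is a placeholder; adapt it to your own launcher and binary.
import subprocess
import time

runtimes = {}
for P in (1, 2, 4, 8):
    start = time.perf_counter()
    subprocess.run(f"mpirun -np {P} ./my_app".split(), check=True)
    runtimes[P] = time.perf_counter() - start
    print(f"P = {P}: {runtimes[P]:.2f} s")
</syntaxhighlight>
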
Next: [[Benchmarking_%26_Scaling_Tutorial/Manual_Benchmarking | Interactive Manual Benchmarking ]]
