Scaling

From HPC Wiki
 
[[Category:HPC-User]]
 
In the most general sense, '''scalability''' is defined as the ability to handle more work as the size of the computer or application grows. The terms '''scalability''' and '''scaling''' are widely used to indicate the ability of hardware and software to deliver greater computational power when the amount of resources is increased. For HPC clusters, it is important that they are '''scalable''', in other words, that the capacity of the whole system can be increased proportionally by adding more hardware. For software, '''scalability''' is sometimes referred to as parallelization efficiency, that is, the ratio between the actual speedup and the ideal speedup obtained when using a certain number of processors. For this tutorial, we focus on software scalability and discuss two common types of scaling. The speedup in parallel computing can be straightforwardly defined as

 '''speedup = t1 / tN'''

where '''t1''' is the computational time for running the software using one processor, and '''tN''' is the computational time for running the same software with N processors. Ideally, we would like software to have a linear speedup equal to the number of processors (speedup = N), as that would mean that every processor contributes 100% of its computational power. Unfortunately, this is a very challenging goal for real-world applications to attain.
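As a worked example, speedup and parallelization efficiency can be computed directly from measured run times. The timings and processor count in the following C sketch are hypothetical, chosen only to illustrate the two formulas:

 #include <stdio.h>
 
 int main(void) {
     /* hypothetical measured wall-clock times in seconds */
     double t1 = 120.0;  /* run time on one processor */
     double tN = 20.0;   /* run time on N processors  */
     int    N  = 8;      /* number of processors used */
 
     double speedup    = t1 / tN;       /* 6.0 (ideal would be 8.0) */
     double efficiency = speedup / N;   /* 0.75, i.e. 75%           */
 
     printf("speedup    = %.2f (ideal: %d)\n", speedup, N);
     printf("efficiency = %.0f%%\n", efficiency * 100.0);
     return 0;
 }

Here the run scales to 8 processors with 75% parallelization efficiency, which is respectable but short of the ideal linear speedup.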

== General ==

When people write applications, they usually employ a text editor and a high level language like C/C++ or Fortran to produce code that looks somewhat like this:
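For instance, a small C program that sums the first 100 integers might look as follows (an illustrative sketch; any comparable snippet in a compiled high-level language would do):

 #include <stdio.h>
 
 int main(void) {
     double sum = 0.0;
 
     /* sum the integers 1 through 100 */
     for (int i = 1; i <= 100; i++) {
         sum += i;
     }
 
     printf("sum = %.0f\n", sum);
     return 0;
 }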