Overview
Application benchmarking is an elementary skill for any performance engineering effort. Because it is the basis for any other activity, it is crucial to measure results in an accurate, deterministic and reproducible way. The following components are required for meaningful application benchmarking:
- Timing: How to accurately measure time in software.
- Documentation: Because there are many influences, it is essential to document all possible performance-relevant influences.
- System configuration: Modern systems allow adjusting many performance-relevant settings like clock speed, memory settings, cache organisation as well as OS settings.
- Resource allocation and affinity control: What resources are used and how work is mapped onto resources.
Because so many things can go wrong while benchmarking, it is important to maintain a sceptical attitude towards good results. Especially for very good results one has to check whether the result is plausible. Furthermore, results must be deterministic and reproducible; if required, the statistical distribution over multiple runs has to be documented.
A prerequisite for any benchmarking activity is to get an EXCLUSIVE SYSTEM!
In the following, all examples use the Likwid Performance Tools for tool support.
Preparation
At the beginning it must be defined which configuration and/or test case is examined. Especially with larger codes offering a wide range of functionality, this is essential. Application benchmarking requires running the code under observation many times with different settings or variants. A test case should therefore have a runtime that is long enough to be measured reliably but short enough for a quick turnaround cycle. Ideally a benchmark runs from several seconds to a few minutes.
For really large complex codes, one can extract performance-critical parts into a so-called proxy app which is easier to handle and benchmark, but still resembles the behaviour of the real application code.
After deciding on a test case, it is required to specify a performance metric. A performance metric is usually useful work per time unit and allows comparing the performance of different test cases or setups. If it is difficult to define an application-specific work unit, inverse runtime (1/time) or MFlops/s might be a fallback solution. Examples of useful work are requests answered, lattice site updates, voxel updates, frames rendered, and so on.
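As an illustration, a work-based metric is simply the work count divided by the measured wallclock time. The following minimal C sketch (the lattice dimensions, iteration count and measured runtime are placeholder values, and MLUP/s is just one possible unit) shows the arithmetic:
#include <stdio.h>

int main(void)
{
    /* Hypothetical test case: nx*ny*nz lattice sites updated over iter iterations */
    long nx = 400, ny = 400, nz = 400;
    long iter = 100;
    double runtime = 12.3;  /* measured wallclock time in seconds (placeholder) */

    double updates = (double)nx * ny * nz * iter;
    double mlups = updates / runtime * 1.0e-6;  /* million lattice site updates per second */

    printf("Performance: %.2f MLUP/s\n", mlups);
    return 0;
}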
Timing
For benchmarking, an accurate so-called wallclock timer (end-to-end stop watch) is required. Every timer has a minimal time resolution that it can measure. Therefore, if the code region to be measured runs shorter, the measurement must be extended until it reaches a duration that can be resolved by the timer used. There are OS-specific routines (POSIX and Windows) as well as programming-model- and programming-language-specific solutions available. The latter have the advantage of being portable across operating systems. In any case, one has to read the documentation of the implementation used to understand the exact properties of the routine.
Recommended timing routines are:
- `clock_gettime()`, a POSIX-compliant timing function (man page), recommended as a replacement for the widespread `gettimeofday()`
- `MPI_Wtime()` and `omp_get_wtime()`, standardized programming-model-specific timing routines for MPI and OpenMP
- Timing in instrumented Likwid regions based on cycle counters for very short measurements
While there are also programming-language-specific solutions (e.g., in C++ and Fortran), it is recommended to use the OS solution. In the case of Fortran this requires providing a wrapper function for the C call (see example below).
Examples
Calling clock_gettime
Put the following code in a C module.
#include <time.h>
/* Return the current wallclock time in seconds (nanosecond resolution) */
double mysecond()
{
struct timespec ts;
clock_gettime(CLOCK_MONOTONIC, &ts);
return (double)ts.tv_sec + (double)ts.tv_nsec * 1.e-9;
}
You can use it in your code like this:
double S, E;
S = mysecond();
/* Your code to measure */
E = mysecond();
printf("Time: %f s\n",E-S);
Fortran example
In Fortran just add the following wrapper to the above C module. You may have to adjust the name mangling to your Fortran compiler. Then you can simply link your Fortran application against the object file.
double mysecond_()
{
return mysecond();
}
Use in your Fortran code as follows:
DOUBLE PRECISION s, e
DOUBLE PRECISION mysecond
EXTERNAL mysecond
s = mysecond()
! Your code
e = mysecond()
print *, "Time: ",e-s,"s"
Example code
This example code contains a ready-to-use timing routine with C and F90 examples as well as a more advanced timer C module based on the RDTSC instruction.
You can download an archive containing working timing routines with examples here.
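For very short measurements the time stamp counter can be read directly. A minimal sketch of the idea (assuming an x86 CPU and GCC or Clang, which provide the __rdtsc() intrinsic in x86intrin.h; the downloadable module additionally handles serialization and the conversion from cycles to seconds):
#include <stdio.h>
#include <x86intrin.h>   /* __rdtsc() intrinsic (GCC/Clang on x86) */

int main(void)
{
    unsigned long long start = __rdtsc();
    /* very short code region to measure */
    unsigned long long stop = __rdtsc();

    /* Raw reference cycles; converting to seconds requires the (constant) TSC frequency */
    printf("Elapsed: %llu reference cycles\n", stop - start);
    return 0;
}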
Documentation
Without proper documentation of code generation, system state and runtime modalities, it can be difficult to reproduce performance results. Best practice is to automate the logging of build settings, system state and runtime settings using benchmark scripts. Still, too much automation might also result in errors or hinder a fast workflow due to inflexibility in benchmarking or intransparency of what actually happens. Therefore it is recommended to also execute steps by hand in addition to automated benchmark execution.
System configuration
The following node-level settings can influence performance results.
CPU related
CPU clock
Influence on everything.
Recommended setting: Make sure the acpi_cpufreq driver is used, fix the frequency, and make sure the CPU's power management unit doesn't interfere (check, e.g., with likwid-perfctr)
Turbo mode on/off
Influence on CPU clock.
Recommended setting: For benchmarking, deactivate it.
SMT on/off topology
Influence on resource sharing within a core.
Recommended setting: Can be left on without penalty on modern processors.
Frequency governor (performance,...)
Influence on clock speed ramp-up.
Recommended setting: For benchmarking, choose a governor (e.g., performance) so that the clock speed is always fixed.
Turbo steps
Influence on frequency vs. number of active cores.
Recommended setting: For benchmarking, switch off turbo mode.
Memory related
Transparent huge pages
Influence on (memory) bandwidth.
Recommended setting: /sys/kernel/mm/transparent_hugepage/enabled should be set to ‘always’
Cluster on die (COD) / Sub NUMA clustering (SNC) mode
Influence on L3 and memory latency, (memory bandwidth via snoop mode on HSW/BDW)
Recommended setting: Set in BIOS, check using numactl -H or likwid-topology (MSR would be better)
LLC prefetcher
Influence on single-core memory bandwidth.
Recommended setting: Set in BIOS, no way to check without MSR
"Known" prefetchers
Influence on latency and bandwidth of various levels in the cache/memory hierarchy.
Recommended setting: Set in BIOS or likwid-features, query status using likwid-features
NUMA balancing
Influence on (memory) data volume and performance.
Recommended setting: Check /proc/sys/kernel/numa_balancing; if it is 1, automatic page migration is on, otherwise it is off.
Memory configuration (channels, DIMM frequency, Single Rank/Dual Rank)
Influence on Memory performance.
Recommended setting: Check with dmidecode or look at DIMMs
NUMA interleaving (BIOS setting)
Influence on Memory BW.
Recommended setting: Set in BIOS; switch it off.
Chip/package/node related
Uncore clock
Influence on L3 and memory bandwidth.
Recommended setting: Set it to the maximum supported frequency (e.g., using likwid-setFrequencies), and make sure the CPU's power management unit doesn't interfere (check, e.g., with likwid-perfctr)
QPI Snoop mode
Influence on memory bandwidth.
Recommended setting: Set in BIOS, no way to check without MSR.
Power cap
Influence on frequency throttling.
Recommended setting: Don't use.
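Several of the kernel-controlled settings above (transparent huge pages, NUMA balancing, frequency governor) can be recorded automatically as part of the benchmark documentation. A minimal C sketch of such logging (the file paths are the standard Linux sysfs/procfs locations mentioned above; error handling is reduced to a placeholder string):
#include <stdio.h>

/* Print the first line of a sysfs/procfs file, e.g. for benchmark logging */
static void log_setting(const char *path)
{
    char line[256] = "unreadable";
    FILE *fp = fopen(path, "r");
    if (fp) {
        if (fgets(line, sizeof(line), fp)) {
            char *p;
            for (p = line; *p; p++) if (*p == '\n') *p = '\0';  /* strip newline */
        }
        fclose(fp);
    }
    printf("%s: %s\n", path, line);
}

int main(void)
{
    log_setting("/sys/kernel/mm/transparent_hugepage/enabled");
    log_setting("/proc/sys/kernel/numa_balancing");
    log_setting("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor");
    return 0;
}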
Affinity control
Affinity control allows specifying on which execution resources (cores or SMT threads) the threads and processes of an application are executed (a minimal pinning sketch follows the list below). Affinity control is crucial to
- eliminate performance variation
- make use of architectural features
- avoid resource contention
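In practice, pinning is most conveniently done with a tool such as likwid-pin, but it can also be requested from within the application. A minimal C sketch using the Linux sched_setaffinity() interface (pinning the calling process to core 0; the core number is just an example):
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);   /* allow execution on core 0 only */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    /* ... benchmark kernel now runs pinned to core 0 ... */
    return 0;
}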
Best practices
Authors
Main author: Jan Eitzinger, HPC group at RRZE Erlangen