Interactive Manual Benchmarking
Tutorial | |
---|---|
Title: | Benchmarking & Scaling |
Provider: | HPC.NRW |
Contact: | tutorials@hpc.nrw |
Type: | Online |
Topic Area: | Performance Analysis |
License: | CC-BY-SA |
Syllabus
1. Introduction & Theory
2. Interactive Manual Benchmarking
3. Automated Benchmarking using a Job Script
4. Automated Benchmarking using JUBE
5. Plotting & Interpreting Results
Prepare your input
It is always a good idea to create some test input for your simulation. If you have a specific system you want to benchmark for a production run, make sure you limit the simulation time. If you also want to test the code with different input sizes, prepare those systems (or data sets) beforehand.
For the purpose of this tutorial we are using the molecular dynamics code GROMACS, as it is a common simulation program and readily available on most clusters. We prepared three test systems with increasing numbers of water molecules, which you can download from here:
- Download test input (tgz archive)
- SHA256 checksum:
d1677e755bf5feac025db6f427a929cbb2b881ee4b6e2ed13bda2b3c9a5dc8b0
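After downloading, you can verify that the archive (gromacs_benchmark_input.tgz) arrived intact by comparing its checksum against the value above:
sha256sum gromacs_benchmark_input.tgz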
The unpacked tar archive contains a folder with three binary input files (.tpr), which can be used directly as input for GROMACS. These input files should be compatible with GROMACS version 2016.5 and later:
System | Side length of simulation box | No. of atoms | Simulation type | Default simulation time |
---|---|---|---|---|
MD_5NM_WATER.tpr | 5 nm | 12165 | NVE | 1 ns (500000 steps × 2 fs) |
MD_10NM_WATER.tpr | 10 nm | 98319 | NVE | 1 ns (500000 steps × 2 fs) |
MD_15NM_WATER.tpr | 15 nm | 325995 | NVE | 1 ns (500000 steps × 2 fs) |
Download the tar archive to your local machine and upload it to an appropriate directory on your cluster.
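How you transfer the archive depends on your setup; one common way is scp, where the hostname and target directory below are only placeholders for your cluster's login node and your own working directory:
scp gromacs_benchmark_input.tgz username@cluster.example.org:~/benchmarks/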
Unpack the archive and change into the newly created folder:
tar xfa gromacs_benchmark_input.tgz
cd GROMACS_BENCHMARK_INPUT/
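To confirm the unpacking worked, list the contents of the folder; you should see the three .tpr files from the table above:
ls *.tpr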
Allocate an interactive session
As a first step you should allocate a single node on your cluster and start an interactive session, i.e. log in on the node. Often there are dedicated partitions/queues named "express", "testing", or the like for exactly this purpose, with a time limit of a few hours. For a cluster running SLURM this could, for example, look like:
srun -p express -N 1 -n 72 -t 02:00:00 --pty bash
This will allocate 1 node with 72 tasks on the express partition for 2 hours and log you in (i.e. start a new bash shell) on the allocated node. Adjust this according to your cluster's available resources!
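Once the shell has started on the compute node, it can be worth double-checking what was actually allocated. With SLURM, the job's environment variables and the visible CPU count are a quick (site-dependent) check:
echo $SLURM_JOB_ID $SLURM_NNODES $SLURM_NTASKS
nproc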
First test run
Next, you want to navigate to your input data and load the environment module which gives you access to the GROMACS binaries. Since environment modules are organized differently on many clusters, the steps to load the correct module might look different at your site.
module load GROMACS
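If a plain module load GROMACS does not work at your site, listing the matching modules usually reveals the exact name and version to load (the version string below is only an illustration):
module avail GROMACS
module load GROMACS/2021.5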
We will do a first test run using the smallest system (5 nm) and 18 cores to get a feeling for how long one simulation might take. Note that we are using the gmx_mpi executable, which allows us to run the code on multiple compute nodes later. There might be a second binary available on your system just called "gmx", which uses thread-MPI parallelism and is only suitable for shared-memory systems (i.e. a single node).
mpirun -n 18 gmx_mpi -quiet mdrun -deffnm MD_5NM_WATER -nsteps 10000
mpirun -n 18 will spawn 18 processes calling the gmx_mpi executable. mdrun -deffnm MD_5NM_WATER tells GROMACS to run a molecular dynamics simulation and use MD_5NM_WATER.tpr as its input. -nsteps 10000 tells it to only run 10000 steps of the simulation instead of the default 500000. We also added the -quiet flag to suppress some general output, as GROMACS is quite chatty.
The simulation should only take a couple of seconds, and the output will look similar to this:
Reading file MD_5NM_WATER.tpr, VERSION 2016.5 (single precision)
Note: file tpx version 110, software tpx version 119
Overriding nsteps with value passed on the command line: 10000 steps, 20 ps
Changing nstlist from 10 to 40, rlist from 1 to 1.099

Using 18 MPI processes
Non-default thread affinity set, disabling internal thread affinity
Using 4 OpenMP threads per MPI process

starting mdrun 'Waterbox 5nm'
10000 steps, 20.0 ps.

Writing final coordinates.

Dynamic load balancing report:
 DLB was off during the run due to low measured imbalance.
 Average load imbalance: 5.3%.
 The balanceable part of the MD step is 36%, load imbalance is computed from this.
 Part of the total run time spent waiting due to load imbalance: 1.9%.

               Core t (s)   Wall t (s)        (%)
       Time:      507.743        7.061     7190.9
                 (ns/day)    (hour/ns)
Performance:      244.751        0.098
GROMACS gives out a lot of useful information about the simulation we just ran. First, it tells us how many MPI processes were started in total:
Using 18 MPI processes
We also get the information that for every MPI process we started, 4 OpenMP threads were created as well:
Using 4 OpenMP threads per MPI process
This adds up to 18 × 4 = 72 cores being used, so all of the resources we allocated when requesting the interactive session. However, we want a bit more control over how many cores are actually being used. Therefore, we can either set an environment variable to limit the number of OpenMP threads, i.e.
export OMP_NUM_THREADS=1
or we can directly tell GROMACS to only use 1 OpenMP thread per process by adding the flag
-ntomp 1
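Putting this together, a test run that uses exactly 18 cores (18 MPI processes with one OpenMP thread each) could look like this:
mpirun -n 18 gmx_mpi -quiet mdrun -deffnm MD_5NM_WATER -nsteps 10000 -ntomp 1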
At the end of the output, we will be given some performance metrics:
- Wall time (or elapsed real time): 7.061 s
- Core time (time accumulated over all cores): 507.743 s
- ns/day (nanoseconds one could simulate in 24 h): 244.751
- hours/ns (how many hours it takes to simulate 1 ns): 0.098
- Total percentage of CPU usage (max = 72 × 100%): 7190.9%
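As a quick sanity check, the core time should roughly equal the wall time multiplied by the number of cores used: 7.061 s × 72 ≈ 508 s, which matches the reported 507.743 s; likewise, 507.743 / 7.061 ≈ 71.9, i.e. the reported 7190.9% CPU usage. The same performance table is also written to the end of the run's log file, here MD_5NM_WATER.log because of -deffnm MD_5NM_WATER, which will come in handy for extracting results later, for example:
grep "Performance:" MD_5NM_WATER.log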
Next: Automated Benchmarking using a Job Script
Previous: Introduction and Theory