Interactive Manual Benchmarking


Tutorial
Title: Benchmarking & Scaling
Provider: HPC.NRW

Contact: tutorials@hpc.nrw
Type: Online
Topic Area: Performance Analysis
License: CC-BY-SA
Syllabus

1. Introduction & Theory
2. Interactive Manual Benchmarking
3. Automated Benchmarking using a Job Script
4. Automated Benchmarking using JUBE
5. Plotting & Interpreting Results

Prepare your input

It's always a good idea to create some test input for your simulation. If you have a specific system you want to benchmark for a production run, make sure you limit the time the simulation runs for. If you also want to test your code with different input sizes, prepare those systems (or data sets) beforehand.

For the purpose of this tutorial we use the Molecular Dynamics code GROMACS, as it is a common simulation program and readily available on most clusters. We prepared three test systems with increasing numbers of water molecules, which you can download from here. A sketch of how such a test input can be prepared is shown below the note.

  • TODO: Upload GROMACS test systems (5,10,15nm box dims)
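
A minimal sketch of preparing a short test run input with GROMACS is given below. The file names (grompp.mdp, water_5nm.gro, topol.top, bench_5nm.tpr) are placeholders for illustration and must be replaced with your own files; limiting nsteps in the .mdp file keeps the benchmark runs short.

# Assumed file names for illustration; replace them with your own system files.
# Limit the run length in grompp.mdp, e.g.:  nsteps = 10000
gmx grompp -f grompp.mdp -c water_5nm.gro -p topol.top -o bench_5nm.tpr

The resulting .tpr file is a portable run input that can be reused for every benchmark run on the cluster.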


Allocate an interactive session

As a first step you should allocate a single node on your cluster and start an interactive session, i.e. log in on the node. There are often dedicated partitions/queues named "express", "testing" or similar for exactly this purpose, with a time limit of a few hours. For a cluster running SLURM this could, for example, look like:

srun -p express -N 1 -n 72 -t 02:00:00 --pty bash

This will allocate 1 node with 72 tasks on the express partition for 2 hours and log you in (i.e. start a new bash shell) on the allocated node. Adjust this according to the resources provided by your cluster!
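
Inside the interactive shell, a manual benchmark run could then look like the following sketch. The module name, the MPI binary name (gmx_mpi) and the task count are assumptions that will differ between clusters; -resethway resets the performance counters halfway through the run and -noconfout skips writing the final configuration, both of which are commonly used for benchmarking.

# Assumed module and binary names; adjust to your cluster's GROMACS installation.
module load GROMACS
# Short, fixed-length benchmark run on all 72 allocated tasks:
srun -n 72 gmx_mpi mdrun -s bench_5nm.tpr -nsteps 10000 -resethway -noconfout

Note down the performance reported at the end of the log file for each run, so you can compare different task counts later.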


Next: Automated Benchmarking using a Job Script

Previous: Introduction and Theory