Multiple Program Runs in one Slurm Job
In certain circumstances it can be beneficial to start multiple OpenMP programs simultaneously within one single batch job. This page explains the approach and provides an example launching multiple runs of the Gaussian chemistry code at a time.
Problem Position
These days, the number of cores per processor chip keeps increasing. Furthermore, in many cases there are two (or sometimes more) such chips in each compute node of an HPC cluster. But the scalability of shared-memory (OpenMP) programs does not always keep pace. In such a case a program cannot profit from such a high number of cores, and resources may be wasted.
One way of operating a cluster of multi-core nodes is to allow multiple jobs to share a node. But because these jobs share hardware resources (such as caches and paths to memory), they may influence each other heavily. As a consequence, the runtime of each job is hard to predict and may vary considerably from run to run.
Another possibility is to start multiple program runs with similar runtimes within one single batch job which uses a node exclusively. These program runs will still have an impact on each other, but this impact is under the control of a single user, and when the approach is applied repeatedly the total runtime becomes more predictable. In such a case the input data, the execution environment and the batch job script have to be adjusted properly. The general pattern is sketched below.
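A minimal sketch of this pattern (with placeholder program names, not part of the original example): each run is started in the background, and the wait command suspends the job script until all runs have terminated.
### minimal sketch: two independent runs inside one exclusive batch job
./program_a < input_a > output_a 2> error_a &    # first run, started asynchronously
pid1=$!
./program_b < input_b > output_b 2> error_b &    # second run, started asynchronously
pid2=$!
wait $pid1 $pid2                                 # wait until both runs have terminated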
Example
Two runs of the Gaussian chemistry code are started within one Slurm job in the following example. The target machine has two processors with 24 cores each and provides 192 GB of main memory. Each program run uses 24 threads such that both runs occupy the whole machine.
The batch job script requests the full node exclusively.
Each program run is executed in a separate directory such that file IO does not interfere. Both programs are started asynchronously, and a wait command waits for the termination of both programs.
In order to make sure that both programs profit from the NUMA architecture in an optimal way, the numactl command is used - see the explanation below.
#!/usr/local_rwth/bin/zsh
#SBATCH --job-name=run2x24
#SBATCH --output=%j.log
#SBATCH --error=%j.err
#SBATCH --time=00-01:00:00
#SBATCH --mem=180G
### exclusive usage of a single node
#SBATCH --exclusive
### use all cores of one node, one thread per core
#SBATCH --ntasks=1 --nodes=1
#SBATCH --cpus-per-task=48
#SBATCH --threads-per-core=1
### prepare your environment for running gaussian
module load CHEMISTRY gaussian
### make sure this environment variable points to a suitable location
### here the gaussian module allocates the scratch directory
echo $GAUSS_SCRDIR
### adjust working directory and input file names and output directory names
export WDIR=....
export INP1=small1.inp24
export INP2=small2.inp24
export OUT1=run1
export OUT2=run2
### this is not necessary in the case of Gaussian program runs
### but it may be important in other cases
export OMP_NUM_THREADS=24
### the program will run in $WDIR/$SLURM_JOB_ID/$OUTx
### Scratch files will be put in $GAUSS_SCRDIR/$SLURM_JOB_ID/$OUTx
### Input files are assumed to be in $WDIR/$INPx
mkdir -p $WDIR/$SLURM_JOB_ID/$OUT1
mkdir -p $WDIR/$SLURM_JOB_ID/$OUT2
mkdir -p $GAUSS_SCRDIR/$SLURM_JOB_ID/$OUT1
mkdir -p $GAUSS_SCRDIR/$SLURM_JOB_ID/$OUT2
### display NUMA characteristics
numactl -H
numactl --cpubind=0,1 --membind=0,1 -- numactl -show
numactl --cpubind=2,3 --membind=2,3 -- numactl -show
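### start both runs asynchronously, each bound to one pair of NUMA nodes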
( cd $WDIR/$SLURM_JOB_ID/$OUT1; \
export GAUSS_SCRDIR=$GAUSS_SCRDIR/$SLURM_JOB_ID/$OUT1; \
numactl --cpubind=0,1 --membind=0,1 -- timex g09 < ../../$INP1 > g09.out 2> g09.err ) &
pid1=$!
( cd $WDIR/$SLURM_JOB_ID/$OUT2; \
export GAUSS_SCRDIR=$GAUSS_SCRDIR/$SLURM_JOB_ID/$OUT2; \
numactl --cpubind=2,3 --membind=2,3 -- timex g09 < ../../$INP2 > g09.out 2> g09.err ) &
pid2=$!
wait $pid1 $pid2
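The job script is then submitted as usual with sbatch (the file name run2x24.sh is only an assumption for illustration):
sbatch run2x24.sh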
In the case of the Gaussian chemistry application some parameters in the input file have to be adjusted. The number of threads has to be specified by %nprocshared and the amount of main memory for the working array by %mem. If the (fast) file system for scratch files has limitations, the maxdisk parameter also has to be set accordingly.
%nprocshared=24
%mem=70000MB
...
#p ... maxdisk=100GB
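For the four-run variant shown further below, these parameters have to be reduced accordingly so that all four runs together stay within the memory requested for the job; the following values are only an assumed illustration, not taken from a tested input file:
%nprocshared=12
%mem=40000MB
...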
NUMA Aspects
With the typical NUMA architecture of such multi-core nodes it is important to place the individual program runs carefully - e.g. one program run per NUMA node. The NUMA characteristics of a node can be displayed with numactl -H:
numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 6 7 8 12 13 14 18 19 20
node 0 size: 47820 MB
node 0 free: 37007 MB
node 1 cpus: 3 4 5 9 10 11 15 16 17 21 22 23
node 1 size: 49152 MB
node 1 free: 41 MB
node 2 cpus: 24 25 26 30 31 32 36 37 38 42 43 44
node 2 size: 49152 MB
node 2 free: 47613 MB
node 3 cpus: 27 28 29 33 34 35 39 40 41 45 46 47
node 3 size: 49152 MB
node 3 free: 47554 MB
node distances:
node 0 1 2 3
0: 10 11 21 21
1: 11 10 21 21
2: 21 21 10 11
3: 21 21 11 10
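The node distance table shows that NUMA nodes 0 and 1 are close to each other (distance 11), as are nodes 2 and 3, while the distance between the two pairs is considerably larger (21). This is why each of the two 24-thread runs in the example above is bound to one of these node pairs. The effect of such a binding can be verified by running numactl -show under the corresponding binding: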
numactl --cpubind=0,1 --membind=0,1 -- numactl -show
policy: bind
preferred node: 0
physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
cpubind: 0 1
nodebind: 0 1
membind: 0 1
numactl --cpubind=2,3 --membind=2,3 -- numactl -show
policy: bind
preferred node: 2
physcpubind: 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
cpubind: 2 3
nodebind: 2 3
membind: 2 3
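The same approach can also be used to start four program runs with 12 threads each, one per NUMA node, as in the following job script.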
#!/usr/local_rwth/bin/zsh
#SBATCH --job-name=run4x12
#SBATCH --output=%j.log
#SBATCH --error=%j.err
#SBATCH --time=00-01:00:00
#SBATCH --mem=180G
### exclusive usage of a single node
#SBATCH --exclusive
### use all cores of one node, one thread per core
#SBATCH --ntasks=1 --nodes=1
#SBATCH --cpus-per-task=48
#SBATCH --threads-per-core=1
### prepare your environment for running gaussian
module load CHEMISTRY gaussian
### make sure this environment variable points to a suitable location
### here the gaussian module allocates the scratch directory
echo $GAUSS_SCRDIR
### adjust working directory and input file names and output directory names
export WDIR=....
export INP1=small1.inp12
export INP2=small2.inp12
export INP3=small3.inp12
export INP4=small4.inp12
export OUT1=run1
export OUT2=run2
export OUT3=run3
export OUT4=run4
### this is not necessary in the case of Gaussian program runs
### but it may be important in other cases
export OMP_NUM_THREADS=12
### the program will run in $WDIR/$SLURM_JOB_ID/$OUTx
### Scratch files will be put in $GAUSS_SCRDIR/$SLURM_JOB_ID/$OUTx
### Input files are assumed to be in $WDIR/$INPx
mkdir -p $WDIR/$SLURM_JOB_ID/$OUT1
mkdir -p $WDIR/$SLURM_JOB_ID/$OUT2
mkdir -p $WDIR/$SLURM_JOB_ID/$OUT3
mkdir -p $WDIR/$SLURM_JOB_ID/$OUT4
mkdir -p $GAUSS_SCRDIR/$SLURM_JOB_ID/$OUT1
mkdir -p $GAUSS_SCRDIR/$SLURM_JOB_ID/$OUT2
mkdir -p $GAUSS_SCRDIR/$SLURM_JOB_ID/$OUT3
mkdir -p $GAUSS_SCRDIR/$SLURM_JOB_ID/$OUT4
### display NUMA characteristics
numactl -H
numactl --cpubind=0 --membind=0 -- numactl -show
numactl --cpubind=1 --membind=1 -- numactl -show
numactl --cpubind=2 --membind=2 -- numactl -show
numactl --cpubind=3 --membind=3 -- numactl -show
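### start the four runs asynchronously, each bound to its own NUMA node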
( cd $WDIR/$SLURM_JOB_ID/$OUT1; \
export GAUSS_SCRDIR=$GAUSS_SCRDIR/$SLURM_JOB_ID/$OUT1; \
numactl --cpubind=0 --membind=0 -- timex g09 < ../../$INP1 > g09.out ) &
pid1=$!
( cd $WDIR/$SLURM_JOB_ID/$OUT2; \
export GAUSS_SCRDIR=$GAUSS_SCRDIR/$SLURM_JOB_ID/$OUT2; \
numactl --cpubind=1 --membind=1 -- timex g09 < ../../$INP2 > g09.out ) &
pid2=$!
( cd $WDIR/$SLURM_JOB_ID/$OUT3; \
export GAUSS_SCRDIR=$GAUSS_SCRDIR/$SLURM_JOB_ID/$OUT3; \
numactl --cpubind=2 --membind=2 -- timex g09 < ../../$INP3 > g09.out ) &
pid3=$!
( cd $WDIR/$SLURM_JOB_ID/$OUT4; \
export GAUSS_SCRDIR=$GAUSS_SCRDIR/$SLURM_JOB_ID/$OUT4; \
numactl --cpubind=3 --membind=3 -- timex g09 < ../../$INP4 > g09.out ) &
pid4=$!
wait $pid1 $pid2 $pid3 $pid4
Links and more Information
t.b.a.