Hybrid Slurm Job

From HPC Wiki
[[Slurm|SLURM]] is a popular workload manager / job scheduler.
 
Here you can find an example job script that launches a program parallelized with both MPI and OpenMP at the same time.

You may find the toy program below useful to get started.
 
 

Revision as of 17:44, 20 March 2019


== Slurm Job Script ==

This hybrid MPI+OpenMP job will start the [[Parallel_Programming|parallel program]] "hello.exe" with 4 MPI processes and 3 OpenMP threads each on 2 compute nodes.

<syntaxhighlight lang="bash">
#!/bin/zsh

### Job name
#SBATCH --job-name=HelloHybrid

### 2 compute nodes
#SBATCH --nodes=2

### 4 MPI ranks
#SBATCH --ntasks=4

### 2 MPI ranks per node
#SBATCH --ntasks-per-node=2

### 3 OpenMP threads per MPI rank
#SBATCH --cpus-per-task=3

### the number of OpenMP threads
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

### Change to working directory
cd /home/usr/workingdirectory

### Run your parallel application
srun hello.exe
</syntaxhighlight>
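As a quick sanity check on the geometry requested above, the arithmetic can be sketched in plain shell. The <code>SLURM_*</code> values below are hard-coded to mirror what Slurm would export inside this job; in a real allocation Slurm sets them for you.

```shell
# Hard-coded stand-ins for the variables Slurm exports inside the job above
SLURM_NNODES=2          # from --nodes=2
SLURM_NTASKS=4          # from --ntasks=4
SLURM_CPUS_PER_TASK=3   # from --cpus-per-task=3

# One OpenMP team of this size per MPI rank
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

echo "ranks per node: $((SLURM_NTASKS / SLURM_NNODES))"
echo "total cores:    $((SLURM_NTASKS * SLURM_CPUS_PER_TASK))"
```

With 2 ranks per node and 3 threads each, every node must provide at least 6 cores for this job.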

== Hybrid Fortran Toy Program ==

You can use this hybrid Fortran 90 toy program to test the above job script:

<syntaxhighlight lang="fortran">
program hello
   use mpi
   use omp_lib

   integer rank, size, ierror, resultlen, threadid
   character*(MPI_MAX_PROCESSOR_NAME) name

   call MPI_INIT(ierror)
   call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierror)
   call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
   call MPI_GET_PROCESSOR_NAME(name, resultlen, ierror)

!$omp parallel private(threadid)
   threadid = omp_get_thread_num()
   print*, 'node: ', trim(name), '  rank:', rank, ', thread_id:', threadid
!$omp end parallel

   call MPI_FINALIZE(ierror)

end program
</syntaxhighlight>
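To build "hello.exe" from this source, compile with your MPI compiler wrapper and OpenMP enabled. The wrapper name (<code>mpif90</code> here, as shipped by Open MPI and most MPICH-based stacks) and the OpenMP flag differ between toolchains, so treat this as a sketch rather than a cluster-specific recipe.

```shell
# Compile the hybrid toy program (assumes a GCC-based MPI wrapper;
# with Intel compilers use -qopenmp instead of -fopenmp)
mpif90 -fopenmp hello.f90 -o hello.exe
```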

== Job Output Example ==

When sorted, the program output may look like this:

<syntaxhighlight lang="text">
 node: ncm1018.hpc.itc.rwth-aachen.de  rank:           0 , thread_id:           0
 node: ncm1018.hpc.itc.rwth-aachen.de  rank:           0 , thread_id:           1
 node: ncm1018.hpc.itc.rwth-aachen.de  rank:           0 , thread_id:           2
 node: ncm1018.hpc.itc.rwth-aachen.de  rank:           1 , thread_id:           0
 node: ncm1018.hpc.itc.rwth-aachen.de  rank:           1 , thread_id:           1
 node: ncm1018.hpc.itc.rwth-aachen.de  rank:           1 , thread_id:           2
 node: ncm1019.hpc.itc.rwth-aachen.de  rank:           2 , thread_id:           0
 node: ncm1019.hpc.itc.rwth-aachen.de  rank:           2 , thread_id:           1
 node: ncm1019.hpc.itc.rwth-aachen.de  rank:           2 , thread_id:           2
 node: ncm1019.hpc.itc.rwth-aachen.de  rank:           3 , thread_id:           0
 node: ncm1019.hpc.itc.rwth-aachen.de  rank:           3 , thread_id:           1
 node: ncm1019.hpc.itc.rwth-aachen.de  rank:           3 , thread_id:           2
</syntaxhighlight>
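The grouped listing above is simply the raw job output run through <code>sort</code>: the 12 threads print in arbitrary interleaved order, and a plain lexicographic sort regroups the lines by node, rank, and thread id. The sample lines below stand in for the contents of your job's actual output file (hypothetical names; your scheduler decides the real file name).

```shell
# Three interleaved sample lines standing in for the real job output;
# sort groups them by node, then rank, then thread id
printf 'node: n2  rank: 1 , thread_id: 0\nnode: n1  rank: 0 , thread_id: 1\nnode: n1  rank: 0 , thread_id: 0\n' | sort
```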