SLURM

#SBATCH Usage

If you are writing a jobscript for a SLURM batch system, the magic cookie is "#SBATCH". To use it, start a new line in your script with "#SBATCH", followed by one of the parameters shown below. Anything written in angle brackets (<...>) is a placeholder that has to be replaced with an actual value.
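
For example, a jobscript line that sets the job name to the (purely illustrative) value "myjob" would look like this:

#SBATCH --job-name=myjob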

Basic settings:

Parameter            Function
--job-name=<name>    job name
--output=<path>      path to the file where the job (error) output is written

Requesting resources:

Parameter            Function
--time=<runlimit>    runtime limit in the format hours:min:sec; once the time specified is up, the job will be killed by the scheduler
--mem=<memlimit>     job memory request, usually an integer followed by a prefix for the unit (e.g. --mem=1G for 1 GB)
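
Put together, the header of a simple jobscript using the basic settings and resource requests above might look like this; all values here are made up, and complete examples follow in the Jobscript Examples section below:

#!/bin/bash
#SBATCH --job-name=testjob
#SBATCH --output=testjob.out
#SBATCH --time=00:10:00
#SBATCH --mem=1G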

Parallel programming:

Settings for OpenMP:

Parameter                              Function
--nodes=1                              start a parallel job for a shared-memory system on only one node
--cpus-per-task=<num_threads>          number of threads to execute the OpenMP application with
--ntasks-per-core=<num_hyperthreads>   number of hyperthreads per core; any value greater than 1 turns on hyperthreading (the possible maximum depends on your CPU)
--ntasks-per-node=1                    for OpenMP, use one task per node only

Settings for MPI:

Parameter                         Function
--nodes=<num_nodes>               start a parallel job for a distributed-memory system on several nodes
--cpus-per-task=1                 for MPI, use one CPU per task (i.e. one CPU per MPI process)
--ntasks-per-core=1               disable hyperthreading
--ntasks-per-node=<num_procs>     number of processes per node (the possible maximum depends on your nodes)


Job Submission

This command submits the job you defined in your jobscript to the batch system:

$ sbatch jobscript.sh
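
If the submission succeeds, sbatch answers with the id that was assigned to your job, along the lines of the following (the job id shown here is made up):

Submitted batch job 4711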

Just like any other incoming job, your job will first be queued. The scheduler then decides when it will be run; the more resources your job requires, the longer it may have to wait in the queue before it starts.

You can check the current status of your submitted jobs and their job ids with the following shell command. A job is typically either pending (shown as "PD" in the ST column) or running ("R"). The command also prints the time (hours:min:sec) that each running job has been running for.

$ squeue -u <user_id>
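
The output looks roughly like this (job ids, partition name and node names are made up):

JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
 4711     batch    MYJOB   usr001  R       5:02      1 node042
 4712     batch   MPIJOB   usr001 PD       0:00      2 (Priority)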

In case you submitted a job by accident, or realised that your job might not be running correctly, you can always remove it from the queue or cancel it while it is running by typing:

$ scancel <job_id>
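
If you want to get rid of all of your own jobs at once, scancel also accepts a user id instead of a single job id:

$ scancel -u <user_id>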


Jobscript Examples

This serial job will run a given executable, in this case "myapp.exe".

#!/bin/bash

### Job name
#SBATCH --job-name=MYJOB

### File for the output
#SBATCH --output=MYJOB_OUTPUT

### Time your job needs to execute, e. g. 15 min 30 sec
#SBATCH --time=00:15:30

### Memory your job needs, e. g. 1 GB
#SBATCH --mem=1G

### The last part consists of regular shell commands:
### Change to working directory
cd /home/usr/workingdirectory

### Execute your application
myapp.exe

If you'd like to run a parallel program on a cluster that is managed by SLURM, you have to say so explicitly: launch the program with "srun <my_executable>" in your jobscript instead of calling the executable directly.

This OpenMP job will start the parallel program "myapp.exe" with 24 threads.

#!/bin/bash

### Job name
#SBATCH --job-name=OMPJOB

### File for the output
#SBATCH --output=OMPJOB_OUTPUT

### Time your job needs to execute, e. g. 30 min
#SBATCH --time=00:30:00

### Memory your job needs, e. g. 500 MB
#SBATCH --mem=500M

### Use one node for parallel jobs on shared-memory systems
#SBATCH --nodes=1

### Number of threads to use, e. g. 24
#SBATCH --cpus-per-task=24

### Number of hyperthreads per core
#SBATCH --ntasks-per-core=1

### Tasks per node (for shared-memory parallelisation, use 1)
#SBATCH --ntasks-per-node=1

### The last part consists of regular shell commands:
### Set the number of threads in your cluster environment to the value specified above
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

### Change to working directory
cd /home/usr/workingdirectory

### Run your parallel application
srun myapp.exe

This MPI job will start the parallel program "myapp.exe" with 12 processes.

#!/bin/bash

### Job name
#SBATCH --job-name=MPIJOB

### File for the output
#SBATCH --output=MPIJOB_OUTPUT

### Time your job needs to execute, e. g. 50 min
#SBATCH --time=00:50:00

### Memory your job needs, e. g. 250 MB
#SBATCH --mem=250M

### Use more than one node for parallel jobs on distributed-memory systems, e. g. 2
#SBATCH --nodes=2

### Number of CPUS per task (for distributed-memory parallelisation, use 1)
#SBATCH --cpus-per-task=1

### Disable hyperthreading by setting the tasks per core to 1
#SBATCH --ntasks-per-core=1

### Number of processes per node, e. g. 6 (6 processes on 2 nodes = 12 processes in total)
#SBATCH --ntasks-per-node=6

### The last part consists of regular shell commands:
### Set the number of threads in your cluster environment to 1, as specified above
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

### Change to working directory
cd /home/usr/workingdirectory

### Run your parallel application
srun myapp.exe

