SLURM

From HPC Wiki
Revision as of 07:19, 29 March 2018

#SBATCH Usage

If you are writing a job script for a SLURM batch system, the magic cookie is "#SBATCH". To use it, start a new line in your script with "#SBATCH", followed by one of the parameters shown below; replace the word written in <...> with an actual value.

Basic settings:

Parameter | Function
--job-name=<name> | job name
--output=<path> | path to the file where the job (error) output is written
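As a minimal sketch of how these settings appear in a job script, the following example combines them with a simple command; the job name, output file name, and the echo command are placeholder values chosen for illustration, and %j is SLURM's placeholder for the job ID in output file names.

#!/bin/bash
#SBATCH --job-name=example_job              # job name shown in the queue (placeholder)
#SBATCH --output=example_job.%j.out         # file for the job (error) output; %j expands to the job ID

# everything after the #SBATCH block is executed as a normal shell script
echo "Hello from $(hostname)"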

Requesting resources:

Parameter | Function
--time=<runlimit> | runtime limit in the format hours:min:sec; once the specified time is up, the job will be killed by the scheduler
--mem=<memlimit> | job memory request, e.g. 1gb
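Adding the resource requests to the basic settings gives a job script header like the sketch below; the ten-minute run limit, the 1 GB memory request, and ./my_program are arbitrary example values, not recommendations.

#!/bin/bash
#SBATCH --job-name=resource_demo            # placeholder job name
#SBATCH --output=resource_demo.%j.out       # output file (%j = job ID)
#SBATCH --time=00:10:00                     # runtime limit: 10 minutes (hours:min:sec)
#SBATCH --mem=1gb                           # request 1 GB of memory for the whole job

./my_program                                # placeholder for your actual application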

Parallel programming (see the Parallel Programming article for more details):

Settings for OpenMP:

Parameter | Function
--nodes=1 | start a parallel job for a shared-memory system on only one node
--cpus-per-task=<num_threads> | number of threads to execute the OpenMP application with
--ntasks-per-core=<num_hyperthreads> | number of hyperthreads per core; any value greater than 1 turns on hyperthreading (the possible maximum depends on your CPU)
--ntasks-per-node=1 | for OpenMP, use one task per node only
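Put together, an OpenMP job script could look like the following sketch; the thread count of 8 and the binary name ./my_openmp_app are placeholders, and exporting OMP_NUM_THREADS from SLURM_CPUS_PER_TASK is a common convention rather than a strict requirement.

#!/bin/bash
#SBATCH --job-name=omp_job                  # placeholder job name
#SBATCH --output=omp_job.%j.out             # output file (%j = job ID)
#SBATCH --time=00:30:00                     # example runtime limit
#SBATCH --nodes=1                           # shared-memory job: one node only
#SBATCH --ntasks-per-node=1                 # one task per node for OpenMP
#SBATCH --cpus-per-task=8                   # example: 8 OpenMP threads

# tell the OpenMP runtime to use as many threads as CPUs were requested
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./my_openmp_app                             # placeholder for the actual OpenMP binary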

Settings for MPI:

Parameter | Function
--nodes=<num_nodes> | start a parallel job for a distributed-memory system on several nodes
--cpus-per-task=1 | for MPI, use one task per CPU
--ntasks-per-core=1 | disable hyperthreading
--ntasks-per-node=<num_procs> | number of processes per node (the possible maximum depends on your nodes)
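A corresponding sketch for an MPI job is shown below; the node and process counts are example values, ./my_mpi_app is a placeholder, and launching via srun is one common approach (some clusters use mpirun or mpiexec instead).

#!/bin/bash
#SBATCH --job-name=mpi_job                  # placeholder job name
#SBATCH --output=mpi_job.%j.out             # output file (%j = job ID)
#SBATCH --time=01:00:00                     # example runtime limit
#SBATCH --nodes=4                           # example: distributed-memory job on 4 nodes
#SBATCH --ntasks-per-node=24                # example: 24 MPI processes per node
#SBATCH --cpus-per-task=1                   # one CPU per MPI task
#SBATCH --ntasks-per-core=1                 # disable hyperthreading

# start one MPI process per requested task
srun ./my_mpi_app                           # placeholder for the actual MPI binary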


References

SLURM sbatch documentation