#SBATCH Usage
If you are writing a jobscript for a SLURM batch system, the magic cookie is "#SBATCH". To use it, start a new line in your script with "#SBATCH", followed by one of the parameters shown below; replace the word in <...> with an appropriate value (see the example sketches after each table).
Basic settings:
Parameter | Function
--job-name=<name> | job name
--output=<path> | path to the file where the job (error) output is written
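For illustration, a minimal jobscript using these two settings might look as follows (the job name, output path, and command are only placeholders):

 #!/usr/bin/env bash
 # Basic settings: job name and output file (placeholders)
 #SBATCH --job-name=my_test_job
 #SBATCH --output=/path/to/output.log
 
 echo "Job started on $(hostname)"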
Requesting resources:
Parameter | Function
--time=<runlimit> | runtime limit in the format hours:min:sec; once the specified time is up, the job will be killed by the scheduler
--mem=<memlimit> | job memory request, e.g. 1gb
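As a sketch, requesting a 30-minute runtime limit and 1 GB of memory could look like this (the limits and the program name are only examples):

 #!/usr/bin/env bash
 #SBATCH --job-name=resource_demo
 #SBATCH --output=resource_demo.log
 # Runtime limit of 30 minutes and 1 GB of memory (example values)
 #SBATCH --time=00:30:00
 #SBATCH --mem=1gb
 
 ./my_program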
Parallel programming (read more here):
Settings for OpenMP:
Parameter | Function
--nodes=1 | start a parallel job for a shared-memory system on only one node
--cpus-per-task=<num_threads> | number of threads to execute the OpenMP application with
--ntasks-per-core=<num_hyperthreads> | number of hyperthreads per core; i.e. any value greater than 1 will turn on hyperthreading (the possible maximum depends on your CPU)
--ntasks-per-node=1 | for OpenMP, use one task per node only
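A sketch of an OpenMP jobscript along these lines (the thread count and program name are assumptions, not fixed values):

 #!/usr/bin/env bash
 #SBATCH --job-name=openmp_demo
 #SBATCH --output=openmp_demo.log
 # One task on one node, several threads for that task (8 is just an example)
 #SBATCH --nodes=1
 #SBATCH --ntasks-per-node=1
 #SBATCH --cpus-per-task=8
 
 # Tell OpenMP how many threads to use; SLURM sets SLURM_CPUS_PER_TASK
 # when --cpus-per-task is given
 export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
 ./my_openmp_program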
Settings for MPI:
Parameter | Function
--nodes=<num_nodes> | start a parallel job for a distributed-memory system on several nodes
--cpus-per-task=1 | for MPI, use one task per CPU
--ntasks-per-core=1 | disable hyperthreading
--ntasks-per-node=<num_procs> | number of processes per node (the possible maximum depends on your nodes)
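A sketch of an MPI jobscript following these settings (the node and process counts and the program name depend on your system and are only illustrative):

 #!/usr/bin/env bash
 #SBATCH --job-name=mpi_demo
 #SBATCH --output=mpi_demo.log
 # Two nodes with 24 MPI processes each, one process per CPU,
 # hyperthreading disabled (example values)
 #SBATCH --nodes=2
 #SBATCH --ntasks-per-node=24
 #SBATCH --cpus-per-task=1
 #SBATCH --ntasks-per-core=1
 
 # srun launches one MPI rank per task allocated above
 srun ./my_mpi_program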