SLURM

From Mufasa (BioHPC)
Revision as of 09:51, 27 November 2025 by GiulioFontana

This page presents the features of SLURM that are most relevant to Mufasa's Job Users. Job Users can submit jobs for execution, cancel their own jobs, and see other users' jobs (but not intervene on them).

Users of Mufasa must use SLURM to run resource-heavy processes, i.e. computing jobs that require one or more of the following:

  • GPUs
  • multiple CPUs
  • powerful CPUs
  • a significant amount of RAM

In fact, only processes run via SLURM have access to all the resources of Mufasa. Processes run outside SLURM are executed by the login server virtual machine, which has minimal resources and no GPUs. Using SLURM is therefore the only way to execute resource-heavy jobs on Mufasa (this is a key difference between Mufasa 1.0 and Mufasa 2.0).

SLURM in a nutshell

Computation jobs on Mufasa need to be launched via SLURM. SLURM provides jobs with access to the physical resources of Mufasa, such as CPUs, GPUs and RAM. Thanks to SLURM, processing jobs share system resources, optimising their occupation and availability.

When a user runs a job, the job does not get executed immediately and is instead queued. SLURM executes jobs according to their order in the queue: the top job in the queue gets executed as soon as the necessary resources are available, while jobs lower in the queue wait longer. The position of a job in the queue is determined by the priority assigned to it by SLURM, with higher-priority jobs closer to the top. As a general rule:

the greater the fraction of Mufasa's overall resources that a job asks for, the lower its priority will be.

The priority mechanism is used to encourage users to use Mufasa's resources in an effective and equitable manner. This page includes a chart explaining how to maximise the priority of your jobs.

The time available to a job for its execution is controlled by SLURM. When a user requests execution of a job, they must specify the duration of the time slot that the job needs. The job must complete its execution before the end of the requested time slot, otherwise it gets killed by SLURM.

In Mufasa 2.0 access to system resources is managed via SLURM's Quality of Service (QOS) mechanism (Mufasa 1.0 used partitions instead). To launch a processing job via SLURM, the user must always specify the chosen QOS. QOSes differ in the set of resources that they provide access to because each of them is designed to fit a given type of job.
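For instance, a hypothetical srun invocation specifying a QOS and a time slot could look like the following (the script name, GPU request and duration are illustrative, not recommended values):

```shell
# Hypothetical example: run a script under the gpulight QOS with one
# 3g.20gb GPU and a 2-hour time slot (see the QOS and gres sections below)
srun --qos=gpulight --gres=gpu:3g.20gb:1 --time=02:00:00 python my_script.py
```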

SLURM Quality of Service (QOS)

In SLURM, different Quality of Services (QOSes) define different levels of access to the server's resources. SLURM jobs must always specify the QOS that they use: this choice determines what resources the job can access.

The list of Mufasa's QOSes and their main features can be inspected with command

sacctmgr list qos format=name%-11,priority,maxwall,MaxJobsPerUser,maxtres%-80

which provides an output similar to the following:

Name          Priority     MaxWall MaxJobsPU MaxTRES                                                                          
----------- ---------- ----------- --------- -------------------------------------------------------------------------------- 
normal               0                                                                                                        
nogpu                4  3-00:00:00         1 cpu=16,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,gres/gpu=0,mem=128G 
gpuheavy-20          1                     1 cpu=8,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=2,mem=128G             
gpuheavy-40          1                     1 cpu=8,gres/gpu:3g.20gb=0,gres/gpu:40gb=1,gres/gpu:4g.20gb=0,mem=128G             
gpulight             8    12:00:00         1 cpu=2,gres/gpu:3g.20gb=1,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,mem=64G              
gpu                  2  1-00:00:00         1 cpu=8,gres/gpu:3g.20gb=1,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,mem=64G              
gpuwide              2  1-00:00:00         2 cpu=8,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=1,mem=64G              
build               32    02:00:00         1 cpu=2,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,gres/gpu=0,mem=16G

The columns of this output are the following:

Name
name of the QOS
Priority
priority tier associated to the QOS: see Job priority for details
MaxJobsPU
maximum number of jobs from a single user that can be running with this QOS
(note that there are also other limitations on the number of running jobs by the same user)
MaxWall
maximum wall clock duration of the jobs using the QOS (after which they are killed by SLURM), in format [days-]hours:minutes:seconds
For QOSes gpuheavy-20 and gpuheavy-40 MaxWall is not set because it is determined by the partition. Partitions also define the default duration.
MaxTRES
maximum amount of resources ("Trackable RESources") available to a job using the QOS, where
cpu=K means that the maximum number of CPUs (i.e., processor cores) is K
--> if not specified, the job gets the default amount of CPUs specified by the partition
gres/gpu:Type=K means that the maximum number of GPUs of class Type (see gres syntax) is K
--> (for QOSes that allow access to GPUs) if not specified, the job cannot be launched
mem=KG means that the maximum amount of system RAM is K GBytes
--> if not specified, the job gets the default amount of RAM specified by the partition

For instance, QOS gpulight provides jobs that use it with:

  • priority tier 8
  • a maximum of 1 running job per user
  • a maximum of 12 hours of duration
  • a maximum of 2 CPUs
  • a maximum of 64 GB of RAM
  • access to a maximum of 1 GPU of type gpu:3g.20gb
  • no access to GPUs of type gpu:40gb
  • no access to GPUs of type gpu:4g.20gb

As seen in the example output from sacctmgr list qos above, each QOS has an associated priority tier. As a rule, the more powerful (i.e., the richer in resources) a QOS is, the lower the priority of the jobs that use it. See Priority to understand how priority affects the execution order of jobs in Mufasa 2.0.

The normal QOS is the one applied to jobs if no QOS is specified. normal provides no access at all to Mufasa's resources, so it is always necessary to specify a QOS (different from normal) when running a job via SLURM.

The build QOS

This QOS is specifically designed to be used by Mufasa users to build container images. Its associated priority tier is very high, to allow SLURM jobs launched using this QOS to be executed quickly. On the other hand, this QOS has very limited resources (though fully sufficient for building operations), no access to GPUs, and a short maximum duration for jobs: it is therefore not suitable for other computing activities.
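A hypothetical build command could look like the following (the image and definition file names are placeholders):

```shell
# Hypothetical example: build a Singularity image under the build QOS;
# my_image.sif and my_recipe.def are placeholder names
srun --qos=build singularity build my_image.sif my_recipe.def
```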

See Building Singularity images for directions about building Singularity container images.

QOS restrictions

Some of the QOSes are not available to M.Sc. students. See Research users and student users to understand the differences between the two categories of users of Mufasa and to find out what access you have to resources.

Resources available to a QOS

The maximum amount of resources that a QOS has access to (available to the running jobs using the QOS, collectively) can be inspected with command

sacctmgr list qos format=name%-11,grpTRES%-34

which provides an output similar to

Name        GrpTRES                            
----------- ---------------------------------- 
normal                                         
nogpu       cpu=48,mem=384G                    
gpuheavy-20 cpu=56,gres/gpu:4g.20gb=4,mem=896G 
gpuheavy-40 cpu=56,gres/gpu:40gb=3,mem=896G    
gpulight    cpu=8,gres/gpu:3g.20gb=4,mem=256G  
gpu         cpu=24,gres/gpu:3g.20gb=3,mem=192G 
gpuwide     cpu=40,gres/gpu:4g.20gb=5,mem=320G 
build       cpu=4,mem=32G

Note how the overall resources associated with the full set of QOSes greatly exceed the physically available resources. With SLURM, multiple QOSes can be given access to the same physical resource (e.g., a CPU or a GPU), because SLURM guarantees that the overall request for resources from all running jobs never exceeds the overall availability of resources in the system. SLURM will only execute a job if all the resources it requests are not already in use at the time of the request.

Research users and student users

Users of Mufasa belong to two categories, each providing a different level of access to system resources.

The categories are:

Research users, i.e. academic personnel and Ph.D. students
* have access to all QOSes
* their jobs have a higher base priority
* they can have a higher number of running jobs
* they can have a higher number of queued jobs
Student users, i.e. M.Sc. students
* do not have access to some QOSes
* their jobs have a lower base priority
* they can have a lower number of running jobs
* they can have a lower number of queued jobs

You can inspect the differences between researcher and student users with command

sacctmgr list association format="user,priority,maxjobs,maxsubmit,qos%-60" | grep -E 'Priority|research|students'

which provides an output similar to the following:

   Account   Priority MaxJobs MaxSubmit QOS
  research          4       2         4 build,gpu,gpuheavy-20,gpuheavy-40,gpulight,gpuwide,nogpu
  students          1       1         2 build,gpu,gpulight,gpuwide,nogpu

This example output shows that the differences between research users and student users are the following:

  • base priority is 4 for jobs run by research users, while it is 1 for jobs run by student users
  • the maximum number of running jobs is 2 for research users, while it is 1 for student users
  • the maximum number of queued jobs (i.e., of jobs submitted to SLURM for execution but not yet running) is 4 for research users, while it is 2 for student users
  • research users can access all QOSes, while student users cannot access QOSes gpuheavy-20 and gpuheavy-40

You can inspect your own level of access to Mufasa's resources with

sacctmgr list association format="user,priority,maxjobs,maxsubmit,qos%-60" | grep -E "User|<your_username>"

which provides an output similar to the following:

      User   Priority MaxJobs MaxSubmit QOS                                                          
    preali          4       2         4 build,gpu,gpuheavy-20,gpuheavy-40,gpulight,gpuwide,nogpu

Job priority

Once the execution of a job has been requested, the job is not run immediately: it is instead queued by SLURM, together with all the other jobs awaiting execution. The job on top of the queue at any time is the first to be put into execution as soon as the resources it requires are available. The order of the jobs in the queue depends on the priority of the jobs, and defines the order in which each job will reach execution.

SLURM is configured to maximise resource availability, i.e. to ensure the shortest possible wait time before job execution.

To achieve this goal, SLURM encourages users to avoid asking for resources or execution time that their job does not need. The more resources and the more time a job requests, the lower its priority in the execution queue will be.

This mechanism creates a virtuous cycle. By carefully choosing what to ask for, a user ensures that their job will be executed as soon as possible; at the same time, users limiting their requests to what their jobs really need leave more resources available to other jobs in the queue, which will then be executed sooner.

Elements determining job priority

In Mufasa, the priority of a job is computed by SLURM according to the following elements:

User category (i.e., researcher or M.Sc. student)
Used to provide higher priority to jobs run by research personnel
QOS used by the job
Used to provide higher priority to jobs requesting access to fewer system resources
Number of CPUs requested by the job (also called "job size")
Used to provide higher priority to jobs requiring fewer CPUs
Job duration, i.e. the execution time requested by the job
Used to provide higher priority to shorter jobs
Job Age, i.e. the time that the job has been waiting in the queue
Used to provide higher priority to jobs which have been queued for a longer time
FairShare, i.e. a factor computed by SLURM to balance use of the system by different users
Used to provide higher priority to jobs by users who use less resources (CPUs, GPUs, RAM, execution time)
FairShare has a "fading memory", i.e. the influence of past resource usage gets lower the farther it is from now

How to maximise the priority of your jobs

Every time you run a SLURM job, follow these guidelines:

Choose the least powerful QOS compatible with the needs of your job
QOSes with access to fewer resources lead to higher priority
Only request CPUs that your job will actually use
Unless you specifically designed your code to exploit multiple CPUs, check whether it actually does; if it doesn't, do not ask for them
Do not request more time than your job needs to complete
Make a worst-case estimate and only ask for that duration
Test and debug your code using less powerful QOSes before running it on more powerful QOSes
Your test jobs will get a higher priority and your FairShare will improve
Cancel jobs when you don't need them anymore
Use scancel to delete your jobs when finished, or if they become useless (e.g., due to a bug)
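A minimal sketch of this last step (the job ID shown is a placeholder):

```shell
# List your own queued and running jobs to find their IDs
# (with older SLURM versions, use: squeue -u $USER)
squeue --me
# Cancel a specific job by its ID (12345 is a placeholder)
scancel 12345
# Or cancel all of your own jobs at once
scancel --user=$USER
```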

System resources subjected to limitations

In systems based on SLURM like Mufasa, TRES (Trackable RESources) are (from SLURM's documentation) "resources that can be tracked for usage or used to enforce limits against."

TRES include CPUs, RAM and GRES. The last term stands for Generic RESources that a job may need for its execution. In Mufasa, the only gres resources are the GPUs.

gres syntax

To ask SLURM to assign GRES resources (i.e., GPUs) to a job, a special syntax must be used. Precisely, the name of each GPU resource takes the form

gpu:Type

Considering the GPU complement of Mufasa, the available GPU resources are the following:

  • gpu:40gb for GPUs with 40 Gbytes of RAM
  • gpu:4g.20gb for GPUs with 20 Gbytes of RAM and 4 compute units
  • gpu:3g.20gb for GPUs with 20 Gbytes of RAM and 3 compute units

So, for instance,

gpu:3g.20gb

identifies a resource corresponding to a GPU with 20 GB of RAM and 3 compute units.

When asking for a GRES resource (e.g., in an srun command or an SBATCH directive of an execution script), the syntax required by SLURM is

gpu:<Type>:<Quantity>

where Quantity is an integer value specifying how many items of the resource are requested. So, for instance, to ask for 2 GPUs of type 4g.20gb the syntax is

gpu:4g.20gb:2
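For instance, a hypothetical srun command using this request could be the following (gpuheavy-20 is chosen because, as shown in the QOS table above, its MaxTRES allows up to 2 GPUs of type 4g.20gb; the script name is a placeholder):

```shell
# Hypothetical example: request 2 GPUs of type 4g.20gb via --gres
srun --qos=gpuheavy-20 --gres=gpu:4g.20gb:2 python train.py
```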

SLURM's generic resources are defined in /etc/slurm/gres.conf. In order to make GPUs available to SLURM's gres management, Mufasa makes use of Nvidia's NVML library. For additional information see SLURM's documentation.

Looking for unused GPUs

GPUs are usually the most limited resource on Mufasa. So, if your job requires a GPU, the best way to get it executed quickly is to use a QOS associated with a type of GPU of which one or more units are currently unused. This command

sinfo -O Gres:100

provides a summary of all the Gres (i.e., GPU) resources possessed by Mufasa. It provides an output similar to the following:

GRES                                                                                                
gpu:40gb:3,gpu:4g.20gb:5,gpu:3g.20gb:5

To know which of the GPUs are currently in use, use command

sinfo -O GresUsed:100

which provides an output similar to this:

GRES_USED
gpu:40gb:2(IDX:0-1),gpu:4g.20gb:2(IDX:5,8),gpu:3g.20gb:3(IDX:3-4,6)

By comparing the two lists (GRES and GRES_USED) you can easily spot unused GPUs.
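A small shell sketch of this comparison, using the two sample outputs above as fixed strings (on Mufasa you would fill total and used live, e.g. with total=$(sinfo -h -O Gres:100) and used=$(sinfo -h -O GresUsed:100)):

```shell
# Sample strings copied from the sinfo outputs above
total="gpu:40gb:3,gpu:4g.20gb:5,gpu:3g.20gb:5"
used="gpu:40gb:2(IDX:0-1),gpu:4g.20gb:2(IDX:5,8),gpu:3g.20gb:3(IDX:3-4,6)"

# Print the number of unused GPUs of a given type
free_gpus() {
  local t u
  t=$(echo "$total" | grep -o "gpu:$1:[0-9]*" | cut -d: -f3)
  u=$(echo "$used"  | grep -o "gpu:$1:[0-9]*" | cut -d: -f3)
  echo $(( ${t:-0} - ${u:-0} ))
}

for type in 40gb 4g.20gb 3g.20gb; do
  echo "gpu:$type unused: $(free_gpus "$type")"
done
```

With the sample strings above, this reports 1 unused 40gb GPU, 3 unused 4g.20gb GPUs and 2 unused 3g.20gb GPUs.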

SLURM partitions

Partitions are another mechanism provided by SLURM to create different levels of access to system resources. Since in Mufasa 2.0 access to resources is controlled via QOSes, partitions are not very relevant.

Note, however, that the default values for some features of SLURM jobs (e.g., duration) are defined by the partition.

In Mufasa 2.0, there is a single SLURM partition, called jobs, and all jobs run on it. The partition status of Mufasa can be inspected with

sinfo -o "%10P %5a %9T %11L %10l"

which provides an output similar to the following:

PARTITION  AVAIL STATE     DEFAULTTIME TIMELIMIT 
jobs*      up    idle      1:00:00     3-00:00:00

The columns in the standard output of sinfo shown above correspond to the following information:

PARTITION
name of the partition; the asterisk indicates that it is the default one
AVAIL
state/availability of the partition: see below
STATE
state of the node (using these codes); typical values are idle - meaning that all of the resources of the node are free, mixed - meaning that some of the resources of the node are busy executing jobs while others are idle, and allocated - meaning that all of the resources of the node are busy
DEFAULTTIME
default runtime of a job, in format [days-]hours:minutes:seconds
TIMELIMIT
maximum runtime of a job allowed by the partition, in format [days-]hours:minutes:seconds

The asterisk at the end of the partition name indicates the default partition, i.e. the one on which jobs which do not ask for a specific partition are run.

Partition availability

The most important information that sinfo provides is the availability (also called state) of partitions. This is shown in column "AVAIL". Possible partition states are:

up = the partition is available
It's possible to launch jobs on the partition
Currently running jobs will be completed
Currently queued jobs will be executed as soon as resources allow
drain = the partition is in the process of becoming unavailable (i.e., to go in the down state)
It's not possible to launch jobs on the partition
Currently running jobs will be completed
Queued jobs will be executed when the partition becomes available again (i.e. goes back to the up state)
down = the partition is unavailable
It's not possible to launch jobs on the partition
There are no running jobs
Queued jobs will be executed when the partition becomes available again (i.e. goes back to the up state)

When a partition goes from up to drain no harm is done to running jobs. When a partition passes from any other state to down, running jobs (if they exist) get killed. A partition in state drain or down requires intervention by a Job Administrator to be restored to up.

Default values

The features of SLURM partitions can be inspected with

scontrol show partition

which provides an output similar to this:

PartitionName=jobs
   AllowGroups=ALL AllowAccounts=ALL AllowQos=nogpu,gpulight,gpu,gpuwide,gpuheavy-20,gpuheavy-40
   AllocNodes=ALL Default=YES QoS=N/A
-> DefaultTime=01:00:00 DisableRootJobs=NO ExclusiveUser=NO ExclusiveTopo=NO GraceTime=0 Hidden=NO
   MaxNodes=UNLIMITED MaxTime=3-00:00:00 MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED MaxCPUsPerSocket=UNLIMITED
   Nodes=gn01
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO
   OverTimeLimit=NONE PreemptMode=OFF
   State=UP TotalCPUs=48 TotalNodes=1 SelectTypeParameters=NONE
   JobDefaults=(null)
-> DefMemPerNode=4096 MaxMemPerNode=UNLIMITED
   TRES=cpu=48,mem=1011435M,node=1,billing=49,gres/gpu=13,gres/gpu:3g.20gb=5,gres/gpu:40gb=3,gres/gpu:4g.20gb=5
   TRESBillingWeights=cpu=1.0,gres/gpu:3g.20gb=6.0,gres/gpu:4g.20gb=6.0,gres/gpu:40gb=6.0,mem=0.05g

In the example, we have highlighted with "->" the lines most relevant for Mufasa users, i.e. two default values which are applied to jobs that do not make explicit requests. Precisely:

DefaultTime
the default execution time assigned to a job run on the partition (e.g., 1 hour)
DefMemPerNode
the default amount of RAM assigned to a job run on the partition (e.g., 4GB)
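A job that needs more than these defaults must request resources explicitly; a hypothetical sketch (the program name and the requested values are illustrative):

```shell
# Hypothetical example: override the partition defaults (1 hour, 4 GB)
# by explicitly requesting 6 hours and 32 GB of RAM under the nogpu QOS
srun --qos=nogpu --time=06:00:00 --mem=32G ./my_program
```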