This page presents the features of SLURM that are most relevant to Mufasa's [[Roles|Job Users]]. Job Users can submit jobs for execution, cancel their own jobs, and see other users' jobs (but not intervene on them).


Users of Mufasa '''must use SLURM''' to run resource-heavy processes, i.e. computing jobs that require one or more of the following:
* GPUs
* multiple CPUs
* powerful CPUs
* a significant amount of RAM


In fact, only processes run via SLURM have access to all the resources of Mufasa. Processes run outside SLURM are executed by the [[System#Login server|login server]] virtual machine, which has minimal resources and no GPUs. Using SLURM is therefore the only way to execute resource-heavy jobs on Mufasa (this is a key difference between Mufasa 1.0 and Mufasa 2.0).


= SLURM in a nutshell =


Computation jobs on Mufasa need to be launched via [[System#The SLURM job scheduling system|SLURM]]. SLURM provides jobs with access to the [[#System resources subjected to limitations|physical resources]] of Mufasa, such as CPUs, GPUs and RAM. Thanks to SLURM, processing jobs share system resources, optimising their occupation and availability.


When a user runs a job, the job does not get executed immediately and is instead ''queued''. SLURM executes jobs according to their order in the queue: the top job in the queue gets executed as soon as the necessary resources are available, while jobs lower in the queue wait longer. The position of a job in the queue depends on the '''[[#Job priority|priority]]''' assigned to it by SLURM, with higher-priority jobs closer to the top. As a general rule:

;: '''the greater the fraction of Mufasa's overall resources that a job asks for, the lower the job's priority will be'''.
 
The priority mechanism is used to encourage users to use Mufasa's resources in an effective and equitable manner. This page includes a [[#How_to_maximise_the_priority_of_your_jobs|chart explaining how to maximise the priority of your jobs]].


The '''time''' available to a job for its execution is controlled by SLURM. When a user requests execution of a job, they must specify the duration of the time slot that the job needs. The job must complete its execution before the end of the requested time slot, otherwise it gets killed by SLURM.


In Mufasa 2.0 access to system resources is managed via SLURM's '''[[#SLURM Quality of Service (QOS)|Quality of Service (QOS)]]''' mechanism (Mufasa 1.0 used [[#SLURM_partitions|partitions]] instead). To launch a processing job via SLURM, the user must always specify the chosen QOS. QOSes differ in the set of resources that they provide access to because each of them is designed to fit a given type of job.
 
Mufasa sets limits on the number of jobs by the same user. This page includes a [[#Limits on jobs by the same user|table summarising such limits]].
 
= SLURM Quality of Service (QOS) =


Through '''Quality of Services''' ('''QOSes'''), SLURM lets system configurators assign a name to a set of related constraints.


In Mufasa 2.0, QOSes are used to define different levels of access to the server's resources. When [[User Jobs|executing a job with SLURM]], a user must always '''specify the QOS''' that their job will use: this choice, in turn, determines what resources the job is able to access and influences the [[#Job priority|priority]] of the job.
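
For illustration (a minimal sketch, not a complete command: see [[User Jobs]] for the full job submission syntax), the QOS is typically passed to SLURM through the <code>--qos</code> option of <code>srun</code> or of an <code>#SBATCH</code> directive, for example:

<pre style="color: lightgrey; background: black;">
# hypothetical example: run my_script.sh with the gpulight QOS and a 2-hour time slot
srun --qos=gpulight --time=02:00:00 ./my_script.sh
</pre>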


Mufasa's QOSes and their features can be inspected with command


<pre style="color: lightgrey; background: black;">
sacctmgr list qos format=name%-11,priority,MaxSubmit,maxwall,maxtres%-80
</pre>

which provides an output similar to the following:

<pre style="color: lightgrey; background: black;">
Name          Priority MaxSubmit     MaxWall MaxTRES
----------- ---------- --------- ----------- --------------------------------------------------------------------------------
normal               0
nogpu                4         1  3-00:00:00 cpu=16,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,gres/gpu=0,mem=128G
gpuheavy-20          1         1             cpu=8,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=2,mem=128G
gpuheavy-40          1         1             cpu=8,gres/gpu:3g.20gb=0,gres/gpu:40gb=1,gres/gpu:4g.20gb=0,mem=128G
gpulight             8         1    12:00:00 cpu=2,gres/gpu:3g.20gb=1,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,mem=64G
gpu                  2         1  1-00:00:00 cpu=8,gres/gpu:3g.20gb=1,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,mem=64G
gpuwide              2         2  1-00:00:00 cpu=8,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=1,mem=64G
build               32         1    02:00:00 cpu=2,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,gres/gpu=0,mem=16G
</pre>


The columns of this output are the following:


:; Name
:: name of the QOS

:; Priority
:: priority tier associated to the QOS (higher value = higher priority): see [[#Job priority|Job priority]] for details

:; MaxSubmit
:: maximum number of jobs from a single user that can be submitted to SLURM with this QOS; submitted jobs include both running and queued jobs
:: See [[#Limits on jobs by the same user|Limits on jobs by the same user]] for an overview of the limits on jobs set by Mufasa.

:; MaxWall
:: maximum wall clock duration of the jobs using the QOS (after which they are killed by SLURM), in format ''[days-]hours:minutes:seconds''
:: For some QOSes this value is not set: this means that it is determined by the [[#SLURM partitions|partition]]. Partitions also define the [[#Default values|default duration]] of jobs.

:; MaxTRES
:: amount of [[#System resources subjected to limitations|resources subjected to limitations]] ("''Trackable RESources''") available to a job using the QOS, where
:: <code>'''cpu=''K'''''</code> means that the maximum number of CPUs (i.e., processor cores) is ''K''
::: --> if not specified, the job gets the default amount of CPUs specified by the [[#SLURM partitions|partition]]
:: <code>'''gres/''gpu:Type''=''K'''''</code> means that the maximum number of GPUs of class <code>''Type''</code> (see [[User Jobs#gres syntax|<code>gres</code> syntax]]) is ''K''
::: --> (for QOSes that allow access to GPUs) if not specified, the job cannot be launched
:: <code>'''mem=''K''G'''</code> means that the maximum amount of system RAM is ''K'' GBytes
::: --> if not specified, the job gets the default amount of RAM specified by the [[#SLURM partitions|partition]]


For instance, QOS <code>gpulight</code> provides jobs that use it with:
* priority tier equal to 8
* a maximum of 1 submitted job per user
* a maximum of 12 hours of duration
* a maximum of 2 CPUs
* a maximum of 64 GB of RAM
* this access to GPUs:
** max 1 GPU of type ''gpu:3g.20gb''
** no GPUs of type ''gpu:40gb''
** no GPUs of type ''gpu:4g.20gb''
 
As seen in the example output from <code>sacctmgr list qos</code> above, each QOS has an associated '''priority tier'''. In Mufasa 2.0, priority tiers are used to encourage users to use the '''least powerful QOS that is compatible with their needs''', where "powerful" means "rich with resources". Less powerful QOSes increase the priority of the jobs that use them, so these jobs tend to be executed sooner.
 
See [[#Job priority|Job priority]] to understand how priority affects the execution order of jobs in Mufasa 2.0.
 
The <code>normal</code> QOS is the default one: it exists only to ensure that users always specify a QOS when running a job. Since <code>normal</code> has zero priority and no resources, a job run using this QOS would never be run.


== The <code>build</code> QOS ==


This QOS is specifically designed to be used by Mufasa users to '''build [[System#Containers|container images]]'''. Its associated priority tier is very high, so  SLURM jobs launched using this QOS are executed quickly.  


The <code>build</code> QOS, though, has resources that are strictly limited to those needed for building operations; additionally, it has no access to GPUs and a short maximum duration for jobs. This makes it unsuitable for computing activities different from building containers.


See [[Singularity#Building Singularity images|Building Singularity images]] for directions about building Singularity container images.
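
For instance (a hypothetical sketch: file names must be adapted to your own build), a container build could be launched through the <code>build</code> QOS with something like:

<pre style="color: lightgrey; background: black;">
# hypothetical example: build a Singularity image using the build QOS,
# staying within the 2 CPUs, 16 GB of RAM and 2 hours that the QOS allows
srun --qos=build --cpus-per-task=2 --mem=16G --time=02:00:00 singularity build my_image.sif my_recipe.def
</pre>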


== Restricted QOSes ==


In Mufasa, the most powerful QOSes are reserved to researchers (including Ph.D. students), and not available to M.Sc. students.


See [[#research users and students users|below]] to understand the differences between <code>research</code> users and <code>students</code> users.


= <code>research</code> users and <code>students</code> users =


Users of Mufasa belong to two '''user categories''', which provide the users belonging to them with different access to system resources. The idea behind these categories is to provide researchers with more access to Mufasa's resources, without preventing students from using the server.

User categories are:


:: '''<code>research</code>''', i.e. academic personnel and Ph.D. students
::: * have access to all [[#SLURM Quality of Service (QOS)|QOSes]]
::: * their jobs have a higher ''base priority''
::: * the number of running jobs that the user can have is higher


:: '''<code>students</code>''', i.e. M.Sc. students
::: * have access to a restricted set of [[#SLURM Quality of Service (QOS)|QOSes]]
::: * their jobs have a lower ''base priority''
::: * the number of running jobs that the user can have is lower


You can inspect the differences between <code>research</code> and <code>students</code> users with command


<pre style="color: lightgrey; background: black;">
sacctmgr list association format="account,priority,maxjobs" | grep -E 'Account|research|students'
</pre>

which provides an output similar to the following:

<pre style="color: lightgrey; background: black;">
   Account   Priority MaxJobs
  research          4       2
  students          1       1
</pre>


To know what limits apply to your own user, use command


<pre style="color: lightgrey; background: black;">
sacctmgr list association where user=$USER format="user,priority,maxjobs,qos%-60"
</pre>


which provides an output similar to the following:


<pre style="color: lightgrey; background: black;">
      User   Priority MaxJobs QOS
---------- ---------- ------- ------------------------------------------------------------
    preali          4       2 build,gpu,gpuheavy-20,gpuheavy-40,gpulight,gpuwide,nogpu
</pre>


The list under "QOS" shows what QOSes your user is allowed to use when [[User Jobs|running jobs]]. <code>research</code> users can use all of them, while <code>students</code> users can only access a subset of them.


= Limits on jobs by the same user =

Mufasa sets limits on the number of jobs from a single user. Such limits aim at preventing users from "hoarding" system resources, and apply to:
* '''submitted jobs''', i.e. jobs that the user asked SLURM to execute, each of which may currently be either running or queued
* '''running jobs''', i.e. jobs that are currently in execution


The following table summarises the limits that Mufasa sets on the number of jobs by the same user:


{| class="wikitable" style="text-align:center;"
|-
!
! number of running jobs
! number of submitted jobs
|-
! rowspan="1" style="text-align:center;" | global limits<br/>(system-wide)
| '''''2 for'' <code>research</code> ''users'''''<br/>'''''1 for'' <code>students</code> ''users'''''
| '''''not limited directly...'''''<br/>...but cannot exceed the sum of the limits on submitted jobs set by the QOSes (below)
|-
! rowspan="1" style="text-align:center;" | limits for each QoS
| '''''not limited directly...'''''<br/>...but cannot exceed the global limit on running jobs (above) nor the QoS limit on submitted jobs (on the right)
| '''''2 for'' <code>gpuwide</code> ''QOS'''''<br/>'''''1 for all other QOSes'''''
|}


Limits on the number of running jobs depend on the user category (either [[#research users and students users|researcher or students]]) that the user belongs to; limits on the number of submitted jobs depend on the properties of the [[#SLURM Quality of Service (QOS)|SLURM QOSes]] used to launch them.
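
To check how close you are to these limits, you can list your own submitted jobs (both running and queued) with the standard SLURM command <code>squeue</code>, for example:

<pre style="color: lightgrey; background: black;">
# list your own running and pending jobs
squeue -u $USER
</pre>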


= Job priority =


Once the execution of a job has been requested, the job is not run immediately: it is instead ''queued'' by SLURM, together with all the other jobs awaiting execution. The job on top of the queue at any time is the first to be put into execution as soon as the resources it requires are available.


The order of the items in the job queue depends on their '''priority'''. The job with the highest priority is put by SLURM on top of the queue, while all other jobs are put in the queue in order of descending priority.


The goal of SLURM is to maximise resource availability: i.e., to ensure the shortest possible wait time before job execution. To achieve this goal, in Mufasa SLURM is configured to '''encourage users not to ask for resources or execution time that their job doesn't need'''. This is done via the priority mechanism: the more resources and/or time a job requests, the lower its priority will be, and the later it will be executed.


Priority management in Mufasa is designed to set up a '''virtuous cycle''' where users, by carefully choosing what to ask for, obtain two results:
* they ensure that their job is executed as soon as possible;
* they leave as much as possible of Mufasa's resources free for other users' jobs.


== Elements determining job priority ==
In Mufasa, the priority of a job is computed by SLURM according to the following elements:


: '''[[#research users and students users|User category]]''' (i.e., <code>research</code> or <code>students</code>)
::: Used to provide higher priority to jobs run by '''research personnel'''
 
: '''[[#SLURM Quality of Service (QOS)|QOS]]''' used by the job
::: Used to provide higher priority to jobs asking for '''less resources'''


: '''Number of CPUs''' requested by the job (also called "job size")
::: Used to provide higher priority to jobs asking for '''a lower number of CPUs'''


: '''Job duration''', i.e. the execution time requested by the job
::: Used to provide higher priority to '''shorter jobs'''


: '''Job Age''', i.e. the time that the job has been waiting in the queue
::: Used to provide higher priority to jobs which have been '''queued for a longer time'''


: '''FairShare''', i.e. a factor computed by SLURM to balance use of the system by different users
::: Used to provide higher priority to jobs by users who '''used Mufasa less than others'''


The main features of FairShare are:
* the FairShare value is higher for users whose jobs used less CPUs, GPUs, RAM, execution time.
* the FairShare mechanism has a "fading memory", i.e. resource use has more impact on it if recent, less if farther in the past
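
If you want to see these elements at work, two standard SLURM commands can help (a quick sketch; the exact output depends on Mufasa's configuration): <code>sprio</code> shows the priority components of the jobs currently in the queue, while <code>sshare</code> shows the FairShare situation of your user.

<pre style="color: lightgrey; background: black;">
# show the priority factors (age, fairshare, QOS, job size...) of queued jobs
sprio -l

# show the FairShare value of your own user
sshare -U
</pre>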


== How to maximise the priority of your jobs ==


Every time you run a SLURM job, follow these guidelines:
 
:{|class="wikitable"
|
; Choose the least powerful QOS compatible with the needs of your job
:: QOSes with access to less resources lead to higher priority
 
; Only request CPUs that your job will actually use
:: If you designed your code to exploit multiple CPUs, check that it actually does! If it doesn't, do not ask for them
 
; Do not request more time than your job needs to complete
:: Make a worst-case estimate and only ask for that duration
 
; Test and debug your code using less powerful QOSes before running it on more powerful QOSes
:: Your test jobs will get a higher priority and your FairShare will improve
 
; Cancel jobs when you don't need them anymore
:: [[User_Jobs#Cancelling_a_job_with_scancel|Use scancel]] to delete your jobs when finished (or if they become useless due to a bug): your FairShare will improve
|}


Suggestion: if you're going to run a job, it's a good idea to [[#Looking for unused GPUs|look for unused GPUs]] before choosing what GPU to request. Choosing a GPU that is currently idle should help your job get run sooner.
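
Putting these guidelines together, a job submission script could look like the following (a hypothetical sketch: script name, QOS and resource amounts must be adapted to your actual job; see [[User Jobs#Using execution scripts to run jobs|execution scripts]] for the full syntax):

<pre style="color: lightgrey; background: black;">
#!/bin/bash
#SBATCH --job-name=my_training        # hypothetical job name
#SBATCH --qos=gpu                     # least powerful QOS that fits the job
#SBATCH --time=08:00:00               # worst-case estimate, below the QOS maximum of 1-00:00:00
#SBATCH --cpus-per-task=4             # only the CPUs the code actually exploits
#SBATCH --mem=32G                     # only the RAM the job actually needs
#SBATCH --gres=gpu:3g.20gb:1          # one GPU of the type allowed by the gpu QOS

./my_training_script.sh               # hypothetical payload
</pre>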


= System resources subjected to limitations =
In systems based on SLURM like Mufasa, '''TRES (Trackable RESources)''' are (from [https://slurm.schedmd.com/tres.html SLURM's documentation]) "''resources that can be tracked for usage or used to enforce limits against''".


TRES include CPUs, RAM and '''GRES'''. The last term stands for ''Generic RESources'' that a job may need for its execution. In Mufasa, the only <code>gres</code> resources are the GPUs.


== <code>gres</code> syntax ==


To ask SLURM to assign GRES resources (i.e., GPUs) to a job, a special syntax must be used. Precisely, the name of each GPU resource takes the form

'''<code>gpu:''Type''</code>'''

Considering the [[System#CPUs and GPUs|GPU complement of Mufasa]], the available GPU resources are:

* '''<code>gpu:40gb</code>''' for GPUs with 40 Gbytes of RAM
* '''<code>gpu:4g.20gb</code>''' for GPUs with 20 Gbytes of RAM and 4 compute units
* '''<code>gpu:3g.20gb</code>''' for GPUs with 20 Gbytes of RAM and 3 compute units


So, for instance,


<code>gpu:3g.20gb</code>


identifies a resource corresponding to a GPU with 20 GB of RAM and 3 compute units.


When asking for a GRES resource (e.g., in an <code>srun</code> command or an <code>SBATCH</code> directive of an [[User Jobs#Using execution scripts to run jobs|execution script]]), the syntax required by SLURM is


'''<code>gpu:<Type>:<Quantity></code>'''


where <code>Quantity</code> is an integer value specifying how many items of the resource are requested. So, for instance, to ask for 2 GPUs of type <code>4g.20gb</code> the syntax is


<code>gpu:4g.20gb:2</code>
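
For example (a hypothetical sketch; amounts are only illustrative), such a request could appear in an <code>srun</code> command as:

<pre style="color: lightgrey; background: black;">
# hypothetical example: ask for 2 GPUs of type 4g.20gb via the gpuheavy-20 QOS,
# which is the QOS whose MaxTRES allows gres/gpu:4g.20gb=2
srun --qos=gpuheavy-20 --gres=gpu:4g.20gb:2 --time=12:00:00 ./my_job.sh
</pre>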


SLURM's ''generic resources'' are defined in <code>/etc/slurm/gres.conf</code>. In order to make GPUs available to SLURM's <code>gres</code> management, Mufasa makes use of Nvidia's [https://developer.nvidia.com/nvidia-management-library-nvml NVML library]. For additional information see [https://slurm.schedmd.com/gres.html SLURM's documentation].
== Looking for unused GPUs ==


GPUs are usually the most limited resource on Mufasa. So, if your job requires a GPU, the best way to get it executed quickly is to use a QOS associated with a type of GPU of which at least one unit is not currently in use. This command
 
<pre style="color: lightgrey; background: black;">
sinfo -O Gres:100
</pre>
 
provides a summary of all the Gres (i.e., GPU) resources possessed by Mufasa. It provides an output similar to the following:
 
<pre style="color: lightgrey; background: black;">
GRES
gpu:40gb:3,gpu:4g.20gb:5,gpu:3g.20gb:5
</pre>


To know which of the GPUs are currently in use, use command
<pre style="color: lightgrey; background: black;">
sinfo -O GresUsed:100
</pre>
which provides an output similar to this:
<pre style="color: lightgrey; background: black;">
GRES_USED
gpu:40gb:2(IDX:0-1),gpu:4g.20gb:2(IDX:5,8),gpu:3g.20gb:3(IDX:3-4,6)
</pre>
By comparing the two lists (GRES and GRES_USED) you can easily spot unused GPUs. In the example above, for instance, 1 GPU of type <code>40gb</code>, 3 GPUs of type <code>4g.20gb</code> and 2 GPUs of type <code>3g.20gb</code> are currently not in use.


= SLURM partitions =


Partitions are another mechanism provided by SLURM to create different levels of access to system resources. Since in Mufasa 2.0 access to resources is controlled via [[#SLURM Quality of Service (QOS)|QOSes]], partitions are not very relevant. (This is one of the main differences between Mufasa 1.0 and Mufasa 2.0.)


Note, however, that the default values for some features of SLURM jobs (e.g., duration) are [[#Default values|set by the partition]].


In Mufasa 2.0, there is a single SLURM partition, called <code>jobs</code>, and all jobs run on it. The state of <code>jobs</code> can be inspected with


<pre style="color: lightgrey; background: black;">
sinfo -o "%10P %5a %9T %11L %10l"
</pre>

which provides an output similar to the following:

<pre style="color: lightgrey; background: black;">
PARTITION  AVAIL STATE    DEFAULTTIME TIMELIMIT
jobs*      up    idle      1:00:00    3-00:00:00
</pre>


where columns correspond to the following information:

:; PARTITION
:: name of the partition; the asterisk indicates that it's the default one

:; AVAIL
:: state/availability of the partition: see [[#Partition availability|below]]

:; STATE
:: state (using [https://slurm.schedmd.com/sinfo.html#SECTION_NODE-STATE-CODES these codes])
:: typical values are '''<code>mixed</code>''' - meaning that some of the resources are busy executing jobs while others are idle, and '''<code>allocated</code>''' - meaning that all of the resources are in use

:; DEFAULTTIME
:: default runtime of a job, in format ''[days-]hours:minutes:seconds''

:; TIMELIMIT
:: maximum runtime of a job, in format ''[days-]hours:minutes:seconds''

The asterisk at the end of the partition name indicates the default partition, i.e. the one on which jobs which do not ask for a specific partition are run.


Command <code>sinfo</code> does not tell you about the ''jobs'' submitted to a partition. This information is obtained, instead, with [[User Jobs#Inspecting jobs with squeue|command <code>squeue</code>]].


== Partition availability ==

The most important information that <code>sinfo</code> provides is the '''availability''' (also called '''state''') of partitions. This is shown in column "AVAIL". Possible partition states are:

:'''<code>up</code>''' = the partition is available
:: Currently running jobs will be completed
:: It's possible to launch jobs on the partition
:: Queued jobs will be executed as soon as resources allow


:'''<code>drain</code>''' = the partition is in the process of becoming unavailable (i.e., of entering the <code>down</code> state: see below)
:: Currently running jobs will be completed
:: It's not possible to launch jobs on the partition
:: Queued jobs will be executed when the partition becomes available again (i.e. goes back to the <code>up</code> state)


:'''<code>down</code>''' = the partition is unavailable
:: There are no running jobs
:: It's not possible to launch jobs on the partition
:: Queued jobs will be executed when the partition becomes available again (i.e. goes back to the <code>up</code> state)


When a partition goes from <code>up</code> to <code>drain</code> no harm is done to running jobs. In a normally functioning SLURM system, the passage from <code>up</code> or <code>drain</code> to <code>down</code> happens only when no jobs are running on the partition. If (e.g., due to a malfunction) the passage happens with jobs still running, they get killed.

A partition in state <code>drain</code> or <code>down</code> requires intervention by a [[Roles|Job Administrator]] to be restored to <code>up</code>.


== Default values ==


The features of SLURM partitions, including the '''default values''' which are applied to jobs that do not make explicit requests, can be inspected with


<pre style="color: lightgrey; background: black;">
scontrol show partition
</pre>

which provides an output similar to this:


<pre style="color: lightgrey; background: black;">
PartitionName=jobs
   AllowGroups=ALL AllowAccounts=ALL AllowQos=nogpu,gpulight,gpu,gpuwide,gpuheavy-20,gpuheavy-40
   AllocNodes=ALL Default=YES QoS=N/A
=> DefaultTime=01:00:00 DisableRootJobs=NO ExclusiveUser=NO ExclusiveTopo=NO GraceTime=0 Hidden=NO
   MaxNodes=UNLIMITED MaxTime=3-00:00:00 MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED MaxCPUsPerSocket=UNLIMITED
   Nodes=gn01
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO
   OverTimeLimit=NONE PreemptMode=OFF
   State=UP TotalCPUs=48 TotalNodes=1 SelectTypeParameters=NONE
   JobDefaults=(null)
=> DefMemPerNode=4096 MaxMemPerNode=UNLIMITED
   TRES=cpu=48,mem=1011435M,node=1,billing=49,gres/gpu=13,gres/gpu:3g.20gb=5,gres/gpu:40gb=3,gres/gpu:4g.20gb=5
   TRESBillingWeights=cpu=1.0,gres/gpu:3g.20gb=6.0,gres/gpu:4g.20gb=6.0,gres/gpu:40gb=6.0,mem=0.05g
</pre>


In the example, we have highlighted with "=>" the most relevant default values for Mufasa users, i.e.:


;<code>DefaultTime</code>
:: the default execution time assigned to a job run on the partition (e.g., 1 hour)


;<code>DefMemPerNode</code>
:: the default amount of RAM assigned to a job run on the partition (e.g., 4 GB)

Latest revision as of 16:34, 4 May 2026

This page presents the features of SLURM that are most relevant to Mufasa's Job Users. Job Users can submit jobs for execution, cancel their own jobs, and see other users' jobs (but not intervene on them).

Users of Mufasa must use SLURM to run resource-heavy processes, i.e. computing jobs that require one or more of the following:

  • GPUs
  • multiple CPUs
  • powerful CPUs
  • a significant amount of RAM

In fact, only processes run via SLURM have access to all the resources of Mufasa. Processes run outside SLURM are executed by the login server virtual machine, which has minimal resources and no GPUs. Using SLURM is therefore the only way to execute resource-heavy jobs on Mufasa (this is a key difference between Mufasa 1.0 and Mufasa 2.0).

SLURM in a nutshell

Computation jobs on Mufasa needs to be launched via SLURM. SLURM provides jobs with access to the physical resources of Mufasa, such as CPUs, GPUs and RAM. Thanks to SLURM, processing jobs share system resources, optimising their occupation and availability.

When a user runs a job, the job does not get executed immediately and is instead queued. SLURM executes jobs according to their order in the queue: the top job in the queue gets executed as soon as the necessary resources are available, while jobs lower in the queue wait longer. The position of a job in the queue is due to the priority assigned to it by SLURM, with higher-priority jobs closer to the top. As a general rule:

the greater the fraction of Mufasa's overall resources that a job asks for, the lower the job's priority will be.

The priority mechanism is used to encourage users to use Mufasa's resources in an effective and equitable manner. This page includes a chart explaining how to maximise the priority of your jobs.

The time available to a job for its execution is controlled by SLURM. When a user requests execution of a job, they must specify the duration of the time slot that the job needs. The job must complete its execution before the end of the requested time slot, otherwise it gets killed by SLURM.

In Mufasa 2.0 access to system resources is managed via SLURM's Quality of Service (QOS) mechanism (Mufasa 1.0 used partitions instead). To launch a processing job via SLURM, the user must always specify the chosen QOS. QOSes differ in the set of resources that they provide access to because each of them is designed to fit a given type of job.

Mufasa sets limits to the number of jobs by the same user. This page includes a table summarising such limits.

SLURM Quality of Service (QOS)

Through Quality of Services (QOSes), SLURM lets system configurators assign a name to a set of related constraints.

In Mufasa 2.0, QOSes are used to define different levels of access to the server's resources. When executing a job with SLURM, a user must always specify the QOS that their job will use: this choice, in turn, determines what resources the job is able to access and influences the priority of the job.

Mufasa's QOSes and their features can be inspected with command

sacctmgr list qos format=name%-11,priority,MaxSubmit,maxwall,maxtres%-80

which provides an output similar to the following:

Name          Priority MaxSubmit     MaxWall MaxTRES                                                                          
----------- ---------- --------- ----------- -------------------------------------------------------------------------------- 
normal               0                                                                                                        
nogpu                4         1  3-00:00:00 cpu=16,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,gres/gpu=0,mem=128G 
gpuheavy-20          1         1             cpu=8,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=2,mem=128G             
gpuheavy-40          1         1             cpu=8,gres/gpu:3g.20gb=0,gres/gpu:40gb=1,gres/gpu:4g.20gb=0,mem=128G             
gpulight             8         1    12:00:00 cpu=2,gres/gpu:3g.20gb=1,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,mem=64G              
gpu                  2         1  1-00:00:00 cpu=8,gres/gpu:3g.20gb=1,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,mem=64G              
gpuwide              2         2  1-00:00:00 cpu=8,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=1,mem=64G              
build               32         1    02:00:00 cpu=2,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,gres/gpu=0,mem=16G

The columns of this output are the following:

Name
name of the QOS
Priority
priority tier associated to the QOS (higher value = higher priority): see Job priority for details
MaxSubmit
maximum number of jobs from a single user that can be submitted to SLURM with this QOS; submitted jobs include both running and queued jobs
See Limits on jobs by the same user for an overview of the limits on jobs set by Mufasa.
MaxWall
maximum wall clock duration of the jobs using the QOS (after which they are killed by SLURM), in format [days-]hours:minutes:seconds
For some QOSes these are not set: it means that they are determined by the partition. Partitions also define the default duration of jobs.
MaxTRES
amount of resources subjected to limitations ("Trackable RESources") available to a job using the QOS, where
cpu=K means that the maximum number of CPUs (i.e., processor cores) is K
--> if not specified, the job gets the default amount of CPUs specified by the partition
gres/gpu:Type=K means that the maximum number of GPUs of class Type (see gres syntax) is K
--> (for QOSes that allow access to GPUs) if not specified, the job cannot be launched
mem=KG means that the maximum amount of system RAM is K GBytes
--> if not specified, the job gets the default amount of RAM specified by the partition

For instance, QOS gpulight provides jobs that use it with:

  • priority tier equal to 8
  • a maximum of 1 submitted job per user
  • a maximum of 12 hours of duration
  • a maximum of 2 CPUs
  • a maximum of 64 GB of RAM
  • this access to GPUs:
    • max 1 GPU of type gpu:3g.20gb
    • no GPUs of type gpu:40gb=0
    • no GPUs of type gpu:4g.20gb

As seen in the example output from sacctmgr list qos above, each QOS has an associated priority tier. In Mufasa 2.0, priority tiers are used to encourage users to use the least powerful QOS that is compatible with their needs, where "powerful" means "rich with resources". Less powerful QOSes increase the priority of the jobs that use them, so these jobs tend to be executed sooner.

See Priority to understand how priority affects the execution order of jobs in Mufasa 2.0.

The normal QOS is the default one: it exists only to ensure that users always specify a QOS when running a job. Since normal has zero priority and no resources, a job run using this QOS would never be run.

The build QOS

This QOS is specifically designed to be used by Mufasa users to build container images. Its associated priority tier is very high, so SLURM jobs launched using this QOS are executed quickly.

The build QOS, though, has resources that are strictly limited to those needed for building operations; additionally, it has no access to GPUs and a short maximum duration for jobs. This makes it unsuitable for computing activities different from building containers.

See Building Singularity images for directions about building Singularity container images.

Restricted QOSes

In Mufasa, the most powerful QOSes are reserved to researchers (including Ph.D. students), and not available to M.Sc. students.

See below to understand the differences between researcher users and students users.

research users and students users

Users of Mufasa belong to two user categories, which provide the users belonging to them with different access to system resources. The idea behind these categories is to provide researchers with more access to Mufasa's resources, without preventing students from using the server.

User categories are:

research, i.e. academic personnel and Ph.D. students
* have access to all QOSes
* their jobs have a higher base priority
* the number of running jobs that the user can have is higher
students, i.e. M.Sc. students
* have access to a restricted set of QOSes
* their jobs have a lower base priority
* the number of running jobs that the user can have is lower

You can inspect the differences between research and students users with command

sacctmgr list association format="account,priority,maxjobs" | grep -E 'Account|research|students'

which provides an output similar to the following:

   Account   Priority MaxJobs 
  research          4       2 
  students          1       1

To know what limits apply to your own user, use command

sacctmgr list association where user=$USER format="user,priority,maxjobs,qos%-60"

which provides an output similar to the following:

      User   Priority MaxJobs QOS                                                          
---------- ---------- ------- ------------------------------------------------------------ 
    preali          4       2 build,gpu,gpuheavy-20,gpuheavy-40,gpulight,gpuwide,nogpu

The list under "QOS" shows what QOSes your user is allowed to use when running jobs. research users can use all of them, while students users can only access a subset of them.

Limits on jobs by the same user

Mufasa sets limits on the number of jobs from a single user. These limits aim to prevent users from "hoarding" system resources, and apply to:

  • submitted jobs, i.e. jobs that the user asked SLURM to execute, each of which may currently be either running or queued
  • running jobs, i.e. jobs that are currently in execution

Mufasa's limits on the number of jobs by the same user are summarised below:

Global limits (system-wide)
* running jobs: 2 for research users, 1 for students users
* submitted jobs: not limited directly, but they cannot exceed the sum of the limits on submitted jobs set by the individual QOSes (below)
Limits for each QOS
* running jobs: not limited directly, but they cannot exceed the global limit on running jobs (above) nor the QOS limit on submitted jobs (below)
* submitted jobs: 2 for the gpuwide QOS, 1 for all other QOSes

Limits on the number of running jobs depend on the user category (either research or students) that the user belongs to; limits on the number of submitted jobs depend on the properties of the SLURM QOSes used to launch them.
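
Before submitting a new job, you can check how many jobs you currently have in the queue, and their state, with:

squeue -u $USER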

Job priority

Once the execution of a job has been requested, the job is not run immediately: it is instead queued by SLURM, together with all the other jobs awaiting execution. The job on top of the queue at any time is the first to be put into execution as soon as the resources it requires are available.

The order of the items in the job queue depends on their priority. The job with the highest priority is put by SLURM on top of the queue, while all other jobs are put in the queue in order of descending priority.

The goal of SLURM is to maximise resource availability, i.e. to ensure the shortest possible wait time before job execution. To achieve this goal, in Mufasa SLURM is configured to encourage users not to ask for resources or execution time that their job doesn't need. This is done via the priority mechanism: the more resources and/or time a job requests, the lower its priority will be, and the later it will be executed.

Priority management in Mufasa is designed to set up a virtuous cycle where users, by carefully choosing what to ask for, obtain two results:

  • they ensure that their job is executed as soon as possible;
  • they leave as much as possible of Mufasa's resources free for other users' jobs.

Elements determining job priority

In Mufasa, the priority of a job is computed by SLURM according to the following elements:

User category (i.e., research or students)
Used to provide higher priority to jobs run by research personnel
QOS used by the job
Used to provide higher priority to jobs asking for fewer resources
Number of CPUs requested by the job (also called "job size")
Used to provide higher priority to jobs asking for fewer CPUs
Job duration, i.e. the execution time requested by the job
Used to provide higher priority to shorter jobs
Job Age, i.e. the time that the job has been waiting in the queue
Used to provide higher priority to jobs which have been queued for a longer time
FairShare, i.e. a factor computed by SLURM to balance use of the system by different users
Used to provide higher priority to jobs by users who used Mufasa less than others

The main features of FairShare are:

  • the FairShare value is higher for users whose jobs used fewer CPUs and GPUs, less RAM and less execution time
  • the FairShare mechanism has a "fading memory": recent resource use has more impact on it than use farther in the past
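
To see how these elements combine for your own jobs, SLURM provides the sprio and sshare commands (the exact columns shown depend on how SLURM is configured). The command

sprio -u $USER -l

lists the priority factors (age, FairShare, job size, QOS, ...) of your pending jobs, while

sshare -u $USER

shows your current FairShare value.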

How to maximise the priority of your jobs

Every time you run a SLURM job, follow these guidelines:

Choose the least powerful QOS compatible with the needs of your job
QOSes with access to fewer resources lead to higher priority
Only request CPUs that your job will actually use
Unless you designed your code to exploit multiple CPUs, check whether it actually does; if it doesn't, do not request more than one
Do not request more time than your job needs to complete
Make a worst-case estimate and only ask for that duration
Test and debug your code using less powerful QOSes before running it on more powerful QOSes
Your test jobs will get a higher priority and your FairShare will improve
Cancel jobs when you don't need them anymore
Use scancel to delete your jobs when finished, or if they become useless due to a bug: your FairShare will improve (see the example below)
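
For instance (the job ID is a placeholder), a job that is no longer needed can be removed with:

scancel 123456

while scancel -u $USER cancels all of your own jobs at once.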

Suggestion: if you're going to run a job, it's a good idea to look for unused GPUs before choosing which GPU to request (see Looking for unused GPUs below). Choosing a GPU that is currently idle should help your job get run sooner.

System resources subjected to limitations

In SLURM-based systems like Mufasa, TRES (Trackable RESources) are (from SLURM's documentation) "resources that can be tracked for usage or used to enforce limits against".

TRES include CPUs, RAM and GRES. The last term stands for Generic RESources, i.e. additional resources that a job may need for its execution. In Mufasa, the only GRES are the GPUs.

gres syntax

To ask SLURM to assign GRES resources (i.e., GPUs) to a job, a special syntax must be used. Precisely, the name of each GPU resource takes the form

gpu:Type

Considering the GPU complement of Mufasa, the available GPU resources are:

  • gpu:40gb for GPUs with 40 GB of RAM
  • gpu:4g.20gb for GPUs with 20 GB of RAM and 4 compute units
  • gpu:3g.20gb for GPUs with 20 GB of RAM and 3 compute units

So, for instance,

gpu:3g.20gb

identifies a resource corresponding to a GPU with 20 GB of RAM and 3 compute units.

When asking for a GRES resource (e.g., in an srun command or an SBATCH directive of an execution script), the syntax required by SLURM is

gpu:<Type>:<Quantity>

where Quantity is an integer value specifying how many items of the resource are requested. So, for instance, to ask for 2 GPUs of type 4g.20gb the syntax is

gpu:4g.20gb:2
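
In practice, the request is passed to srun via the --gres option, or included in an execution script as an SBATCH directive. A minimal sketch follows (the other options and the command are placeholders for those your job actually needs):

srun --gres=gpu:4g.20gb:2 <other options> <your command>

#SBATCH --gres=gpu:4g.20gb:2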

SLURM's generic resources are defined in /etc/slurm/gres.conf. In order to make GPUs available to SLURM's gres management, Mufasa makes use of Nvidia's NVML library. For additional information see SLURM's documentation.

Looking for unused GPUs

GPUs are usually the most limited resource on Mufasa. So, if your job requires a GPU, the best way to get it executed quickly is to use a QOS associated with a GPU type for which one or more GPUs are currently idle. The command

sinfo -O Gres:100

prints a summary of all the GRES (i.e., GPU) resources available on Mufasa, with an output similar to the following:

GRES                                                                                                
gpu:40gb:3,gpu:4g.20gb:5,gpu:3g.20gb:5

To know which of the GPUs are currently in use, use command

sinfo -O GresUsed:100

which provides an output similar to this:

GRES_USED
gpu:40gb:2(IDX:0-1),gpu:4g.20gb:2(IDX:5,8),gpu:3g.20gb:3(IDX:3-4,6)

By comparing the two lists (GRES and GRES_USED) you can easily spot unused GPUs.
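
The two lists can also be printed side by side with a single command:

sinfo -O Gres:100,GresUsed:100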

SLURM partitions

Partitions are another mechanism provided by SLURM to create different levels of access to system resources. Since in Mufasa 2.0 access to resources is controlled via QOSes, partitions are not very relevant. (This is one of the main differences between Mufasa 1.0 and Mufasa 2.0.)

Note, however, that the default values for some features of SLURM jobs (e.g., duration) are set by the partition.

In Mufasa 2.0, there is a single SLURM partition, called jobs, and all jobs run on it. The state of the jobs partition can be inspected with

sinfo -o "%10P %5a %9T %11L %10l"

which provides an output similar to the following:

PARTITION  AVAIL STATE     DEFAULTTIME TIMELIMIT 
jobs*      up    idle      1:00:00     3-00:00:00

where columns correspond to the following information:

PARTITION
name of the partition; the asterisk indicates that it's the default one
AVAIL
state/availability of the partition: see below
STATE
state (using these codes)
typical values are mixed, meaning that some of the resources are busy executing jobs while others are idle, and allocated, meaning that all of the resources are in use
DEFAULTTIME
default runtime of a job, in format [days-]hours:minutes:seconds
TIMELIMIT
maximum runtime of a job, in format [days-]hours:minutes:seconds

The asterisk at the end of the partition name marks the default partition, i.e. the one on which jobs that do not request a specific partition are run.

Command sinfo does not tell you about the jobs submitted to a partition. This information is obtained, instead, with command squeue.
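
For instance, to list all the jobs currently submitted to the jobs partition, or only your own, you can use:

squeue -p jobs

squeue -p jobs -u $USER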

Partition availability

The most important information that sinfo provides is the availability (also called state) of partitions. This is shown in column "AVAIL". Possible partition states are:

up = the partition is available
Currently running jobs will be completed
It's possible to launch jobs on the partition
Queued jobs will be executed as soon as resources allow
drain = the partition is in the process of becoming unavailable (i.e., of entering the down state: see below)
Currently running jobs will be completed
It's not possible to launch jobs on the partition
Queued jobs will be executed when the partition becomes available again (i.e. goes back to the up state)
down = the partition is unavailable
There are no running jobs
It's not possible to launch jobs on the partition
Queued jobs will be executed when the partition becomes available again (i.e. goes back to the up state)

When a partition goes from up to drain no harm is done to running jobs. In a normally functioning SLURM system, the transition from up or drain to down happens only when no jobs are running on the partition. If (e.g., due to a malfunction) the transition happens with jobs still running, they get killed.

A partition in state drain or down requires intervention by a Job Administrator to be restored to up.

Default values

The features of SLURM partitions, including the default values which are applied to jobs that do not make explicit requests, can be inspected with

scontrol show partition

which provides an output similar to this:

PartitionName=jobs
   AllowGroups=ALL AllowAccounts=ALL AllowQos=nogpu,gpulight,gpu,gpuwide,gpuheavy-20,gpuheavy-40
   AllocNodes=ALL Default=YES QoS=N/A
=> DefaultTime=01:00:00 DisableRootJobs=NO ExclusiveUser=NO ExclusiveTopo=NO GraceTime=0 Hidden=NO
   MaxNodes=UNLIMITED MaxTime=3-00:00:00 MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED MaxCPUsPerSocket=UNLIMITED
   Nodes=gn01
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO
   OverTimeLimit=NONE PreemptMode=OFF
   State=UP TotalCPUs=48 TotalNodes=1 SelectTypeParameters=NONE
   JobDefaults=(null)
=> DefMemPerNode=4096 MaxMemPerNode=UNLIMITED
   TRES=cpu=48,mem=1011435M,node=1,billing=49,gres/gpu=13,gres/gpu:3g.20gb=5,gres/gpu:40gb=3,gres/gpu:4g.20gb=5
   TRESBillingWeights=cpu=1.0,gres/gpu:3g.20gb=6.0,gres/gpu:4g.20gb=6.0,gres/gpu:40gb=6.0,mem=0.05g

In the example, we have highlighted with "=>" the most relevant default values for Mufasa users, i.e.:

DefaultTime
the default execution time assigned to a job run on the partition (in this example, 1 hour)
DefMemPerNode
the default amount of RAM assigned to a job run on the partition (in this example, 4096 MB, i.e. 4 GB)
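
These defaults only apply to jobs that do not make an explicit request. To override them, specify the execution time and the amount of RAM explicitly when launching the job, for instance (values, other options and command are placeholders):

srun --time=02:30:00 --mem=16G <other options> <your command>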