This page presents the features of SLURM that are most relevant to Mufasa's [[Roles|Job Users]]. Job Users can submit jobs for execution, cancel their own jobs, and see other users' jobs (but not intervene on them).

Users of Mufasa '''must use SLURM''' to run resource-heavy processes, i.e. computing jobs that require one or more of the following:
* GPUs
* multiple CPUs
* powerful CPUs
* a significant amount of RAM

In fact, only processes run via SLURM have access to all the resources of Mufasa. Processes run outside SLURM are executed by the [[System#Login server|login server]] virtual machine, which has minimal resources and no GPUs. Using SLURM is therefore the only way to execute resource-heavy jobs on Mufasa (this is a key difference between Mufasa 1.0 and Mufasa 2.0).

= SLURM in a nutshell =

Computation jobs on Mufasa need to be launched via [[System#The SLURM job scheduling system|SLURM]]. SLURM provides jobs with access to the [[#System resources subjected to limitations|physical resources]] of Mufasa, such as CPUs, GPUs and RAM. Thanks to SLURM, processing jobs share system resources, optimising their occupation and availability.

When a user runs a job, the job does not get executed immediately and is instead ''queued''. SLURM executes jobs according to their order in the queue: the top job in the queue gets executed as soon as the necessary resources are available, while jobs lower in the queue wait longer. The position of a job in the queue is determined by the '''[[#Job priority|priority]]''' assigned to it by SLURM, with higher-priority jobs closer to the top. As a general rule:

;: '''the greater the fraction of Mufasa's overall resources that a job asks for, the lower its priority will be'''.

The priority mechanism is used to encourage users to use Mufasa's resources in an effective and equitable manner. This page includes a [[#How_to_maximise_the_priority_of_your_jobs|set of guidelines explaining how to maximise the priority of your jobs]].

The '''time''' available to a job for its execution is controlled by SLURM. When a user requests execution of a job, they must specify the duration of the time slot that the job needs. The job must complete its execution before the end of the requested time slot, otherwise it gets killed by SLURM.

In Mufasa 2.0 access to system resources is managed via SLURM's '''[[#SLURM Quality of Service (QOS)|Quality of Service (QOS)]]''' mechanism (Mufasa 1.0 used [[#SLURM_partitions|partitions]] instead). To launch a processing job via SLURM, the user must always specify the chosen QOS. QOSes differ in the set of resources that they provide access to because each of them is designed to fit a given type of job.

= <span style="background:#FFFF00">SLURM Quality of Service (QOS)</span> =

In SLURM, different Quality of Services (QOSes) define different levels of access to the server's resources. SLURM jobs must always specify the QOS that they use: this choice determines what resources the job can access.

The list of Mufasa's QOSes and their main features can be inspected with command

<pre style="color: lightgrey; background: black;">
sacctmgr list qos format=name%-11,priority,maxwall,MaxJobsPerUser,maxtres%-80
</pre>

which provides an output similar to the following:

<pre style="color: lightgrey; background: black;">
Name          Priority     MaxWall MaxJobsPU MaxTRES
----------- ---------- ----------- --------- --------------------------------------------------------------------------------
normal               0
nogpu                4  3-00:00:00         1 cpu=16,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,gres/gpu=0,mem=128G
gpuheavy-20          1                     1 cpu=8,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=2,mem=128G
gpuheavy-40          1                     1 cpu=8,gres/gpu:3g.20gb=0,gres/gpu:40gb=1,gres/gpu:4g.20gb=0,mem=128G
gpulight             8    12:00:00         1 cpu=2,gres/gpu:3g.20gb=1,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,mem=64G
gpu                  2  1-00:00:00         1 cpu=8,gres/gpu:3g.20gb=1,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,mem=64G
gpuwide              2  1-00:00:00         2 cpu=8,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=1,mem=64G
build               32    02:00:00         1 cpu=2,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,gres/gpu=0,mem=16G
</pre>

The columns of this output are the following:

:; Name
:: name of the QOS

:; Priority
:: priority tier associated to the QOS: see [[#Job priority|Job priority]] for details

:; MaxJobsPU
:: maximum number of jobs from a single user that can be running with this QOS
:: (note that there are also [[#Research users and students users|other limitations]] on the number of running jobs by the same user)

:; MaxWall
:: maximum wall clock duration of the jobs using the QOS (after which they are killed by SLURM), in format ''[days-]hours:minutes:seconds''
:: For QOSes <code>gpuheavy-20</code> and <code>gpuheavy-40</code> this limit is not set here because it is determined by the [[#SLURM partitions|partition]]. Partitions also define the [[#Default values|default duration]].

:; MaxTRES
:: maximum amount of resources ("''Trackable RESources''") available to a job using the QOS, where
:: <code>'''cpu=''K'''''</code> means that the maximum number of CPUs (i.e., processor cores) is ''K''
::: --> if not specified, the job gets the default amount of CPUs specified by the [[#SLURM partitions|partition]]
:: <code>'''gres/''gpu:Type''=''K'''''</code> means that the maximum number of GPUs of class <code>''Type''</code> (see [[User Jobs#gres syntax|<code>gres</code> syntax]]) is ''K''
::: --> (for QOSes that allow access to GPUs) if not specified, the job cannot be launched
:: <code>'''mem=''K''G'''</code> means that the maximum amount of system RAM is ''K'' GBytes
::: --> if not specified, the job gets the default amount of RAM specified by the [[#SLURM partitions|partition]]

For instance, QOS <code>gpulight</code> provides jobs that use it with:
* priority tier 8
* a maximum of 1 running job per user
* a maximum of 12 hours of duration
* a maximum of 2 CPUs
* a maximum of 64 GB of RAM
* access to a maximum of 1 GPU of type ''gpu:3g.20gb''
* no access to GPUs of type ''gpu:40gb''
* no access to GPUs of type ''gpu:4g.20gb''

As seen in the example output from <code>sacctmgr list qos</code> above, each QOS has an associated '''priority tier'''. As a rule, the more powerful (i.e., richer in resources) a QOS is, the lower the priority of the jobs that use it. See [[#Job priority|Job priority]] to understand how priority affects the execution order of jobs in Mufasa 2.0.

The <code>normal</code> QOS is the one applied to jobs if no QOS is specified. <code>normal</code> provides no access at all to Mufasa's resources, so '''it is always necessary to specify a QOS''' (different from <code>normal</code>) when running a job via SLURM.
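
For example, a job that fits within the limits of the <code>gpulight</code> QOS could be launched with a command along these lines (a minimal sketch: <code>my_script.sh</code> is a hypothetical script, and the requested CPUs, RAM, GPU and duration must be adapted to what your job actually needs):

<pre style="color: lightgrey; background: black;">
srun --qos=gpulight --cpus-per-task=2 --mem=32G --gres=gpu:3g.20gb:1 --time=04:00:00 ./my_script.sh
</pre>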


== The <code>build</code> QOS ==

This QOS is specifically designed to be used by Mufasa users to '''build [[System#Containers|container images]]'''. Its associated priority tier is very high, to allow SLURM jobs launched using this QOS to be executed quickly. On the other hand, this QOS has very limited (but fully sufficient for building operations) resources, no access to GPUs and a short maximum duration for jobs: so it is not suitable for other computing activities.

See [[Singularity#Building Singularity images|Building Singularity images]] for directions about building Singularity container images.
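
As an illustration (a sketch only: <code>my_image.def</code> and <code>my_image.sif</code> are hypothetical file names, and the exact build options are described in the page linked above), an image build could be submitted as a SLURM job with:

<pre style="color: lightgrey; background: black;">
srun --qos=build --cpus-per-task=2 --mem=16G --time=01:00:00 singularity build my_image.sif my_image.def
</pre>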
 
== QOS restrictions ==
 
Some of the QOSes are not available to M.Sc. students. See [[#Research users and students users|Research users and students users]] to understand the differences between the two categories of users of Mufasa and to find out what access you have to resources.
 
== Resources available to a QOS ==

The maximum amount of resources that a QOS has access to (available to the running jobs using the QOS, collectively) can be inspected with command

<pre style="color: lightgrey; background: black;">
sacctmgr list qos format=name%-11,grpTRES%-34
</pre>

which provides an output similar to

<pre style="color: lightgrey; background: black;">
Name        GrpTRES
----------- ----------------------------------
normal
nogpu       cpu=48,mem=384G
gpuheavy-20 cpu=56,gres/gpu:4g.20gb=4,mem=896G
gpuheavy-40 cpu=56,gres/gpu:40gb=3,mem=896G
gpulight    cpu=8,gres/gpu:3g.20gb=4,mem=256G
gpu         cpu=24,gres/gpu:3g.20gb=3,mem=192G
gpuwide     cpu=40,gres/gpu:4g.20gb=5,mem=320G
build       cpu=4,mem=32G
</pre>


Note how the overall resources associated to the set of all QOSes greatly exceed the [[System#Hardware|available resources]]. With SLURM, multiple QOSes can be given access to the same physical resource (e.g., a CPU or a GPU), because SLURM guarantees that the overall request for resources from all running jobs does not exceed the overall availability of resources in the system. SLURM will only execute a job if all the resources requested by the job are not already in use at the time of request.
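
Before submitting a job, it can therefore be useful to check how busy the QOS you plan to use currently is. A simple way to do this (a sketch using standard SLURM commands; replace <code>gpu</code> with the QOS you are interested in) is to list the jobs currently running with that QOS:

<pre style="color: lightgrey; background: black;">
squeue --qos=gpu --state=RUNNING
</pre>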
 
= <span style="background:#FFFF00">Research users and students users</span> =


Users of Mufasa belong to two ''categories'', which differ in the access to system resources that they grant.

The categories are:

:: '''Research users''', i.e. academic personnel and Ph.D. students
:::* have access to all [[#SLURM Quality of Service (QOS)|QOSes]]
:::* their jobs have a higher ''base priority''
:::* they can have a higher number of running jobs
:::* they can have a higher number of queued jobs

:: '''Students users''', i.e. M.Sc. students
:::* do not have access to some [[#SLURM Quality of Service (QOS)|QOSes]]
:::* their jobs have a lower ''base priority''
:::* they can have a lower number of running jobs
:::* they can have a lower number of queued jobs
 
You can inspect the differences between researcher and student users with command


<pre style="color: lightgrey; background: black;">
<pre style="color: lightgrey; background: black;">
sinfo
sacctmgr list association format="user,priority,maxjobs,maxsubmit,qos%-60" | grep -E 'Priority|research|students'
</pre>
</pre>


Line 117: Line 149:


<pre style="color: lightgrey; background: black;">
<pre style="color: lightgrey; background: black;">
   Account   Priority MaxJobs MaxSubmit QOS
  research          4       2         4 build,gpu,gpuheavy-20,gpuheavy-40,gpulight,gpuwide,nogpu
  students          1       1         2 build,gpu,gpulight,gpuwide,nogpu
</pre>


This example output shows that the differences between research users and student users are the following:

* '''base priority''' is 4 for jobs run by ''research'' users, while it is 1 for jobs run by ''students'' users
* the '''number of running jobs''' is 2 for ''research'' users, while it is 1 for ''students'' users
* the '''number of queued jobs''' (i.e., of jobs submitted to SLURM for execution but not yet running) is 4 for ''research'' users, while it is 2 for ''students'' users
* ''research'' users can access all '''QOSes''' while ''students'' users cannot access QOSes <code>gpuheavy-20</code> and <code>gpuheavy-40</code>

You can inspect your own level of access to Mufasa's resources with

<pre style="color: lightgrey; background: black;">
sacctmgr list association format="user,priority,maxjobs,maxsubmit,qos%-60" | grep -E "User|<your_username>"
</pre>

which provides an output similar to the following:

<pre style="color: lightgrey; background: black;">
      User   Priority MaxJobs MaxSubmit QOS
    preali          4       2         4 build,gpu,gpuheavy-20,gpuheavy-40,gpulight,gpuwide,nogpu
</pre>
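
If you prefer not to type your username by hand, you can let the shell substitute it for you (assuming a standard login shell where <code>$USER</code> is set):

<pre style="color: lightgrey; background: black;">
sacctmgr list association format="user,priority,maxjobs,maxsubmit,qos%-60" | grep -E "User|$USER"
</pre>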

= <span style="background:#FFFF00">Job priority</span> =

Once the execution of a job has been requested, the job is not run immediately: it is instead ''queued'' by SLURM, together with all the other jobs awaiting execution. The job on top of the queue at any time is the first to be put into execution as soon as the resources it requires are available. The order of the jobs in the queue depends on the '''priority''' of the jobs, and determines when each job will reach execution.


SLURM is configured to maximise resource availability, i.e. to ensure the shortest possible wait time before job execution.

To achieve this goal, SLURM '''encourages users to avoid asking for resources or execution time that their job does not need'''. The more resources and the more time a job requests, the lower its priority in the execution queue will be.

This mechanism creates a '''virtuous cycle'''. By carefully choosing what to ask for, a user ensures that their job will be executed as soon as possible; at the same time, users limiting their requests to what their jobs really need leave more resources available to other jobs in the queue, which will then be executed sooner.

== Elements determining job priority ==

In Mufasa, the priority of a job is computed by SLURM according to the following elements:


: '''[[#Research users and students users|User category]]''' (i.e., researcher or M.Sc. student)
::: Used to provide higher priority to jobs ''run by research personnel''

: '''[[#SLURM Quality of Service (QOS)|QOS]]''' used by the job
::: Used to provide higher priority to jobs requesting ''access to less system resources''

: '''Number of CPUs''' requested by the job (also called "job size")
::: Used to provide higher priority to jobs ''requiring less CPUs''

: '''Job duration''', i.e. the execution time requested by the job
::: Used to provide higher priority to ''shorter jobs''

: '''Job Age''', i.e. the time that the job has been waiting in the queue
::: Used to provide higher priority to jobs which have been ''queued for a longer time''

: '''FairShare''', i.e. a factor computed by SLURM to balance use of the system by different users
::: Used to provide higher priority to jobs by users who ''use less resources'' (CPUs, GPUs, RAM, execution time)
::: FairShare has a "fading memory", i.e. the influence of past resource usage gets lower the farther it is from now
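
If you want to see how these elements combine for the jobs currently waiting in the queue, SLURM's <code>sprio</code> command (a standard SLURM tool, shown here with its long-listing option) reports the priority of each pending job together with its individual components:

<pre style="color: lightgrey; background: black;">
sprio -l
</pre>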


== How to maximise the priority of your jobs ==

Every time you run a SLURM job, follow these guidelines:

:{|class="wikitable"
|
; Choose the least powerful QOS compatible with the needs of your job
:: QOSes with access to less resources lead to higher priority

; Only request CPUs that your job will actually use
:: If you didn't explicitly design your code to exploit multiple CPUs, check whether it actually does: if it doesn't, do not ask for them

; Do not request more time than your job needs to complete
:: Make a worst-case estimate and only ask for that duration

; Test and debug your code using less powerful QOSes before running it on more powerful QOSes
:: Your test jobs will get a higher priority and your FairShare will improve

; Cancel jobs when you don't need them anymore
:: [[User_Jobs#Cancelling_a_job_with_scancel|Use scancel]] to delete your jobs when finished, or if they become useless (e.g., due to a bug)
|}
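
Putting these guidelines together, an execution script for a small GPU job could look like the following (an illustrative sketch only: the job name, the requested amounts and <code>my_script.sh</code> are hypothetical, and must be adapted to your actual job):

<pre style="color: lightgrey; background: black;">
#!/bin/bash
#SBATCH --job-name=small_test
#SBATCH --qos=gpulight
#SBATCH --cpus-per-task=2
#SBATCH --mem=32G
#SBATCH --gres=gpu:3g.20gb:1
#SBATCH --time=02:00:00

./my_script.sh
</pre>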


= System resources subjected to limitations =

In systems based on SLURM like Mufasa, '''TRES (Trackable RESources)''' are (from [https://slurm.schedmd.com/tres.html SLURM's documentation]) "''resources that can be tracked for usage or used to enforce limits against''".

TRES include CPUs, RAM and '''GRES'''. The last term stands for ''Generic RESources'' that a job may need for its execution. In Mufasa, the only <code>gres</code> resources are the GPUs.
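
To see which TRES types are tracked on the system, you can query SLURM's accounting database directly (a standard <code>sacctmgr</code> query; the exact list returned depends on Mufasa's configuration):

<pre style="color: lightgrey; background: black;">
sacctmgr show tres
</pre>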


== <span style="background:#ffff00"><code>gres</code> syntax</span> ==

To ask SLURM to assign GRES resources (i.e., GPUs) to a job, a special syntax must be used. Precisely, the name of each GPU resource takes the form

'''<code>Name:Type</code>'''

where <code>Name</code> is always <code>gpu</code>. Considering the [[System#CPUs and GPUs|GPU complement of Mufasa]], the available GPU resources are the following:

* '''<code>gpu:40gb</code>''' for GPUs with 40 Gbytes of RAM
* '''<code>gpu:4g.20gb</code>''' for GPUs with 20 Gbytes of RAM and 4 compute units
* '''<code>gpu:3g.20gb</code>''' for GPUs with 20 Gbytes of RAM and 3 compute units


So, for instance,

<code>gpu:3g.20gb</code>

identifies a resource corresponding to a GPU with 20 GB of RAM and 3 compute units.

When asking for a GRES resource (e.g., in an <code>srun</code> command or an <code>SBATCH</code> directive of an [[User Jobs#Using execution scripts to run jobs|execution script]]), the syntax required by SLURM is

'''<code>gpu:<Type>:<Quantity></code>'''

where <code>Quantity</code> is an integer value specifying how many items of the resource are requested. So, for instance, to ask for 2 GPUs of type <code>4g.20gb</code> the syntax is

<code>gpu:4g.20gb:2</code>
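
For example, a complete (hypothetical) request using this syntax inside an <code>srun</code> command could look like this, assuming a QOS that allows two ''gpu:4g.20gb'' GPUs and that <code>my_script.sh</code> is your job script:

<pre style="color: lightgrey; background: black;">
srun --qos=gpuheavy-20 --cpus-per-task=8 --mem=64G --gres=gpu:4g.20gb:2 --time=12:00:00 ./my_script.sh
</pre>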


SLURM's ''generic resources'' are defined in <code>/etc/slurm/gres.conf</code>. In order to make GPUs available to SLURM's <code>gres</code> management, Mufasa makes use of Nvidia's [https://developer.nvidia.com/nvidia-management-library-nvml NVML library]. For additional information see [https://slurm.schedmd.com/gres.html SLURM's documentation].

== Looking for unused GPUs ==

GPUs are usually the most limited resource on Mufasa. So, if your job requires a GPU, the best way to get it executed quickly is to use a QOS associated with a type of GPU of which at least one unit is currently unused. This command

<pre style="color: lightgrey; background: black;">
sinfo -O Gres:100
</pre>

provides a summary of all the Gres (i.e., GPU) resources possessed by Mufasa, with an output similar to the following:

<pre style="color: lightgrey; background: black;">
GRES
gpu:40gb:3,gpu:4g.20gb:5,gpu:3g.20gb:5
</pre>

To know which of the GPUs are currently in use, use command
<pre style="color: lightgrey; background: black;">
sinfo -O GresUsed:100
</pre>
which provides an output similar to this:
<pre style="color: lightgrey; background: black;">
GRES_USED
gpu:40gb:2(IDX:0-1),gpu:4g.20gb:2(IDX:5,8),gpu:3g.20gb:3(IDX:3-4,6)
</pre>

By comparing the two lists (GRES and GRES_USED) you can easily spot unused GPUs. In this example, 1 ''gpu:40gb'' GPU, 3 ''gpu:4g.20gb'' GPUs and 2 ''gpu:3g.20gb'' GPUs are currently not in use.
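
The two pieces of information can also be obtained side by side with a single command (the same standard <code>sinfo</code> output fields, combined):

<pre style="color: lightgrey; background: black;">
sinfo -O Gres:100,GresUsed:100
</pre>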


= SLURM partitions =

Partitions are another mechanism provided by SLURM to create different levels of access to system resources. Since in Mufasa 2.0 access to resources is controlled via [[#SLURM Quality of Service (QOS)|QOSes]], partitions are not very relevant.

Note, however, that the default values for some features of SLURM jobs (e.g., duration) are [[#Default values|defined by the partition]].

In Mufasa 2.0, there is a single SLURM partition, called <code>jobs</code>, and all jobs run on it. The partition status of Mufasa can be inspected with

<pre style="color: lightgrey; background: black;">
sinfo -o "%10P %5a %9T %11L %10l"
</pre>

which provides an output similar to the following:

<pre style="color: lightgrey; background: black;">
PARTITION  AVAIL STATE    DEFAULTTIME TIMELIMIT
jobs*      up    idle     1:00:00     3-00:00:00
</pre>

The columns in the standard output of <code>sinfo</code> shown above correspond to the following information:

:; PARTITION
:: name of the partition; the asterisk indicates that it's the default one

:; AVAIL
:: state/availability of the partition: see [[#Partition availability|below]]

:; STATE
:: state of the node (using [https://slurm.schedmd.com/sinfo.html#SECTION_NODE-STATE-CODES these codes]); typical values are '''<code>mixed</code>''' - meaning that some of the resources of the node are busy executing jobs while others are idle, and '''<code>allocated</code>''' - meaning that all of the resources of the node are busy

:; DEFAULTTIME
:: default runtime of a job, in format ''[days-]hours:minutes:seconds''

:; TIMELIMIT
:: maximum runtime of a job allowed by the partition, in format ''[days-]hours:minutes:seconds''

The asterisk at the end of the partition name indicates the default partition, i.e. the one on which jobs that do not ask for a specific partition are run.

== Partition availability ==

The most important information that <code>sinfo</code> provides is the '''availability''' (also called ''state'') of partitions. This is shown in column "AVAIL". Possible partition states are:

:'''<code>up</code>''' = the partition is available
:: It's possible to launch jobs on the partition
:: Currently running jobs will be completed
:: Currently queued jobs will be executed as soon as resources allow

:'''<code>drain</code>''' = the partition is in the process of becoming unavailable (i.e., to go in the <code>down</code> state)
:: It's not possible to launch jobs on the partition
:: Currently running jobs will be completed
:: Queued jobs will be executed when the partition becomes available again (i.e. goes back to the <code>up</code> state)

:'''<code>down</code>''' = the partition is unavailable
:: It's not possible to launch jobs on the partition
:: There are no running jobs
:: Queued jobs will be executed when the partition becomes available again (i.e. goes back to the <code>up</code> state)

When a partition goes from <code>up</code> to <code>drain</code> no harm is done to running jobs. When a partition passes from any other state to <code>down</code>, running jobs (if they exist) get killed. A partition in state <code>drain</code> or <code>down</code> requires intervention by a [[Roles|Job Administrator]] to be restored to <code>up</code>.

== <span style="background:#FFFF00">Default values</span> ==

The features of SLURM partitions can be inspected with

<pre style="color: lightgrey; background: black;">
scontrol show partition
</pre>


which provides an output similar to this:

<pre style="color: lightgrey; background: black;">
PartitionName=jobs
   AllowGroups=ALL AllowAccounts=ALL AllowQos=nogpu,gpulight,gpu,gpuwide,gpuheavy-20,gpuheavy-40
   AllocNodes=ALL Default=YES QoS=N/A
-> DefaultTime=01:00:00 DisableRootJobs=NO ExclusiveUser=NO ExclusiveTopo=NO GraceTime=0 Hidden=NO
   MaxNodes=UNLIMITED MaxTime=3-00:00:00 MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED MaxCPUsPerSocket=UNLIMITED
   Nodes=gn01
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO
   OverTimeLimit=NONE PreemptMode=OFF
   State=UP TotalCPUs=48 TotalNodes=1 SelectTypeParameters=NONE
   JobDefaults=(null)
-> DefMemPerNode=4096 MaxMemPerNode=UNLIMITED
   TRES=cpu=48,mem=1011435M,node=1,billing=49,gres/gpu=13,gres/gpu:3g.20gb=5,gres/gpu:40gb=3,gres/gpu:4g.20gb=5
   TRESBillingWeights=cpu=1.0,gres/gpu:3g.20gb=6.0,gres/gpu:4g.20gb=6.0,gres/gpu:40gb=6.0,mem=0.05g
</pre>
|align="center"| debug code
|align="center"| 1
|align="center"| 1 x 20 GB
|align="center"| 4 x 20 GB
|align="center"| 2
|align="center"| 64
|align="center"| 6
|align="center"| <span style="background:#00FF00">to be specified</span>
|align="center"| researchers, students
|-
!align="center"| nogpu
|align="center"| tasks not requiring GPUs (in particular: not AI)
|align="center"| 1
|align="center"| -
|align="center"| none
|align="center"| 16
|align="center"| 128
|align="center"| 72
|align="center"| <span style="background:#00FF00">to be specified</span>
|align="center"| researchers, students
|-
!align="center"| gpu
|align="center"| AI: train an already debugged model
|align="center"| 1
|align="center"| 1 x 20 GB
|align="center"| 3 x 20 GB
|align="center"| 8
|align="center"| 64
|align="center"| 24
|align="center"| <span style="background:#00FF00">to be specified</span>
|align="center"| researchers, students
|-
!align="center"| gpuwide
|align="center"| AI: search for optimal hyperparameter values
|align="center"| 2
|align="center"| 1 x 20 GB
|align="center"| 5 x 20 GB
|align="center"| 8
|align="center"| 64
|align="center"| 24
|align="center"| <span style="background:#00FF00">to be specified</span>
|align="center"| researchers, students
|-
!align="center"| gpuheavy
|align="center"| AI: train an already optimised model
|align="center"| 1
|align="center"| 1 x 20 GB or 2 x 20 GB or 1 x 40 GB
|align="center"| 3 x 40 GB + 4 x 20 GB
|align="center"| 8
|align="center"| 128
|align="center"| 72
|align="center"| <span style="background:#00FF00">to be specified</span>
|align="center"| researchers
|}
 
Overall resources associated to the set of all partitions exceed overall available resources, as multiple partitions can be given access to the same resource (e.g., a CPU or a GPU). SLURM will only execute a job if all the resources requested by the job are not already in use at the time of request.
 
== Partition availability ==
 
An important information that ''sinfo'' provides (column "AVAIL") is the ''availability'' (also called ''state'') of partitions. Possible partition states are:
 
; up = the partition is available
: Currently running jobs will be completed
: Currently queued jobs will be executed as soon as resources allow


; drain = the partition is in the process of becoming unavailable (i.e., to go in the ''down'' state)
In the example, we have highlighted with "->" the most relevant for Mufasa users, i.e. two '''default values''' which are applied to jobs that do not make explicit requests. Precisely:

;<code>DefaultTime</code>
:: the default execution time assigned to a job run on the partition (e.g., 1 hour)

;<code>DefMemPerNode</code>
:: the default amount of RAM assigned to a job run on the partition (e.g., 4GB)
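
These defaults only apply when a job does not state its own requests. To avoid relying on them, specify the execution time and the amount of RAM explicitly when launching a job, for example (an illustrative sketch; the values and <code>my_script.sh</code> are hypothetical and must be adapted to your job):

<pre style="color: lightgrey; background: black;">
srun --qos=nogpu --cpus-per-task=4 --mem=64G --time=08:00:00 ./my_script.sh
</pre>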

Latest revision as of 09:51, 27 November 2025

This page presents the features of SLURM that are most relevant to Mufasa's Job Users. Job Users can submit jobs for execution, cancel their own jobs, and see other users' jobs (but not intervene on them).

Users of Mufasa must use SLURM to run resource-heavy processes, i.e. computing jobs that require one or more of the following:

  • GPUs
  • multiple CPUs
  • powerful CPUs
  • a significant amount of RAM

In fact, only processes run via SLURM have access to all the resources of Mufasa. Processes run outside SLURM are executed by the login server virtual machine, which has minimal resources and no GPUs. Using SLURM is therefore the only way to execute resource-heavy jobs on Mufasa (this is a key difference between Mufasa 1.0 and Mufasa 2.0).

SLURM in a nutshell

Computation jobs on Mufasa needs to be launched via SLURM. SLURM provides jobs with access to the physical resources of Mufasa, such as CPUs, GPUs and RAM. Thanks to SLURM, processing jobs share system resources, optimising their occupation and availability.

When a user runs a job, the job does not get executed immediately and is instead queued. SLURM executes jobs according to their order in the queue: the top job in the queue gets executed as soon as the necessary resources are available, while jobs lower in the queue wait longer. The position of a job in the queue is due to the priority assigned to it by SLURM, with higher-priority jobs closer to the top. As a general rule:

the greater the fraction of Mufasa's overall resources that a job asks for, the lower its priority will be.

The priority mechanism is used to encourage users to use Mufasa's resources in an effective and equitable manner. This page includes a chart explaining how to maximise the priority of your jobs.

The time available to a job for its execution is controlled by SLURM. When a user requests execution of a job, they must specify the duration of the time slot that the job needs. The job must complete its execution before the end of the requested time slot, otherwise it gets killed by SLURM.

In Mufasa 2.0 access to system resources is managed via SLURM's Quality of Service (QOS) mechanism (Mufasa 1.0 used partitions instead). To launch a processing job via SLURM, the user must always specify the chosen QOS. QOSes differ in the set of resources that they provide access to because each of them is designed to fit a given type of job.

SLURM Quality of Service (QOS)

In SLURM, different Quality of Services (QOSes) define different levels of access to the server's resources. SLURM jobs must always specify the QOS that they use: this choice determines what resources the job can access.

The list of Mufasa's QOSes and their main features can be inspected with command

sacctmgr list qos format=name%-11,priority,maxwall,MaxJobsPerUser,maxtres%-80

which provides an output similar to the following:

Name          Priority     MaxWall MaxJobsPU MaxTRES                                                                          
----------- ---------- ----------- --------- -------------------------------------------------------------------------------- 
normal               0                                                                                                        
nogpu                4  3-00:00:00         1 cpu=16,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,gres/gpu=0,mem=128G 
gpuheavy-20          1                     1 cpu=8,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=2,mem=128G             
gpuheavy-40          1                     1 cpu=8,gres/gpu:3g.20gb=0,gres/gpu:40gb=1,gres/gpu:4g.20gb=0,mem=128G             
gpulight             8    12:00:00         1 cpu=2,gres/gpu:3g.20gb=1,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,mem=64G              
gpu                  2  1-00:00:00         1 cpu=8,gres/gpu:3g.20gb=1,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,mem=64G              
gpuwide              2  1-00:00:00         2 cpu=8,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=1,mem=64G              
build               32    02:00:00         1 cpu=2,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,gres/gpu=0,mem=16G

The columns of this output are the following:

Name
name of the QOS
Priority
priority tier associated to the QOS: see Job priority for details
MaxJobsPU
maximum number of jobs from a single user can be running with this QOS
(note that there are also other limitations on the number of running jobs by the same user)
MaxWall
maximum wall clock duration of the jobs using the QOS (after which they are killed by SLURM), in format [days-]hours:minutes:seconds
For QOSes gpuheavy-20 and gpuheavy-20 these are not set because they are determined by the partition. Partitions also define the default duration.
MaxTRES
maximum amount of resources ("Trackable RESources") available to a job using the QOS, where
cpu=K means that the maximum number of CPUs (i.e., processor cores) is K
--> if not specified, the job gets the default amount of CPUs specified by the partition
gres/gpu:Type=K means that the maximum number of GPUs of class Type (see gres syntax) is K
--> (for QOSes that allow access to GPUs) if not specified, the job cannot be launched
mem=KG means that the maximum amount of system RAM is K GBytes
--> if not specified, the job gets the default amount of RAM specified by the partition

For instance, QOS gpulight provides jobs that use it with:

  • priority tier 8
  • a maximum of 1 running job per user
  • a maximum of 12 hours of duration
  • a maximum of 2 CPUs
  • a maximum of 64 GB of RAM
  • access to a maximum of 1 GPU of type gpu:3g.20gb
  • no access to GPUs of type gpu:40gb=0
  • no access to GPUs of type gpu:4g.20gb

As seen in the example output from sacctmgr list qos above, each QOS has an associated priority tier. As a rule, the more powerful (i.e., rich with resources) a QOS is, the lower the priority of the jobs that use such QOS. See Priority to understand how priority affects the execution order of jobs in Mufasa 2.0.

The normal QOS is the one applied to jobs if no QOS is specified. normal provides no access at all to Mufasa's resources, so it is always necessary to specify a QOS (different from normal) when running a job via SLURM.

The build QOS

This QOS is specifically designed to be used by Mufasa users to build container images. Its associated priority tier is very high, to allow SLURM jobs launched using this QOS to be executed quickly. On the other side, this QOS has very limited (but fully sufficient for building operations) resources, no access to GPUs and a short maximum duration for jobs: so it is not suitable for other computing activities.

See Building Singularity images for directions about building Singularity container images.

QOS restrictions

Some of the QOSes are not available to M.Sc. students. See Research users and students users to understand the differences between the two categories of users of Mufasa and to find out what access you have to resources.

Resources available to a QOS

The maximum amount of resources that a QOS has access to (available to the running jobs using the QOS, collectively) can be inspected with command

sacctmgr list qos format=name%-11,grpTRES%-34

which provides an output similar to

Name        GrpTRES                            
----------- ---------------------------------- 
normal                                         
nogpu       cpu=48,mem=384G                    
gpuheavy-20 cpu=56,gres/gpu:4g.20gb=4,mem=896G 
gpuheavy-40 cpu=56,gres/gpu:40gb=3,mem=896G    
gpulight    cpu=8,gres/gpu:3g.20gb=4,mem=256G  
gpu         cpu=24,gres/gpu:3g.20gb=3,mem=192G 
gpuwide     cpu=40,gres/gpu:4g.20gb=5,mem=320G 
build       cpu=4,mem=32G

Note how overall resources associated to the set of all QOS greatly exceeds available resources. With SLURM, multiple QOS can be given access to the same physical resource (e.g., a CPU or a GPU), because SLURM guarantees that the overall request for resources from all running jobs does not exceed the overall availability of resources in the system. SLURM will only execute a job if all the resources requested by the job are not already in use at the time of request.

Research users and students users

Users of Mufasa belong to two categories, which provide the users belonging to them with different access to system resources.

The categories are:

Research users, i.e. academic personnel and Ph.D. students
* have access to all QOSes
* their jobs have a higher base priority
* they can have a higher number of running jobs
* they can have a higher number of queued jobs
Students users, i.e. M.Sc. students
* do not have access to some QOSes
* their jobs have a lower base priority
* they can have a lower number of running jobs
* they can have a lower number of queued jobs

You can inspect the differences between researcher and student users with command

sacctmgr list association format="user,priority,maxjobs,maxsubmit,qos%-60" | grep -E 'Priority|research|students'

which provides an output similar to the following:

   Account   Priority MaxJobs MaxSubmit QOS
  research          4       2         4 build,gpu,gpuheavy-20,gpuheavy-40,gpulight,gpuwide,nogpu
  students          1       1         2 build,gpu,gpulight,gpuwide,nogpu

This example output shows that the differences between research users and student users are the following:

  • base priority is 4 for jobs run by research users, while it is 1 for jobs run by students users
  • the number of running jobs is 2 for research users, while it is 1 for jobs run by students users
  • the number of queued jobs (i.e., of jobs submitted to SLURM for execution but not yet running) is 4 for research users, while it is 1 for student users
  • research users can access all QOSes while student users cannot access QOSes gpuheavy-20 and gpuheavy-40

You can inspect your own level of access to Mufasa's resources with

sacctmgr list association format="user,priority,maxjobs,maxsubmit,qos%-60" | grep -E "User|<your_username>"

which provides an output similar to the following:

      User   Priority MaxJobs MaxSubmit QOS                                                          
    preali          4       2         4 build,gpu,gpuheavy-20,gpuheavy-40,gpulight,gpuwide,nogpu

Job priority

Once the execution of a job has been requested, the job is not run immediately: it is instead queued by SLURM, together with all the other jobs awaiting execution. The job at the top of the queue at any given time is the first to be put into execution as soon as the resources it requires become available. The order of the jobs in the queue depends on their priority.

SLURM is configured to maximise resource availability, i.e. to ensure the shortest possible wait time before job execution.

To achieve this goal, SLURM encourages users to avoid asking for resources or execution time that their jobs do not need: the more resources and the more time a job requests, the lower its priority in the execution queue will be.

This mechanism creates a virtuous cycle. By carefully choosing what to ask for, a user ensures that their job is executed as soon as possible; at the same time, users who limit their requests to what their jobs really need leave more resources available to other jobs in the queue, which will then be executed sooner.

Elements determining job priority

In Mufasa, the priority of a job is computed by SLURM according to the following elements (a way of inspecting them is sketched after the list):

User category (i.e., research user or M.Sc. student)
Used to give higher priority to jobs run by research personnel
QOS used by the job
Used to give higher priority to jobs requesting access to fewer system resources
Number of CPUs requested by the job (also called "job size")
Used to give higher priority to jobs requiring fewer CPUs
Job duration, i.e. the execution time requested by the job
Used to give higher priority to shorter jobs
Job Age, i.e. the time that the job has been waiting in the queue
Used to give higher priority to jobs that have been queued for a longer time
FairShare, i.e. a factor computed by SLURM to balance the use of the system by different users
Used to give higher priority to jobs by users who have used fewer resources (CPUs, GPUs, RAM, execution time)
FairShare has a "fading memory", i.e. the influence of past resource usage decreases the farther it is in the past
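
If you want to see how these factors are actually combined for the jobs currently waiting in the queue, SLURM's sprio command breaks the priority of each pending job down into its components (the exact columns depend on Mufasa's configuration, so treat this as a sketch):

sprio -l

Each row of the output lists, for one pending job, the contributions of age, fair-share, job size, QOS and so on to its total priority.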

How to maximise the priority of your jobs

Every time you run a SLURM job, follow these guidelines (a submission that respects them is sketched after the list):

Choose the least powerful QOS compatible with the needs of your job
QOSes with access to fewer resources lead to higher priority
Only request CPUs that your job will actually use
Unless you explicitly designed your code to exploit multiple CPUs, check whether it actually does; if it doesn't, do not ask for them
Do not request more time than your job needs to complete
Make a worst-case estimate and only ask for that duration
Test and debug your code using less powerful QOSes before running it on more powerful ones
Your test jobs will get a higher priority and your FairShare will improve
Cancel jobs when you don't need them anymore
Use scancel to delete your jobs when they are finished or if they become useless (e.g., due to a bug)
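
As an illustration of these guidelines, here is a sketch of an execution script for a hypothetical single-GPU job (the QOS, resource amounts, duration and script name are examples only, to be adapted to your actual needs):

#!/bin/bash
#SBATCH --job-name=example_job
#SBATCH --qos=gpulight
#SBATCH --gres=gpu:3g.20gb:1
#SBATCH --cpus-per-task=2
#SBATCH --mem=16G
#SBATCH --time=04:00:00

# run the actual computation (hypothetical command)
python my_experiment.py

Such a script, submitted with sbatch, requests a single 3g.20gb GPU, few CPUs, a modest amount of RAM and a worst-case duration of 4 hours, which keeps its priority as high as possible while still covering the job's needs.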

System resources subjected to limitations

In SLURM-based systems like Mufasa, TRES (Trackable RESources) are (from SLURM's documentation) "resources that can be tracked for usage or used to enforce limits against".

TRES include CPUs, RAM and GRES. The last term stands for Generic RESources, i.e. additional resources that a job may need for its execution. In Mufasa, the only GRES are the GPUs.

gres syntax

To ask SLURM to assign GRES resources (i.e., GPUs) to a job, a special syntax must be used. Precisely, the name of each GPU resource takes the form

Name:Type

where Name is always gpu. Considering the GPU complement of Mufasa, the available resource names are:

  • gpu:40gb for GPUs with 40 Gbytes of RAM
  • gpu:4g.20gb for GPUs with 20 Gbytes of RAM and 4 compute units
  • gpu:3g.20gb for GPUs with 20 Gbytes of RAM and 3 compute units

So, for instance,

gpu:3g.20gb

identifies a resource corresponding to a GPU with 20 GB of RAM and 3 compute units.

When asking for a GRES resource (e.g., in an srun command or an SBATCH directive of an execution script), the syntax required by SLURM is

gpu:<Type>:<Quantity>

where Quantity is an integer value specifying how many items of the resource are requested. So, for instance, to ask for 2 GPUs of type 4g.20gb the syntax is

gpu:4g.20gb:2
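
For instance, a hypothetical interactive job that needs one 3g.20gb GPU could be requested with a command like the following (QOS name and resource amounts are illustrative only):

srun --qos=gpu --gres=gpu:3g.20gb:1 --cpus-per-task=4 --mem=32G --time=02:00:00 --pty bash

The same GPU request can appear in an execution script as an SBATCH directive, e.g. #SBATCH --gres=gpu:3g.20gb:1.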

SLURM's generic resources are defined in /etc/slurm/gres.conf. In order to make GPUs available to SLURM's gres management, Mufasa makes use of Nvidia's NVML library. For additional information see SLURM's documentation.

Looking for unused GPUs

GPUs are usually the most limited resource on Mufasa. So, if your job requires a GPU, the best way to get it executed quickly is to use a QOS associated with a type of GPU of which one or more units are not currently in use. The command

sinfo -O Gres:100

provides a summary of all the GRES (i.e., GPU) resources possessed by Mufasa, producing an output similar to the following:

GRES                                                                                                
gpu:40gb:3,gpu:4g.20gb:5,gpu:3g.20gb:5

To know which of the GPUs are currently in use, use command

sinfo -O GresUsed:100

which provides an output similar to this:

GRES_USED
gpu:40gb:2(IDX:0-1),gpu:4g.20gb:2(IDX:5,8),gpu:3g.20gb:3(IDX:3-4,6)

By comparing the two lists (GRES and GRES_USED) you can easily spot unused GPUs.
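
If you prefer to see both pieces of information side by side, the two fields can be combined in a single sinfo invocation:

sinfo -O Gres:100,GresUsed:100

With the example outputs above, the comparison shows that one 40gb GPU, three 4g.20gb GPUs and two 3g.20gb GPUs are currently free.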

SLURM partitions

Partitions are another mechanism provided by SLURM to create different levels of access to system resources. Since in Mufasa 2.0 access to resources is controlled via QOSes, partitions are not very relevant.

Note, however, that the default values for some features of SLURM jobs (e.g., duration) are defined by the partition.

In Mufasa 2.0, there is a single SLURM partition, called jobs, and all jobs run on it. The partition status of Mufasa can be inspected with

sinfo -o "%10P %5a %9T %11L %10l"

which provides an output similar to the following:

PARTITION  AVAIL STATE     DEFAULTTIME TIMELIMIT 
jobs*      up    idle      1:00:00     3-00:00:00

The columns in the standard output of sinfo shown above correspond to the following information:

PARTITION
name of the partition; the asterisk indicates that it is the default one
AVAIL
state/availability of the partition: see below
STATE
state of the node (using these codes); typical values are idle - meaning that all of the resources of the node are free, mixed - meaning that some of the resources of the node are busy executing jobs while others are idle, and allocated - meaning that all of the resources of the node are busy
DEFAULTTIME
default runtime of a job, in format [days-]hours:minutes:seconds
TIMELIMIT
maximum runtime of a job allowed by the partition, in format [days-]hours:minutes:seconds

The asterisk at the end of the partition name indicates the default partition, i.e. the one on which jobs which do not ask for a specific partition are run.
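
Since jobs is the default (and only) partition, you normally do not need to select it explicitly; should you want to do so anyway, the partition can be specified with the -p/--partition option, e.g. (a sketch)

srun -p jobs --qos=nogpu --pty bash

or with an #SBATCH --partition=jobs directive in an execution script.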

Partition availability

The most important information that sinfo provides is the availability (also called state) of partitions. This is shown in column "AVAIL". Possible partition states are:

up = the partition is available
It's possible to launch jobs on the partition
Currently running jobs will be completed
Currently queued jobs will be executed as soon as resources allow
drain = the partition is in the process of becoming unavailable (i.e., of going into the down state)
It's not possible to launch jobs on the partition
Currently running jobs will be completed
Queued jobs will be executed when the partition becomes available again (i.e. goes back to the up state)
down = the partition is unavailable
It's not possible to launch jobs on the partition
There are no running jobs
Queued jobs will be executed when the partition becomes available again (i.e. goes back to the up state)

When a partition goes from up to drain, no harm is done to running jobs. When a partition passes from any other state to down, running jobs (if any exist) get killed. A partition in state drain or down requires intervention by a Job Administrator to be restored to up.

Default values

The features of SLURM partitions can be inspected with

scontrol show partition

which provides an output similar to this:

PartitionName=jobs
   AllowGroups=ALL AllowAccounts=ALL AllowQos=nogpu,gpulight,gpu,gpuwide,gpuheavy-20,gpuheavy-40
   AllocNodes=ALL Default=YES QoS=N/A
-> DefaultTime=01:00:00 DisableRootJobs=NO ExclusiveUser=NO ExclusiveTopo=NO GraceTime=0 Hidden=NO
   MaxNodes=UNLIMITED MaxTime=3-00:00:00 MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED MaxCPUsPerSocket=UNLIMITED
   Nodes=gn01
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO
   OverTimeLimit=NONE PreemptMode=OFF
   State=UP TotalCPUs=48 TotalNodes=1 SelectTypeParameters=NONE
   JobDefaults=(null)
-> DefMemPerNode=4096 MaxMemPerNode=UNLIMITED
   TRES=cpu=48,mem=1011435M,node=1,billing=49,gres/gpu=13,gres/gpu:3g.20gb=5,gres/gpu:40gb=3,gres/gpu:4g.20gb=5
   TRESBillingWeights=cpu=1.0,gres/gpu:3g.20gb=6.0,gres/gpu:4g.20gb=6.0,gres/gpu:40gb=6.0,mem=0.05g

In the example, we have highlighted with "->" the lines most relevant for Mufasa users, i.e. two default values that are applied to jobs which do not make explicit requests. Precisely:

DefaultTime
the default execution time assigned to a job run on the partition (e.g., 1 hour)
DefMemPerNode
the default amount of RAM assigned to a job run on the partition (e.g., 4GB)
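
Jobs that need more than these defaults must request resources explicitly. As a sketch (duration and amount are purely illustrative), an execution script could override both defaults with directives such as

#SBATCH --time=06:00:00
#SBATCH --mem=64G

while an interactive job would pass the corresponding srun options --time=06:00:00 --mem=64G.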