SLURM
This page presents the features of SLURM that are most relevant to Mufasa's Job Users. Job Users can submit jobs for execution, cancel their own jobs, and see other users' jobs (but not intervene on them).
Users of Mufasa must use SLURM to run resource-heavy processes, i.e. computing jobs that require any of the following:
- GPUs
- multiple CPUs
- a significant amount of RAM.
In fact, only processes run via SLURM have access to all the resources of Mufasa. Processes run outside SLURM are executed by the login server virtual machine, which has minimal resources and no GPUs. Using SLURM is therefore the only way to execute resource-heavy jobs on Mufasa (this is a key difference between Mufasa 1.0 and Mufasa 2.0).
SLURM in a nutshell
Computation jobs on Mufasa need to be launched via SLURM. SLURM provides jobs with access to the physical resources of Mufasa, such as CPUs, GPUs and RAM. Thanks to SLURM, processing jobs share system resources, optimising their utilisation and availability.
When a user runs a job, the job does not get executed immediately and is instead queued. SLURM executes jobs according to their order in the queue: the top job in the queue gets executed as soon as the necessary resources are available, while jobs lower in the queue wait longer. The position of a job in the queue is determined by the priority assigned to it by SLURM, with higher-priority jobs closer to the top. As a general rule,
- The greater the fraction of Mufasa's overall resources that a job asks for, the lower its priority.
The time available to a job for its execution is controlled by SLURM. When a user requests execution of a job, they must specify the duration of the time slot that the job needs. The job must complete its execution before the end of the requested time slot, otherwise it gets killed by SLURM.
In Mufasa 2.0 (this was different in Mufasa 1.0) access to system resources is managed via SLURM's Quality of Service (QOS) mechanism. To launch a processing job via SLURM, the user must always specify the chosen QOS. QOSes differ in the set of resources that they provide access to because each of them is designed to fit a given type of job.
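For instance, a minimal job submission has the following shape (a sketch only: my_script.sh is a placeholder for your own executable, and the available QOSes are described in the next section):

srun --qos=gpulight --time=02:00:00 ./my_script.sh

Here --qos selects the Quality of Service and --time requests a 2-hour time slot.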
SLURM Quality of Service (QOS)
In SLURM, different Quality of Services (QOSes) define different levels of access to the server's resources. SLURM jobs must always specify the QOS that they use: this choice determines what resources the job can access.
The list of Mufasa's QOSes and their main features can be inspected with command
sacctmgr list qos format=name%-11,priority,maxwall,MaxJobsPerUser,maxtres%-80
which provides an output similar to the following:
Name          Priority MaxJobsPU     MaxWall MaxTRES
----------- ---------- --------- ----------- --------------------------------------------------------------------------------
normal               0
nogpu                4         1  3-00:00:00 cpu=16,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,gres/gpu=0,mem=128G
gpuheavy-20          1         1             cpu=8,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=2,mem=128G
gpuheavy-40          1         1             cpu=8,gres/gpu:3g.20gb=0,gres/gpu:40gb=1,gres/gpu:4g.20gb=0,mem=128G
gpulight             8         1    12:00:00 cpu=2,gres/gpu:3g.20gb=1,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,mem=64G
gpu                  2         1  1-00:00:00 cpu=8,gres/gpu:3g.20gb=1,gres/gpu:40gb=0,gres/gpu:4g.20gb=0,mem=64G
gpuwide              2         2  1-00:00:00 cpu=8,gres/gpu:3g.20gb=0,gres/gpu:40gb=0,gres/gpu:4g.20gb=1,mem=64G
The columns of this output are the following:
- Name
- name of the QOS
- Priority
- priority tier associated with the QOS: see Job priority for details
- MaxJobsPU
- maximum number of jobs that a single user can have running with this QOS
- (note that there are also other limitations on the number of running jobs by the same user)
- MaxWall
- maximum wall clock duration of the jobs using the QOS (after which they are killed by SLURM), in format [days-]hours:minutes:seconds
- For QOSes gpuheavy-20 and gpuheavy-40 this limit is not set, because it is determined by the partition
- MaxTRES
- maximum amount of resources ("Trackable RESources") available to a job using the QOS, where
- cpu=K means that the maximum number of processor cores is K
- gres/gpu:Type=K means that the maximum number of GPUs of class Type (see gres syntax) is K
- mem=KG means that the maximum amount of system RAM is K GBytes
For instance, QOS gpulight provides jobs that use it with:
- priority tier 8
- a maximum of 1 running job per user
- a maximum of 12 hours of duration
- a maximum of 2 CPUs
- a maximum of 64 GB of RAM
- access to a maximum of 1 GPU of type gpu:3g.20gb
- no access to GPUs of type gpu:40gb
- no access to GPUs of type gpu:4g.20gb
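Putting these limits together, a job that uses gpulight to its full extent could be requested as follows (a sketch: my_script.sh is a placeholder, and the --gres syntax is explained in gres syntax below):

srun --qos=gpulight --time=12:00:00 --cpus-per-task=2 --mem=64G --gres=gpu:3g.20gb:1 ./my_script.sh

Requesting less than these maxima (e.g., a shorter --time) is always possible, and improves the job's priority.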
The normal QOS is the one applied to jobs if no QOS is specified. normal provides no access at all to Mufasa's resources, so it is always necessary to specify a QOS (different from normal) when running a job via SLURM.
As seen in the example output from sacctmgr list qos above, each QOS has an associated priority tier. As a rule, the more powerful (i.e., rich in resources) a QOS is, the lower the priority of the jobs that use it. See Priority to understand how priority affects the execution order of jobs in Mufasa 2.0.
- Important note. Some of the QOSes may be available only to a subset of users. In Mufasa, such limitations are associated with the category that a user belongs to.
Overall resources available to a QOS
The maximum amount of resources that a QOS has access to (available, collectively, to all running jobs that use the QOS) can be inspected with command
sacctmgr list qos format=name%-11,grpTRES%-34
which provides an output similar to
Name        GrpTRES
----------- ----------------------------------
normal
nogpu       cpu=48,mem=384G
gpuheavy-20 cpu=56,gres/gpu:4g.20gb=4,mem=896G
gpuheavy-40 cpu=56,gres/gpu:40gb=3,mem=896G
gpulight    cpu=8,gres/gpu:3g.20gb=4,mem=256G
gpu         cpu=24,gres/gpu:3g.20gb=3,mem=192G
gpuwide     cpu=40,gres/gpu:4g.20gb=5,mem=320G
Note how the overall resources associated with the set of all QOSes greatly exceed the available resources. With SLURM, multiple QOSes can be given access to the same physical resource (e.g., a CPU or a GPU), because SLURM guarantees that the overall request for resources from all running jobs never exceeds the overall availability of resources in the system. SLURM will only execute a job if all the resources it requests are not already in use at the time of the request.
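To see how the resources are actually being occupied at any given moment, you can list the jobs that are currently running or queued, for instance with

squeue -l

where the -l option makes squeue print a long format that includes each job's state and time limit.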
Research users and student users
Users of Mufasa belong to one of two categories, which grant their members different levels of access to system resources. The categories are:
Research users, i.e. academic personnel and Ph.D. students
- have access to all QOSes
- their jobs have a higher base priority (see Job priority for details)
Student users, i.e. M.Sc. students
- do not have access to QOS gpuheavy-20 and gpuheavy-40
- their jobs have a lower base priority
You can inspect the differences between research and student users with command
sacctmgr list association format=account,priority,maxjobs,maxsubmit | grep -E 'Priority|research|students'
which provides an output similar to the following:
Account    Priority  MaxJobs MaxSubmit
research          4        2         4
students          1        1         2
This example output shows that the differences between research and students are the following:
- base priority is 4 for jobs run by research users, while it is 1 for jobs run by student users
- the maximum number of simultaneously running jobs is 2 for research users, while it is 1 for student users
- the maximum number of submitted jobs (i.e., jobs submitted to SLURM for execution, whether queued or running) is 4 for research users, while it is 2 for student users
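To check which of these limits apply to your own account, you can restrict the same query to your user (here $USER expands to your login name):

sacctmgr list association user=$USER format=account,user,priority,maxjobs,maxsubmit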
Job priority
Once the execution of a job has been requested, the job is not run immediately: it is instead queued by SLURM, together with all the other jobs awaiting execution. The job on top of the queue at any time is the first to be put into execution as soon as the resources it requires are available. The order of the jobs in the queue depends on the priority of the jobs, and defines the order in which each job will reach execution.
SLURM is configured to maximise resource availability, i.e. to ensure the shortest possible wait time before job execution.
To achieve this goal, SLURM encourages users to avoid asking for resources or execution time that their jobs do not need. The more resources and the more time a job requests, the lower its priority in the execution queue will be.
This mechanism creates a virtuous cycle. By carefully choosing what to ask for, a user ensures that their job will be executed as soon as possible; at the same time, users limiting their requests to what their jobs really need leave more resources available to other jobs in the queue, which will then be executed sooner.
Elements determining job priority
In Mufasa, the priority of a job is computed by SLURM according to the following elements:
- Category of the user (i.e., researcher or M.Sc. student)
  - Used to provide higher priority to jobs run by research personnel
- QOS requested by the job
  - Used to provide higher priority to jobs requesting access to fewer system resources
- Job duration, i.e. the execution time requested by the job
  - Used to provide higher priority to shorter jobs
- Job size, i.e. the number of CPUs requested by the job
  - Used to provide higher priority to jobs requiring fewer CPUs
- Age, i.e. the time that the job has been waiting in the queue
  - Used to provide higher priority to jobs that have been queued for a long time
- FairShare, i.e. a factor computed by SLURM to balance use of the system by different users
  - Used to provide a priority bonus to users who use Mufasa much less than others
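You can inspect how these elements combine for the jobs currently waiting in the queue with SLURM's sprio command:

sprio -l

which prints, for each pending job, the weighted factors (age, fair-share, job size, QOS, ...) that contribute to its priority.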
How to maximise the priority of your jobs
Considering how Mufasa assigns priorities to jobs, it is easy to define a few rules to increase the priority of your jobs:
- Choose the least powerful QOS compatible with the needs of your job
  - The "power" of a QOS corresponds to the share of Mufasa's resources it can access: QOSes with greater access are "more powerful"
- Only request CPUs that your job will actually use
  - If you didn't design your code to exploit multiple CPUs, check whether it actually does! Otherwise, do not ask for them
- Do not request more time than your job needs to complete
  - Make a worst-case estimate of the duration and only ask for that
- Do not run unnecessary jobs
  - This includes cancelling jobs when their output becomes useless (e.g., due to a bug)
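The last rule can be put into practice with SLURM's scancel command. For example (the job ID below is illustrative; you can find the real one with squeue):

scancel 12345

cancels the job with ID 12345, whether it is still queued or already running.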
System resources subject to limitations
In systems based on SLURM like Mufasa, TRES (Trackable RESources) are (from SLURM's documentation) "resources that can be tracked for usage or used to enforce limits against".
TRES include CPUs, RAM and GRES. The last term stands for Generic RESources, i.e. additional resources that a job may need for its execution. In Mufasa, the only GRES are the GPUs.
gres syntax
To ask SLURM to assign GRES resources (i.e., GPUs) to a job, a special syntax must be used. Precisely, the name of each GPU resource takes the form
gpu:Type
Considering the GPU complement of Mufasa, Type takes the following values:
- 40gb for GPUs with 40 GBytes of RAM
- 4g.20gb for GPUs with 20 GBytes of RAM and 4 compute units
- 3g.20gb for GPUs with 20 GBytes of RAM and 3 compute units
So, for instance,
gpu:3g.20gb
identifies a resource corresponding to a GPU with 20 GB of RAM and 3 compute units.
When asking for a GRES resource (e.g., in an srun command or an SBATCH directive of an execution script), the syntax required by SLURM is
gpu:<Type>:<Quantity>
where Quantity is an integer value specifying how many items of the resource are requested. So, for instance, to ask for 2 GPUs of type 4g.20gb the syntax is
gpu:4g.20gb:2
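and a complete request for such a job might look like the following (a sketch: my_job.sh is a placeholder for your own executable):

srun --qos=gpuheavy-20 --gres=gpu:4g.20gb:2 ./my_job.sh

or, equivalently, as a directive inside a batch script:

#SBATCH --gres=gpu:4g.20gb:2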
SLURM's generic resources are defined in /etc/slurm/gres.conf. In order to make GPUs available to SLURM's gres management, Mufasa makes use of Nvidia's NVML library. For additional information see SLURM's documentation.
Looking for unused GPUs
GPUs are usually the most limited resource on Mufasa. So, if your job requires a GPU, the best way to get it executed quickly is to request a GPU that is not currently in use. This command
sinfo -O Gres:100
provides a summary of all the GRES (i.e., GPU) resources possessed by Mufasa, with an output similar to the following:
GRES gpu:40gb:3,gpu:4g.20gb:5,gpu:3g.20gb:5
To know which of the GPUs are currently in use, use command
sinfo -O GresUsed:100
which provides an output similar to this:
GRES_USED gpu:40gb:2(IDX:0-1),gpu:4g.20gb:2(IDX:5,8),gpu:3g.20gb:3(IDX:3-4,6)
By comparing the two lists (GRES and GRES_USED) you can easily spot unused GPUs.
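The two views can also be combined in a single command:

sinfo -O Gres:100,GresUsed:100

With the example outputs above, this comparison shows that one gpu:40gb GPU, three gpu:4g.20gb GPUs and two gpu:3g.20gb GPUs are currently free.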
SLURM partitions
Partitions are another mechanism provided by SLURM to create different levels of access to system resources. Since in Mufasa 2.0 access to resources is controlled via QOSes, partitions are not very relevant.
In Mufasa 2.0, there is a single SLURM partition, called jobs, and all jobs run on it. The partition status of Mufasa can be inspected with
sinfo -o "%10P %5a %9T %11L %10l"
which provides an output similar to the following:
PARTITION  AVAIL STATE     DEFAULTTIME TIMELIMIT
jobs*      up    idle      1:00:00     3-00:00:00
The columns in the output of sinfo correspond to the following information (NODES and NODELIST do not appear in the custom format used above, but are part of sinfo's default output):
- PARTITION
- name of the partition; the asterisk indicates that it is the default one
- AVAIL
- state/availability of the partition: see below
- DEFAULTTIME
- default runtime of a job, in format [days-]hours:minutes:seconds
- TIMELIMIT
- maximum runtime of a job allowed by the partition, in format [days-]hours:minutes:seconds
- NODES
- number of nodes available to jobs run on the partition: for Mufasa, this is always 1 since there is only 1 node in the computing cluster
- STATE
- state of the node (using these codes); typical values are mixed, meaning that some of the resources of the node are busy executing jobs while others are free, and allocated, meaning that all of the resources of the node are busy
- NODELIST
- list of nodes available to the partition: for Mufasa, this field always contains gn01, since Mufasa is the only node in the computing cluster
As explained, partitions have little relevance in Mufasa 2.0, while QOSes are very relevant.
Partition availability
The most important information that sinfo provides is the availability (also called state) of partitions. This is shown in column "AVAIL". Possible partition states are:
- up = the partition is available
  - Currently running jobs will be completed
  - Currently queued jobs will be executed as soon as resources allow
- drain = the partition is in the process of becoming unavailable (i.e., of going into the down state)
  - Currently running jobs will be completed
  - Queued jobs will be executed when the partition becomes available again (i.e. goes back to the up state)
- down = the partition is unavailable
  - There are no running jobs
  - Queued jobs will be executed when the partition becomes available again (i.e. goes back to the up state)
When a partition goes from up to drain no harm is done to running jobs. When a partition passes from any other state to down, running jobs (if they exist) get killed. A partition in state drain or down requires intervention by a Job Administrator to be restored to up.