User Jobs

From Mufasa (BioHPC)
This page presents the features of Mufasa that are most relevant to Mufasa's [[Roles|Job Users]]. Job Users can submit jobs for execution, cancel their own jobs, and see other users' jobs (but not intervene on them).


Job Users are by necessity SLURM users (see [[System#The SLURM job scheduling system|The SLURM job scheduling system]]) so you may also want to read [https://slurm.schedmd.com/quickstart.html SLURM's own Quick Start User Guide].
= System resources subjected to limitations =
 
The hardware resources of Mufasa are limited. For this reason, some of them are subject to limitations; in SLURM's own terms, these are:
 
; cpu
: the number of processor cores that a job can use
 
; mem
: the amount of RAM that a job can use
 
; gres
: the amount of ''generic resources'' that a job can use: in Mufasa, the only resources belonging to this set are the GPUs (the [[System#CPUs_and_GPUs|virtual GPUs defined by Nvidia MIG]], not the physical GPUs)
 
These are some of the TRES (Trackable RESources) defined by SLURM. From [https://slurm.schedmd.com/tres.html SLURM's documentation]: "''A TRES is a resource that can be tracked for usage or used to enforce limits against.''"
 
SLURM provides jobs with access to resources only for a limited time: i.e., '''execution time''' is itself a limited resource.
 
When a resource is limited, a job cannot use arbitrary quantities of it. On the contrary, the job must specify how much of the resource it requests. Requests are done either by running the job on a [[User Jobs#SLURM Partitions|partition]] for which a default amount of resources has been defined, or through the options of the srun command that executes the job via SLURM.
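For instance (an illustrative sketch, with <code>my_program</code> as a placeholder for your own program), the two styles of request look like this:

<pre style="color: lightgrey; background: black;">
# rely on the default resource amounts of partition "normal":
srun -p normal my_program

# or request explicit amounts through srun options:
srun --mem=16G --cpus-per-task=4 my_program
</pre>

Both the <code>-p</code> option and the explicit resource options are described in the sections below.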
 
== <code>gres</code> syntax ==
 
Whenever it is necessary to specify the quantity of <code>gres</code>, i.e. generic resources, a special syntax must be used. In Mufasa, <code>gres</code> resources are GPUs, so this syntax applies to GPUs. The number and type of Mufasa's GPUs are described [[System#CPUs and GPUs|here]].
 
The name of each GPU resource takes the form
 
'''<code>Name:Type</code>'''
 
where <code>Name</code> is '''<code>gpu</code>''' and <code>Type</code> takes the following values:
 
* '''<code>40gb</code>''' for GPUs with 40 Gbytes of onboard RAM
* '''<code>20gb</code>''' for GPUs with 20 Gbytes of onboard RAM
 
So, for instance,
 
<code>gpu:20gb</code>
 
identifies the resource corresponding to GPUs with 20 GB of RAM. Mufasa has [[System#CPUs and GPUs|a given number]] of such GPUs, of which a job can request some (or all).
 
When asking for a <code>gres</code> resource (e.g., in an <code>srun</code> command or an <code>SBATCH</code> directive of an [[User Jobs#Using execution scripts to run jobs|execution script]]), the syntax required by SLURM is
 
'''<code><Name>:<Type>:<quantity></code>'''
 
where <code>quantity</code> is an integer value specifying how many items of the resource are requested. So, for instance, to ask for 2 GPUs of type <code>20gb</code> the syntax is
 
<code>gpu:20gb:2</code>
 
SLURM's ''generic resources'' are defined in <code>/etc/slurm/gres.conf</code>. In order to make GPUs available to SLURM's <code>gres</code> management, Mufasa makes use of Nvidia's [https://developer.nvidia.com/nvidia-management-library-nvml NVML library]. For additional information see [https://slurm.schedmd.com/gres.html SLURM's documentation].
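For example, a (hypothetical) command requesting two 20 GB GPUs for an interactive session on the <code>gpu</code> partition could look like this:

<pre style="color: lightgrey; background: black;">
srun -p gpu --gres=gpu:20gb:2 --pty /bin/bash
</pre>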
 
== Looking for unused GPUs ==
 
GPUs are usually the most limited resource on Mufasa. So, if your job requires a GPU, the best way to get it executed quickly is to request a GPU that is not currently in use.
 
This command
<pre style="color: lightgrey; background: black;">
sinfo -O Gres:100
</pre>
provides a summary of all the Gres (i.e., GPU) resources possessed by Mufasa. Its output is the following:
<pre style="color: lightgrey; background: black;">
GRES                                                                                               
gpu:40gb:2(S:0-1),gpu:20gb:3(S:0-1),gpu:10gb:6(S:0-1)
</pre>
 
To know which of the GPUs are currently in use, use command
<pre style="color: lightgrey; background: black;">
sinfo -O GresUsed:100
</pre>
which provides an output similar to this:
<pre style="color: lightgrey; background: black;">
GRES_USED                                                                                         
gpu:40gb:2(IDX:0-1),gpu:20gb:2(IDX:5,8),gpu:10gb:3(IDX:3-4,6)
</pre>
By comparing the two lists (GRES and GRES_USED) above, you can see that at the moment:
 
* of the two 40 GB GPUs, both are in use
* of the three 20 GB GPUs, one is not in use
* of the six 10 GB GPUs, three are not in use
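By way of illustration, this comparison can be automated with a small shell sketch (the <code>free_gpus</code> helper below is our own illustration, not a SLURM command):

```shell
# Sketch: compare the GRES and GRES_USED strings shown above and print
# how many GPUs of each type are currently free. The sample strings are
# copied from the example output above; on Mufasa you would obtain live
# values with:
#   sinfo -h -O Gres:100      and      sinfo -h -O GresUsed:100
total='gpu:40gb:2(S:0-1),gpu:20gb:3(S:0-1),gpu:10gb:6(S:0-1)'
used='gpu:40gb:2(IDX:0-1),gpu:20gb:2(IDX:5,8),gpu:10gb:3(IDX:3-4,6)'

# Print "<type> <free count>" for every gpu:<type>:<count> entry in $1,
# subtracting the matching in-use count found in $2.
free_gpus() {
  echo "$1" | tr ',' '\n' | while IFS=: read -r _ type count; do
    count=${count%%\(*}            # strip the "(S:...)" suffix
    u=$(echo "$2" | tr ',' '\n' | grep ":${type}:" | cut -d: -f3)
    u=${u%%\(*}                    # strip the "(IDX:...)" suffix
    u=${u:-0}                      # type absent from GRES_USED => 0 in use
    echo "$type $((count - u))"
  done
}

free_gpus "$total" "$used"   # prints: 40gb 0 / 20gb 1 / 10gb 3
```

On Mufasa itself you could pass live values, e.g. <code>free_gpus "$(sinfo -h -O Gres:100)" "$(sinfo -h -O GresUsed:100)"</code>.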


= SLURM Partitions =
SLURM organises Mufasa's resources into ''partitions''. The list of Mufasa's partitions is provided by command <code>sinfo</code>, whose output is similar to the following:
<pre style="color: lightgrey; background: black;">
PARTITION  AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*        up      20:00      1    mix gn01
small         up   12:00:00      1    mix gn01
normal        up 1-00:00:00      1    mix gn01
longnormal    up 3-00:00:00      1    mix gn01
gpu           up 1-00:00:00      1    mix gn01
gpulong       up 3-00:00:00      1    mix gn01
fat           up 3-00:00:00      1    mix gn01
</pre>


In this example, available partitions are named “debug”, “small”, “normal”, “longnormal”, “gpu”, “gpulong”, “fat”. The asterisk beside "debug" indicates that this is the default partition, i.e. the one that SLURM selects to run a job when no partition has been specified. (On Mufasa, partition names have been chosen to reflect the type of job that they are dedicated to.)
 
The columns in the standard output of <code>sinfo</code> shown above correspond to the following information:


; PARTITION
: name of the partition
 
; AVAIL
: state/availability of the partition: see [[User Jobs#Partition availability|below]]
 
; TIMELIMIT
: maximum runtime of a job allowed by the partition, in format ''[days-]hours:minutes:seconds''
 
; NODES
: number of nodes available to jobs run on the partition: for Mufasa, this is always 1 since [[System#The SLURM job scheduling system|there is only 1 node in the computing cluster]]
 
; STATE
: state of the node (using [https://slurm.schedmd.com/sinfo.html#SECTION_NODE-STATE-CODES these codes]); typical values are <code>mixed</code> - meaning that some of the resources of the node are busy executing jobs while others are free, and <code>allocated</code> - meaning that all of the resources of the node are busy
 
; NODELIST
: list of nodes available to the partition: for Mufasa this field always contains <code>gn01</code> since [[System#The SLURM job scheduling system|Mufasa is the only node in the computing cluster]]
 
One piece of information that the standard output of <code>sinfo</code> does not provide is whether some partitions can only be used by the root user of Mufasa. To know which partitions are root-only, you can use command


<pre style="color: lightgrey; background: black;">
sinfo -o "%.10P %.4r"
</pre>


Its output is


<pre style="color: lightgrey; background: black;">
PARTITION ROOT
    debug*  no
    small  no
    normal  no
longnormal  no
      gpu   no
  gpulong  no
      fat  no
</pre>


and shows that on Mufasa no partitions are reserved for root.
 
As for hardware resources (such as CPUs, GPUs and RAM), the amounts of each resource available to Mufasa's partitions are set by SLURM's accounting system and are not visible to <code>sinfo</code>. See [[User Jobs#Partition features|Partition features]] for a description of these amounts.
 
== Partition features ==
 
The output of <code>sinfo</code> ([[User Jobs#SLURM Partitions|see above]]) provides a list of available partitions, but (except for time) it does not provide information about the amount of resources that a partition makes available to the user jobs which are run on it. The amount of resources is visible through command


<pre style="color: lightgrey; background: black;">
sacctmgr list qos format=name%-10,maxwall,maxtres%-64
</pre>


which provides an output similar to the following:


<pre style="color: lightgrey; background: black;">
Name          MaxWall MaxTRES                                                         
---------- ----------- ----------------------------------------------------------------
normal      1-00:00:00 cpu=16,gres/gpu:10gb=0,gres/gpu:20gb=0,gres/gpu:40gb=0,mem=128G  
small        12:00:00 cpu=2,gres/gpu:10gb=1,gres/gpu:20gb=0,gres/gpu:40gb=0,mem=16G   
longnormal 3-00:00:00 cpu=16,gres/gpu:10gb=0,gres/gpu:20gb=0,gres/gpu:40gb=0,mem=128G  
gpu        1-00:00:00 cpu=8,gres/gpu:10gb=2,gres/gpu:20gb=2,mem=64G                   
gpulong    3-00:00:00 cpu=8,gres/gpu:10gb=2,gres/gpu:20gb=2,mem=64G                   
fat        3-00:00:00 cpu=32,gres/gpu:10gb=2,gres/gpu:20gb=2,gres/gpu:40gb=2,mem=256G
</pre>


Its elements are the following (for more information, see [https://slurm.schedmd.com/qos.html SLURM's documentation]):
 
; Name
: name of the partition
 
; MaxWall
: maximum wall clock duration of the jobs run on the partition (after which they are killed by SLURM), in format ''[days-]hours:minutes:seconds''


; MaxTRES
: maximum amount of resources ("''Trackable RESources''") available to a job running on the partition, where
: <code>'''cpu=''K'''''</code> means that the maximum number of processor cores is ''K''
: <code>'''gres/''gpu:Type''=''K'''''</code> means that the maximum number of GPUs of class <code>''Type''</code> (see [[User Jobs#gres syntax|<code>gres</code> syntax]]) is ''K''
: <code>'''mem=''K''G'''</code> means that the maximum amount of system RAM is ''K'' GBytes

Note that there may be additional limits on fully exploiting the resources of a partition. For instance, there may be a cap on the maximum number of GPUs that can be used at the same time by a single job and/or a single user.

== Partition availability ==

An important piece of information that <code>sinfo</code> provides (column "AVAIL") is the ''availability'' (also called ''state'') of partitions. Possible partition states are:

; up = the partition is available
: Running jobs will be completed
: Currently queued jobs will be executed as soon as resources allow

; drain = the partition is in the process of becoming unavailable (''down'')
: Running jobs will be completed
: Queued jobs will be executed only when the partition becomes available again (''up'')

; down = the partition is unavailable
: There are no running jobs
: Queued jobs will be executed only when the partition becomes available again (''up'')


When a partition passes from ''up'' to ''drain'' no harm is done to running jobs. When a partition passes from any other state to ''down'', running jobs (if they exist) get killed.


A partition in state ''drain'' or ''down'' requires intervention by a [[Roles|Job Administrator]] to be restored to ''up''.
 
== Choosing the partition on which to run a job ==
 
When launching a job (as explained in [[User Jobs#Executing jobs on Mufasa|Executing jobs on Mufasa]]), a user should select the partition that is most suitable for the job's features. Launching a job on a partition avoids the need for the user to specify explicitly all of the resources that the job requires, relying instead (for unspecified resources) on the default amounts defined for the partition. [[User Jobs#Partition features|Partition features]] explains how to find out how many of Mufasa's resources are associated with each partition.


By selecting the right partition for a job, a user can pre-define the job's requirements without having to specify them explicitly: this makes partitions very handy and avoids possible mistakes. Users can, however, change the resources requested by their jobs with respect to the default values associated with the chosen partition: any element of the default assignment of resources provided by a partition can be overridden by specifying an option when launching the job, so users are not forced to accept the default value. Still, it makes sense to choose the most suitable partition for a job in the first place, and then to specify the job's requirements only for those resources that have an unsuitable default value.


Resource requests by the user launching a job can be both lower and higher than the partition's default value for that resource. However, they cannot exceed the maximum value that the partition allows for requests of that resource, if one is set. If a user tries to run a job that requests more of a resource than the partition-specified maximum, the run command is refused.
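For instance (an illustrative sketch, with <code>my_program</code> as a placeholder), a job on the <code>small</code> partition can override the partition's default RAM assignment, provided the request stays within the partition's maximum (<code>mem=16G</code> for <code>small</code>, as shown in [[User Jobs#Partition features|Partition features]]):

<pre style="color: lightgrey; background: black;">
srun -p small --mem=8G my_program
</pre>

A request such as <code>--mem=32G</code> on the same partition would instead be refused.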


=== Tips for partition choice ===


The larger the fraction of system resources that a job asks for, the heavier the job becomes for Mufasa's limited capabilities. Since SLURM prioritises lighter jobs over heavier ones (in order to maximise the number of completed jobs), it is a ''very bad idea'' to ask for more resources than a job actually needs: doing so will delay (possibly for a long time) the job's execution.

These are tips that you can use to guide partition choice for your job in order to get it executed quickly:


* use the least powerful partition that can support the job
* do not ask for more resources or time than needed
* prefer partitions without access to GPUs
* ask for [https://biohpc.deib.polimi.it/index.php?title=User_Jobs#Looking_for_unused_GPUs GPUs that are currently not in use]


= User limitations on the use of resources =


Mufasa is a shared machine: at any given time, its [https://biohpc.deib.polimi.it/index.php?title=User_Jobs#System_resources_subjected_to_limitations resources subjected to limitations] are split among all the users who request them. This also means that there are limits on the amount of resources that Mufasa can provide to a given user, whatever the amount of resources the user requested.


Such limitations come from two sources.


The first source is the fact that each user job is associated with the SLURM partition on which it runs: each job can only access the [https://biohpc.deib.polimi.it/index.php?title=User_Jobs#Partition_features specific subset of resources that are available to the partition].


The second source of limitations is applied by SLURM on a per-user basis. Mufasa is configured in such a way that:


* no more than '''2 jobs per user''' can be running at the same time (note that, since each partition can execute only one job at any given time, the two jobs must make use of different partitions)
* if a user already has a running job, a second job from the same user is only put into execution if there are '''no requests from other users''' for the partition it is intended to run on


Please note that access to some partitions may be restricted to researchers (i.e. M.Sc. students cannot access such partitions).
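Given the per-user limits above, before submitting a new job it can be useful to check which of your jobs are already queued or running. A standard SLURM command for this (not specific to Mufasa) is:

<pre style="color: lightgrey; background: black;">
squeue -u $USER
</pre>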


= Running jobs with SLURM: generalities =


'''''Note''': these are general considerations. See [[User Jobs#Executing jobs on Mufasa|Executing jobs on Mufasa]] for instructions about running your own processing jobs on Mufasa.''




The commands that SLURM provides to run jobs are


<pre style="color: lightgrey; background: black;">
srun [options] <command_to_be_run_via_SLURM>
</pre>


and


<pre style="color: lightgrey; background: black;">
sbatch [options] <command_to_be_run_via_SLURM>
</pre>


(see SLURM documentation: [https://slurm.schedmd.com/srun.html srun], [https://slurm.schedmd.com/sbatch.html sbatch]).
 
In both cases, <code><command_to_be_run_via_SLURM></code> can be any program or Linux shell script. By using <code>srun</code> or <code>sbatch</code>, the command or script specified by <code><command_to_be_run_via_SLURM></code> (including any programs it launches) is added to SLURM's execution queues.


The main difference between <code>srun</code> and <code>sbatch</code> is that the first locks the shell from which it has been launched, so it is only really suitable for processes that use the console to interact with their user. ([[User Jobs#Detaching from a running job with screen|You can, though, detach from that shell and come back later using <code>screen</code>]].) <code>sbatch</code>, on the other hand, does not lock the shell and simply adds the job to the queue, but does not allow the user to interact with the process while it is running.


Additionally, with <code>sbatch</code> the <code><command_to_be_run_via_SLURM></code> can be an [[User Jobs#Using execution scripts to run jobs|'''execution script''']], i.e. a special (and SLURM-specific) type of Linux shell script that includes '''SBATCH directives'''. SBATCH directives can be used to specify the values of some of the parameters that would otherwise have to be set using the <code>[options]</code> part of the <code>sbatch</code> command. This is handy because it allows you to write the parameters down in an execution script instead of typing them on the command line when launching a job, which greatly reduces the possibility of mistakes. Also, an execution script is easy to keep and reuse.
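As a minimal sketch of such a script (all parameter values here are illustrative, not Mufasa defaults):

```shell
#!/bin/bash
# Example execution script for sbatch. The #SBATCH lines are directives
# read by sbatch; to bash itself they are ordinary comments, so the
# script also runs as a plain shell script.
#SBATCH --job-name=my_test        # illustrative job name
#SBATCH --partition=small         # partition to run on
#SBATCH --time=00:10:00           # requested runtime (hh:mm:ss)
#SBATCH --output=my_test_%j.log   # output file; %j expands to the job ID

msg="job running on $(hostname)"
echo "$msg"
```

The script would then be submitted with <code>sbatch my_script.sh</code>.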
 
The <code>[options]</code> part of <code>srun</code> and <code>sbatch</code> commands is used to tell SLURM the conditions under which it has to execute the job; in particular, it is used to specify what system resources SLURM should reserve for the job.
 
A quick way to define the set of resources that a program will be provided with is to use [[User Jobs#SLURM Partitions|SLURM partitions]]. This is done with option <code>-p <partition_name></code>. This option specifies that SLURM will run the program on a specific partition, and therefore that it will have access to all and only the resources available to that partition. As a consequence, all options that define how many resources to assign the job will only be able to provide the job with resources that are available to the chosen partition. Jobs that require resources that are not available to the chosen partition do not get executed.


For instance, running

<pre style="color: lightgrey; background: black;">
srun -p small my_program
</pre>

makes SLURM run <code>my_program</code> on the partition named “small”. Running the program this way means that the resources associated with this partition will be available to it for use.


Immediately after a <code>srun</code> command is launched by a user, SLURM outputs a message similar to this:


<pre style="color: lightgrey; background: black;">
srun: job 10849 queued and waiting for resources
</pre>


The shell is now locked while SLURM prepares the execution of the user program ([[User Jobs#Detaching from a running job with screen|if you are using <code>screen</code> you can detach from that shell and come back later]]).


When SLURM is ready to run the program, it prints a message similar to


<pre style="color: lightgrey; background: black;">
srun: job 10849 has been allocated resources
</pre>


and then executes the program.


== Running interactive jobs via SLURM ==


As [[User Jobs#Running jobs with SLURM: generalities|explained]], SLURM command <code>srun</code> is suitable for launching ''interactive'' user jobs, i.e. jobs that use the terminal output and the keyboard to exchange information with a human user. If a user needs this type of interaction, they must run a ''bash shell'' (i.e. a terminal session) with a command similar to


<pre style="color: lightgrey; background: black;">
srun --pty /bin/bash
</pre>


and subsequently use the bash shell to run the interactive program. To close the SLURM-spawned bash shell, run (as with any other shell)


<pre style="color: lightgrey; background: black;">
exit
</pre>


Of course, also the “base” shell (i.e. the one that opens when an SSH connection to Mufasa is established) can be used to run programs: however, programs launched this way are not being run via SLURM and are not able to access most of the resources of the machine (in particular, there is no way to make GPUs accessible to them, and they can only access 2 CPUs). On the contrary, running programs with <code>srun</code> or <code>sbatch</code> ensures that they can access all the resources managed by SLURM.


GPU resources (if needed) must always be requested explicitly with parameter <code>--gres=gpu:<20|40>gb:K</code>, where <code>K</code> is an integer between 1 and the maximum number of GPUs of that type available to the partition (see [[User Jobs#gres syntax|<code>gres</code> syntax]]). For instance, in order to run an interactive program which needs one GPU we may first run a bash shell via SLURM with command


<pre style="color: lightgrey; background: black;">
srun --gres=gpu:20gb:1 --pty /bin/bash
</pre>


and then run the interactive program from the shell newly opened by SLURM.


A way to specify what resources to assign to the bash shell run via SLURM is to run <code>/bin/bash</code> on one of the available partitions: by doing this, the shell is given access to the default amount of resources associated to the partition. For instance, to run the shell on partition “small” the command is


<pre style="color: lightgrey; background: black;">
srun -p small --pty /bin/bash
</pre>


The general structure of a command requesting SLURM to set up an interactive user job is the following:


<pre style="color: lightgrey; background: black;">
srun [‑p <partition_name>] [--job-name=<jobname>] [‑‑gres=<gpu_resources>] [‑‑mem=<mem_resources>] [‑‑cpus‑per‑task=<cpu_amount>] [‑‑time=<duration>] ‑‑pty /bin/bash
</pre>
 
Below, the elements of this command are explained.
 
:;‑p <partition_name>
:: specifies the [[User Jobs#SLURM partitions|SLURM partition]] on which the job will be run.  If it is not specified, the ''default partition'' is used.
 
:: ''Important! The chosen partition limits the resources that can be requested, since it is not allowed to request resources (type or quantity) that exceed what is allowed by the chosen partition.''
 
:: ''Important! If <code>‑p <partition_name></code> is used, options that specify how many resources to assign to the job (such as <code>‑‑mem=<mem_resources></code>, <code>‑‑cpus‑per‑task=<cpu_amount></code> or <code>‑‑time=<duration></code>) can be omitted, greatly simplifying the command. If an explicit amount is not requested for a given resource, the job is assigned the default amount of the resource (as defined by the chosen partition). A notable exception concerns option <code>‑‑gres=<gpu_resources></code>, which is always required (see below) if the job needs access to GPUs.''
 
:; --job-name=<jobname>
:: Specifies a name for the job. The specified name will appear along with the JOBID number when querying running jobs on the system with <code>squeue</code>. The default job name (i.e., the one assigned to the job when <code>--job-name</code> is not used) is the executable program's name.
 
:;‑‑gres=<gpu_resources>
:: specifies what GPUs to assign to the container. <code>gpu_resources</code> is a comma-delimited list where each element has the form <code>gpu:<Type>:<amount></code>, where <code><Type></code> is one of the types of GPU available on Mufasa (see [[User Jobs#gres syntax|<code>gres</code> syntax]]) and <code><amount></code> is an integer between 1 and the number of GPUs of such type available to the partition. For instance, <code><gpu_resources></code> may be <code>gpu:40gb:1,gpu:10gb:3</code>, corresponding to asking for one "full" GPU and 3 "small" GPUs.
 
:: ''Important! The <code>‑‑gres</code> parameter is '''mandatory''' if the job needs to use the system's GPUs. Differently from other resources (where unspecified requests lead to the assignment of a default amount), GPUs must always be explicitly requested.''


:;‑‑mem=<mem_resources>
:: specifies the amount of RAM to assign to the container; for instance, <code><mem_resources></code> may be <code>200G</code>


:;‑‑cpus-per-task=<cpu_amount>
:: specifies how many CPUs to assign to the container; for instance, <code><cpu_amount></code> may be <code>2</code>


:;<nowiki>‑‑time=<duration></nowiki>
:: specifies the maximum time allowed for the job to run, in the format <code>days-hours:minutes:seconds</code>, where <code>days</code> is optional; for instance, <code><duration></code> may be <code>72:00:00</code>


:;‑‑pty
:: specifies that the job will be interactive (this is necessary when <code><command_to_run_within_container></code> is <code>/bin/bash</code>: see [[User Jobs#Running interactive jobs via SLURM|Running interactive jobs via SLURM]])
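Putting the options above together, a hypothetical request (partition name, resource amounts and duration are illustrative only) for an interactive shell on partition “small” with one 20 GB GPU, 200 GB of RAM, 2 CPUs and a 3-day limit could be:

```shell
srun -p small --job-name=my_shell --gres=gpu:20gb:1 --mem=200G --cpus-per-task=2 --time=72:00:00 --pty /bin/bash
```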




Mufasa is configured to show, as part of the command prompt of a bash shell run via SLURM, a message such as <code>(SLURM ID xx)</code> (where <code>xx</code> is the ID of the <code>/bin/bash</code> process within SLURM). When you see this message, you know that the bash shell you are interacting with is a SLURM-run one.
 
Another way to know if the current shell is the “base” shell or one run via SLURM is to execute command


<pre style="color: lightgrey; background: black;">
echo $SLURM_JOB_ID
</pre>


If no number gets printed, the shell is the “base” one; if a number is printed, it is the SLURM job ID of the <code>/bin/bash</code> process.
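The check above can be wrapped in a small helper. This is just a sketch (the function name is an arbitrary choice, not a Mufasa-provided command):

```shell
# Sketch: report whether the current shell was started by SLURM,
# based on the SLURM_JOB_ID environment variable described above.
in_slurm_shell() {
  if [ -n "$SLURM_JOB_ID" ]; then
    echo "SLURM shell (job $SLURM_JOB_ID)"
  else
    echo "base shell"
  fi
}

in_slurm_shell                      # in a plain SSH shell this prints: base shell
SLURM_JOB_ID=10849 in_slurm_shell   # in a SLURM-run shell: SLURM shell (job 10849)
```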
 
== Guidelines you must follow when executing jobs on Mufasa ==
Mufasa is a shared machine. In order to make best and fair use of its shared resources, every user must carefully follow the three guidelines below:
 
;Limit resource requests to the amount that your job ''actually needs''
: An example about CPUs: to make use of multiple CPUs, a process must explicitly support multiple workers/processes in parallel. So if you ask SLURM for 16 CPUs but your code only includes one worker, just one CPU will be used to run it, while the other 15 CPUs will sit idle. Nonetheless, you reserved 16 CPUs: so SLURM will prevent anyone else from using those 15 idle CPUs (until your job terminates).
: So it is necessary to adapt every srun or sbatch request to the code that will actually be run. If you don't know how many CPUs your code uses, just run a short-duration job as a test: since SLURM prioritises shorter processes, the test job should get executed much faster than the "real" job.
 
;Run ''every'' resource-intensive process via SLURM, i.e. with commands <code>srun</code> or <code>sbatch</code>
:If you need to launch resource-intensive programs manually via a shell, simply
:# use SLURM to [[User_Jobs#Running_interactive_jobs_via_SLURM|create an interactive shell]]
:# when the SLURM-created shell is active, launch your programs from it
:Running your heavy processes via SLURM is necessary to ensure that you use Mufasa's resources correctly and respectfully towards the other users: in fact SLURM ensures that Mufasa's resources are distributed to users fairly.
 
;When a SLURM job is not needed anymore, [[User_Jobs#Cancelling_a_resource_request_made_with_salloc|close it with scancel]]
:It is common not to know in advance how long a piece of code will take to complete. Please check from time to time whether your job has finished its work and, if it has while there is still time before the job's allocated duration ends, close the job with ''scancel''.
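For example (the job ID below is illustrative), you can list your own jobs and then cancel the one that is no longer needed:

```shell
squeue -u $USER    # list your own jobs, with their JOBID in the first column
scancel 10849      # cancel the job whose JOBID is 10849
```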
 
== Other resources ==
The contents of this wiki are specifically tailored for users of Mufasa. They should include everything Mufasa users need to make good use of the machine. However, specific needs vary and advanced users may require advanced functionalities of SLURM that are not covered here.
 
There are a lot of resources on the internet dealing with the execution of jobs using SLURM. Usually these have been published for the benefit of the users of a specific High Performance Computing system, so there's no guarantee that whatever they suggest will work on Mufasa. If you feel the need to look for external resources, you may want to start with [https://www.e4company.com/en/2021/01/creating-job-with-slurm-how-to-and-automation-examples/ this one], which has been prepared by the same people who built Mufasa.
 
= Executing jobs on Mufasa =
 
The main reason for a user to interact with Mufasa is to execute jobs that require resources not available to standard desktop-class machines. Therefore, launching jobs is the most important operation for Mufasa users: what follows explains how it is done.
 
:'''Important!''' When launching jobs, always [[User_Jobs#Guidelines_you_must_follow_when_executing_jobs_on_Mufasa|follow the guidelines]].
 
Considering that [[System#Docker Containers|all computation on Mufasa must occur within Docker containers]], the jobs run by Mufasa users always take place inside containers, except for menial, non-computationally-intensive tasks. This wiki includes [[Docker|directions about preparing Docker containers]].
 
The standard process of launching a user job on Mufasa involves the following steps:
 
 
<big>
;: Step 1 --- [[User Jobs#Using SLURM to run a Docker container|Use SLURM to run the Docker container where the job will take place]]
::: [for interactive and non-interactive user jobs]
 
;: Step 2 --- [[User Jobs#Launching a user job from within a Docker container|Manually launch the user job from within the container]]
::: [for interactive user jobs only]
</big>
 
== Interactive and non-interactive user jobs ==
 
:; Interactive user jobs
:: are jobs that require interaction with the user while they are running, via a bash shell running within the Docker container. The shell is used to receive commands from the user and/or print output messages. For interactive user jobs, the job is usually launched manually by the user (with a command issued via the shell) after the Docker container is in execution.
 
:; Non-interactive user jobs
:: are the most common variety. The user prepares the Docker container in such a way that, when in execution, the container autonomously puts the user's jobs into execution. The user does not have any communication with the Docker container while it is in execution.
 
Both interactive and non-interactive user jobs can be run via a [[User Jobs#Using SLURM to run a Docker container|(quite complex) command]] directly issued from the [[System#Accessing Mufasa|terminal opened via SSH]]. To reduce the possibility of mistakes, it is usually preferable to define an [[User Jobs#Using execution scripts to run jobs|execution script]] that takes care of launching the job.
 
== Job output ==
 
The whole point of running a user job is to collect its output. Usually, such output takes the form of one or more files generated within the filesystem of the Docker container.
 
As [[User Jobs#Using SLURM to run a Docker container|explained below]], SLURM includes a mechanism to mount a part of Mufasa's own filesystem onto the container's filesystem: so when the job running within the container writes to this mounted part, it actually writes to Mufasa's filesystem. This means that when the Docker container ends its execution, its output files persist in Mufasa's filesystem (usually in a subdirectory of the user's own <code>/home</code> directory) and can be retrieved by the user at a later time.
 
The same mechanism can be used to allow user jobs running into a Docker container to read their input data from Mufasa's filesystem (usually a subdirectory of the user's own <code>/home</code> directory).
 
== Using SLURM to run a Docker container ==
 
The first step to run a user job on Mufasa is to run the [[System#Docker Containers|Docker container]] where the job will take place. A container is a “sandbox” containing the environment where the user's application operates. Parts of Mufasa's filesystem can be made visible (and writable, if they belong to the user's <code>/home</code> directory) to the environment of the container. This allows the containerized user application to read from, and write to, Mufasa's filesystem: for instance, to read data and write results. This wiki includes [[Docker|directions about preparing Docker containers]].
 
Each user is in charge of preparing the Docker container(s) where the user's jobs will be executed. In most situations the user can simply select a suitable ready-made container from the many which are already available for use.
 
In order to run a Docker container via SLURM, a user must use a command similar to the following ones:
 
For [[User Jobs#Interactive and non-interactive user jobs|interactive user jobs]] (parts within <code>[square brackets]</code> are optional):


<pre style="color: lightgrey; background: black;">
srun [‑p <partition_name>] ‑‑container-image=<container_path.sqsh> [--job-name=<jobname>] [‑‑no‑container‑entrypoint] ‑‑container‑mounts=<mufasa_dir>:<docker_dir> [‑‑gres=<gpu_resources>] [‑‑mem=<mem_resources>] [‑‑cpus‑per‑task=<cpu_amount>] [‑‑time=<duration>] ‑‑pty /bin/bash
</pre>


The <code>srun</code> command above runs the Docker container and opens an interactive shell within the container's environment.


For [[User Jobs#Interactive and non-interactive user jobs|non-interactive user jobs]] (parts within <code>[square brackets]</code> are optional):


<pre style="color: lightgrey; background: black;">
srun [‑p <partition_name>] ‑‑container-image=<container_path.sqsh> [--job-name=<jobname>] [‑‑no‑container‑entrypoint] ‑‑container‑mounts=<mufasa_dir>:<docker_dir> [‑‑gres=<gpu_resources>] [‑‑mem=<mem_resources>] [‑‑cpus‑per‑task=<cpu_amount>] [‑‑time=<duration>] [<command_to_run_within_container>]
</pre>


Below, the elements of these commands are explained.


:;‑p <partition_name>
:: specifies the [[User Jobs#SLURM partitions|SLURM partition]] on which the job will be run.  If it is not specified, the ''default partition'' is used.
 
:: ''Important! The chosen partition limits the resources that can be requested, since it is not allowed to request resources (type or quantity) that exceed what is allowed by the chosen partition.''
 
:: ''Important! If <code>‑p <partition_name></code> is used, options that specify how many resources to assign to the job (such as <code>‑‑mem=<mem_resources></code>, <code>‑‑cpus‑per‑task=<cpu_amount></code> or <code>‑‑time=<duration></code>) can be omitted, greatly simplifying the command. If an explicit amount is not requested for a given resource, the job is assigned the default amount of the resource (as defined by the chosen partition). A notable exception concerns option <code>‑‑gres=<gpu_resources></code>, which is always required (see below) if the job needs access to GPUs.''
 
:; --job-name=<jobname>
:: Specifies a name for the job. The specified name will appear along with the JOBID number when querying running jobs on the system with <code>squeue</code>. The default job name (i.e., the one assigned to the job when <code>--job-name</code> is not used) is the executable program's name.
 
:;‑‑container-image=<container_path.sqsh>
:: specifies the container to be run
 
:;‑‑no‑container‑entrypoint
:: specifies that the ''entrypoint'' defined in the container image should not be executed ([[Docker#Preparation|ENTRYPOINT in the Dockerfile that defines the container]]). The entrypoint is an element of a Docker container: a command that gets executed as soon as the container is in execution. Option <code>‑‑no‑container‑entrypoint</code> is useful when, for some reason, the user does not want the entrypoint in the container to be run.
 
:;<nowiki>‑‑container‑mounts=<mufasa_dir>:<docker_dir></nowiki>
:: specifies what parts of Mufasa's filesystem will be available within the container's filesystem, and where they will be mounted. This is necessary to let the container [[User Jobs#Job output|get input data from Mufasa and/or write output data to Mufasa]]. For instance, if <code><mufasa_dir>:<docker_dir></code> takes the value <code>/home/mrossi:/data</code> this tells srun to mount Mufasa's directory <code>/home/mrossi</code> in position <code>/data</code> within the filesystem of the Docker container. When the docker container reads or writes files in directory <code>/data</code> of its own (internal) filesystem, what actually happens is that files in <code>/home/mrossi</code> get manipulated instead. <code>/home/mrossi</code> is the only part of the filesystem of Mufasa that is visible to, and changeable by, the Docker container.
 
:;‑‑gres=<gpu_resources>
:: specifies what GPUs to assign to the container. <code>gpu_resources</code> is a comma-delimited list where each element has the form <code>gpu:<Type>:<amount></code>, where <code><Type></code> is one of the types of GPU available on Mufasa (see [[User Jobs#gres syntax|<code>gres</code> syntax]]) and <code><amount></code> is an integer between 1 and the number of GPUs of such type available to the partition. For instance, <code><gpu_resources></code> may be <code>gpu:40gb:1,gpu:10gb:3</code>, corresponding to asking for one "full" GPU and 3 "small" GPUs.
 
:: ''Important! The <code>‑‑gres</code> parameter is '''mandatory''' if the job needs to use the system's GPUs. Differently from other resources (where unspecified requests lead to the assignment of a default amount), GPUs must always be explicitly requested.''
 
:;‑‑mem=<mem_resources>
:: specifies the amount of RAM to assign to the container; for instance, <code><mem_resources></code> may be <code>200G</code>
 
:;‑‑cpus-per-task=<cpu_amount>
:: specifies how many CPUs to assign to the container; for instance, <code><cpu_amount></code> may be <code>2</code>
 
:;<nowiki>‑‑time=<duration></nowiki>
:: specifies the maximum time allowed for the job to run, in the format <code>days-hours:minutes:seconds</code>, where <code>days</code> is optional; for instance, <code><duration></code> may be <code>72:00:00</code>
 
:;‑‑pty
:: specifies that the job will be interactive (this is necessary when <code><command_to_run_within_container></code> is <code>/bin/bash</code>: see [[User Jobs#Running interactive jobs via SLURM|Running interactive jobs via SLURM]])
 
:;<command_to_run_within_container>
:: the command that will be put into execution '''within the Docker container''' as soon as the container is active. Note that this is mandatory for non-interactive user jobs and optional for interactive user jobs. If specified, this command will be executed in the environment created by Docker.
 
 
For interactive user jobs, a typical value for <code><command_to_run_within_container></code> is <code>/bin/bash</code>. This instructs srun to open an interactive shell session (i.e. a command-line terminal interface) within the container, from which the user will then run their job. Another typical value for <code><command_to_run_within_container></code> is <code>python</code>, which launches an interactive Python session from which the user will then run their job.
 
For non-interactive user jobs, using <code>[command_to_run_within_container]</code> is one of the two available methods to run the program(s) that the user wants to be executed within the Docker container. The other available method to run the user job(s) is to use the ''entrypoint'' of the container. The use of <code>[command_to_run_within_container]</code> is therefore optional.
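As an illustration (all paths and values below are hypothetical and must be adapted to your own container and files), a non-interactive job that runs a Python script from the mounted directory might be launched with:

```shell
srun -p small --container-image=/home/mrossi/container.sqsh \
     --container-mounts=/home/mrossi:/data \
     --gres=gpu:40gb:1 --mem=200G --cpus-per-task=2 --time=72:00:00 \
     python /data/train.py
```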
 
== Using execution scripts to run jobs ==
 
The <code>srun</code> commands described in [[User Jobs#Using SLURM to run a Docker container|Using SLURM to run a Docker container]] are very complex, and it's easy to forget some option or make mistakes while using them. For non-interactive jobs, there is a solution to this problem.
 
When the user job is non-interactive, in fact, the <code>srun</code> command can be substituted with a much simpler '''<code>sbatch</code> command'''. As [[User Jobs#Running jobs with SLURM: generalities|already explained]], <code>sbatch</code> can make use of an '''execution script''' to specify all the parts of the command to be run via SLURM. So the command to run the Docker container where the user job will take place becomes


<pre style="color: lightgrey; background: black;">
sbatch <execution_script>
</pre>


An execution script is a special type of Linux script that includes SBATCH directives. SBATCH directives are used to specify the values of the parameters that are otherwise set in the [options] part of an <code>srun</code> command.
 
:{|class="wikitable"
|'''''Note on Linux shell scripts'''''
|-
|''A shell script is a text file that will be run by the bash shell. In order to be acceptable as a bash script, a text file must:
 
* ''have the “executable” flag set''
* ''have <code>#!/bin/bash</code> as its very first line''
 
''Usually, a Linux shell script is given a name ending in ''.sh,'' such as ''my_execution_script.sh'', but this is not mandatory.''
 
''Within any shell script, lines preceded by <code>#</code> are comments (with the notable exception of the initial <code>#!/bin/bash</code> line). Use of blank lines as spacers is allowed.''
|}
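For instance, the two requirements above can be satisfied as follows (the script name and its contents are just an example):

```shell
# Create a minimal shell script: first line must be #!/bin/bash
cat > my_execution_script.sh <<'EOF'
#!/bin/bash
# lines starting with "#" (other than the first) are comments
echo "script ran"
EOF

chmod +x my_execution_script.sh   # set the "executable" flag
./my_execution_script.sh          # prints: script ran
```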
 
An execution script is a Linux shell script composed of two parts:
 
# a '''preamble''',  composed of directives using which the user specifies the values to be given to parameters, each preceded by the keyword <code>SBATCH</code>
# [optionally] one or more '''<code>srun</code> commands''' that launch jobs with SLURM using the parameter values specified in the preamble
 
The <code>srun</code> commands are optional because jobs can also be launched by the Docker container's own entrypoint.
 
Below is an '''execution script template''' to be copied and pasted into your own execution script text file.
 
The template includes all the options [[User Jobs#Using SLURM to run a Docker container|already described above]], plus a few additional useful ones (for instance, those that enable SLURM to send email messages to the user in correspondence to events in the lifecycle of their job). Information about all the possible options can be found in [https://slurm.schedmd.com/sbatch.html SLURM's own documentation].
 
Note that the <code>#SBATCH</code> prefix does not make a directive a comment: <code>#SBATCH</code> lines are ignored by bash but are processed by <code>sbatch</code>, so a directive written this way is active. To disable a directive without deleting it, prefix it with a second <code>#</code> (i.e. <code>##SBATCH</code>). To make them stand out more visibly, in the template the directives and commands are in bold.
 
<blockquote>
'''<nowiki>#</nowiki>!/bin/bash'''
 
<nowiki>#</nowiki>----------------start of preamble----------------
 
'''<nowiki>#</nowiki>SBATCH ‑p <partition_name>'''
 
'''<nowiki>#</nowiki>SBATCH ‑‑container-image=<container_path.sqsh>'''
 
'''<nowiki>#</nowiki>SBATCH --job-name=<name>'''
 
'''<nowiki>#</nowiki>SBATCH ‑‑no‑container‑entrypoint'''
 
'''<nowiki>#</nowiki>SBATCH ‑‑container‑mounts=<mufasa_dir>:<docker_dir>'''
 
'''<nowiki>#</nowiki>SBATCH ‑‑gres=<gpu_resources>'''
 
'''<nowiki>#</nowiki>SBATCH ‑‑mem=<mem_resources>'''
 
'''<nowiki>#</nowiki>SBATCH ‑‑cpus-per-task=<cpu_amount>'''
 
'''<nowiki>#</nowiki>SBATCH ‑‑time=<d-hh:mm:ss>'''
 
: <nowiki>#</nowiki> The following directives (not described [[User Jobs#Using SLURM to run a Docker container|so far]]) activate SLURM's email notifications:
 
: <nowiki>#</nowiki> the first specifies where they are sent; the second selects the job-lifecycle events (start, end, failure) that trigger a message
 
'''<nowiki>#</nowiki>SBATCH --mail-user <email_address>'''
 
'''<nowiki>#</nowiki>SBATCH --mail-type BEGIN,END,FAIL'''
 
<nowiki>#</nowiki>----------------end of preamble----------------
 
'''<nowiki>#</nowiki> srun <command_to_run_within_container>'''
 
: <nowiki>#</nowiki> to run the user job, either uncomment (and personalise) the above srun command or use the [[Docker#Preparation|entrypoint]] of the Docker container
</blockquote>
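As a worked illustration only (every path, name and amount below is hypothetical and must be replaced with your own values), a personalised execution script derived from the template might read:

```shell
#!/bin/bash
#----------------start of preamble----------------
#SBATCH -p small
#SBATCH --container-image=/home/mrossi/container.sqsh
#SBATCH --job-name=training_run
#SBATCH --container-mounts=/home/mrossi:/data
#SBATCH --gres=gpu:20gb:1
#SBATCH --mem=200G
#SBATCH --cpus-per-task=2
#SBATCH --time=72:00:00
#SBATCH --mail-user mario.rossi@example.com
#SBATCH --mail-type BEGIN,END,FAIL
#----------------end of preamble----------------

srun python /data/train.py
```

Such a script would then be submitted with <code>sbatch my_execution_script.sh</code>.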
 
== Nvidia Pyxis ==
 
Some of the options described above are specifically dedicated to Docker containers: these are provided by the [https://github.com/NVIDIA/pyxis Nvidia Pyxis] package that has been installed on Mufasa as an adjunct to SLURM. Pyxis allows unprivileged users (i.e., those that are not administrators of Mufasa) to execute containers and run commands within them.
 
More specifically, options <code>‑‑container-image</code>, <code>‑‑no‑container‑entrypoint</code>, <code>‑‑container-mounts</code> are provided to <code>srun</code> by Pyxis.
 
See the  [https://github.com/NVIDIA/pyxis Nvidia Pyxis github page] for additional information about the options that it provides to <code>srun</code>.
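As an illustration, the Pyxis options can be combined with the resource options in a single <code>srun</code> command. The following sketch is hypothetical (partition name, image path, mount paths, and resource amounts are placeholders, not actual Mufasa values):

```shell
# Hypothetical example: run an interactive bash shell inside a container
# image, bypassing its entrypoint and mounting a Mufasa directory into it.
# All paths, names and amounts below are placeholders.
srun -p gpu \
     --container-image=/home/myuser/my_image.sqsh \
     --no-container-entrypoint \
     --container-mounts=/home/myuser/data:/data \
     --gres=gpu:10gb:1 --mem=32G --cpus-per-task=2 \
     --time=0-02:00:00 \
     --pty /bin/bash
```

Here <code>--pty</code> asks <code>srun</code> to attach a pseudo-terminal, which is what makes the bash session interactive.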
 
== Launching a user job from within a Docker container ==
 
For interactive user jobs, once the Docker container (run as [[User Jobs#Using SLURM to run a Docker container|explained here]]) is up and running, the user is dropped to the interactive environment specified by <code><command_to_run_within_container></code>. This interactive environment can be, for instance, a bash shell or an interactive Python console. Once inside the interactive environment, the user can simply run the required program in the usual way (depending on the type of environment).
 
Please note that the interactive environment of the Docker container does not have any relation with Mufasa's system. The only contact point is the part of Mufasa's filesystem that has been grafted to the container's filesystem via the <code>‑‑container‑mounts</code> option of <code>srun</code>. In particular, none of the software packages (such as the Nvidia drivers) installed on Mufasa are available in the container, unless they have been installed in it at preparation time (as explained in [[Docker]]), or manually after the container is put in execution.
 
Also note that, once a Docker container launched with <code>srun</code> is in execution, its bash shell is indistinguishable from the bash shell of Mufasa where the <code>srun</code> command that put the container in execution was issued: the two shells share the same terminal window. The only clue that you are now in the container's shell may be the command prompt, which should show your location as <code>/opt</code>.
 
= Detaching from a running job with <code>screen</code> =
 
A consequence of the way <code>srun</code> operates is that if you launch an [[User Jobs#Interactive and non-interactive user jobs|interactive user job]], the shell where the command is running must remain open: if it closes, the job terminates. That shell runs in the terminal of your own PC where the [[System#Accessing Mufasa|SSH connection to Mufasa]] exists.
 
If you do not plan to keep the SSH connection to Mufasa open (for instance because you have to turn off or suspend your PC), there is a way to keep your interactive job alive. Namely, you should use command <code>srun</code> inside a ''screen session'' (often simply called "a screen"), then ''detach'' from the ''screen'' ([https://linuxize.com/post/how-to-use-linux-screen/ here] is one of many tutorials about <code>screen</code> available online).
 
Once you have detached from the screen session, you can close the SSH connection to Mufasa without damage. When you need to reach your (still running) job again, you can open a new SSH connection to Mufasa and then ''reattach'' to the ''screen''.
 
A use case for screen is writing your program in such a way that it prints progress advancement messages as it goes on with its work. Then, you can check its advancement by periodically reconnecting to the screen where the program is running and reading the messages it printed.
 
Basic usage of <code>screen</code> is explained below.
 
== Creating a screen session, running a job in it, detaching from it ==
 
# Connect to Mufasa with SSH
# From the Mufasa shell, run <pre style="color: lightgrey; background: black;">screen</pre>
# In the ''screen session'' ("screen") thus created (it has the look of an empty shell), launch your job with <code>srun</code>
# ''Detach'' from the screen by pressing '''''ctrl + A''''' followed by '''''D''''': you will come back to the original Mufasa shell, while your process will go on running in the screen
# You can now close the SSH connection to Mufasa without damaging your running job
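The steps above can also be condensed into non-interactive commands using <code>screen</code>'s own options. The session name and job command below are hypothetical:

```shell
# Start a detached screen session named "myjob" that runs srun inside it
# (the srun arguments are placeholders)
screen -dmS myjob srun -p gpu --time=0-01:00:00 my_program

# List active screen sessions to check that it exists
screen -ls

# Reattach to it by name when needed
screen -r myjob
```

Naming the session with <code>-S</code> makes it easier to pick the right one with <code>screen -r</code> when several sessions exist.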
 
== Reattaching to an active screen session ==
 
# Connect to Mufasa with SSH
# In the Mufasa shell, run <pre style="color: lightgrey; background: black;">screen -r</pre>
# You are now back to the screen where you launched your job
 
== Closing (i.e. destroying) a screen session ==
 
When you do not need a screen session anymore:
 
# reattach to the screen as explained above
# destroy the screen by pressing '''ctrl + A''' followed by '''\''' (i.e., backslash)
 
Of course, any program running within the screen gets terminated when the screen is destroyed.
 
= Using <code>salloc</code> to reserve resources =
 
== What is <code>salloc</code>? ==
 
[https://slurm.schedmd.com/salloc.html <code>salloc</code>] is a SLURM command that allows a user to reserve a set of resources (e.g., a 40 GB GPU) for a given time in the future.
 
The typical use of <code>salloc</code> is to "book" an interactive session where the user enjoys '''complete control of a set of resources'''. The resources that are part of this set are chosen by the user. Within the "booked" session, any job run by the user that relies on the reserved resources is immediately put into execution by SLURM.
 
More precisely:
* the user, using <code>salloc</code>, specifies what resources they need and the time when they will need them;
* when the delivery time comes, SLURM creates an interactive shell session for the user;
* within such session, the user can use <code>srun</code> and <code>sbatch</code> to run programs, enjoying full (i.e. not shared with anyone else) and instantaneous access to the resources.
 
Resource reservation using <code>salloc</code> is only possible if the request is made in advance of the delivery time. The more the resources that the user wants to reserve are in demand, the further in advance the request should be made to ensure that SLURM is able to fulfill it.
 
When a user makes a request for resources with <code>salloc</code>, the request (called an '''allocation''') gets added to the SLURM job queue of the requested partition as a job in <code>pending</code> (<code>PD</code>) state (job states are described [[User_Jobs#Interpreting Job state as provided by squeue|here]]). Indeed, resource allocation is the first part of SLURM's process of executing a user job, while the second part is running the program and letting it use the allocated resources. Using <code>salloc</code> corresponds to having SLURM perform the first part of the process (resource allocation) while leaving the second part (running programs) to the user.
 
Until the delivery time specified by the user comes, the allocation remains in state <code>PD</code>, and other jobs requesting the same resources, even if submitted later, are executed. While the request waits for the delivery time, however, it accumulates a priority that increases over time. The longer the allocation stays in the <code>PD</code> state, the stronger this accumulation of priority: so, by requesting resources with <code>salloc</code> '''well in advance of the delivery time''', users can ensure that the resources they need will be ready for them at the requested delivery time, even if these resources are highly contended.
 
== <code>salloc</code> commands ==
 
<code>salloc</code> commands use a syntax similar to that of <code>srun</code> commands. In particular, <code>salloc</code> lets a user specify what resources they need and -importantly- a '''delivery time''' for the requested resources (a delivery time can also be specified with <code>srun</code>, but in that case it is not very useful).

The typical <code>salloc</code> command has this form:

<pre style="color: lightgrey; background: black;">
salloc [-p <partition_name>] [--job-name=<jobname>] [‑‑gres=<gpu_resources>] [‑‑mem=<mem_resources>] [‑‑cpus‑per‑task=<cpu_amount>] ‑‑time=<duration> --begin=<time>
</pre>

The parts of the above command within <code>[square brackets]</code> are optional.
 
Below, the elements of the command are explained.
 
:;‑p <partition_name>
:: specifies the [[User Jobs#SLURM partitions|SLURM partition]] on which the job will be run. If it is not specified, the ''default partition'' is used.
 
:: ''Important! The chosen partition limits the resources that can be requested, since it is not allowed to request resources (type or quantity) that exceed what is allowed by the chosen partition.''
 
:: ''Important! If <code>‑p <partition_name></code> is used, options that specify how many resources to assign to the job (such as <code>‑‑mem=<mem_resources></code>, <code>‑‑cpus‑per‑task=<cpu_amount></code> or <code>‑‑time=<duration></code>) can be omitted, greatly simplifying the command. If an explicit amount is not requested for a given resource, the job is assigned the default amount of the resource (as defined by the chosen partition). A notable exception concerns option <code>‑‑gres=<gpu_resources></code>, which is always required (see below) if the job needs access to GPUs.''
 
:; --job-name=<jobname>
:: Specifies a name for the job corresponding to the resource allocation. The specified name will appear along with the JOBID number when querying running jobs on the system with <code>squeue</code>. The default job name (i.e., the one assigned to the job when <code>--job-name</code> is not used) is "interact".
 
:;‑‑gres=<gpu_resources>
:: specifies what GPUs are requested. <code>gpu_resources</code> is a comma-delimited list where each element has the form <code>gpu:<Type>:<amount></code>, where <code><Type></code> is one of the types of GPU available on Mufasa (see [[User Jobs#gres syntax|<code>gres</code> syntax]]) and <code><amount></code> is an integer between 1 and the number of GPUs of such type available to the partition. For instance, <code><gpu_resources></code> may be <code>gpu:40gb:1,gpu:10gb:3</code>.
 
:: ''Important! The <code>‑‑gres</code> parameter is '''mandatory''' if the job needs to use the system's GPUs. Differently from other resources (where unspecified requests lead to the assignment of a default amount), GPUs must always be explicitly requested.''
 
:;‑‑mem=<mem_resources>
:: specifies the amount of RAM requested; for instance, <code><mem_resources></code> may be <code>200G</code>
 
:;‑‑cpus-per-task=<cpu_amount>
:: specifies how many CPUs are requested; for instance, <code><cpu_amount></code> may be <code>2</code>
 
:;‑‑time=<duration>
:: specifies the maximum time allowed to the job to run, in the format <code>days-hours:minutes:seconds</code>, where <code>days</code> is optional; for instance, <code><duration></code> may be <code>72:00:00</code>. While the interactive session associated with the allocation is active, the user can cancel the allocation at any time simply by closing the session (e.g., with command <code>exit</code> for <code>bash</code>)
 
:;<nowiki>--begin=<time></nowiki>
:: specifies the delivery time of the resources reserved with <code>salloc</code>, according to the syntax described below. The delivery time must be a future time.
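Putting the options together, a complete <code>salloc</code> request might look like the following sketch (partition name, resource amounts and times are hypothetical examples, not recommendations):

```shell
# Hypothetical request: reserve one 40 GB GPU, 64 GB of RAM and 4 CPUs
# on partition "gpu" for 12 hours, delivered on 2030-01-21 at 09:00 GMT
salloc -p gpu --job-name=myalloc \
       --gres=gpu:40gb:1 --mem=64G --cpus-per-task=4 \
       --time=0-12:00:00 --begin=2030-01-21T09:00
```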
 
=== Syntax of parameter <code>--begin</code> ===
 
If the allocation is for the current day, you can specify <nowiki><time></nowiki> as hours and minutes in the form
 
:<code>HH:MM</code>


If you want to specify a time on a different day, the form for <code><time></code> is <code>YYYY-MM-DDTHH:MM</code>, where the uppercase 'T' separates the date from the time.

It is also possible to specify <code><time></code> relative to the current time, in one of the following forms:
: <code>now+Kminutes</code>
: <code>now+Khours</code>
: <code>now+Kdays</code>
where K is a (positive) integer.


Examples:
: <code>--begin=16:00</code>
: <code>--begin=now+1hours</code>
: <code>--begin=now+1days</code>
: <code>--begin=2030-01-20T12:34:00</code>


Note that Mufasa's time zone is GMT, so <nowiki><time></nowiki> must be expressed in GMT as well. If you want to know Mufasa's current time, use command

<pre style="color: lightgrey; background: black;">
date
</pre>


It provides an output similar to the following:
<pre style="color: lightgrey; background: black;">
Thu Nov 10 16:43:30 UTC 2022
</pre>
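When working out the GMT value to pass to <code>--begin</code>, the <code>date</code> command itself can do the conversion. The following sketch assumes GNU <code>date</code>; the time and time zone used are examples:

```shell
# Show the current time in UTC/GMT (the time zone Mufasa uses)
date -u

# Convert a local time (here given explicitly in CET, i.e. UTC+1) into the
# YYYY-MM-DDTHH:MM form expected by --begin
date -u -d '2030-01-20 12:34 CET' +%Y-%m-%dT%H:%M   # → 2030-01-20T11:34
```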
 
== How to use <code>salloc</code> ==
 
In the typical scenario, the user of <code>salloc</code> will make use of [[User_Jobs#Detaching from a running job with screen|screen]]. Command <code>screen</code> creates a shell session (called "a screen") that can be abandoned without closing it ("detaching from the screen") and reached again at a later time ("reattaching to the screen"). This means that a user can create a screen, run <code>salloc</code> within it to create an allocation for time X, detach from the screen, and reattach to it just before time X to use the reserved resources from the interactive session created by <code>salloc</code>.
 
More precisely, the operations needed to do this are the following:
 
# [[System#Accessing Mufasa|Connect to Mufasa with SSH]].
# From the Mufasa shell, run <pre style="color: lightgrey; background: black;">screen</pre>
# In the ''screen session'' ("screen") thus created run the [[User Jobs#salloc commands|<code>salloc</code> command]], specifying via its options the resources you need and the time at which you want them delivered.
# SLURM will respond with a message similar to <pre style="color: lightgrey; background: black;">salloc: Pending job allocation XXXX</pre>
# ''Detach'' from the screen by pressing '''''ctrl + A''''' followed by '''''D''''': you will come back to the original Mufasa shell.
# You can now close the SSH connection to Mufasa without damaging your resource allocation request.
# At the delivery time you specified in the [[User Jobs#salloc commands|<code>salloc</code> command]], connect to Mufasa with SSH.
# Once you are in the Mufasa shell, reattach to the screen with command <pre style="color: lightgrey; background: black;">screen -r</pre>
# You are now back to the screen where you used <code>salloc</code>; as soon as SLURM provides you with the resources you reserved, the message "''salloc: Pending job allocation XXXX''" is replaced by the shell prompt.
# You are now in the interactive shell session you booked with <code>salloc</code>. From here, you can run any programs you want, including <code>srun</code> and <code>sbatch</code>. For the whole duration of the allocation, your programs have unrestricted use of all the resources you reserved with <code>salloc</code>.<br>'''Important!''' Any job run within the shell session is subject to the time limit (i.e., maximum duration) imposed by the partition it is running on! Therefore, if the job reaches the time limit, it gets '''forcibly terminated''' by SLURM. Termination depends exclusively on the time limit: it occurs even if the end time of the allocation has not been reached yet. (Of course, the job also gets terminated if the allocation ends.)
# Once the interactive shell session is not needed anymore, cancel it by exiting from the session with <pre style="color: lightgrey; background: black;">exit</pre> (Note that if you get to the end of the time period you specified in your request without closing the shell session, SLURM does it for you, killing any programs still running.)
# You are now back to your screen. Destroy it by pressing '''ctrl + A''' followed by '''\''' (i.e., backslash) to get back to the Mufasa shell.
 
== Cancelling a resource request made with <code>salloc</code> ==
 
To cancel a request for resources made as explained in [[User Jobs#How to use salloc|How to use <code>salloc</code>]], follow these steps:
 
# [[System#Accessing Mufasa|Connect to Mufasa with SSH]].
# Once you are in the Mufasa shell, reattach to the screen where you used command <code>salloc</code> with command <pre style="color: lightgrey; background: black;">screen -r</pre>
# You should see the message "''salloc: Pending job allocation XXXX''" (if the allocation is still pending) or "''salloc: job XXXX queued and waiting for resources''" (if the allocation is done and waiting for its start time). Now just press '''Ctrl + C'''. This communicates to SLURM your intention to cancel your request for resources.
# SLURM will communicate the cancellation with message <pre style="color: lightgrey; background: black;">salloc: Job allocation XXXX has been revoked.</pre>
# Destroy the screen by pressing '''ctrl + A''' followed by '''\''' (i.e., backslash) to get back to the Mufasa shell.
 
= Automatic job caching =
 
When a job is run via SLURM (with or without an execution script), Mufasa exploits a (fully transparent) caching mechanism to speed up its execution. The speedup is obtained by removing the need for the running job to access the (mechanical and therefore relatively slow) HDDs where <code>/home</code> partitions reside, substituting such accesses with accesses to (solid-state and therefore much faster) SSDs.
 
Each time a job is run via SLURM, this is what happens automatically:
 
# Mufasa temporarily copies code and associated data from the directory where the executables are located (in the user's own <code>/home</code>) to a cache space located on system SSDs
# Mufasa launches the cached copy of the user executables, using the cached copies of the data as its input files
# The executables create their output files in the cache space
# When the user jobs end, Mufasa copies the output files from the cache space back to the user's own <code>/home</code>
 
The whole process is completely transparent to the user. The user simply prepares the executable (or the [[User Jobs# Using execution scripts to wrap user jobs|execution script]]) in a subdirectory of their <code>/home</code> directory and runs the job. When job execution is complete, the user finds their output data in the origin subdirectory of <code>/home</code>, exactly as if the execution actually occurred there.
 
'''Important!''' The caching mechanism requires that ''during job execution'' the user does not modify the contents of the <code>/home</code> subdirectory where executable and data were at execution time. Any such change, in fact, will be overwritten by Mufasa at the end of the execution, when files are copied back from the caching space.
 
= Monitoring and managing jobs =
 
SLURM provides Job Users with tools to inspect and manage jobs. While a [[Roles|Job User]] is able to see all users' jobs, they are only allowed to interact with their own.
 
The main commands used to interact with jobs are '''[https://slurm.schedmd.com/squeue.html <code>squeue</code>]''' to inspect the scheduling queues and '''[https://slurm.schedmd.com/scancel.html <code>scancel</code>]''' to terminate queued or running jobs.


== Inspecting jobs with <code>squeue</code> ==


Running command

<pre style="color: lightgrey; background: black;">
squeue
</pre>


provides an output similar to the following:
<pre style="color: lightgrey; background: black;">
JOBID PARTITION    NAME    USER ST      TIME  NODES NODELIST(REASON)
  520      fat    bash acasella  R 2-04:10:25      1 gn01
  523      fat    bash amarzull  R    1:30:35      1 gn01
  522      gpu    bash    clena  R  20:51:16      1 gn01
</pre>


This output comprises the following information:
 
; JOBID
: Numerical identifier of the job assigned by SLURM
: This identifier is used to intervene on the job, for instance with <code>scancel</code>
 
; PARTITION
: the partition that the job is run on
 
; NAME
: the name assigned to the job; can be personalised using the <code>--job-name</code> option


; USER
: username of the user who launched the job
; ST
: job state (see [[User Jobs#Job state|Job state]] for further information)


; TIME
: time that has passed since the beginning of job execution


; NODES
: number of nodes where the job is being executed (for Mufasa, this is always 1 as it is a single machine)


; NODELIST (REASON)
: name of the nodes where the job is being executed: for Mufasa it is always <code>gn01</code>, which is the name of the node corresponding to Mufasa.




To limit the output of <code>squeue</code> to the jobs owned by user <code><username></code>, it can be used like this:


<pre style="color: lightgrey; background: black;">
squeue -u <username>
</pre>
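To keep an eye on your own jobs continuously, the standard <code>watch</code> utility can be combined with <code>squeue</code>; for instance:

```shell
# Refresh the list of your own jobs every 10 seconds (press ctrl+C to stop);
# $USER expands to your username in the shell
watch -n 10 "squeue -u $USER"
```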


=== Interpreting Job state as provided by <code>squeue</code> ===


Jobs typically pass through several states in the course of their execution. Job state is shown in column "ST" of the output of <code>squeue</code> as an abbreviated code (e.g., "R" for RUNNING).


The most relevant codes and states are the following:


; PD PENDING
: Job is awaiting resource allocation.


; R RUNNING
: Job currently has an allocation.


; S SUSPENDED
: Job has an allocation, but execution has been suspended and CPUs have been released for other jobs.
; CG COMPLETING
: Job is in the process of completing. Some processes on some nodes may still be active.


; CD COMPLETED
: Job has terminated all processes on all nodes with an exit code of zero.


Beyond these, there are other (less frequent) job states. [https://slurm.schedmd.com/squeue.html The SLURM doc page for <code>squeue</code>] provides a complete list of them.


== Knowing when jobs are expected to end or start ==


If you are interested in understanding when jobs are expected to start or end, use command


<pre style="color: lightgrey; background: black;">
squeue -o "%5i %8u %10P %.2t |%19S |%.11L|"
</pre>


which provides an output similar to the following:


<pre style="color: lightgrey; background: black;">
JOBID USER    PARTITION  ST |START_TIME          |  TIME_LEFT|
5307  thuynh  fat        PD |2022-11-11T17:55:54 | 3-00:00:00|
5308  thuynh  fat        PD |2022-11-11T17:55:54 | 3-00:00:00|
5296  cziyang  fat        R |2022-11-08T16:58:03 | 1-00:48:14|
5306  thuynh  fat        R |2022-11-10T08:13:30 | 2-16:03:41|
5297  gnannini fat        R |2022-11-08T17:55:54 | 1-01:46:05|
5336  ssaitta  gpu        R |2022-11-10T08:13:00 |    6:03:11|
5358  dmilesi  gpulong    R |2022-11-10T15:11:32 | 2-23:01:43|
5338  cziyang  gpulong    R |2022-11-10T09:45:01 | 1-17:35:12|
</pre>


:; For running jobs (state <code>R</code>)
:: column "START_TIME" tells you when the job started its execution
:: column "TIME_LEFT" tells you how much remains of the running time requested by the job


:; For pending jobs (state <code>PD</code>)
:: column "START_TIME" tells you when the job is expected to start its execution
:: column "TIME_LEFT" tells you how much running time has been requested by the job


'''Important!''' Start and end times are forecasts based on the features of current jobs in the queues, and may change if running jobs end prematurely and/or if new jobs with higher priority are added to the queues. So these times should never be considered as certain.


If you simply want to know when pending jobs (state <code>PD</code>) are expected to begin execution, use


<pre style="color: lightgrey; background: black;">
squeue --start
</pre>


which lists pending jobs in order of increasing START_TIME (the job on top is the one which will be run first). For each pending job the command provides an output similar to the example below:


<pre style="color: lightgrey; background: black;">
JOBID PARTITION    NAME    USER ST          START_TIME  NODES SCHEDNODES          NODELIST(REASON)
5090      fat training  thuynh PD 2022-10-27T09:28:01      1 (null)              (Resources)
</pre>


== Getting detailed information about a job ==


If needed, complete information about a job (either pending or running) can be obtained using command


<pre style="color: lightgrey; background: black;">
scontrol show job <JOBID>
</pre>


where <code><JOBID></code> is the number from the first column of the output of <code>squeue</code>. The output of this command is similar to the following:


<pre style="color: lightgrey; background: black;">
JobId=936 JobName=bash
  UserId=acasella(1001) GroupId=acasella(1001) MCS_label=N/A
  Priority=7885 Nice=0 Account=research QOS=normal
  JobState=RUNNING Reason=None Dependency=(null)
  Requeue=0 Restarts=0 BatchFlag=0 Reboot=0 ExitCode=0:0
  RunTime=03:21:59 TimeLimit=3-00:00:00 TimeMin=N/A
  SubmitTime=2022-02-08T11:57:24 EligibleTime=2022-02-08T11:57:24
  AccrueTime=Unknown
  StartTime=2022-02-08T11:57:24 EndTime=2022-02-11T11:57:24 Deadline=N/A
  PreemptEligibleTime=2022-02-08T11:57:24 PreemptTime=None
  SuspendTime=None SecsPreSuspend=0 LastSchedEval=2022-02-08T11:57:24 Scheduler=Main
  Partition=fat AllocNode:Sid=rk018445:4034
  ReqNodeList=(null) ExcNodeList=(null)
  NodeList=gn01
  BatchHost=gn01
  NumNodes=1 NumCPUs=8 NumTasks=1 CPUs/Task=8 ReqB:S:C:T=0:0:*:*
  TRES=cpu=8,mem=128G,node=1,billing=8,gres/gpu:40gb=1
  Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*
  MinCPUsNode=8 MinMemoryNode=128G MinTmpDiskNode=0
  Features=(null) DelayBoot=00:00:00
  OverSubscribe=YES Contiguous=0 Licenses=(null) Network=(null)
  Command=/bin/bash
  WorkDir=/home/acasella
  Power=
  TresPerNode=gres:gpu:40gb:1
</pre>


In particular, the line beginning with ''"StartTime="'' provides expected times for the start and end of job execution. As explained in [[User_Jobs#Knowing_when_jobs_are_expected_to_end_or_start|Knowing when jobs are expected to end or start]], start time is only a prediction and subject to change.
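If you only need a couple of fields from this verbose output, standard text tools can filter it. For instance (the job ID below is the example one from the output above):

```shell
# Extract only the start and end time fields from scontrol's output
scontrol show job 936 | grep -oE '(Start|End)Time=[^ ]+'
```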


== Canceling a job with <code>scancel</code> ==


It is possible to cancel a job using command <code>scancel</code>, either while it is waiting for execution or when it is in execution (in this case you can choose what system signal to send the process in order to terminate it). The following are some examples of use of <code>scancel</code> adapted from [https://slurm.schedmd.com/scancel.html SLURM's documentation].


<pre style="color: lightgrey; background: black;">
scancel <JOBID>
</pre>
removes queued job <code><JOBID></code> from the execution queue.


<pre style="color: lightgrey; background: black;">
scancel --signal=TERM <JOBID>
</pre>
terminates execution of job <code><JOBID></code> with signal SIGTERM (request to stop).


<pre style="color: lightgrey; background: black;">
scancel --signal=KILL <JOBID>
</pre>
terminates execution of job <code><JOBID></code> with signal SIGKILL (force stop).


# temporarily copies code and associated data from the user's own /home partition to a cache space located on system SSDs;
<pre style="color: lightgrey; background: black;">
# runs the user job from the SSDs, using the copy of the data on the SSD as input;
scancel --state=PENDING --user=<username> --partition=<partition_name>
# creates the output file(s) on the SSDs;
</pre>
# when the job ends, copies the output files from the SSDs to the user's own /home partition .
cancels all pending jobs belonging to user <code><username></code> in partition <code><partition_name></code>.


The whole process is completely transparent to the user. The user simply prepares executable and data in their /home folder, then runs the job (possibly via an execution script). When job execution ends, the user finds their output data in the /home folder, exactly as if the execution actually occurred there.
== Knowing what jobs you ran today ==


= Monitoring and managing jobs =
Command


SLURM provides Job Users with several tools to inspect and manage jobs. While a Job User is able to inspect all users' jobs, they are only allowed to modify the condition of their own jobs.
<pre style="color: lightgrey; background: black;">
sacct -X
</pre>


From SLURM's overview (the links point to the appropriate URLs in SLURM's online documentation): “User tools include [https://slurm.schedmd.com/srun.html '''''srun'''''] to initiate jobs, [https://slurm.schedmd.com/scancel.html '''''scancel'''''] to terminate queued or running jobs, [https://slurm.schedmd.com/sinfo.html '''''sinfo'''''] to report system status, [https://slurm.schedmd.com/squeue.html '''''squeue'''''] to report the status of jobs [i.e. to inspect the scheduling queue], and [https://slurm.schedmd.com/sacct.html '''''sacct'''''] to get information about jobs and job steps that are running or have completed.
provides a list of all jobs run today by your user.

Latest revision as of 10:45, 31 May 2024


gres syntax

Whenever it is necessary to specify the quantity of gres, i.e. generic resources, a special syntax must be used. On Mufasa, gres resources are GPUs, so this syntax applies to GPUs. The number and types of Mufasa's GPUs are described here.

The name of each GPU resource takes the form

Name:Type

where Name is gpu and Type takes the following values:

  • 40gb for GPUs with 40 Gbytes of onboard RAM
  • 20gb for GPUs with 20 Gbytes of onboard RAM

So, for instance,

gpu:20gb

identifies the resource corresponding to GPUs with 20 GB of RAM. Mufasa has a given number of units of this resource, and a job can request some (or all) of them.

When asking for a gres resource (e.g., in an srun command or an SBATCH directive of an execution script), the syntax required by SLURM is

<Name>:<Type>:<quantity>

where <quantity> is an integer value specifying how many items of the resource are requested. So, for instance, to ask for 2 GPUs of type 20gb the syntax is

gpu:20gb:2

SLURM's generic resources are defined in /etc/slurm/gres.conf. In order to make GPUs available to SLURM's gres management, Mufasa makes use of Nvidia's NVML library. For additional information see SLURM's documentation.
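As a sketch of the syntax in use (the partition name and amounts below are illustrative, not a recommendation):

```shell
# Illustrative: request one 20 GB GPU when launching an interactive shell
# on the "gpu" partition (partition and quantity are just examples)
srun -p gpu --gres=gpu:20gb:1 --pty /bin/bash
```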

Looking for unused GPUs

GPUs are usually the most limited resource on Mufasa. So, if your job requires a GPU, the best way to get it executed quickly is to request a GPU that is not currently in use.

This command

sinfo -O Gres:100

provides a summary of all the Gres (i.e., GPU) resources possessed by Mufasa. It provides this output:

GRES                                                                                                
gpu:40gb:2(S:0-1),gpu:20gb:3(S:0-1),gpu:10gb:6(S:0-1)

To know which of the GPUs are currently in use, use command

sinfo -O GresUsed:100

which provides an output similar to this:

GRES_USED                                                                                           
gpu:40gb:2(IDX:0-1),gpu:20gb:2(IDX:5,8),gpu:10gb:3(IDX:3-4,6) 

By comparing the two lists (GRES and GRES_USED) above, you can see that at the moment:

  • of the 2 40 GB GPUs, both are in use
  • of the 3 20 GB GPUs, one is not in use
  • of the 6 10 GB GPUs, 3 are not in use
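Both pieces of information can also be requested with a single sinfo invocation, which makes the comparison easier (the column width 100 is only a formatting choice):

```shell
# Show total and in-use generic resources side by side
sinfo -O Gres:100,GresUsed:100
```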

SLURM Partitions

Several execution queues for jobs have been defined on Mufasa. Such queues are called partitions in SLURM terminology. Each partition has features (in terms of resources available to the jobs on that queue) that make it suitable for a certain category of jobs. SLURM command

sinfo

(link to SLURM docs) provides a list of available partitions. Its output is similar to this:

PARTITION  AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*        up      20:00      1    mix gn01
small         up   12:00:00      1    mix gn01
normal        up 1-00:00:00      1    mix gn01
longnormal    up 3-00:00:00      1    mix gn01
gpu           up 1-00:00:00      1    mix gn01
gpulong       up 3-00:00:00      1    mix gn01
fat           up 3-00:00:00      1    mix gn01

In this example, available partitions are named “debug”, “small”, “normal”, “longnormal”, “gpu”, “gpulong”, “fat”. The asterisk beside "debug" indicates that this is the default partition, i.e. the one that SLURM selects to run a job when no partition has been specified. (On Mufasa, partition names have been chosen to reflect the type of job that they are dedicated to.)

The columns in the standard output of sinfo shown above correspond to the following information:

PARTITION
name of the partition
AVAIL
state/availability of the partition: see below
TIMELIMIT
maximum runtime of a job allowed by the partition, in format [days-]hours:minutes:seconds
NODES
number of nodes available to jobs run on the partition: for Mufasa, this is always 1 since there is only 1 node in the computing cluster
STATE
state of the node (using these codes); typical values are mixed, meaning that some of the resources of the node are busy executing jobs while others are free, and allocated, meaning that all of the resources of the node are busy
NODELIST
list of nodes available to the partition: for Mufasa this field always contains gn01 since Mufasa is the only node in the computing cluster

One piece of information that the standard output of sinfo doesn't provide is whether there are partitions that can only be used by the root user of Mufasa. To know which partitions are root-only, you can use command

sinfo -o "%.10P %.4r"

Its output is

 PARTITION ROOT
    debug*   no
     small   no
    normal   no
longnormal   no
       gpu   no
   gpulong   no
       fat   no

and shows that on Mufasa no partitions are reserved for root.

As far as hardware resources (such as CPUs, GPUs and RAM) are concerned, the amount of each resource available to Mufasa's partitions is set by SLURM's accounting system, and is not visible to sinfo. See Partition features for a description of these amounts.

Partition features

The output of sinfo (see above) provides a list of available partitions, but (except for time) it does not provide information about the amount of resources that a partition makes available to the user jobs which are run on it. The amount of resources is visible through command

sacctmgr list qos format=name%-10,maxwall,maxtres%-64

which provides an output similar to the following:

Name           MaxWall MaxTRES                                                          
---------- ----------- ---------------------------------------------------------------- 
normal      1-00:00:00 cpu=16,gres/gpu:10gb=0,gres/gpu:20gb=0,gres/gpu:40gb=0,mem=128G  
small         12:00:00 cpu=2,gres/gpu:10gb=1,gres/gpu:20gb=0,gres/gpu:40gb=0,mem=16G    
longnormal  3-00:00:00 cpu=16,gres/gpu:10gb=0,gres/gpu:20gb=0,gres/gpu:40gb=0,mem=128G  
gpu         1-00:00:00 cpu=8,gres/gpu:10gb=2,gres/gpu:20gb=2,mem=64G                    
gpulong     3-00:00:00 cpu=8,gres/gpu:10gb=2,gres/gpu:20gb=2,mem=64G                    
fat         3-00:00:00 cpu=32,gres/gpu:10gb=2,gres/gpu:20gb=2,gres/gpu:40gb=2,mem=256G

Its elements are the following (for more information, see SLURM's documentation):

Name
name of the partition
MaxWall
maximum wall clock duration of the jobs run on the partition (after which they are killed by SLURM), in format [days-]hours:minutes:seconds
MaxTRES
maximum amount of resources ("Trackable RESources") available to a job running on the partition, where
cpu=K means that the maximum number of processor cores is K
gres/gpu:Type=K means that the maximum number of GPUs of class Type (see gres syntax) is K
mem=KG means that the maximum amount of system RAM is K GBytes

Note that there may be additional limits to the possibility to fully exploit the resources of a partition. For instance, there may be a cap on the maximum number of GPUs that can be used at the same time by a single job and/or a single user.

Partition availability

An important piece of information that sinfo provides (column "AVAIL") is the availability (also called state) of partitions. Possible partition states are:

up = the partition is available
Running jobs will be completed
Currently queued jobs will be executed as soon as resources allow
drain = the partition is in the process of becoming unavailable (down)
Running jobs will be completed
Queued jobs will be executed only when the partition becomes available again (up)
down = the partition is unavailable
There are no running jobs
Queued jobs will be executed only when the partition becomes available again (up)


When a partition passes from up to drain no harm is done to running jobs. When a partition passes from any other state to down, running jobs (if they exist) get killed.

A partition in state drain or down requires intervention by a Job Administrator to be restored to up.

Choosing the partition on which to run a job

When launching a job (as explained in Executing jobs on Mufasa) a user should select the partition that is most suitable for it according to the job's features. Launching a job on a partition avoids the need for the user to specify explicitly all of the resources that the job requires, relying instead (for unspecified resources) on the default amounts defined for the partition. Partition features explains how to find out how many of Mufasa's resources are associated to each partition.

The fact that, by selecting the right partition for their job, a user can pre-define the job's requirements without having to specify them makes partitions very handy and avoids possible mistakes. However, users can, if needed, change the resources requested by their jobs with respect to the default values associated with the chosen partition: any element of the default assignment of resources provided by a partition can be overridden by specifying an option when launching the job, so users are not forced to accept the default value. It usually makes sense to choose the most suitable partition for a job in the first place, and then to specify the job's requirements only for those resources whose default value is unsuitable.

Resource requests by the user launching a job can be both lower and higher than the default value of the partition for that resource. However, they cannot exceed the maximum value that the partition allows for requests of such resource, if set. If a user tries to run on a partition a job that requests a higher value of a resource than the partition‑specified maximum, the run command is refused.
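For instance, assuming the limits shown in the Partition features table above (where partition "small" allows at most 16 GB of RAM), overriding a default might look like this; the program name is a placeholder:

```shell
# Accepted: 10G is within the 16G maximum of partition "small"
srun -p small --mem=10G ./my_program

# Refused by SLURM: 32G exceeds the maximum that "small" allows
srun -p small --mem=32G ./my_program
```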

Tips for partition choice

The larger the fraction of system resources that a job asks for, the heavier the job becomes for Mufasa's limited capabilities. Since SLURM prioritises lighter jobs over heavier ones (in order to maximise the number of completed jobs), it is a very bad idea to ask for more resources than your job actually needs: doing so will delay job execution, possibly for a long time. These tips can guide partition choice for your job so that it gets executed quickly:

  • use the least powerful partition that can support the job
  • do not ask for more resources or time than needed
  • prefer partitions without access to GPUs
  • ask for GPUs that are currently not in use

User limitations on the use of resources

Mufasa is a shared machine, meaning that at any given time its resources subjected to limitations are split among all users who request them. This also means that there are limits on the amount of resources that Mufasa can provide to a given user, whatever the amount of resources that the user requested.

Such limitations come from two sources.

The first source is the fact that each user job is associated to the SLURM partition on which it runs. So, each job can only access the specific subset of resources that are available to the partition.

The second source of limitations is applied by SLURM on a per-user basis. Mufasa is configured in such a way that:

  • no more than 2 jobs per user can be running at the same time (note that, since each partition can execute only one job at any given time, the two jobs must make use of different partitions)
  • if a user already has a running job, a second job from the same user is only put into execution if there are no requests from other users for the partition it is intended to be run on

Please note that access to some partitions may be restricted to researchers (i.e. M.Sc. students cannot access such partitions).

Running jobs with SLURM: generalities

Note: these are general considerations. See Executing jobs on Mufasa for instructions about running your own processing jobs on Mufasa.


The commands that SLURM provides to run jobs are

srun [options] <command_to_be_run_via_SLURM>

and

sbatch [options] <command_to_be_run_via_SLURM>

(see SLURM documentation: srun, sbatch).

In both cases, <command_to_be_run_via_SLURM> can be any program or Linux shell script. By using srun or sbatch, the command or script specified by <command_to_be_run_via_SLURM> (including any programs launched by it) are added to SLURM's execution queues.

The main difference between srun and sbatch is that the first locks the shell from which it has been launched, so it is only really suitable for processes that use the console to interact with their user. (You can, though, detach from that shell and come back later using screen.) sbatch, on the other hand, does not lock the shell: it simply adds the job to the queue, but does not allow the user to interact with the process while it is running.

Additionally, with sbatch <command_to_be_run_via_SLURM> can be an execution script, i.e. a special (and SLURM-specific) type of Linux shell script that includes SBATCH directives. SBATCH directives can be used to specify the values of some of the parameters that would otherwise have to be set using the [options] part of the sbatch command. This is handy because it lets users record the parameters in an execution script instead of typing them on the command line when launching a job, which greatly reduces the possibility of mistakes. Also, an execution script is easy to keep and reuse.
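A minimal sketch of such an execution script follows; the job name, resource amounts and program name are placeholders, not values prescribed by this page:

```shell
#!/bin/bash
#SBATCH --job-name=example_job     # placeholder name, shown by squeue
#SBATCH -p small                   # run on partition "small"
#SBATCH --mem=8G                   # illustrative RAM request
#SBATCH --cpus-per-task=2          # illustrative CPU request
#SBATCH --time=02:00:00            # illustrative time limit

# the actual job: my_program is a placeholder
srun ./my_program
```

The script would then be submitted with `sbatch <script_name>`.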

The [options] part of srun and sbatch commands is used to tell SLURM the conditions under which it has to execute the job; in particular, it is used to specify what system resources SLURM should reserve for the job.

A quick way to define the set of resources that a program will be provided with is to use SLURM partitions. This is done with option -p <partition_name>. This option specifies that SLURM will run the program on a specific partition, and therefore that it will have access to all and only the resources available to that partition. As a consequence, all options that define how many resources to assign the job will only be able to provide the job with resources that are available to the chosen partition. Jobs that require resources that are not available to the chosen partition do not get executed.

For instance, running

srun -p small ./my_program

makes SLURM run my_program on the partition named “small”. Running the program this way means that the resources associated to this partition will be available to it for use.

Immediately after a srun command is launched by a user, SLURM outputs a message similar to this:

srun: job 10849 queued and waiting for resources

The shell is now locked while SLURM prepares the execution of the user program (if you are using screen you can detach from that shell and come back later).

When SLURM is ready to run the program, it prints a message similar to

srun: job 10849 has been allocated resources

and then executes the program.

Running interactive jobs via SLURM

As explained, SLURM command srun is suitable for launching interactive user jobs, i.e. jobs that use the terminal output and the keyboard to exchange information with a human user. If a user needs this type of interaction, they must run a bash shell (i.e. a terminal session) with a command similar to

srun --pty /bin/bash

and subsequently use the bash shell to run the interactive program. To close the SLURM-spawned bash shell, run (as with any other shell)

exit

Of course, the “base” shell (i.e. the one that opens when an SSH connection to Mufasa is established) can also be used to run programs: however, programs launched this way are not run via SLURM and cannot access most of the resources of the machine (in particular, there is no way to make GPUs accessible to them, and they can only access 2 CPUs). On the contrary, running programs with srun or sbatch ensures that they can access all the resources managed by SLURM.

GPU resources (if needed) must always be requested explicitly with parameter --gres=gpu:<Type>:K, where <Type> is one of the available GPU types and K is an integer between 1 and the maximum number of GPUs of that type available to the partition (see gres syntax). For instance, in order to run an interactive program which needs one GPU we may first run a bash shell via SLURM with command

srun --gres=gpu:20gb:1 --pty /bin/bash

and then run the interactive program from the shell newly opened by SLURM.

A way to specify what resources to assign to the bash shell run via SLURM is to run /bin/bash on one of the available partitions: by doing this, the shell is given access to the default amount of resources associated to the partition. For instance, to run the shell on partition “small” the command is

srun -p small --pty /bin/bash

The general structure of a command requesting SLURM to set up an interactive user job is the following:

srun [‑p <partition_name>] [--job-name=<jobname>] [‑‑gres=<gpu_resources>] [‑‑mem=<mem_resources>] [‑‑cpus‑per‑task=<cpu_amount>] [‑‑time=<duration>] ‑‑pty /bin/bash

Below, the elements of this command are explained.

‑p <partition_name>
specifies the SLURM partition on which the job will be run. If it is not specified, the default partition is used.
Important! The chosen partition limits the resources that can be requested, since it is not allowed to request resources (type or quantity) that exceed what is allowed by the chosen partition.
Important! If ‑‑p <partition_name> is used, options that specify how many resources to assign to the job (such as ‑‑mem=<mem_resources>, ‑‑cpus‑per‑task=<cpu_amount> or ‑‑time=<duration>) can be omitted, greatly simplifying the command. If an explicit amount is not requested for a given resource, the job is assigned the default amount of the resource (as defined by the chosen partition). A notable exception concerns option ‑‑gres=<gpu_resources>, which is always required (see below) if the job needs access to GPUs.
--job-name=<jobname>
Specifies a name for the job. The specified name will appear along with the JOBID number when querying running jobs on the system with squeue. The default job name (i.e., the one assigned to the job when --job-name is not used) is the executable program's name.
‑‑gres=<gpu_resources>
specifies what GPUs to assign to the container. gpu_resources is a comma-delimited list where each element has the form gpu:<Type>:<amount>, where <Type> is one of the types of GPU available on Mufasa (see gres syntax) and <amount> is an integer between 1 and the number of GPUs of such type available to the partition. For instance, <gpu_resources> may be gpu:40gb:1,gpu:10gb:3, corresponding to asking for one "full" GPU and 3 "small" GPUs.
Important! The ‑‑gres parameter is mandatory if the job needs to use the system's GPUs. Differently from other resources (where unspecified requests lead to the assignment of a default amount), GPUs must always be explicitly requested.
‑‑mem=<mem_resources>
specifies the amount of RAM to assign to the container; for instance, <mem_resources> may be 200G
‑‑cpus-per-task=<cpu_amount>
specifies how many CPUs to assign to the container; for instance, <cpu_amount> may be 2
‑‑time=<duration>
specifies the maximum time allowed to the job to run, in the format days-hours:minutes:seconds, where days is optional; for instance, <duration> may be 72:00:00
‑‑pty
specifies that the job will be interactive (this is necessary when <command_to_run_within_container> is /bin/bash: see Running interactive jobs via SLURM)


Mufasa is configured to show, as part of the command prompt of a bash shell run via SLURM, a message such as (SLURM ID xx) (where xx is the ID of the /bin/bash process within SLURM). When you see this message, you know that the bash shell you are interacting with is a SLURM-run one.

Another way to know if the current shell is the “base” shell or one run via SLURM is to execute command

echo $SLURM_JOB_ID

If no number gets printed, this means that the shell is the “base” one. If a number is printed, it is the SLURM job ID of the /bin/bash process.

Guidelines you must follow when executing jobs on Mufasa

Mufasa is a shared machine. In order to make best and fair use of its shared resources, every user must carefully follow the three guidelines below:

Limit resource requests to the amount that your job actually needs
An example about CPUs: to make use of multiple CPUs, a process must explicitly support multiple workers/processes in parallel. So if you ask SLURM for 16 CPUs but your code only includes one worker, just one CPU will be used to run it, while the other 15 CPUs will sit idle. Nonetheless, you reserved 16 CPUs: so SLURM will prevent anyone else from using those 15 idle CPUs (until your job terminates).
So it is necessary to adapt every srun or sbatch request to the code that will actually be run. If you don't know how many CPUs your code uses, just run a short-duration job as a test: since SLURM prioritises shorter processes, the test job should get executed much faster than the "real" job.
Run every resource-intensive process via SLURM, i.e. with commands srun or sbatch
If you need to launch resource-intensive programs manually via a shell, simply
  1. use SLURM to create an interactive shell
  2. when the SLURM-created shell is active, launch your programs from it
Running your heavy processes via SLURM is necessary to ensure that you use Mufasa's resources correctly and respectfully towards the other users: in fact SLURM ensures that Mufasa's resources are distributed to users fairly.
When a SLURM job is not needed anymore, close it with scancel
Often you don't know in advance how long a piece of code will take to complete its work. So please check from time to time whether it has finished, and, if it has finished while there is still time left before your SLURM job's duration runs out, just scancel the job.
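The last guideline can be put into practice with two commands; `<JOBID>` is a placeholder for the ID shown by squeue:

```shell
# List your own jobs to find the ID of the one that is no longer needed
squeue -u $USER

# Cancel it, freeing its resources for other users
scancel <JOBID>
```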

Other resources

The contents of this wiki are specifically tailored for users of Mufasa. They should include everything Mufasa users need to make good use of the machine. However, specific needs vary and advanced users may require advanced functionalities of SLURM that are not covered here.

There are a lot of resources on the internet dealing with the execution of jobs using SLURM. Usually these have been published for the benefit of the users of a specific High Performance Computing system, so there's no guarantee that whatever they suggest will work on Mufasa. If you feel the need for external resources, you may start with this one, which was prepared by the same people who built Mufasa.

Executing jobs on Mufasa

The main reason for a user to interact with Mufasa is to execute jobs that require resources not available to standard desktop-class machines. Therefore, launching jobs is the most important operation for Mufasa users: what follows explains how it is done.

Important! When launching jobs, always follow the guidelines.

Considering that all computation on Mufasa must occur within Docker containers, the jobs run by Mufasa users are always containers except for menial, non-computationally intensive jobs. This wiki includes directions about preparing Docker containers.

The standard process of launching a user job on Mufasa involves the following steps:


Step 1 --- Use SLURM to run the Docker container where the job will take place
[for interactive and non-interactive user jobs]
Step 2 --- Manually launch the user job from within the container
[for interactive user jobs only]

Interactive and non-interactive user jobs

Interactive user jobs
are jobs that require interaction with the user while they are running, via a bash shell running within the Docker container. The shell is used to receive commands from the user and/or print output messages. For interactive user jobs, the job is usually launched manually by the user (with a command issued via the shell) after the Docker container is in execution.
Non-interactive user jobs
are the most common variety. The user prepares the Docker container in such a way that, when in execution, the container autonomously puts the user's jobs into execution. The user does not have any communication with the Docker container while it is in execution.

Both interactive and non-interactive user jobs can be run via a (quite complex) command directly issued from the terminal opened via SSH. To reduce the possibility of mistakes, it is usually preferable to define an execution script that takes care of launching the job.

Job output

The whole point of running a user job is to collect its output. Usually, such output takes the form of one or more files generated within the filesystem of the Docker container.

As explained below, SLURM includes a mechanism to mount a part of Mufasa's own filesystem onto the container's filesystem: so when the job running within the container writes to this mounted part, it actually writes to Mufasa's filesystem. This means that when the Docker container ends its execution, its output files persist in Mufasa's filesystem (usually in a subdirectory of the user's own /home directory) and can be retrieved by the user at a later time.

The same mechanism can be used to allow user jobs running into a Docker container to read their input data from Mufasa's filesystem (usually a subdirectory of the user's own /home directory).

Using SLURM to run a Docker container

The first step to run a user job on Mufasa is to run the Docker container where the job will take place. A container is a “sandbox” containing the environment where the user's application operates. Parts of Mufasa's filesystem can be made visible (and writable, if they belong to the user's /home directory) to the environment of the container. This allows the containerized user application to read from, and write to, Mufasa's filesystem: for instance, to read data and write results. This wiki includes directions about preparing Docker containers.

Each user is in charge of preparing the Docker container(s) where the user's jobs will be executed. In most situations the user can simply select a suitable ready-made container from the many which are already available for use.

In order to run a Docker container via SLURM, a user must use a command similar to the following ones:

For interactive user jobs (parts within [square brackets] are optional):

srun [‑p <partition_name>] ‑‑container-image=<container_path.sqsh> [--job-name=<jobname>] [‑‑no‑container‑entrypoint] ‑‑container‑mounts=<mufasa_dir>:<docker_dir> [‑‑gres=<gpu_resources>] [‑‑mem=<mem_resources>] [‑‑cpus‑per‑task=<cpu_amount>] [‑‑time=<duration>] ‑‑pty /bin/bash

The srun command above runs the Docker Container and opens an interactive shell within the container's environment.

For non-interactive user jobs (parts within [square brackets] are optional):

srun [‑p <partition_name>] ‑‑container-image=<container_path.sqsh> [--job-name=<jobname>] [‑‑no‑container‑entrypoint] ‑‑container‑mounts=<mufasa_dir>:<docker_dir> [‑‑gres=<gpu_resources>] [‑‑mem=<mem_resources>] [‑‑cpus‑per‑task=<cpu_amount>] [‑‑time=<duration>] [<command_to_run_within_container>]

Below, the elements of these commands are explained.

‑p <partition_name>
specifies the SLURM partition on which the job will be run. If it is not specified, the default partition is used.
Important! The chosen partition limits the resources that can be requested, since it is not allowed to request resources (type or quantity) that exceed what is allowed by the chosen partition.
Important! If ‑‑p <partition_name> is used, options that specify how many resources to assign to the job (such as ‑‑mem=<mem_resources>, ‑‑cpus‑per‑task=<cpu_amount> or ‑‑time=<duration>) can be omitted, greatly simplifying the command. If an explicit amount is not requested for a given resource, the job is assigned the default amount of the resource (as defined by the chosen partition). A notable exception concerns option ‑‑gres=<gpu_resources>, which is always required (see below) if the job needs access to GPUs.
--job-name=<jobname>
Specifies a name for the job. The specified name will appear along with the JOBID number when querying running jobs on the system with squeue. The default job name (i.e., the one assigned to the job when --job-name is not used) is the executable program's name.
‑‑container-image=<container_path.sqsh>
specifies the container to be run
‑‑no‑container‑entrypoint
specifies that the entrypoint defined in the container image should not be executed (ENTRYPOINT in the Dockerfile that defines the container). The entrypoint is an element of a Docker container: a command that gets executed as soon as the container is in execution. Option ‑‑no‑container‑entrypoint is useful when, for some reason, the user does not want the entrypoint in the container to be run.
‑‑container‑mounts=<mufasa_dir>:<docker_dir>
specifies what parts of Mufasa's filesystem will be available within the container's filesystem, and where they will be mounted. This is necessary to let the container get input data from Mufasa and/or write output data to Mufasa. For instance, if <mufasa_dir>:<docker_dir> takes the value /home/mrossi:/data this tells srun to mount Mufasa's directory /home/mrossi in position /data within the filesystem of the Docker container. When the docker container reads or writes files in directory /data of its own (internal) filesystem, what actually happens is that files in /home/mrossi get manipulated instead. /home/mrossi is the only part of the filesystem of Mufasa that is visible to, and changeable by, the Docker container.
‑‑gres=<gpu_resources>
specifies what GPUs to assign to the container. gpu_resources is a comma-delimited list where each element has the form gpu:<Type>:<amount>, where <Type> is one of the types of GPU available on Mufasa (see gres syntax) and <amount> is an integer between 1 and the number of GPUs of such type available to the partition. For instance, <gpu_resources> may be gpu:40gb:1,gpu:10gb:3, corresponding to asking for one "full" GPU and 3 "small" GPUs.
Important! The ‑‑gres parameter is mandatory if the job needs to use the system's GPUs. Differently from other resources (where unspecified requests lead to the assignment of a default amount), GPUs must always be explicitly requested.
‑‑mem=<mem_resources>
specifies the amount of RAM to assign to the container; for instance, <mem_resources> may be 200G
‑‑cpus-per-task=<cpu_amount>
specifies how many CPUs to assign to the container; for instance, <cpu_amount> may be 2
‑‑time=<duration>
specifies the maximum time the job is allowed to run, in the format days-hours:minutes:seconds, where days is optional; for instance, <duration> may be 72:00:00
‑‑pty
specifies that the job will be interactive (this is necessary when <command_to_run_within_container> is /bin/bash: see Running interactive jobs via SLURM)
<command_to_run_within_container>
the command that will be put into execution within the Docker container as soon as the container is active. Note that this is mandatory for interactive user jobs and optional for non-interactive user jobs (which can rely on the container's entrypoint instead). If specified, this command will be executed in the environment created by Docker.


For interactive user jobs, a typical value for <command_to_run_within_container> is /bin/bash. This instructs srun to open an interactive shell session (i.e. a command-line terminal interface) within the container, from which the user will then run their job. Another typical value for <command_to_run_within_container> is python, which launches an interactive Python session from which the user will then run their job.
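As an illustration, an interactive container session could be launched with a command along the following lines. Note that the partition name, container image path, mount paths, and resource amounts below are placeholders, not actual Mufasa values; adapt them to your own case.

```shell
# Hypothetical example: open an interactive bash shell inside a container.
# All values (partition, image, mounts, resources) are illustrative only.
srun -p gpu \
     --job-name=my_interactive_job \
     --container-image=/path/to/my_container.sqsh \
     --container-mounts=/home/myuser:/data \
     --gres=gpu:10gb:1 \
     --mem=64G \
     --cpus-per-task=2 \
     --time=0-08:00:00 \
     --pty /bin/bash
```

When the command succeeds, the prompt that appears belongs to a bash shell running inside the container, from which the user can launch their programs.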

For non-interactive user jobs, using <command_to_run_within_container> is one of the two available methods to run the program(s) that the user wants executed within the Docker container. The other available method is to use the entrypoint of the container. The use of <command_to_run_within_container> is therefore optional.

Using execution scripts to run jobs

The srun commands described in Using SLURM to run a Docker container are very complex, and it's easy to forget some option or make mistakes while using them. For non-interactive jobs, there is a solution to this problem.

When the user job is non-interactive, in fact, the srun command can be substituted with a much simpler sbatch command. As already explained, sbatch can make use of an execution script to specify all the parts of the command to be run via SLURM. So the command to run the Docker container where the user job will take place becomes

sbatch <execution_script>

An execution script is a special type of Linux script that includes SBATCH directives. SBATCH directives are used to specify the values of the parameters that are otherwise set in the [options] part of an srun command.

Note on Linux shell scripts
A shell script is a text file that will be run by the bash shell. In order to be acceptable as a bash script, a text file must:
  • have the “executable” flag set
  • have #!/bin/bash as its very first line

Usually, a Linux shell script is given a name ending in .sh, such as my_execution_script.sh, but this is not mandatory.

Within any shell script, lines preceded by # are comments (with the notable exception of the initial #!/bin/bash line). Use of blank lines as spacers is allowed.

An execution script is a Linux shell script composed of two parts:

  1. a preamble, composed of directives with which the user specifies the values to be given to parameters, each introduced by the keyword #SBATCH
  2. [optionally] one or more srun commands that launch jobs with SLURM using the parameter values specified in the preamble

The srun commands are optional because jobs can also be launched by the Docker container's own entrypoint.

Below is an execution script template to be copied and pasted into your own execution script text file.

The template includes all the options already described above, plus a few additional useful ones (for instance, those that enable SLURM to send email messages to the user when events occur in the lifecycle of their job). Information about all the possible options can be found in SLURM's own documentation.

All the SBATCH directives in the script template below are inactive because commented out. To enable a directive, just uncomment it by removing the leading "#". To make them stand out more visibly, in the template the comments corresponding to actual instructions are in bold.

#!/bin/bash

#----------------start of preamble----------------

#SBATCH ‑p <partition_name>

#SBATCH ‑‑container-image=<container_path.sqsh>

#SBATCH --job-name=<name>

#SBATCH ‑‑no‑container‑entrypoint

#SBATCH ‑‑container‑mounts=<mufasa_dir>:<docker_dir>

#SBATCH ‑‑gres=<gpu_resources>

#SBATCH ‑‑mem=<mem_resources>

#SBATCH ‑‑cpus-per-task=<cpu_amount>

#SBATCH ‑‑time=<d-hh:mm:ss>

# The following directives (not described so far) activate SLURM's email notifications:
# the first specifies where they are sent; the following three set up notifications for the start, end, and failure of job execution

#SBATCH --mail-user <email_address>

#SBATCH --mail-type BEGIN

#SBATCH --mail-type END

#SBATCH --mail-type FAIL

#----------------end of preamble----------------

# srun <command_to_run_within_container>

# to run the user job, either uncomment (and personalise) the above srun command or use the entrypoint of the Docker container
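For concreteness, a filled-in execution script might look like the following. All values (partition, job name, image path, mounts, resource amounts, email address) are placeholders to adapt to your own job.

```shell
#!/bin/bash
# Hypothetical filled-in execution script; every value below is an example,
# not a recommended setting.

#SBATCH -p gpu
#SBATCH --job-name=train_model
#SBATCH --container-image=/path/to/my_container.sqsh
#SBATCH --container-mounts=/home/myuser:/data
#SBATCH --gres=gpu:10gb:1
#SBATCH --mem=64G
#SBATCH --cpus-per-task=2
#SBATCH --time=1-00:00:00
#SBATCH --mail-user myname@example.com
#SBATCH --mail-type END

# Run the user job inside the container
# (alternatively, omit this line and rely on the container's entrypoint):
srun python /data/train.py
```

Such a script, saved for instance as my_execution_script.sh and made executable, would then be submitted with sbatch my_execution_script.sh.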

Nvidia Pyxis

Some of the options described above are specifically dedicated to Docker containers: these are provided by the Nvidia Pyxis package that has been installed on Mufasa as an adjunct to SLURM. Pyxis allows unprivileged users (i.e., those who are not administrators of Mufasa) to execute containers and run commands within them.

More specifically, options ‑‑container-image, ‑‑no‑container‑entrypoint, ‑‑container-mounts are provided to srun by Pyxis.

See the Nvidia Pyxis GitHub page for additional information about the options that it provides to srun.

Launching a user job from within a Docker container

For interactive user jobs, once the Docker container (run as explained here) is up and running, the user is dropped to the interactive environment specified by <command_to_run_within_container>. This interactive environment can be, for instance, a bash shell or an interactive Python console. Once inside the interactive environment, the user can simply run the required program in the usual way (depending on the type of environment).

Please note that the interactive environment of the Docker container does not have any relation with Mufasa's system. The only contact point is the part of Mufasa's filesystem that has been grafted to the container's filesystem via the ‑‑container‑mounts option of srun. In particular, none of the software packages (such as the Nvidia drivers) installed on Mufasa are available in the container, unless they have been installed in it at preparation time (as explained in Docker), or manually after the container is put in execution.

Also note that, once a Docker container launched with srun is in execution, its own bash shell is completely indistinguishable from the bash shell of Mufasa where the srun command that put the container in execution was issued. The two shells share the same terminal window. The only clue to the fact that you now are, in fact, in the container's shell may be the command prompt, which should now show your location as /opt.

Detaching from a running job with screen

A consequence of the way srun operates is that if you launch an interactive user job, the shell where the command is running must remain open: if it closes, the job terminates. That shell runs in the terminal of your own PC where the SSH connection to Mufasa exists.

If you do not plan to keep the SSH connection to Mufasa open (for instance because you have to turn off or suspend your PC), there is a way to keep your interactive job alive. Namely, you should use command srun inside a screen session (often simply called "a screen"), then detach from the screen (here is one of many tutorials about screen available online).

Once you have detached from the screen session, you can close the SSH connection to Mufasa without damage. When you need to reach your (still running) job again, you can open a new SSH connection to Mufasa and then reattach to the screen.

A use case for screen is writing your program in such a way that it prints progress messages as it works. You can then check its advancement by periodically reconnecting to the screen where the program is running and reading the messages it has printed.

Basic usage of screen is explained below.

Creating a screen session, running a job in it, detaching from it

  1. Connect to Mufasa with SSH
  2. From the Mufasa shell, run
    screen
  3. In the screen session ("screen") thus created (it has the look of an empty shell), launch your job with srun
  4. Detach from the screen by pressing ctrl + A followed by D: you will come back to the original Mufasa shell, while your process will go on running in the screen
  5. You can now close the SSH connection to Mufasa without damaging your running job

Reattaching to an active screen session

  1. Connect to Mufasa with SSH
  2. In the Mufasa shell, run
    screen -r
  3. You are now back to the screen where you launched your job

Closing (i.e. destroying) a screen session

When you do not need a screen session anymore:

  1. reattach to the screen as explained above
  2. destroy the screen by pressing ctrl + A followed by \ (i.e., backslash)

Of course, any program running within the screen gets terminated when the screen is destroyed.
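When juggling more than one screen session, it helps to give each a name; the following are standard screen commands (independent of Mufasa, with an illustrative session name) to create, list, and reattach to named sessions:

```shell
screen -S training      # create a new session named "training"
screen -ls              # list existing sessions and their attached/detached state
screen -r training      # reattach to the session named "training"
```

With a single unnamed session, plain screen -r (as in the steps above) is sufficient; names become useful as soon as a second session exists.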

Using salloc to reserve resources

What is salloc?

salloc is a SLURM command that allows a user to reserve a set of resources (e.g., a 40 GB GPU) for a given time in the future.

The typical use of salloc is to "book" an interactive session where the user enjoys complete control of a set of resources. The resources that are part of this set are chosen by the user. Within the "booked" session, any job run by the user that relies on the reserved resources is immediately put into execution by SLURM.

More precisely:

  • the user, using salloc, specifies what resources they need and the time when they will need them;
  • when the delivery time comes, SLURM creates an interactive shell session for the user;
  • within such session, the user can use srun and sbatch to run programs, enjoying full (i.e. not shared with anyone else) and instantaneous access to the resources.

Resource reservation using salloc is only possible if the request is made in advance with respect to the delivery time. The more the requested resources are in demand, the earlier the request should be made to ensure that SLURM is able to fulfill it.

When a user makes a request for resources with salloc, the request (called an allocation) gets added to the SLURM job queue of the relevant partition as a job in pending (PD) state (job states are described here). Indeed, resource allocation is the first part of SLURM's process of executing a user job, while the second part is running the program and letting it use the allocated resources. Using salloc actually corresponds to having SLURM perform the first part of the process (resource allocation) while leaving the second part (running programs) to the user.

Until the delivery time specified by the user comes, the allocation remains in state PD, and other jobs requesting the same resources, even if submitted later, are executed. While the request waits for the delivery time, however, it accumulates a priority that increases over time. The longer the allocation stays in the PD state, the stronger this accumulation of priority: so, by requesting resources with salloc well in advance of the delivery time, users can ensure that the resources they need will be ready for them at the requested delivery time, even if these resources are highly contended.

salloc commands

salloc commands use a similar syntax to srun commands. In particular, salloc lets a user specify what resources they need and -importantly- a delivery time for the requested resources (delivery time can also be specified with srun, but in that case it is not very useful).

The typical salloc command has this form:

salloc [-p <partition_name>] [--job-name=<jobname>] [‑‑gres=<gpu_resources>] [‑‑mem=<mem_resources>] [‑‑cpus‑per‑task=<cpu_amount>] ‑‑time=<duration> --begin=<time>

The parts of the above command within [square brackets] are optional.

Below, the elements of the command are explained.

‑p <partition_name>
specifies the SLURM partition on which the job will be run. If it is not specified, the default partition is used.
Important! The chosen partition limits the resources that can be requested, since it is not allowed to request resources (type or quantity) that exceed what is allowed by the chosen partition.
Important! If -p <partition_name> is used, options that specify how many resources to assign to the job (such as --mem=<mem_resources>, --cpus-per-task=<cpu_amount> or --time=<duration>) can be omitted, greatly simplifying the command. If an explicit amount is not requested for a given resource, the job is assigned the default amount of the resource (as defined by the chosen partition). A notable exception is option --gres=<gpu_resources>, which is always required (see below) if the job needs access to GPUs.
--job-name=<jobname>
Specifies a name for the job corresponding to the resource allocation. The specified name will appear along with the JOBID number when querying running jobs on the system with squeue. The default job name (i.e., the one assigned to the job when --job-name is not used) is "interact".
‑‑gres=<gpu_resources>
specifies what GPUs are requested. gpu_resources is a comma-delimited list where each element has the form gpu:<Type>:<amount>, where <Type> is one of the types of GPU available on Mufasa (see gres syntax) and <amount> is an integer between 1 and the number of GPUs of such type available to the partition. For instance, <gpu_resources> may be gpu:40gb:1,gpu:10gb:3.
Important! The ‑‑gres parameter is mandatory if the job needs to use the system's GPUs. Differently from other resources (where unspecified requests lead to the assignment of a default amount), GPUs must always be explicitly requested.
‑‑mem=<mem_resources>
specifies the amount of RAM requested; for instance, <mem_resources> may be 200G
‑‑cpus-per-task=<cpu_amount>
specifies how many CPUs are requested; for instance, <cpu_amount> may be 2
‑‑time=<duration>
specifies the maximum time the job is allowed to run, in the format days-hours:minutes:seconds, where days is optional; for instance, <duration> may be 72:00:00. While the interactive session associated with the allocation is active, the user can cancel the allocation at any time simply by closing the session (e.g., with command exit for bash)
--begin=<time>
specifies the delivery time of the resources reserved with salloc, according to the syntax described below. The delivery time must be a future time.
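Putting the options together, a reservation might look like the following command. Resource amounts, partition, job name, and delivery time are illustrative only.

```shell
# Hypothetical example: reserve one large GPU, 128 GB of RAM and 4 CPUs
# for 8 hours, with delivery 24 hours from now.
salloc -p gpu \
       --job-name=my_reservation \
       --gres=gpu:40gb:1 \
       --mem=128G \
       --cpus-per-task=4 \
       --time=0-08:00:00 \
       --begin=now+1days
```

The syntax accepted by --begin is described in the next section.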

Syntax of parameter --begin

If the allocation is for the current day, you can specify <time> as hours and minutes in the form

HH:MM

If you want to specify a time on a different day, use the form

YYYY-MM-DDTHH:MM:SS

It is also possible to specify

now+Kminutes
now+Khours
now+Kdays

where K is a (positive) integer.

Examples:

--begin=16:00
--begin=now+1hours
--begin=now+1days
--begin=2030-01-20T12:34:00

Note that Mufasa's time zone is GMT, so <time> must be expressed in GMT as well. If you want to know Mufasa's current time, use command

date

It provides an output similar to the following:

Thu Nov 10 16:43:30 UTC 2022
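Since --begin times must be expressed in GMT, the date command can also produce ready-to-use values directly. The following invocations rely on GNU date (the version available on Linux systems such as Mufasa):

```shell
date -u                                    # current date and time in UTC
date -u +%H:%M                             # HH:MM form, for a same-day --begin
date -u -d '+2 hours' +%Y-%m-%dT%H:%M:%S   # absolute form, two hours from now
```

The last invocation prints a string such as 2022-11-10T18:43:30, which can be passed verbatim to --begin.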

How to use salloc

In the typical scenario, the user of salloc will make use of screen. Command screen creates a shell session (called "a screen") that can be abandoned without being closed ("detaching from the screen") and returned to at a later time ("reattaching to the screen"). This means that a user can create a screen, run salloc within it to create an allocation for time X, detach from the screen, and reattach to it just before time X to use the reserved resources from the interactive session created by salloc.

More precisely, the operations needed to do this are the following:

  1. Connect to Mufasa with SSH.
  2. From the Mufasa shell, run
    screen
  3. In the screen session ("screen") thus created, run the salloc command, specifying via its options the resources you need and the time at which you want them delivered.
  4. SLURM will respond with a message similar to
    salloc: Pending job allocation XXXX
  5. Detach from the screen by pressing ctrl + A followed by D: you will come back to the original Mufasa shell.
  6. You can now close the SSH connection to Mufasa without damaging your resource allocation request.
  7. At the delivery time you specified in the salloc command, connect to Mufasa with SSH.
  8. Once you are in the Mufasa shell, reattach to the screen with command
    screen -r
  9. You are now back in the screen where you used salloc; as soon as SLURM provides you with the resources you reserved, the message "salloc: Pending job allocation XXXX" changes to the shell prompt.
  10. You are now in the interactive shell session you booked with salloc. From here, you can run any programs you want, including srun and sbatch. For the whole duration of the allocation, your programs have unrestricted use of all the resources you reserved with salloc.
    Important! Any job run within the shell session is subject to the time limit (i.e., maximum duration) imposed by the partition it is running on! Therefore, if the job reaches the time limit, it gets forcibly terminated by SLURM. Termination depends exclusively on the time limit: it occurs even if the end time of the allocation has not been reached yet. (Of course, the job also gets terminated if the allocation ends.)
  11. Once the interactive shell session is not needed anymore, cancel it by exiting from the session with
    exit
    (Note that if you get to the end of the time period you specified in your request without closing the shell session, SLURM does it for you, killing any programs still running.)
  12. You are now back to your screen. Destroy it by pressing ctrl + A followed by \ (i.e., backslash) to get back to the Mufasa shell.

Cancelling a resource request made with salloc

To cancel a request for resources made as explained in How to use salloc, follow these steps:

  1. Connect to Mufasa with SSH.
  2. Once you are in the Mufasa shell, reattach to the screen where you used command salloc with command
    screen -r
  3. You should see the message "salloc: Pending job allocation XXXX" (if the allocation is still pending) or "salloc: job XXXX queued and waiting for resources" (if the allocation is done and waiting for its start time). Now just press Ctrl + C. This communicates to SLURM your intention to cancel your request for resources.
  4. SLURM will communicate the cancellation with message
    salloc: Job allocation XXXX has been revoked.
  5. Destroy the screen by pressing ctrl + A followed by \ (i.e., backslash) to get back to the Mufasa shell.

Automatic job caching

When a job is run via SLURM (with or without an execution script), Mufasa exploits a (fully transparent) caching mechanism to speed up its execution. The speedup is obtained by removing the need for the running job to access the (mechanical and therefore relatively slow) HDDs where /home partitions reside, replacing those accesses with accesses to (solid-state and therefore much faster) SSDs.

Each time a job is run via SLURM, this is what happens automatically:

  1. Mufasa temporarily copies code and associated data from the directory where the executables are located (in the user's own /home) to a cache space located on system SSDs
  2. Mufasa launches the cached copy of the user executables, using the cached copies of the data as its input files
  3. The executables create their output files in the cache space
  4. When the user jobs end, Mufasa copies the output files from the cache space back to the user's own /home

The whole process is completely transparent to the user. The user simply prepares the executable (or the execution script) in a subdirectory of their /home directory and runs the job. When job execution is complete, the user finds their output data in the origin subdirectory of /home, exactly as if the execution actually occurred there.

Important! The caching mechanism requires that, during job execution, the user does not modify the contents of the /home subdirectory where the executable and data were located at launch time. Any such change will in fact be overwritten by Mufasa at the end of the execution, when files are copied back from the caching space.

Monitoring and managing jobs

SLURM provides Job Users with tools to inspect and manage jobs. While a Job User is able to see all users' jobs, they are only allowed to interact with their own.

The main commands used to interact with jobs are squeue to inspect the scheduling queues and scancel to terminate queued or running jobs.

Inspecting jobs with squeue

Running command

squeue

provides an output similar to the following:

JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
  520       fat     bash acasella  R 2-04:10:25      1 gn01
  523       fat     bash amarzull  R    1:30:35      1 gn01
  522       gpu     bash    clena  R   20:51:16      1 gn01

This output comprises the following information:

JOBID
Numerical identifier of the job assigned by SLURM
This identifier is used to intervene on the job, for instance with scancel
PARTITION
the partition that the job is run on
NAME
the name assigned to the job; can be personalised using the --job-name option
USER
username of the user who launched the job
ST
job state (see Job state for further information)
TIME
time that has passed since the beginning of job execution
NODES
number of nodes where the job is being executed (for Mufasa, this is always 1 as it is a single machine)
NODELIST (REASON)
name of the node(s) where the job is being executed: for Mufasa it is always gn01, which is the name of the node corresponding to Mufasa.


To limit the output of squeue to the jobs owned by user <username>, it can be used like this:

squeue -u <username>
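A few other common variations, all standard squeue options (the partition name below is illustrative):

```shell
squeue -u $USER          # only your own jobs
squeue -p gpu            # only jobs on a given partition (here "gpu")
squeue -j <JOBID>        # a single job, identified by its JOBID
```

These filters can also be combined, e.g. to list only your own jobs on one partition.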

Interpreting Job state as provided by squeue

Jobs typically pass through several states in the course of their execution. Job state is shown in column "ST" of the output of squeue as an abbreviated code (e.g., "R" for RUNNING).

The most relevant codes and states are the following:

PD PENDING
Job is awaiting resource allocation.
R RUNNING
Job currently has an allocation.
S SUSPENDED
Job has an allocation, but execution has been suspended and CPUs have been released for other jobs.
CG COMPLETING
Job is in the process of completing. Some processes on some nodes may still be active.
CD COMPLETED
Job has terminated all processes on all nodes with an exit code of zero.

Beyond these, there are other (less frequent) job states. The SLURM doc page for squeue provides a complete list of them.

Knowing when jobs are expected to end or start

If you are interested in understanding when jobs are expected to start or end, use command

squeue -o "%5i %8u %10P %.2t |%19S |%.11L|"

which provides output similar to the following:

JOBID USER     PARTITION  ST |START_TIME          |  TIME_LEFT|
5307  thuynh   fat        PD |2022-11-11T17:55:54 | 3-00:00:00|
5308  thuynh   fat        PD |2022-11-11T17:55:54 | 3-00:00:00|
5296  cziyang  fat         R |2022-11-08T16:58:03 | 1-00:48:14|
5306  thuynh   fat         R |2022-11-10T08:13:30 | 2-16:03:41|
5297  gnannini fat         R |2022-11-08T17:55:54 | 1-01:46:05|
5336  ssaitta  gpu         R |2022-11-10T08:13:00 |    6:03:11|
5358  dmilesi  gpulong     R |2022-11-10T15:11:32 | 2-23:01:43|
5338  cziyang  gpulong     R |2022-11-10T09:45:01 | 1-17:35:12|
For running jobs (state R)
column "START_TIME" tells you when the job started its execution
column "TIME_LEFT" tells you how much remains of the running time requested by the job
For pending jobs (state PD)
column "START_TIME" tells you when the job is expected to start its execution
column "TIME_LEFT" tells you how much running time has been requested by the job

Important! Start and end times are forecasts based on the features of current jobs in the queues, and may change if running jobs end prematurely and/or if new jobs with higher priority are added to the queues. So these times should never be considered as certain.

If you simply want to know when pending jobs (state PD) are expected to begin execution, use

squeue --start

which lists pending jobs in order of increasing START_TIME (the job on top is the one which will be run first). For each pending job the command provides an output similar to the example below:

JOBID PARTITION     NAME     USER ST          START_TIME  NODES SCHEDNODES           NODELIST(REASON)
 5090       fat training   thuynh PD 2022-10-27T09:28:01      1 (null)               (Resources)

Getting detailed information about a job

If needed, complete information about a job (either pending or running) can be obtained using command

scontrol show job <JOBID>

where <JOBID> is the number from the first column of the output of squeue. The output of this command is similar to the following:

JobId=936 JobName=bash
   UserId=acasella(1001) GroupId=acasella(1001) MCS_label=N/A
   Priority=7885 Nice=0 Account=research QOS=normal
   JobState=RUNNING Reason=None Dependency=(null)
   Requeue=0 Restarts=0 BatchFlag=0 Reboot=0 ExitCode=0:0
   RunTime=03:21:59 TimeLimit=3-00:00:00 TimeMin=N/A
   SubmitTime=2022-02-08T11:57:24 EligibleTime=2022-02-08T11:57:24
   AccrueTime=Unknown
   StartTime=2022-02-08T11:57:24 EndTime=2022-02-11T11:57:24 Deadline=N/A
   PreemptEligibleTime=2022-02-08T11:57:24 PreemptTime=None
   SuspendTime=None SecsPreSuspend=0 LastSchedEval=2022-02-08T11:57:24 Scheduler=Main
   Partition=fat AllocNode:Sid=rk018445:4034
   ReqNodeList=(null) ExcNodeList=(null)
   NodeList=gn01
   BatchHost=gn01
   NumNodes=1 NumCPUs=8 NumTasks=1 CPUs/Task=8 ReqB:S:C:T=0:0:*:*
   TRES=cpu=8,mem=128G,node=1,billing=8,gres/gpu:40gb=1
   Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*
   MinCPUsNode=8 MinMemoryNode=128G MinTmpDiskNode=0
   Features=(null) DelayBoot=00:00:00
   OverSubscribe=YES Contiguous=0 Licenses=(null) Network=(null)
   Command=/bin/bash
   WorkDir=/home/acasella
   Power=
   TresPerNode=gres:gpu:40gb:1

In particular, the line beginning with "StartTime=" provides expected times for the start and end of job execution. As explained in Knowing when jobs are expected to end or start, start time is only a prediction and subject to change.

Canceling a job with scancel

It is possible to cancel a job using command scancel, either while it is waiting for execution or when it is in execution (in this case you can choose what system signal to send the process in order to terminate it). The following are some examples of use of scancel adapted from SLURM's documentation.

scancel <JOBID>

removes queued job <JOBID> from the execution queue.

scancel --signal=TERM <JOBID>

terminates execution of job <JOBID> with signal SIGTERM (request to stop).

scancel --signal=KILL <JOBID>

terminates execution of job <JOBID> with signal SIGKILL (force stop).

scancel --state=PENDING --user=<username> --partition=<partition_name>

cancels all pending jobs belonging to user <username> in partition <partition_name>.

Knowing what jobs you ran today

Command

sacct -X

provides a list of all jobs run today by your user.
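sacct also accepts a --format option to select which columns are shown and a -S (--starttime) option to widen the time window beyond today; for example (the column names are standard sacct fields, the date is illustrative):

```shell
sacct -X --format=JobID,JobName,Partition,State,Elapsed   # today's jobs, key columns only
sacct -X -S 2022-11-01                                    # all your jobs since a given date
```

See the SLURM documentation for sacct for the complete list of available fields.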