Mufasa is a Linux server located in a server room managed by the System Administrators.

Job Users and Job Administrators can only access Mufasa remotely.

Remote access to Mufasa is performed using the SSH protocol for the execution of commands and the SFTP protocol for the exchange of files. Once logged in, a user interacts with Mufasa via a terminal (text-based) interface.

Hardware

Mufasa is a server for massively parallel computation. It has been set up and configured by E4 Computer Engineering with the support of the Biomechanics Group, the CartCasLab laboratory and the NearLab laboratory.

Mufasa's main hardware components are:

  • 2 AMD Epyc 7542 32-core processors (64 CPU cores total)
  • 1 TB RAM
  • 9 TB of SSDs (for OS and job caching)
  • 28 TB of HDDs (for user /home directories)
  • 5 Nvidia A100 GPUs (based on the Ampere architecture)
  • Ubuntu Linux operating system

Usually each of these resources (e.g., a GPU) is not fully assigned to a single user or a single job. On the contrary, resources are shared among different users and processes in order to optimise their usage and availability. Most of the management of this sharing is done by SLURM.

CPUs and GPUs

Mufasa is fitted with two 32-core CPUs, so the system has a total of 64 physical CPU cores (each of which can run 2 threads). Of these 64 cores, 2 are reserved for jobs run outside the SLURM job scheduling system (i.e., for low-power "housekeeping" tasks), while the remaining 62 are reserved for jobs run via SLURM.

As for the GPUs, some of the 5 physical A100 processing cards are subdivided into “virtual” GPUs with different capabilities using Nvidia's MIG (Multi-Instance GPU) system. The command

nvidia-smi -L

provides an overview of the physical and virtual GPUs available to users on a system. (On Mufasa, this command may need to be launched in a bash shell via the SLURM job scheduling system, as explained in the section on SLURM below, in order to be able to access the GPUs.) The output of nvidia-smi -L is similar to the following:

GPU 0: NVIDIA A100-PCIE-40GB (UUID: GPU-a9f6e4f2-2877-8642-1802-5eeb3518d415)
  MIG 3g.20gb     Device  0: (UUID: MIG-dd1ccc27-d106-5cd9-80f1-b6291f0d682d)
  MIG 3g.20gb     Device  1: (UUID: MIG-abe13a42-013b-5bef-aa5e-bbd268d72447)
GPU 1: NVIDIA A100-PCIE-40GB (UUID: GPU-5f28ca0a-5b2c-bfc7-5b9f-581b5ca1d110)
  MIG 3g.20gb     Device  0: (UUID: MIG-07372a92-2e37-5ad6-b334-add0100cf5e3)
  MIG 3g.20gb     Device  1: (UUID: MIG-a704d927-7303-5077-ab7c-6ead57329233)
GPU 2: NVIDIA A100-PCIE-40GB (UUID: GPU-fb86701b-5781-b63c-5cda-911cff3a5edb)
GPU 3: NVIDIA A100-PCIE-40GB (UUID: GPU-bbeed512-ab4c-e984-cfea-8067c009a600)
  MIG 3g.20gb     Device  0: (UUID: MIG-0d1232cd-6b37-5ac7-b00f-a9fdf6997b72)
  MIG 3g.20gb     Device  1: (UUID: MIG-bdbcf24a-a0aa-56fb-a7e4-fc18f17b7f24)
GPU 4: NVIDIA A100-PCIE-40GB (UUID: GPU-a9511357-2476-7ddf-c4c5-c90feb68acfd)

This output shows that the physical Nvidia A100 GPUs installed on Mufasa have been subdivided as follows:

  • two of the physical GPUs (GPU 2 and GPU 4) have not been subdivided at all
  • three of the physical GPUs (GPU 0, GPU 1 and GPU 3) have been subdivided into 2 virtual GPUs with 20 GB of RAM each

Thanks to MIG, users can use all the GPUs listed above as if they were all physical devices installed on Mufasa, without having to worry (or even know) which actually are and which instead are virtual GPUs.

All in all, then, users of Mufasa are provided with the following set of 8 GPUs:

  • 2 GPUs with 40 GB of RAM each
  • 6 GPUs with 20 GB of RAM each

How these devices are made available to Mufasa users is explained in User Jobs.
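
If you want to run nvidia-smi yourself, remember that (as noted above) it may need to be launched through SLURM in order to see the GPUs. The following is only a minimal sketch that uses the standard SLURM srun command with the generic --gres syntax; the partitions and workflow actually recommended on Mufasa are described in User Jobs.

srun --gres=gpu:1 --pty nvidia-smi -L

This asks SLURM for one GPU (physical or virtual) and runs nvidia-smi -L in an interactive pseudo-terminal once the GPU has been allocated.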

Accessing Mufasa

User access to Mufasa is always remote and uses the SSH (Secure Shell) protocol.

To open a remote connection to Mufasa, open a local terminal on your computer and run the following command in it:

ssh <username>@<IP_address>

where <username> is your username on Mufasa and <IP_address> is one of Mufasa's IP addresses, i.e. either 10.79.23.96 or 10.79.23.97

For example, user mrossi may access Mufasa with command

ssh mrossi@10.79.23.97

Access via SSH works from Linux, macOS and Windows 10 (and later) terminals. For Windows users, a handy alternative tool (which also includes an X server, required to run Linux programs with a graphical user interface on Mufasa) is MobaXterm.
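
As an optional convenience for Linux and macOS users (this relies only on the standard OpenSSH client; the alias name mufasa below is arbitrary), an entry in ~/.ssh/config avoids retyping the address and username every time:

Host mufasa
    HostName 10.79.23.96
    User mrossi

With this entry in place, the connection can be opened simply with

ssh mufasa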

If you don't have a user account on Mufasa, you first have to ask your supervisor for one. See Users for more information about Mufasa's users.

As soon as you launch the ssh command, you will be asked to type the password of your user account on Mufasa. Once you provide the password, the local terminal on your computer becomes a remote terminal (a “remote shell”) through which you interact with Mufasa. The remote shell shows a command prompt such as

<username>@rk018445:~$

(rk018445 is the Linux hostname of Mufasa). For instance, user mrossi will see a prompt similar to this:

mrossi@rk018445:~$

In the remote shell, you can issue commands to Mufasa by typing them after the prompt, then pressing the enter key. Since Mufasa is a Linux server, it responds to all the standard Linux system commands such as pwd (which prints the path to the current directory) or cd <destination_dir> (which changes the current directory). On the internet you can find many tutorials about the Linux command line, such as this one.

To close the SSH session run

exit

from the command prompt of the remote shell.

VPN

To be able to connect to Mufasa, your computer must belong to Polimi's LAN. This happens either because the computer is physically located at Politecnico di Milano and connected via ethernet, or because you are using Polimi's VPN (Virtual Private Network) to connect to its LAN from somewhere else (such as your home). In particular, using the VPN is the only way to use Mufasa from outside Polimi. See this DEIB webpage for instructions about how to activate VPN access.

SSH timeout

SSH sessions to Mufasa may be subject to an inactivity timeout: i.e., after a given inactivity period the ssh session gets automatically closed. Users who need to be able to reconnect to the very same shell where they launched a program (for instance because their program is interactive or because it provides progress update messages) should use the screen command.
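
A minimal screen workflow (standard screen commands, shown here only as a sketch; the session name is arbitrary) looks like this:

screen -S mysession    # start a named session and launch your program inside it
# detach with Ctrl-a followed by d; the session keeps running after you disconnect
screen -ls             # after reconnecting via SSH, list your sessions
screen -r mysession    # reattach to the named session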

SSH and graphics

The standard form of the ssh command, i.e. the one described at the beginning of Accessing Mufasa, should always be preferred. However, it only allows text communication with Mufasa. In special cases it may be necessary to remotely run (on Mufasa) Linux programs that have a graphical user interface. These programs require interaction with an X server running on the user's own machine (a Linux machine, or one that provides an X server, such as Windows with MobaXterm). A special mode of operation of ssh is needed to enable this. It is engaged by running ssh like this:

ssh -X <your username on Mufasa>@<Mufasa's IP address>

File transfer

Uploading files from a local machine to Mufasa and downloading files from Mufasa onto a local machine is done using the SFTP protocol (Secure File Transfer Protocol).

Linux and macOS users can directly use the sftp package, as explained (for instance) by this guide. Windows users can interact with Mufasa via the SFTP protocol using the MobaXterm software package. macOS users can also interact with Mufasa via SFTP with the Cyberduck software package.

For Linux and macOS users, file transfer to/from Mufasa occurs via an interactive sftp shell, i.e. a remote shell very similar to the one described in Accessing Mufasa. The first thing to do is to open a terminal and run the following command (note the similarity to SSH connections):

sftp <username>@<IP_address>

where <username> is your username on Mufasa, and <IP_address> is either 10.79.23.96 or 10.79.23.97

You will be asked for your password. Once you provide it, you access an interactive sftp shell, where the command prompt takes the form

sftp>

From this shell you can run the commands to exchange files. Most of these commands have two forms: one to act on the remote machine (in this case, Mufasa) and one to act on the local machine (i.e. your own computer). To differentiate, the “local” versions usually have names that start with the letter “l” (lowercase L).

cd <path>

to change directory to <path> on the remote machine.

lcd <path>

to change directory to <path> on the local machine.

get <filename>

to download (i.e. copy) <filename> from the current directory of the remote machine to the current directory of the local machine.

put <filename>

to upload (i.e. copy) <filename> from the current directory of the local machine to the current directory of the remote machine.

Naturally, a user can only upload files to directories where they have write permission (usually only their own /home directory and its subdirectories). Likewise, users can only download files from directories where they have read permission. (File permissions on Mufasa follow the standard Linux rules.)
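
Putting the commands above together, a typical session (all paths and file names here are purely hypothetical) that uploads a data file and downloads a result file might look like this:

sftp mrossi@10.79.23.97
sftp> lcd /home/me/experiments
sftp> cd /home/mrossi/data
sftp> put input.csv
sftp> get results.txt
sftp> exit

Here input.csv is copied from the local directory /home/me/experiments to the remote directory /home/mrossi/data, and results.txt is copied in the opposite direction.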

In addition to the terminal interface, users of Linux distributions based on Gnome (such as Ubuntu) can use a handy graphical tool to exchange files with Mufasa. In Gnome's Nautilus file manager, write

sftp://<username>@<IP_address>

in the address bar of Nautilus, where <username> is your username on Mufasa and <IP_address> is either 10.79.23.96 or 10.79.23.97. Nautilus then becomes a graphical interface to Mufasa's remote filesystem.

Using Mufasa

This section provides a brief guide for Mufasa users (especially those who are not experienced in the use of Linux and/or remote servers) about interacting with the system.

Storage spaces

User jobs require storage of programs and data files. On Mufasa, the space available to users for data storage is the /home/ directory. /home/ contains three types of directories:

Personal directories
Each user has a personal home directory where they can store their own files. The home directory is the one with the same name as the user. By default, only the owner of a home directory can access its contents.
Group directories
Each research group has a common group directory where group members can store files that they share with other group members. The group directory is the one called shared-<groupname>, where <groupname> is the corresponding user group. The owner of a group directory is user root, while group ownership is assigned to <groupname>. On Mufasa, group directories have the SGID (set-group-ID) bit set. This means that any file or directory created inside shared-<groupname> has group ownership assigned to <groupname>: so editing permissions on the new file or directory extend to all group members (see the example after this list).
The shared-public directory
This is a shared directory common to all users of Mufasa. Users that share files but do not belong to the same research group can use it to store their shared files.
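
For example (the group name and the listing below are hypothetical, and the exact permission bits may differ), the SGID bit of a group directory shows up as an s in the group part of the permissions when the directory is listed:

ls -ld /home/shared-nearlab
drwxrws--- 12 root nearlab 4096 Mar  3 10:15 /home/shared-nearlab

The s in rws indicates that new files created inside the directory will inherit the nearlab group ownership.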

Disk quotas

On Mufasa, the directories in /home/ must be used as a temporary storage area for user programs and their data, limited to the execution period of the jobs that use the data. They are not intended for long-term storage. For this reason, disk usage is subject to a quota system.

User quotas

Each user is assigned a disk quota, i.e. an amount of space that they can use before being blocked by the quota system. Note that the quota applies not only to the data created and/or uploaded by you as a user, but also to data created by programs run by your user.

The quota assigned to your user and how much of it you are currently using can be inspected with the command

quota -s

The output of quota -s is similar to the following:

Filesystem   space   quota   limit   grace   files   quota   limit   grace
 /dev/sdb1  11104K    100G    150G               1       0       0        
 /dev/sdc2   5552K    100G    150G              60       0       0        

Here is a simple guide to the output of quota -s.

Column "Filesystems"
identifies the filesystems where the user has been assigned a disk quota. On Mufasa, /dev/sdb1 is the SSD disk space used as cache space, while /dev/sdc2 is the HDD space used for the /home directories.
Columns titled "space" and "files"
tell the user how much of their quota they are actually using: the first in term of bytes, the second in term of number of files (more precisely, of inodes).
Columns titled "quota"
tell the user how much is their soft limit, in term of bytes and files respectively. If the value is 0, it means there is no limit.
Columns titled "limit"
tell the user how much is their hard limit, in term of bytes and files respectively. If the value is 0, it means there is no limit.
Columns titled "grace"
tell the user how long they are allowed to stay above their soft limit, for what concerns bytes and files respectively. When these columns are empty (as in the example above) the user is not over quota.

The meaning of soft limit and hard limit is the following.

The hard limit cannot be exceeded. When a user reaches their hard limit, they cannot use any more disk space: for them, the filesystem behaves as if the disks are out of space. Disk writes will fail, temporary files will fail to be created, and the user will start to see warnings and errors while performing common tasks. The only disk operation allowed is file deletion.

The soft limit is, as the name suggests, softer. When a user exceeds it, they are not immediately prevented from using more disk space (provided that they stay below the hard limit). However, once the user goes beyond the soft limit, their grace period begins: i.e. a period within which the user must reduce their amount of data back to below the soft limit. During the grace period, the "grace" column(s) of the output of quota show how much of the grace period remains to the user. If the user is still above their soft limit at the end of the grace period, the quota system will treat the soft limit as a hard limit: i.e. it will force the user to delete data until they are below the soft limit before they can write to disk again.

In the output of quota -s, the grace columns are blank except when a soft limit has been exceeded.

Group and project quotas

While on Mufasa disk quotas are usually assigned per-user, the quota system also enables the setup of per-group quotas (i.e., limits to the disk space that, collectively, a group of users can use) and per-project quotas (i.e., limits to the amount of data that a specific directory and all its subdirectories can contain).

A comprehensive view of the quota situation for one's user and user groups is provided by command

quotainfo

As for project quotas, on Mufasa they are applied to group directories in /home/.

Finding out how much disk space you are using

If your user is the owner of directory /path/to/dir/, you can find out how much disk space the directory uses with the du command, like this:

du -sh /path/to/dir/

The -sh flags ask for options -s (which reports the overall size of the directory) and -h (which prints human-readable values using units such as K (KBytes), M (MBytes) and G (GBytes)).

In particular, you can find out how much disk space is used by your home directory with command

du -sh ~

In fact, in Linux the symbol ~ is shorthand for the path to the current user's home directory.

If you want a detailed summary of how much disk space is used by each item (i.e., subdirectory or file) in a directory you own, use command

du -h /path/to/dir/

For instance, for user gfontana the output of

du -h ~

may be similar to the following

gfontana@rk018445:~$ du -h ~
12K	/home/gfontana/.ssh
356K	/home/gfontana/.cache/gstreamer-1.0
5.0M	/home/gfontana/.cache/tracker
5.3M	/home/gfontana/.cache
  [...other similar lines...]
4.0K	/home/gfontana/.config/htop
32K	/home/gfontana/.config
8.0K	/home/gfontana/.slurm
6.3M	/home/gfontana

Hidden files and directories

In Linux, directories and files with a leading "." in their name are hidden. They do not appear in listings, such as the output of the ls command, to avoid cluttering them up; however, they still occupy disk space.

The output of the du command, however, also includes hidden elements and reports their size: it can therefore help you understand why the quota system says that you are using more disk space than ls reports.
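
If you want to see which items in your home directory, hidden ones included, use the most space, a handy combination (assuming the GNU versions of du and sort, which are the defaults on Ubuntu) is:

du -h --max-depth=1 ~ | sort -h

This prints the size of each immediate subdirectory of your home directory (hidden ones included) plus the grand total, sorted from smallest to largest.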

Changing file/directory ownership and permissions

Every file or directory in a Linux system is owned by both a user and a group. User and group ownerships are not connected, so a file can have as its group owner a group that its user owner does not belong to.

Being able to manipulate who owns a file and what permissions any user has on that file is often important in a multi-user system such as Mufasa. What follows is a recap of the main Linux commands for manipulating file ownership and permissions. The key commands are

chown to change ownership - user part
chgrp to change ownership - group part
chmod to change access permissions

All three accept the -R option (uppercase) for recursive operation, so, if needed, you can change the ownership and/or permissions of all the contents of a directory and its subdirectories with a single command, as in the example below.
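
For instance (directory and group names are just examples, and your user must belong to the target group), to hand an entire project tree over to your research group:

chgrp -R nearlab /home/mrossi/project/

After this command, every file and subdirectory under /home/mrossi/project/ has nearlab as its group owner.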

The syntax of chown commands is

chown <new_owner> <path/to/file>

where <new_owner> is the user part of the new file ownership.

The syntax of chgrp commands is

chgrp <new_group> <path/to/file>

where <new_group> is the group part of the new file ownership.

User and group ownership for a file can also be both changed at the same time with

chown <new_owner>:<new_group> <path/to/file>

As for chmod, the easiest way to use it is with symbolic descriptions of the permissions. The format is

chmod [users]<+|-><permissions> <path/to/file>

where

<path/to/file> is the file or directory that the change is applied to
[users] is ugo or a subset of it; the three letters refer respectively:
to the user who owns <path/to/file>
to the group that owns <path/to/file>
to everyone else (others)
(if [users] is omitted, the change is applied as if all of ugo were specified, except for permission bits set in the umask)
+ or - correspond to adding or removing permissions
<permissions> is rwx or a subset of it, corresponding to read, write and execute permissions

Note that the r, w and x permissions have different meanings for files and for directories.

For files
permission r allows reading the contents of the file
permission w allows changing the contents of the file
permission x allows executing the file (provided that it is a program: e.g., a shell script)
For directories
permission r allows listing the files within the directory
permission w allows creating, renaming, or deleting files within the directory
permission x allows entering the directory (i.e., cd into it) and accessing its files

For instance

chmod g+rwx myfile.txt

adds read, write and execute permissions on myfile.txt for all users belonging to the group that owns the file;

chmod go-x mydir

takes away the permission to enter directory mydir from everyone except the user who owns it.

If you want additional information about how file and directory permissions work in a Linux system, this is a good online guide.

Docker containers

As a general rule, all computation performed on Mufasa must occur within Docker containers. From Docker's documentation:

Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure.

Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security allow you to run many containers simultaneously on a given host. Containers are lightweight and contain everything needed to run the application, so you do not need to rely on what is currently installed on the host.

A container is a sandboxed process on your machine that is isolated from all other processes on the host machine. When running a container, it uses an isolated filesystem. [containing] everything needed to run an application - all dependencies, configuration, scripts, binaries, etc. The image also contains other configuration for the container, such as environment variables, a default command to run, and other metadata.

Using Docker allows each user of Mufasa to build the software environment that their job(s) require. In particular, using Docker containers enables users to configure their own (containerized) system and install any required libraries on their own, without needing to ask the administrators to modify the configuration of Mufasa. As a consequence, users can freely experiment with their (containerized) system without risk to the work of other users or to the stability and reliability of Mufasa. In particular, containers allow users to run jobs that require multiple and/or obsolete versions of the same library.

A large number of preconfigured Docker images are already available, so users do not usually need to start from scratch when preparing the environment where their jobs will run on Mufasa. The official Docker image repository is Docker Hub.
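
As a purely generic illustration (this is not the Mufasa-specific procedure, which is described in User Jobs), pulling a public image and starting a container from it looks like this:

docker pull ubuntu:22.04
docker run -it --rm ubuntu:22.04 bash

The first command downloads the image from the registry; the second starts an interactive, throwaway container and runs bash inside it.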

How to run Docker containers on Mufasa is explained in User Jobs. There is also a page of this wiki dedicated to the preparation of Docker containers.

The SLURM job scheduling system

Mufasa uses SLURM (Slurm Workload Manager, formerly known as Simple Linux Utility for Resource Management) to manage shared access to its resources.

Users of Mufasa must use SLURM to run and manage all processing-heavy jobs they run on the machine. It is possible for users to run jobs without using SLURM; however, running jobs this way is only intended for “housekeeping” activities and only provides access to a small subset of Mufasa's resources. For instance, jobs run outside SLURM cannot access the GPUs, can only use a few processor cores, and can only access a small portion of the RAM. Using SLURM is therefore necessary for any resource-intensive job.

From SLURM's documentation:

Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters. Slurm has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.

The use of a job scheduling system such as SLURM ensures that Mufasa's resources are exploited in an efficient way. The fact that a schedule exists means that usually a job does not get immediately executed as soon as it is launched: instead, the job gets queued and will be executed as soon as possible, according to the availability of resources in the machine.

Useful references for SLURM users are the collected man pages and the command overview.

SLURM is capable of managing complex computing systems composed of multiple clusters (i.e. sets) of machines, each comprising one node (i.e. machine) or more. The case of Mufasa is the simplest of all: Mufasa is the single node (called gn01) of a SLURM computing cluster composed of that single machine.

In order to let SLURM schedule job execution, before launching a job a user must specify what resources (such as RAM, processor cores, GPUs, ...) it requires. In managing process queues, SLURM considers such requirements and matches them with the available resources. As a consequence, resource-heavy jobs generally take longer before they get executed, while less demanding jobs are usually put into execution quickly. Processes that, while they are running, try to use more resources than they requested at launch time get killed by SLURM.
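
As a generic illustration only (the option values below are arbitrary, and the recommended way to request resources on Mufasa, based on partitions, is described in User Jobs), a SLURM resource request typically looks like this:

srun --cpus-per-task=4 --mem=16G --gres=gpu:1 --pty bash

This asks SLURM for 4 CPU cores, 16 GB of RAM and one GPU, and opens an interactive shell once those resources have been allocated.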

All in all, the take-away message is: consider carefully how much of each resource to request for your job.

User Jobs explains how the process of requesting resources is greatly simplified by the use of process queues with predefined resource allocations, called partitions.