System
Mufasa is a Linux server located in a server room managed by the System Administrators. Job Users and Job Administrators can only access Mufasa remotely.
Remote access to Mufasa is performed using the SSH protocol for the execution of commands and the SFTP protocol for the exchange of files. Once logged in, a user interacts with Mufasa via a terminal (text-based) interface.
Hardware
Mufasa is a server for massively parallel computation. It has been set up and configured by E4 Computer Engineering with the support of NearLab, Biomechanics Group and CartCasLab.
Mufasa's main hardware components are:
- 32-core, 64-thread AMD processor
- 1 TB RAM
- 9 TB of SSDs (for OS and execution cache)
- 28 TB of HDDs (for user /home directories)
- 5 Nvidia A100 GPUs [based on the Ampere architecture]
- Linux Ubuntu operating system
Usually, each of these resources (e.g., a GPU) is not fully assigned to a single user or a single job. On the contrary, resources are shared among different users and processes in order to optimise their usage and availability.
CPUs and GPUs
Mufasa is fitted with a 32-core CPU. Each core is able to run 2 threads in parallel, so the system has a total of 64 virtual CPUs. Of these, 2 are reserved for jobs run outside the SLURM job scheduling system (i.e., for low-power "housekeeping" tasks) while the remaining 62 are reserved for jobs run via SLURM.
As for GPUs, some of the 5 physical A100 processing cards (i.e., GPUs) are subdivided into “virtual” GPUs with different capabilities using Nvidia's MIG system. From MIG's user guide:
“The Multi-Instance GPU (MIG) feature allows GPUs based on the NVIDIA Ampere architecture (such as NVIDIA A100) to be securely partitioned into up to seven separate GPU Instances for CUDA applications, providing multiple users with separate GPU resources for optimal GPU utilization. This feature is particularly beneficial for workloads that do not fully saturate the GPU’s compute capacity and therefore users may want to run different workloads in parallel to maximize utilization.”
In practice, MIG allows flexible partitioning of a very powerful (but single) GPU to create multiple virtual GPUs with different capabilities, which are then made available to users as if they were separate devices.
Command
nvidia-smi
(see Nvidia's nvidia-smi documentation) provides an overview of the physical and virtual GPUs available to users in a system (“smi” stands for System Management Interface). On Mufasa, this command may need to be launched via the SLURM job scheduling system (as explained in Section 2 of this document) in order to access the GPUs. Its output, which is quite extensive, is subdivided into three parts:
- the first part describes the physical GPUs
- the second describes the virtual GPUs obtained by subdividing physical GPUs using MIG
- the third describes processes currently using the GPUs
The following is an example of the output of nvidia-smi:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.57.02    Driver Version: 470.57.02    CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA A100-PCI...  On   | 00000000:01:00.0 Off |                   On |
| N/A   37C    P0    35W / 250W |     24MiB / 40536MiB |     N/A      Default |
|                               |                      |              Enabled |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA A100-PCI...  On   | 00000000:41:00.0 Off |                   On |
| N/A   61C    P0   146W / 250W |  17258MiB / 40536MiB |     N/A      Default |
|                               |                      |              Enabled |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA A100-PCI...  On   | 00000000:61:00.0 Off |                    0 |
| N/A   57C    P0   119W / 250W |   5466MiB / 40536MiB |     46%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   3  NVIDIA A100-PCI...  On   | 00000000:81:00.0 Off |                   On |
| N/A   33C    P0    35W / 250W |     24MiB / 40536MiB |     N/A      Default |
|                               |                      |              Enabled |
+-------------------------------+----------------------+----------------------+
|   4  NVIDIA A100-PCI...  On   | 00000000:C1:00.0 Off |                    0 |
| N/A   54C    P0    72W / 250W |  39872MiB / 40536MiB |      0%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| MIG devices:                                                                 |
+------------------+----------------------+-----------+-----------------------+
| GPU  GI  CI  MIG |         Memory-Usage |        Vol|        Shared         |
|      ID  ID  Dev |           BAR1-Usage | SM     Unc| CE  ENC  DEC  OFA  JPG|
|                  |                      |        ECC|                       |
|==================+======================+===========+=======================|
|  0    2   0   0  |     10MiB / 20096MiB | 42      0 |  3   0    2    0    0 |
|                  |      0MiB / 32767MiB |           |                       |
+------------------+----------------------+-----------+-----------------------+
|  0    3   0   1  |      6MiB /  9984MiB | 28      0 |  2   0    1    0    0 |
|                  |      0MiB / 16383MiB |           |                       |
+------------------+----------------------+-----------+-----------------------+
|  0    4   0   2  |      6MiB /  9984MiB | 28      0 |  2   0    1    0    0 |
|                  |      0MiB / 16383MiB |           |                       |
+------------------+----------------------+-----------+-----------------------+
|  1    1   0   0  |  14048MiB / 20096MiB | 42      0 |  3   0    2    0    0 |
|                  |      2MiB / 32767MiB |           |                       |
+------------------+----------------------+-----------+-----------------------+
|  1    5   0   1  |   3202MiB /  9984MiB | 28      0 |  2   0    1    0    0 |
|                  |      2MiB / 16383MiB |           |                       |
+------------------+----------------------+-----------+-----------------------+
|  1    6   0   2  |      6MiB /  9984MiB | 28      0 |  2   0    1    0    0 |
|                  |      0MiB / 16383MiB |           |                       |
+------------------+----------------------+-----------+-----------------------+
|  3    2   0   0  |     10MiB / 20096MiB | 42      0 |  3   0    2    0    0 |
|                  |      0MiB / 32767MiB |           |                       |
+------------------+----------------------+-----------+-----------------------+
|  3    3   0   1  |      6MiB /  9984MiB | 28      0 |  2   0    1    0    0 |
|                  |      0MiB / 16383MiB |           |                       |
+------------------+----------------------+-----------+-----------------------+
|  3    4   0   2  |      6MiB /  9984MiB | 28      0 |  2   0    1    0    0 |
|                  |      0MiB / 16383MiB |           |                       |
+------------------+----------------------+-----------+-----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                   |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    1    1    0      35533      C   python                          14033MiB |
|    1    5    0     191336      C   python                           3191MiB |
|    2   N/A  N/A     22846      C   python                           5463MiB |
|    4   N/A  N/A     12622      C   /usr/bin/python3                39869MiB |
+-----------------------------------------------------------------------------+
Note, in particular, how the bottom part of this output provides information about processes using the GPUs and the amount of GPU resources they are using.
nvidia-smi
can also be used to provide an overview of the available GPU resources. For this, use command
nvidia-smi -L
The output of this command is a list of all the GPUs available in the system, both physical and virtual:
GPU 0: NVIDIA A100-PCIE-40GB (UUID: GPU-a9f6e4f2-2877-8642-1802-5eeb3518d415)
  MIG 3g.20gb Device 0: (UUID: MIG-abe13a42-013b-5bef-aa5e-bbd268d72447)
  MIG 2g.10gb Device 1: (UUID: MIG-268c6b30-d10c-59db-babd-3eda7b89da34)
  MIG 2g.10gb Device 2: (UUID: MIG-90e26aa7-cf69-5672-b758-419679238cd3)
GPU 1: NVIDIA A100-PCIE-40GB (UUID: GPU-5f28ca0a-5b2c-bfc7-5b9f-581b5ca1d110)
  MIG 3g.20gb Device 0: (UUID: MIG-07372a92-2e37-5ad6-b334-add0100cf5e3)
  MIG 2g.10gb Device 1: (UUID: MIG-4ca248b0-ab87-5f91-a788-5fe169d0623e)
  MIG 2g.10gb Device 2: (UUID: MIG-a93ffffb-9a0d-51d1-b9df-36bc624a2084)
GPU 2: NVIDIA A100-PCIE-40GB (UUID: GPU-fb86701b-5781-b63c-5cda-911cff3a5edb)
GPU 3: NVIDIA A100-PCIE-40GB (UUID: GPU-bbeed512-ab4c-e984-cfea-8067c009a600)
  MIG 3g.20gb Device 0: (UUID: MIG-bdbcf24a-a0aa-56fb-a7e4-fc18f17b7f24)
  MIG 2g.10gb Device 1: (UUID: MIG-4c44132b-7499-562d-a85f-55a0a2cbb5ba)
  MIG 2g.10gb Device 2: (UUID: MIG-fe354ead-4f87-53ab-9271-1d98190248f4)
GPU 4: NVIDIA A100-PCIE-40GB (UUID: GPU-a9511357-2476-7ddf-c4c5-c90feb68acfd)
As nvidia-smi -L
shows, the physical Nvidia A100 GPUs installed on Mufasa have been subdivided as follows:
- two of the physical GPUs (GPU 2 and GPU 4) have not been subdivided at all
- three of the physical GPUs (GPU 0, GPU 1 and GPU 3) have been subdivided into 3 virtual GPUs each:
- one virtual GPU with 20 GB of RAM
- two virtual GPUs with 10 GB of RAM each
Thanks to MIG, users can use all the GPUs listed above as if they were physical devices installed on Mufasa, without having to worry (or even know) which are actually physical and which are virtual GPUs.
All in all, then, users of Mufasa are provided with the following set of 11 GPUs:
- 2 GPUs with 40 GB of RAM each
- 3 GPUs with 20 GB of RAM each
- 6 GPUs with 10 GB of RAM each
How these devices are made available to Mufasa users is explained in User Jobs.
Accessing Mufasa
User access to Mufasa is always remote and exploits the SSH (Secure SHell) protocol.
To open a remote connection to Mufasa, open a local terminal on your computer and, in it, run command
ssh <username>@<IP_address>
where <username> is the user's username on Mufasa and <IP_address> is one of Mufasa's IP addresses, i.e. either 10.79.23.96 or 10.79.23.97.
For example, user mrossi
may access Mufasa with command
ssh mrossi@10.79.23.97
Access via SSH works with Linux, macOS and Windows 10 (and later) terminals. For Windows users, a handy alternative tool (which also includes an X server, required to run Linux programs with a graphical user interface on Mufasa) is MobaXterm.
If you don't have a user account on Mufasa, you first have to ask your supervisor for one. See Users and groups for more information about Mufasa's users.
As soon as you launch the ssh command, you will be asked to type your password (i.e., the password of your user account on Mufasa). Once you provide the password, the local terminal on your computer becomes a remote terminal (a “remote shell”) through which you interact with Mufasa. The remote shell sports a command prompt such as
<username>@rk018445:~$
(rk018445 is the Linux hostname of Mufasa). For instance, user mrossi
will see a prompt similar to this:
mrossi@rk018445:~$
In the remote shell, you can issue commands to Mufasa by typing them after the prompt, then pressing the Enter key. Since Mufasa is a Linux server, it responds to all the standard Linux system commands such as pwd
(which prints the path to the current directory) or cd <destination_dir>
(which changes the current directory). On the internet you can find many tutorials about the Linux command line, such as this one.
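As an illustration (the directory name experiments and the path /home/mrossi are hypothetical examples of a user's home layout), a short exchange in the remote shell might look like this:
mrossi@rk018445:~$ pwd
/home/mrossi
mrossi@rk018445:~$ cd experiments
mrossi@rk018445:~/experiments$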
To close the SSH session run
exit
from the command prompt of the remote shell.
VPN
To be able to connect to Mufasa, your computer must belong to Polimi's LAN. This happens either because the computer is physically located at Politecnico di Milano and connected via ethernet, or because you are using Polimi's VPN (Virtual Private Network) to connect to its LAN from somewhere else (such as your home). In particular, using the VPN is the only way to use Mufasa from outside Polimi. See this DEIB webpage for instructions about how to activate VPN access.
SSH timeout
SSH sessions to Mufasa may be subject to an inactivity timeout: i.e., after a given period of inactivity the SSH session gets automatically closed. Users who need to be able to reconnect to the very same shell where they launched a program (for instance because their program is interactive or because it provides progress update messages) should use the screen command.
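A minimal sketch of using screen for this purpose (the session name mysession is arbitrary):
screen -S mysession      # start a named screen session on Mufasa
# launch your program inside the session, then detach by pressing Ctrl-a followed by d
# after reconnecting via ssh, reattach to the same shell with:
screen -r mysession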
SSH and graphics
The standard form of the ssh command, i.e. the one described at the beginning of Accessing Mufasa, should always be preferred. However, it only allows text-based communication with Mufasa. In special cases it may be necessary to remotely run, on Mufasa, Linux programs that have a graphical user interface. These programs require interaction with the X server of the user's own machine (which must use Linux as well). A special mode of operation of ssh is needed to enable this. This mode is engaged by running command ssh
like this:
ssh -X <your username on Mufasa>@<Mufasa's IP address>
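For instance, the example user mrossi would run
ssh -X mrossi@10.79.23.97
to open a graphics-enabled session.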
File transfer
Uploading files from a local machine to Mufasa and downloading files from Mufasa to a local machine are done using the SFTP protocol (Secure File Transfer Protocol).
Linux and macOS users can directly use the sftp package, as explained (for instance) by this guide. Windows users can interact with Mufasa via the SFTP protocol using the MobaXterm software package. macOS users can also interact with Mufasa via SFTP using the Cyberduck software package.
For Linux and macOS users, file transfer to/from Mufasa occurs via an interactive sftp shell, i.e. a remote shell very similar to the one described in Accessing Mufasa. The first thing to do is to open a terminal and run the following command (note the similarity to SSH connections):
sftp <username>@<IP_address>
where <username> is the user's username on Mufasa and <IP_address> is either 10.79.23.96 or 10.79.23.97.
You will be asked for your password. Once you provide it, you enter an interactive sftp shell, where the command prompt takes the form
sftp>
From this shell you can run the commands to exchange files. Most of these commands have two forms: one to act on the remote machine (in this case, Mufasa) and one to act on the local machine (i.e. your own computer). To differentiate, the “local” versions usually have names that start with the letter “l” (lowercase L).
- cd <path>: to change directory to <path> on the remote machine.
- lcd <path>: to change directory to <path> on the local machine.
- get <filename>: to download (i.e. copy) <filename> from the current directory of the remote machine to the current directory of the local machine.
- put <filename>: to upload (i.e. copy) <filename> from the current directory of the local machine to the current directory of the remote machine.
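As an illustration (directory and file names below are hypothetical), a typical sftp session for the example user mrossi might look like this:
sftp mrossi@10.79.23.96
sftp> cd results             # move to the directory "results" on Mufasa
sftp> lcd /tmp               # move to /tmp on the local machine
sftp> get output.log         # download results/output.log from Mufasa into the local /tmp
sftp> put dataset.zip        # upload the local /tmp/dataset.zip to results/ on Mufasa
sftp> exit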
Naturally, a user can only upload files to directories where they have write permission (usually only their own /home directory and its subdirectories). Also, users can only download files from directories where they have read permission. (File permissions on Mufasa follow the standard Linux rules.)
In addition to the terminal interface, users of Linux distributions based on Gnome (such as Ubuntu) can use a handy graphical tool to exchange files with Mufasa. In Gnome's Nautilus file manager, write
sftp://<username>@<IP_address>
in the address bar of Nautilus, where <username> is your username on Mufasa and <IP_address> is either 10.79.23.96 or 10.79.23.97. Nautilus then becomes a graphical interface to Mufasa's remote filesystem.
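For instance, user mrossi would type
sftp://mrossi@10.79.23.96
in Nautilus's address bar.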
Docker containers
As a general rule, all computation performed on Mufasa must occur within Docker containers. From Docker's documentation:
“Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure.
Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security allow you to run many containers simultaneously on a given host. Containers are lightweight and contain everything needed to run the application, so you do not need to rely on what is currently installed on the host.
“A container is a sandboxed process on your machine that is isolated from all other processes on the host machine. When running a container, it uses an isolated filesystem [containing] everything needed to run an application - all dependencies, configuration, scripts, binaries, etc. The image also contains other configuration for the container, such as environment variables, a default command to run, and other metadata.”
Using Docker allows each user of Mufasa to build the software environment that their job(s) require. In particular, using Docker containers enables users to configure their own (containerized) system and install any required libraries on their own, without needing to ask administrators to modify the configuration of Mufasa. As a consequence, users can freely experiment with their (containerized) system without putting at risk the work of other users or the stability and reliability of Mufasa. For instance, containers allow users to run jobs that require multiple and/or obsolete versions of the same library.
A large number of preconfigured Docker containers are already available, so users do not usually need to start from scratch in preparing the environment where their jobs will run on Mufasa. The official Docker container repository is dockerhub.
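As a purely illustrative sketch (pytorch/pytorch is just one example of a preconfigured image on Docker Hub, and on Mufasa containers are launched through SLURM as explained in User Jobs), pulling and testing such an image looks like this:
docker pull pytorch/pytorch        # download a preconfigured image from Docker Hub
docker run --rm -it pytorch/pytorch python -c "import torch; print(torch.__version__)"   # run a throwaway container from the image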
How to run Docker containers on Mufasa is explained in User Jobs. See Docker for directions about preparing Docker containers.
The SLURM job scheduling system
Mufasa uses SLURM (Slurm Workload Manager, formerly known as Simple Linux Utility for Resource Management) to manage shared access to its resources.
Users of Mufasa must use SLURM to run and manage all processing-heavy jobs on the machine. It is possible for users to run jobs without using SLURM; however, jobs run this way are only intended for “housekeeping” activities and only have access to a small subset of Mufasa's resources. For instance, jobs run outside SLURM cannot access the GPUs, can only use a few processor cores, and can only access a small portion of the RAM. Using SLURM is therefore necessary for any resource-intensive job.
From SLURM's documentation:
“Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters. Slurm has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.”
The use of a job scheduling system such as SLURM ensures that Mufasa's resources are exploited in an efficient way. The fact that a schedule exists means that usually a job does not get immediately executed as soon as it is launched: instead, the job gets queued and will be executed as soon as possible, according to the availability of resources in the machine.
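For instance, the standard SLURM commands below can be used to inspect the current queue and the state of the machine's resources:
squeue        # list the jobs that are currently queued or running
sinfo         # show the state of SLURM partitions and nodes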
Useful references for SLURM users are the collected man pages and the command overview.
In order to let SLURM schedule job execution, before launching a job a user must specify what resources (such as RAM, processor cores, GPUs, ...) it requires. In managing process queues, SLURM considers such requirements and matches them with available resources. As a consequence, resource-heavy jobs generally take longer before they get executed, while less demanding jobs are usually put into execution quickly. Processes that, while running, try to use more resources than they requested at launch time get killed by SLURM.
All in all, the take-away message is: consider carefully how much of each resource to request for your job.
In User Jobs it will be explained how the process of requesting resources is greatly simplified by making use of process queues with predefined resource allocations called partitions.
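As a generic illustration only (the partition name, the script train.py and the resource amounts are hypothetical; see User Jobs for Mufasa's actual partitions and launch procedures), a SLURM batch script that requests resources might look like this:
#!/bin/bash
#SBATCH --job-name=my_experiment       # a name for the job
#SBATCH --partition=<partition_name>   # one of the predefined partitions
#SBATCH --cpus-per-task=4              # number of virtual CPUs requested
#SBATCH --mem=32G                      # amount of RAM requested
#SBATCH --gres=gpu:1                   # number of GPUs requested
srun python3 train.py                  # the job is killed by SLURM if it exceeds the resources requested above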
Users and groups
Only Mufasa users (i.e., people with a user account on Mufasa) can access the machine and interact with it. Creation of new users is done by Job Administrators or by specially designated users within each research group.
Mufasa usernames have the form xyyy
(all lowercase), where x
is the first letter of the first name of the person, and yyy
is their complete surname. For instance, a person called Mario Rossi will be assigned username mrossi
. If multiple users with the same surname and first letter of the first name exist, those created after the very first one are given usernames including a two-digit counter: mrossi
, mrossi01
, mrossi02
and so on.
On Linux machines such as Mufasa, users belong to groups. On Mufasa, groups are used to identify the research group that a specific user is part of. Assignment of Mufasa's users to groups follows these rules:
- All users corresponding to people belong to group users
- Additionally, each user must belong to one and only one of the following groups (within brackets is the name of the faculty member in charge of Mufasa for each group):
  - cartcas, i.e. CartCasLab (prof. Cerveri);
  - biomech, i.e. Biomechanics Research Group (prof. Votta);
  - nearmrs, i.e. Medical Robotics Section of NearLab (prof. De Momi);
  - nearnes, i.e. NeuroEngineering Section of NearLab (prof. Ferrante);
  - bio, for BioEngineering users not belonging to any of the research groups listed above.
Mufasa users who have the power to create new users do so with command
sudo /opt/share/sbin/add_user.sh -u <user> -g users,<group>
where <user> is the username of the new user and <group> is the research group the new user belongs to, chosen from the list above (cartcas, biomech, nearmrs, nearnes or bio).
For instance, in order to create a user on Mufasa for a person named Mario Rossi belonging to CartCasLab, the following command will be used:
sudo /opt/share/sbin/add_user.sh -u mrossi -g users,cartcas
At first login, new users will be asked to change the password initially assigned to them. For security reasons, it is important that such first login occurs as soon as possible after user creation.