Running Container: Singularity
On the Minerva HPC cluster, Singularity is the supported container platform; Docker is not available due to security concerns. However, Docker container images can easily be run as Singularity container images, which are safe to run on the cluster.
Singularity allows you to create and run containers that package up pieces of software in a way that is portable and reproducible. Your container is a single file and can be run on different systems. You can use Singularity on Minerva when:
- The software you want to use is so complicated that you cannot get it to work on your own computer
- The software cannot be installed on Minerva because it requires a newer kernel or system-level libraries
- You want to rerun an analysis from some time ago, or reproduce a collaborator’s pipelines or results
Singularity training slides on Minerva HPC
See a detailed demo at Minerva Singularity slides.
How to use Singularity on Minerva HPC
Load the singularity module:
$ module load singularity/3.6.4
You can use “singularity pull” to download a container image from a given URI, such as the Sylabs Cloud Library (library://), Singularity Hub (shub://), or Docker Hub (docker://). For example:
$ singularity pull --name hello-world_latest.sif shub://vsoch/hello-world
To pull a docker image:
$ singularity pull ubuntu_latest.sif docker://ubuntu:latest
To create a container within a writable directory (called a sandbox):
$ singularity build --sandbox lolcow/ shub://GodloveD/lolcow
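A sandbox built this way can be modified interactively and then converted back into a read-only image. A minimal sketch, for a Linux machine where you have root access (installing packages inside the sandbox requires root; the package steps are illustrative):

```shell
# Shell into the sandbox with --writable so changes persist in the directory
$ sudo singularity shell --writable lolcow/

# Inside the container, install or update software as needed, then exit
Singularity> apt-get update
Singularity> exit

# Convert the finished sandbox back into a compressed, read-only SIF image
$ sudo singularity build lolcow.sif lolcow/
```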
Running a Singularity Container
When running within a Singularity container, a user has the same permissions and privileges that they would have outside the container.
Once you have an image, you can shell into it (the shell subcommand is useful for interactive work):
# From the cluster
$ singularity shell ubuntu_latest.sif
To run a container with the default runscript command:
$ singularity run hello-world_latest.sif
Or run a custom command with the exec subcommand, which also supports pipes for batch processing (e.g., within an LSF job). For example:
$ singularity exec ubuntu_latest.sif /path/to/my/script.sh
where script.sh contains the processing steps you want to run within the LSF batch job. You can also pass a generic Linux command to the exec subcommand; pipes and redirected output are supported. Below is a quick demonstration showing the change in Linux distribution:
gail01@li03c02: $ cat /etc/*release
CentOS Linux release 7.6.1810 (Core)
NAME="CentOS Linux"
VERSION="7 (Core)"
……….
CentOS Linux release 7.6.1810 (Core)
CentOS Linux release 7.6.1810 (Core)
gail01@li03c02: $ singularity shell ubuntu_latest.sif
Singularity> cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.4 LTS"
NAME="Ubuntu"
VERSION="20.04.4 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.4 LTS"
VERSION_ID="20.04"
…………
Singularity>
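For non-interactive use, the same exec command can be embedded in an LSF submission script. A minimal sketch; the job name, account, queue, and paths are placeholders to adapt to your own allocation:

```shell
#!/bin/bash
#BSUB -J singularity_job      # job name (placeholder)
#BSUB -P acc_myproject        # allocation account (placeholder)
#BSUB -q premium              # queue (placeholder)
#BSUB -n 1                    # number of cores
#BSUB -W 01:00                # walltime (HH:MM)
#BSUB -o out_%J.log           # stdout/stderr log

module load singularity/3.6.4

# Run the processing script inside the container; redirection works as usual
singularity exec ubuntu_latest.sif /path/to/my/script.sh > results.txt
```

Submit the script with bsub < myjob.lsf.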
Binding External Directories
Binding a directory to your Singularity container allows you to access files in a host system directory from within your container. By default, /tmp and your home directory are automatically mounted into the Singularity image, which should be enough for most cases. You can also bind other directories into your Singularity container yourself. To get a shell with a specified directory mounted in the image:
$ singularity shell -B /user/specified/dir ubuntu_latest.sif
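The -B flag also accepts a src:dest mapping and comma-separated lists, so several host directories can be mounted at once. A sketch with illustrative paths:

```shell
# Mount a project directory at the same path inside the container
$ singularity shell -B /sc/arion/projects/myproject ubuntu_latest.sif

# Mount a host directory at a different location inside the container (src:dest)
$ singularity shell -B /sc/arion/scratch/gail01:/data ubuntu_latest.sif

# Multiple bind paths, comma-separated
$ singularity exec -B /tmp,/sc/arion/projects/myproject ubuntu_latest.sif ls /sc/arion/projects/myproject
```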
Warning: Sometimes libraries or packages in $HOME get picked up inside the container, especially Python packages installed in your home directory and the R user library ($R_LIBS_USER at ~/.Rlib).
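If host-installed packages conflict with those inside the image, the container can be started without your home directory or host environment variables. A sketch:

```shell
# Do not mount your home directory (host-installed packages are not visible)
$ singularity shell --no-home ubuntu_latest.sif

# Do not pass host environment variables (e.g., R_LIBS_USER, PYTHONPATH) into the container
$ singularity shell --cleanenv ubuntu_latest.sif

# Combine both for a fully isolated session
$ singularity shell --no-home --cleanenv ubuntu_latest.sif
```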
Running GPU-Enabled Containers
Singularity supports running on GPUs from within containers. The --nv option must be passed to the exec or shell subcommand. For example:
$ module load singularity/3.6.4
$ singularity pull docker://tensorflow/tensorflow:latest-gpu
$ singularity shell --nv -B /run tensorflow_latest-gpu.sif
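Once inside a shell started with --nv (on a GPU node allocated through LSF), you can confirm that the devices are visible; the TensorFlow check assumes the image pulled above:

```shell
# The NVIDIA driver utilities are injected into the container by --nv
Singularity> nvidia-smi

# For the TensorFlow image, verify that the GPU is detected
Singularity> python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```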
Singularity gives you the ability to install and run applications in your own Linux environment with your own customized software stack. While the HPC staff can provide guidance on how to create and use Singularity containers, we do not have the resources to manage containers for individual users. If you decide to use Singularity, it is your responsibility to build and manage your own containers and the software within your own Linux environment.
Building Your Own Containers
Although there are a lot of container images contributed by various users on Docker Hub and Singularity Hub, there are times when you want to create and build your own containers. You can build your own container images either with the Singularity Remote Builder or on a local Linux workstation where you have root access. If you don’t have a Linux system, you can easily install one in a virtual machine using software like VirtualBox, Vagrant, VMware, or Parallels.
- Singularity build is not fully supported on Minerva because users do not have sudo privileges
- Using the Remote Builder, you can easily and securely build containers for your applications without special privileges and without setting up a local Linux machine where you have administrative access (i.e., a personal machine)
- Write your recipe file/definition file following the guide. You can easily find examples of recipe files online, for example in this GitHub repo.
- Convert docker recipe files to singularity recipe files:
$ ml python
$ spython recipe Dockerfile Singularity
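For reference, a minimal definition file might look like the following (the base image and package are illustrative); after logging in with singularity remote login, it can be built with the Remote Builder:

```
Bootstrap: docker
From: ubuntu:20.04

%post
    # Commands run inside the container at build time
    apt-get update && apt-get install -y python3

%environment
    export LC_ALL=C

%runscript
    # Executed by "singularity run"
    python3 --version
```

Build it remotely with: $ singularity build --remote mycontainer.sif Singularity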
Example Singularity Applications on Minerva
minerva-rstudio-web-r4.sh: please check the R webpage at https://labs.icahn.mssm.edu/minervalab/documentation/r/