Installing SingularityCE

This section will guide you through the process of installing SingularityCE main via several different methods. (For instructions on installing earlier versions of SingularityCE please see earlier versions of the docs.)

Installation on Linux

SingularityCE can be installed on any modern Linux distribution, on bare-metal or inside a Virtual Machine. Nested installations inside containers are not recommended, and require the outer container to be run with full privilege.

System Requirements

SingularityCE requires ~163MiB disk space once compiled and installed.

There are no specific CPU or memory requirements at runtime, though 2GB of RAM is recommended when building from source.

Full functionality of SingularityCE requires that the kernel supports:

  • OverlayFS mounts - (minimum kernel >=3.18) Required for full flexibility in bind mounts to containers, and to support persistent overlays for writable containers.

  • Unprivileged user namespaces - (minimum kernel >=3.8, >=3.18 recommended) Required to run containers without root or setuid privilege. Required to build containers unprivileged in --fakeroot mode. Required to run containers using the experimental --oci mode.

  • Unprivileged overlay - (minimum kernel >=5.11, >=5.13 recommended) Required to use --overlay, to mount a persistent overlay directory onto the container, when running without root or setuid.
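The following commands offer a quick way to check these kernel features on a target host. Note that the exact sysctl controlling unprivileged user namespaces varies between distributions, so user.max_user_namespaces below is only one common example:

# Check the running kernel version
uname -r
# Check that the overlay filesystem driver is available (it may load on demand)
grep -w overlay /proc/filesystems
# Check user namespace limits (0 disables unprivileged user namespaces on many distributions)
sysctl user.max_user_namespaces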

External Binaries

Singularity depends on a number of external binaries for full functionality. From SingularityCE 3.9 onward, the methods used to find these binaries have been standardized, as described below.

Configurable Paths

The following binaries are found on $PATH during build time when ./mconfig is run, and their location is added to the singularity.conf configuration file. At runtime this configured location is used. To specify an alternate executable, change the relevant path entry in singularity.conf.

  • cryptsetup version 2 with kernel LUKS2 support is required for building or executing encrypted containers.

  • ldconfig is used to resolve library locations / symlinks when using the --nv or --rocm GPU support.

  • nvidia-container-cli is used to configure a container for Nvidia GPU / CUDA support when running with the experimental --nvccli option.

For the following additional binaries, if the singularity.conf entry is left blank, then $PATH will be searched at runtime.

  • go is required to compile plugins, and must be the same version as that used to build SingularityCE.

  • mksquashfs from squashfs-tools 4.3+ is used to create the squashfs container filesystem that is embedded into SIF container images. The mksquashfs procs and mksquashfs mem directives in singularity.conf can be used to control its resource usage.

  • unsquashfs from squashfs-tools 4.3+ is used to extract the squashfs container filesystem from a SIF file when necessary.
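To illustrate, the relevant singularity.conf entries look like the following. The configured paths here are examples only; on a real installation they will reflect whatever ./mconfig found at build time:

# Example singularity.conf path entries (locations are illustrative)
cryptsetup path = /usr/sbin/cryptsetup
nvidia-container-cli path = /usr/bin/nvidia-container-cli
# Entries left blank are searched for on $PATH at runtime
go path =
mksquashfs path =
unsquashfs path =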

Searching $PATH

The following utilities are always found by searching $PATH at runtime:

  • true

  • mkfs.ext3 is used to create overlay images.

  • cp

  • dd

  • newuidmap and newgidmap are distribution provided setuid binaries used to configure subuid/gid mappings for --fakeroot in non-setuid installs.

  • crun or runc are OCI runtimes used for the singularity oci commands and experimental --oci mode for run / shell / exec. crun is preferred over runc if it is available. runc is provided by a package in all common Linux distributions. crun is packaged in more recent releases of common Linux distributions.

  • proot is an optional dependency that can be used to permit limited unprivileged builds without user namespace / subuid support. It is packaged in the community repositories for common Linux distributions, and is available as a static binary from proot-me.github.io.
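A simple loop like the following can confirm which of these utilities are present on $PATH; remember that crun / runc and proot are only needed for the OCI and unprivileged-build features described above:

# Check availability of runtime utilities on $PATH
for bin in true mkfs.ext3 cp dd newuidmap newgidmap crun runc proot; do
    command -v "$bin" >/dev/null || echo "$bin not found on PATH"
done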

Bootstrap Utilities

The following utilities are required to bootstrap containerized distributions using their native tooling:

  • mount, umount, pacstrap for Arch Linux.

  • mount, umount, mknod, debootstrap for Debian based distributions.

  • dnf or yum, rpm, curl for EL derived RPM based distributions.

  • uname, zypper, SUSEConnect for SLES derived RPM based distributions.

Non-standard ldconfig / Nix & Guix Environments

If SingularityCE is installed under a package manager such as Nix or Guix, but on top of a standard Linux distribution (e.g. CentOS or Debian), it may be unable to correctly find the libraries for --nv and --rocm GPU support. This issue occurs as the package manager supplies an alternative ldconfig, which does not identify GPU libraries installed from host packages.

To allow SingularityCE to locate the host (i.e. CentOS / Debian) GPU libraries correctly, set ldconfig path in singularity.conf to point to the host ldconfig, i.e. it should be set to /sbin/ldconfig or /sbin/ldconfig.real rather than a Nix or Guix related path.
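For example, on such a system the entry in singularity.conf might be set as follows (use whichever of the two paths exists on your host):

# singularity.conf - use the host ldconfig, not the Nix / Guix one
ldconfig path = /sbin/ldconfig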

Filesystem support / limitations

SingularityCE supports most filesystems, but there are some limitations when installing SingularityCE on, or running containers from, common parallel / network filesystems. In general:

  • We strongly recommend installing SingularityCE on local disk on each compute node.

  • If SingularityCE is installed to a network location, a --localstatedir should be provided on each node, and Singularity configured to use it.

  • The --localstatedir filesystem should support overlay mounts.

  • TMPDIR / SINGULARITY_TMPDIR should be on a local filesystem wherever possible.

Note

Set the --localstatedir location by providing --localstatedir my/dir as an option when you configure your SingularityCE build with ./mconfig.

Disk usage at the --localstatedir location is negligible (<1MiB). The directory is used as a location to mount the container root filesystem, overlays, bind mounts etc. that construct the runtime view of a container. You will not see these mounts from a host shell, as they are made in a separate mount namespace.
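For example, with an illustrative node-local path:

$ ./mconfig --localstatedir /var/lib/singularity

After installation, singularity buildcfg (described below) will report the configured LOCALSTATEDIR.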

Overlay support

Various features of SingularityCE, such as the --writable-tmpfs and --overlay options, use the Linux overlay filesystem driver to construct a container root filesystem that combines files from different locations. Not all filesystems can be used with the overlay driver, so when containers are run from these filesystems some SingularityCE features may not be available.

Overlay support has two aspects:

  • lowerdir support for a filesystem allows a directory on that filesystem to act as the ‘base’ of a container. A filesystem must support overlay lowerdir for you to be able to run a Singularity sandbox container on it, while using functionality such as --writable-tmpfs / --overlay.

  • upperdir support for a filesystem allows a directory on that filesystem to be merged on top of a lowerdir to construct a container. If you use the --overlay option to overlay a directory onto a container, then the filesystem holding the overlay directory must support upperdir.

Note that overlay limitations apply mainly to sandbox (directory) containers. A SIF container is mounted into the --localstatedir location, which should generally be on a local filesystem that supports overlay.

Fakeroot / (sub)uid/gid mapping

When SingularityCE is run using the --fakeroot option it creates a user namespace for the container, and UIDs / GIDs in that user namespace are mapped to different host UIDs / GIDs.

Most local filesystems (ext4/xfs etc.) support this uid/gid mapping in a user namespace.

Most network filesystems (NFS/Lustre/GPFS etc.) do not support this uid/gid mapping in a user namespace. Because the fileserver is not aware of the mappings it will deny many operations, with ‘permission denied’ errors. This is currently a generic problem for rootless container runtimes.
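For reference, the subordinate ID ranges used for this mapping in non-setuid installs are defined in /etc/subuid and /etc/subgid. A minimal sketch, with an example username and range:

# /etc/subuid - user 'dave' may map 65536 subordinate UIDs starting at 100000
dave:100000:65536
# /etc/subgid - a matching subordinate GID range
dave:100000:65536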

SingularityCE cache / atomic rename

SingularityCE will cache SIF container images generated from remote sources, and any OCI/docker layers used to create them. The cache is created at $HOME/.singularity/cache by default. The location of the cache can be changed by setting the SINGULARITY_CACHEDIR environment variable.

The directory used for SINGULARITY_CACHEDIR should be:

  • A unique location for each user. Permissions are set on the cache so that private images cached for one user are not exposed to another. This means that SINGULARITY_CACHEDIR cannot be shared.

  • Located on a filesystem with sufficient space for the number and size of container images anticipated.

  • Located on a filesystem that supports atomic rename, if possible.

In SingularityCE version 3.6 and above the cache is concurrency safe. Parallel runs of SingularityCE that would create overlapping cache entries will not conflict, as long as the filesystem used by SINGULARITY_CACHEDIR supports atomic rename operations.

Support for atomic rename operations is expected on local POSIX filesystems, but varies for network / parallel filesystems and may be affected by topology and configuration. For example, Lustre supports atomic rename of files only on a single MDT. Rename on NFS is only atomic to a single client, not across systems accessing the same NFS share.

If you are not certain that your $HOME or SINGULARITY_CACHEDIR filesystems support atomic rename, do not run singularity in parallel using remote container URLs. Instead use singularity pull to create a local SIF image, and then run this SIF image in a parallel step. An alternative is to use the --disable-cache option, but this will result in each SingularityCE instance independently fetching the container from the remote source, into a temporary location.
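For example, a minimal sketch of this pattern (the mpirun launcher here simply stands in for whatever parallel step your site uses):

# Pull once, serially, to a local SIF image
singularity pull alpine.sif library://alpine
# Then run the local SIF in parallel - no cache contention
mpirun -np 16 singularity exec alpine.sif /bin/true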

NFS

NFS filesystems support overlay mounts as a lowerdir only, and do not support user-namespace (sub)uid/gid mapping.

  • Containers run from SIF files located on an NFS filesystem do not have restrictions.

  • You cannot use --overlay mynfsdir/ to overlay a directory onto a container when the overlay (upperdir) directory is on an NFS filesystem.

  • When using --fakeroot to build or run a container, your TMPDIR / SINGULARITY_TMPDIR should not be set to an NFS location.

  • You should not run a sandbox container with --fakeroot from an NFS location.

Lustre / GPFS / PanFS

Lustre, GPFS, and PanFS do not have sufficient upperdir or lowerdir overlay support for certain SingularityCE features, and do not support user-namespace (sub)uid/gid mapping.

  • You cannot use --overlay or --writable-tmpfs with a sandbox container that is located on a Lustre, GPFS, or PanFS filesystem. SIF containers on Lustre, GPFS, and PanFS will work correctly with these options.

  • You cannot use --overlay to overlay a directory onto a container, when the overlay (upperdir) directory is on a Lustre, GPFS, or PanFS filesystem.

  • When using --fakeroot to build or run a container, your TMPDIR/SINGULARITY_TMPDIR should not be a Lustre, GPFS, or PanFS location.

  • You should not run a sandbox container with --fakeroot from a Lustre, GPFS, or PanFS location.

Install from Provided RPM / Deb Packages

Sylabs provides .rpm packages of SingularityCE, for mainstream-supported versions of RHEL and derivatives (e.g. Alma Linux / Rocky Linux). We also provide .deb packages for current Ubuntu LTS releases.

These packages can be downloaded from the GitHub release page and installed using your distribution’s package manager.

The packages are provided as a convenience for users of the open source project, and are built in our public CircleCI workflow. They are not signed, but SHA256 sums are provided on the release page.
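As a sketch of this flow, with illustrative version and asset names (check the release page for the exact file names for your distribution and release):

# Download a package and the published checksums (names are illustrative)
export VERSION=4.0.0
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-ce_${VERSION}-jammy_amd64.deb
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/sha256sums
# Verify the download against the SHA256 sums, then install
sha256sum --check --ignore-missing sha256sums
sudo apt install ./singularity-ce_${VERSION}-jammy_amd64.deb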

Install from Source

To use the latest version of SingularityCE from GitHub you will need to build and install it from source. This may sound daunting, but the process is straightforward, and detailed below.

If you have an earlier version of SingularityCE installed, you should remove it before executing the installation commands. You will also need to install some dependencies and install Go.

Install Dependencies

On Red Hat Enterprise Linux or CentOS install the following dependencies:

# Install basic tools for compiling
sudo yum groupinstall -y 'Development Tools'
# Install RPM packages for dependencies
sudo yum install -y \
   libseccomp-devel \
   glib2-devel \
   squashfs-tools \
   cryptsetup \
   runc

On Ubuntu or Debian install the following dependencies:

# Ensure repositories are up-to-date
sudo apt-get update
# Install debian packages for dependencies
sudo apt-get install -y \
   build-essential \
   libseccomp-dev \
   libglib2.0-dev \
   pkg-config \
   squashfs-tools \
   cryptsetup \
   runc

Note

You can build SingularityCE without cryptsetup available, but you will not be able to use encrypted containers unless it is installed on your system.

If you will not use the singularity oci commands, runc is not required.

Install Go

SingularityCE is written in Go, and aims to maintain support for the two most recent stable versions of Go. This corresponds to the Go Release Maintenance Policy and Security Policy, ensuring critical bug fixes and security patches are available for all supported language versions.

Building SingularityCE may require a newer version of Go than is available in the repositories of your distribution. We recommend installing the latest version of Go from the official binaries at https://golang.org/dl/.

This is one of several ways to install and configure Go.

Note

If you have previously installed Go from a download, rather than an operating system package, you should remove your go directory, e.g. rm -r /usr/local/go before installing a newer version. Extracting a new version of Go over an existing installation can lead to errors when building Go programs, as it may leave old files, which have been removed or replaced in newer versions.

Visit the Go download page and pick a package archive to download. Copy the link address and download with wget. Then extract the archive to /usr/local (or follow the other instructions on the Go installation page).

$ export VERSION=1.20.4 OS=linux ARCH=amd64 && \
    wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz && \
    sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz && \
    rm go$VERSION.$OS-$ARCH.tar.gz

Then, set up your environment for Go.

$ echo 'export GOPATH=${HOME}/go' >> ~/.bashrc && \
    echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' >> ~/.bashrc && \
    source ~/.bashrc
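You can then confirm that the expected Go version is first on $PATH:

$ go version
go version go1.20.4 linux/amd64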

Download SingularityCE from a release

You can download SingularityCE from one of the releases. To see a full list, visit the GitHub release page. After deciding on a release to install, you can run the following commands to proceed with the installation.

$ export VERSION=main && # adjust this as necessary \
    wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-ce-${VERSION}.tar.gz && \
    tar -xzf singularity-ce-${VERSION}.tar.gz && \
    cd singularity-ce-${VERSION}

Checkout Code from Git

The following commands will install SingularityCE from the GitHub repo to /usr/local. This method will work for >=vmain. To install an older tagged release see older versions of the docs.

When installing from source, you can decide to install from either a tag, a release branch, or from the main branch.

  • tag: GitHub tags form the basis for releases, so installing from a tag is the same as downloading and installing a specific release. Tags are expected to be relatively stable and well-tested.

  • release branch: A release branch represents the latest version of a minor release with all the newest bug fixes and enhancements (even those that have not yet made it into a point release). For instance, to install v3.10 with the latest bug fixes and enhancements checkout release-3.10. Release branches may be less stable than code in a tagged point release.

  • main branch: The main branch contains the latest, bleeding edge version of SingularityCE. This is the default branch when you clone the source code, so you don’t have to check out any new branches to install it. The main branch changes quickly and may be unstable.

To ensure that the SingularityCE source code is downloaded to the appropriate directory use these commands.

$ git clone --recurse-submodules https://github.com/sylabs/singularity.git && \
    cd singularity && \
    git checkout --recurse-submodules vmain

Compile Singularity

SingularityCE uses a custom build system called makeit. mconfig is called to generate a Makefile and then make is used to compile and install.

To support the SIF image format, automated networking setup etc., and older Linux distributions without user namespace support, Singularity must be installed by running make install as root or with sudo, so that it can install the libexec/singularity/bin/starter-setuid binary with root ownership and setuid permissions for privileged operations. If you need to install as a normal user, or do not want to use setuid functionality, see below.

$ ./mconfig && \
    make -C ./builddir && \
    sudo make -C ./builddir install

By default SingularityCE will be installed in the /usr/local directory hierarchy. You can specify a custom directory with the --prefix option to mconfig, like so:

$ ./mconfig --prefix=/opt/singularity

This option can be useful if you want to install multiple versions of SingularityCE, install a personal version of SingularityCE on a shared system, or if you want to remove SingularityCE easily after installing it.

For a full list of mconfig options, run mconfig --help. Here are some of the most common options that you may need to use when building SingularityCE from source.

  • --sysconfdir: Install read-only config files in sysconfdir. This option is important if you need the singularity.conf file or other configuration files in a custom location.

  • --localstatedir: Set the state directory where containers are mounted. This is a particularly important option for administrators installing SingularityCE on a shared file system. The --localstatedir should be set to a directory that is present on each individual node.

  • -b: Build SingularityCE in a given directory. By default this is ./builddir.

  • --without-conmon: Do not build the conmon OCI container monitor. Use this option if you are certain you will not use the singularity oci commands, or wish to use conmon >=2.0.24 provided by your distribution, and available on $PATH.

  • --reproducible: Enable support for reproducible builds. This ensures that the compiled binaries do not include any temporary paths, the source directory path, etc. It disables support for building plugins.

Unprivileged (non-setuid) Installation

If you need to install SingularityCE as a non-root user, or do not wish to allow the use of a setuid root binary, you can configure SingularityCE with the --without-suid option to mconfig:

$ ./mconfig --without-suid --prefix=/home/dave/singularity-ce && \
    make -C ./builddir && \
    make -C ./builddir install

If you have already installed SingularityCE you can disable the setuid flow by setting the option allow setuid = no in etc/singularity/singularity.conf within your installation directory.

When SingularityCE does not use setuid all container execution will use a user namespace. This requires support from your operating system kernel, and imposes some limitations on functionality. You should review the requirements and limitations in the user namespace section of this guide.

Relocatable Installation

Since SingularityCE 3.8, an unprivileged (non-setuid) installation is relocatable. As long as the structure inside the installation directory (--prefix) is maintained, it can be moved to a different location and SingularityCE will continue to run normally.

Relocation of a default setuid installation is not supported, as restricted location / ownership of configuration files is important to security.

Source bash completion file

To enjoy bash shell completion with SingularityCE commands and options, source the bash completion file:

$ . /usr/local/etc/bash_completion.d/singularity

Add this command to your ~/.bashrc file so that bash completion continues to work in new shells. (Adjust the path if you installed SingularityCE to a different location.)
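For example:

$ echo '. /usr/local/etc/bash_completion.d/singularity' >> ~/.bashrc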

Build and install an RPM

If you use RHEL, CentOS or SUSE, building and installing a SingularityCE RPM allows your installation to be more easily managed, upgraded and removed. In SingularityCE >=v3.0.1 you can build an RPM directly from the release tarball.

Note

Be sure to download the correct asset from the GitHub releases page. It should be named singularity-ce-<version>.tar.gz.

After installing the dependencies and installing Go as detailed above, you are ready to download the tarball and build and install the RPM.

$ export VERSION=main && # adjust this as necessary \
    wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-ce-${VERSION}.tar.gz && \
    rpmbuild -tb singularity-ce-${VERSION}.tar.gz && \
    sudo rpm -ivh ~/rpmbuild/RPMS/x86_64/singularity-ce-$VERSION-1.el7.x86_64.rpm && \
    rm -rf ~/rpmbuild singularity-ce-$VERSION*.tar.gz

If you encounter a failed dependency error for golang but installed it from source, build with this command:

$ rpmbuild -tb --nodeps singularity-ce-${VERSION}.tar.gz

Options to mconfig can be passed using the familiar syntax to rpmbuild. For example, if you want to force the local state directory to /mnt (instead of the default /var) you can do the following:

$ rpmbuild -tb --define='_localstatedir /mnt' singularity-ce-$VERSION.tar.gz

Note

It is very important to set the local state directory to a directory that physically exists on nodes within a cluster when installing SingularityCE in an HPC environment with a shared file system.

Build an RPM from Git source

Alternatively, to build an RPM from a branch of the Git repository, you can clone the repository, make an RPM directly, and use it to install SingularityCE:

$ ./mconfig && \
    make -C builddir rpm && \
    sudo rpm -ivh ~/rpmbuild/RPMS/x86_64/singularity-ce-main.el7.x86_64.rpm # or whatever version you built

To build an rpm with an alternative install prefix set RPMPREFIX on the make step, for example:

$ make -C builddir rpm RPMPREFIX=/usr/local

For finer control of the rpmbuild process you may wish to use make dist to create a tarball that you can then build into an rpm with rpmbuild -tb as above.
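A sketch of that flow, assuming the tarball is written to the top of the source tree with the current version in its name:

$ make -C builddir dist && \
    rpmbuild -tb singularity-ce-*.tar.gz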

Remove an old version

In a standard installation of SingularityCE 3.0.1 and beyond (when building from source), the command sudo make install lists all the files as they are installed. You must remove all of these files and directories to completely remove SingularityCE.

$ sudo rm -rf \
    /usr/local/libexec/singularity \
    /usr/local/var/singularity \
    /usr/local/etc/singularity \
    /usr/local/bin/singularity \
    /usr/local/bin/run-singularity \
    /usr/local/etc/bash_completion.d/singularity

If you anticipate needing to remove SingularityCE, it might be easier to install it in a custom directory using the --prefix option to mconfig. In that case SingularityCE can be uninstalled simply by deleting the parent directory. Or it may be useful to install SingularityCE using a package manager so that it can be updated and/or uninstalled with ease in the future.

Testing & Checking the Build Configuration

After installation you can perform a basic test of Singularity functionality by executing a simple container from the Sylabs Cloud library:

$ singularity exec library://alpine cat /etc/alpine-release
3.10.0

See the user guide for more information about how to use SingularityCE.

singularity buildcfg

Running singularity buildcfg will show the build configuration of an installed version of SingularityCE, and lists the paths used by SingularityCE. Use singularity buildcfg to confirm paths are set correctly for your installation, and troubleshoot any ‘not-found’ errors at runtime.

$ singularity buildcfg
PACKAGE_NAME=singularity
PACKAGE_VERSION=main
BUILDDIR=/home/dtrudg/Sylabs/Git/singularity/builddir
PREFIX=/usr/local
EXECPREFIX=/usr/local
BINDIR=/usr/local/bin
SBINDIR=/usr/local/sbin
LIBEXECDIR=/usr/local/libexec
DATAROOTDIR=/usr/local/share
DATADIR=/usr/local/share
SYSCONFDIR=/usr/local/etc
SHAREDSTATEDIR=/usr/local/com
LOCALSTATEDIR=/usr/local/var
RUNSTATEDIR=/usr/local/var/run
INCLUDEDIR=/usr/local/include
DOCDIR=/usr/local/share/doc/singularity
INFODIR=/usr/local/share/info
LIBDIR=/usr/local/lib
LOCALEDIR=/usr/local/share/locale
MANDIR=/usr/local/share/man
SINGULARITY_CONFDIR=/usr/local/etc/singularity
SESSIONDIR=/usr/local/var/singularity/mnt/session

Note that the LOCALSTATEDIR and SESSIONDIR should be on local, non-shared storage.

The list of files installed by a successful setuid installation of SingularityCE can be found in the appendix, installed files section.

Test Suite

The SingularityCE codebase includes a test suite that is run during development using CI services.

If you would like to run the test suite locally you can run the test targets from the builddir directory in the source tree:

  • make check runs source code linting and dependency checks

  • make unit-test runs basic unit tests

  • make integration-test runs integration tests

  • make e2e-test runs end-to-end tests, which exercise a large number of operations by calling the SingularityCE CLI with different execution profiles.
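For example, to run the lint checks and unit tests from the top of the source tree:

$ make -C ./builddir check && \
    make -C ./builddir unit-test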

Note

Running the full test suite requires a Docker installation and nc, in order to test Docker and instance / networking functionality.

SingularityCE must be installed in order to run the full test suite, as it must run the CLI with setuid privilege for the starter-suid binary.

Warning

sudo privilege is required to run the full tests, and you should not run the tests on a production system. We recommend running the tests in an isolated development or build environment.

Installation on Windows or Mac

Linux container runtimes like SingularityCE cannot run natively on Windows or Mac because of basic incompatibilities with the host kernel. (Contrary to a popular misconception, macOS does not run on a Linux kernel. It runs on a kernel called Darwin, originally forked from BSD.)

For this reason, the SingularityCE community maintains a set of Vagrant Boxes via Vagrant Cloud, one of Hashicorp’s open source tools. The current versions can be found under the sylabs organization.

Windows

Install the following programs:

  • Git for Windows (provides the Git Bash terminal used below)

  • VirtualBox

  • Vagrant

  • Vagrant Manager

Mac

SingularityCE is available via Vagrant (installable with Homebrew, or manually).

To use Vagrant via Homebrew:

$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
$ brew install --cask virtualbox vagrant vagrant-manager

SingularityCE Vagrant Box

Run Git Bash (Windows) or open a terminal (Mac) and create and enter a directory to be used with your Vagrant VM.

$ mkdir vm-singularity-ce && \
    cd vm-singularity-ce

If you have already created and used this folder for another VM, you will need to destroy the VM and delete the Vagrantfile.

$ vagrant destroy && \
    rm Vagrantfile

Then issue the following commands to bring up the Virtual Machine. (Substitute a different value for the $VM variable if you like.)

$ export VM=sylabs/singularity-ce-3.8-ubuntu-bionic64 && \
    vagrant init $VM && \
    vagrant up && \
    vagrant ssh

You can check the installed version of SingularityCE with the following:

vagrant@vagrant:~$ singularity version
main

Of course, you can also start with a plain OS Vagrant box as a base and then install SingularityCE using one of the above methods for Linux.

SingularityCE Docker Image

It is possible to use a Dockerized Singularity. Here is a sample compose.yaml (Singularity version 3.7.4) for use with Docker Compose:

services:
  singularity:
    image: quay.io/singularity/singularity:v3.7.4-slim
    stdin_open: true
    tty: true
    privileged: true
    volumes:
      - .:/root
    entrypoint: ["/bin/sh"]

Singularity in Docker can have various disadvantages, but basic container operations will work. Currently, the intended use case is continuous integration, meaning that you should be able to build a Singularity container using this Docker Compose file. For more information, see issue #5 and the image’s source repo.
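With this compose.yaml in the current directory, a minimal usage sketch is to start an interactive shell in the service and call the CLI from there:

$ docker compose run --rm singularity
/ # singularity version
3.7.4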