Build a Container

The build command is the “Swiss army knife” of container creation. You can use it to download and assemble existing containers from external resources like the Container Library and Docker Hub. You can use it to convert containers between the formats supported by SingularityCE. And you can use it in conjunction with a SingularityCE definition file to create a container from scratch and customize it to fit your needs.


The build command accepts a target as input and produces a container as output.

The type of target given determines the method that build will use to create the container. It can be one of the following:

  • URI beginning with library:// to build from the Container Library

  • URI beginning with docker:// to build from Docker Hub

  • URI beginning with shub:// to build from Singularity Hub

  • path to an existing container on your local machine

  • path to a directory to build from a sandbox

  • path to a SingularityCE definition file

build can produce containers in two different formats, which can be specified as follows:

  • a compressed read-only Singularity Image File (SIF) format, suitable for production (default)

  • a writable (ch)root directory called a sandbox, for interactive development ( --sandbox option)

Because build can accept an existing container as a target and create a container in either supported format, you can use it to convert existing containers from one format to another.

Downloading an existing container from the Container Library

You can use the build command to download a container from the Container Library:

$ sudo singularity build lolcow.sif library://lolcow

The first argument (lolcow.sif) specifies the path and name for your container. The second argument (library://lolcow) gives the Container Library URI from which to download. By default, the container will be converted to a compressed, read-only SIF. If you want your container in a writable format, use the --sandbox option.

Downloading an existing container from Docker Hub

You can use build to download layers from Docker Hub and assemble them into SingularityCE containers.

$ sudo singularity build lolcow.sif docker://sylabsio/lolcow
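Docker URIs can also include a specific tag, which is useful for pinning the image version you build from (the tag shown here is illustrative):

```shell
# Pin a specific tag instead of relying on the repository default
sudo singularity build lolcow.sif docker://sylabsio/lolcow:latest
```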

Creating writable --sandbox directories

If you want to create a container within a writable directory (called a sandbox) you can do so with the --sandbox option. It’s possible to create a sandbox without root privileges, but to ensure proper file permissions, it is recommended to do so as root:

$ sudo singularity build --sandbox lolcow/ library://lolcow

The resulting directory operates just like a container in a SIF file. To make persistent changes within the sandbox container, use the --writable flag when you invoke your container. It’s a good idea to do this as root to ensure you have permission to access the files and directories that you want to change.

$ sudo singularity shell --writable lolcow/
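As a quick sketch of why --writable matters, a change made with the flag persists in the sandbox across invocations (the file name below is just an example):

```shell
# Create a file inside the sandbox with --writable...
sudo singularity exec --writable lolcow/ touch /made-in-sandbox
# ...and it is still present on the next invocation
sudo singularity exec lolcow/ ls /made-in-sandbox
```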

Converting containers from one format to another

If you already have a container saved locally, you can use it as a target to build a new container. This allows you to convert containers from one format to another. For example, if you had a sandbox container called development/ and you wanted to convert it to a SIF container called production.sif, you could do so as follows:

$ sudo singularity build production.sif development/
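The conversion also works in the other direction, unpacking a SIF image into a writable sandbox directory (the directory name is illustrative):

```shell
# Unpack an existing SIF image into a sandbox for further development
sudo singularity build --sandbox development2/ production.sif
```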

Use care when converting a sandbox directory to the default SIF format. If changes were made to the writable container before conversion, there is no record of those changes in the SingularityCE definition file, which compromises the reproducibility of your container. It is therefore preferable to build production containers directly from a SingularityCE definition file, instead.

Building containers from SingularityCE definition files

SingularityCE definition files are the most powerful type of target when building a container. For detailed information on writing SingularityCE definition files, please see the Container Definitions documentation. Suppose you already have the following container definition file, called lolcow.def, and you want to use it to build a SIF container:

Bootstrap: docker
From: ubuntu:22.04

%post
    apt-get -y update
    apt-get -y install cowsay lolcat

%environment
    export LC_ALL=C
    export PATH=/usr/games:$PATH

%runscript
    date | cowsay | lolcat

You can do so with the following command:

$ sudo singularity build lolcow.sif lolcow.def

In this case, we’re running singularity build with sudo because installing software with apt-get, as in the %post section, requires root privileges. By default, when you run SingularityCE, you are the same user inside the container as on the host machine. Using sudo on the host, to acquire root privileges, ensures we can use apt-get as root inside the container.
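Once the build finishes, invoking the container with run executes the %runscript from the definition file (here, the current date piped through cowsay and lolcat):

```shell
# Run the container's %runscript
singularity run lolcow.sif
```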

If you aren’t able or do not wish to use sudo when building a container, SingularityCE offers several other options: --remote builds, a --fakeroot mode, and limited unprivileged builds using proot.

--remote builds

Singularity Container Services and Singularity Enterprise provide a Remote Build Service. This service can perform a container build, as the root user, inside a secure single-use virtual machine.

Remote builds do not have the system requirements of --fakeroot builds, or the limitations of unprivileged proot builds. They are a convenient way to build SingularityCE containers on systems where sudo rights are not available.

To perform a remote build, you will need a Singularity Container Services account. (If you do not already have an account, you can create one on the site.) Once you have an account, ensure you are logged in from your command-line environment by running:

$ singularity remote login
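To confirm that your token is valid before starting a build, you can check the status of the current remote endpoint:

```shell
# Verify the login and endpoint status
singularity remote status
```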

You can then add the --remote flag to your build command:

$ singularity build --remote lolcow.sif lolcow.def

The build will be sent to the remote build service, and the progress and output of your build will be displayed on your local machine. When the build is complete, the resulting SIF container image will be downloaded to your machine.

--fakeroot builds

A build run with the --fakeroot flag uses certain Linux kernel features to enable you to run as an emulated, ‘fake’ root user inside the container, while running as your regular user (and not as root) on the host system.

The --fakeroot feature has particular requirements in terms of the capabilities and configuration of the host system. This is covered further in the fakeroot section of this user guide, as well as in the admin guide.

If your system is configured for --fakeroot support, then you can run the above build without using sudo, by adding the --fakeroot flag:

$ singularity build --fakeroot lolcow.sif lolcow.def

Unprivileged proot builds

SingularityCE 3.11 introduces the ability to run some definition file builds without --fakeroot or sudo. This is useful on systems where you cannot sudo, and the administrator cannot perform the configurations necessary for --fakeroot support.

Unprivileged proot builds are automatically performed when proot is available on the system PATH, and singularity build is run by a non-root user against a definition file:

$ singularity build lolcow.sif lolcow.def
INFO:    Using proot to build unprivileged. Not all builds are supported. If build fails, use --remote or --fakeroot.
INFO:    Starting build...

Unprivileged builds that use proot have limitations, because proot’s emulation of the root user is not complete. In particular, such builds:

  • Do not support arch / debootstrap / yum / zypper bootstraps. Use localimage, library, oras, or one of the docker/oci sources.

  • Do not support %pre and %setup sections of definition files.

  • Run the %post sections of a build in the container as an emulated root user.

  • Run the %test section of a build as the non-root user, like singularity test.

  • Are subject to any restrictions imposed in singularity.conf.

  • Incur a performance penalty due to the ptrace-based interception of syscalls used by proot.

  • May fail if the %post script requires privileged operations that proot cannot emulate.

Generally, if your definition file starts from an existing SIF/OCI container image, and adds software using system package managers, an unprivileged proot build is appropriate. If your definition file compiles and installs large complex software from source, you may wish to investigate --remote or --fakeroot builds instead.
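If proot is not installed system-wide, one way to make it available is to place a static binary on your PATH. This is a hedged sketch only; the download location and install path are examples, not an official recommendation:

```shell
# Example only: install a static proot binary into ~/bin and put it on PATH
mkdir -p ~/bin
curl -L -o ~/bin/proot https://proot.gitlab.io/proot/bin/proot
chmod +x ~/bin/proot
export PATH=$HOME/bin:$PATH
```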

Building encrypted containers

Starting with SingularityCE 3.4.0, it is possible to build and run encrypted containers. The containers are decrypted at runtime entirely in kernel space, meaning that no intermediate decrypted data is ever written to disk. See encrypted containers for more details.
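For example, a passphrase-encrypted build might look like the following; the --passphrase flag prompts for the passphrase interactively (filenames are illustrative):

```shell
# Build an encrypted SIF; you will be prompted for a passphrase
sudo singularity build --passphrase encrypted.sif lolcow.def
```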

Build options

--builder

SingularityCE 3.0 introduces the option to perform a remote build. The --builder option allows you to specify a URL to a different build service. For instance, you may need to specify a URL pointing to an on-premises installation of the remote builder. This option must be used in conjunction with --remote.

--detached

When used in combination with the --remote option, the --detached option will detach the build from your terminal and allow it to build in the background without echoing any output to your terminal.

--encrypt

Specifies that SingularityCE should use a secret saved in either the SINGULARITY_ENCRYPTION_PASSPHRASE or SINGULARITY_ENCRYPTION_PEM_PATH environment variable to build an encrypted container. See encrypted containers for more details.

--fakeroot

Gives users a way to build containers without root privileges. See the fakeroot feature for details.

--force

The --force option will delete and overwrite an existing SingularityCE image without presenting the normal interactive confirmation prompt.

--json

The --json option will force SingularityCE to interpret a given definition file as JSON.

--library

This command allows you to set a different image library. (The default library is “”)

--notest

If you don’t want to run the %test section during the container build, you can skip it using the --notest option. For instance, you might be building a container intended to run in a production environment with GPUs, while your local build resource does not have GPUs. You want to include a %test section that runs a short validation, but you don’t want your build to exit with an error because it cannot find a GPU on your system. In such a scenario, passing the --notest flag would be appropriate.
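For the GPU scenario above, the build command would simply add the flag:

```shell
# Skip the %test section during the build (e.g. no GPU on the build host)
sudo singularity build --notest lolcow.sif lolcow.def
```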

--passphrase

This flag allows you to pass a plaintext passphrase to encrypt the container filesystem at build time. See encrypted containers for more details.

--pem-path

This flag allows you to pass the location of a public key to encrypt the container file system at build time. See encrypted containers for more details.

--remote

SingularityCE 3.0 introduces the ability to build a container on an external resource running a remote builder. (The default remote builder is located at “”.)

--sandbox

Build a sandbox (container in a directory) instead of the default SIF format.

--section

Instead of running the entire definition file, only run a specific section or sections. This option accepts a comma-delimited string of definition file sections. Acceptable arguments include all, none or any combination of the following: setup, post, files, environment, test, labels.

Under normal build conditions, the SingularityCE definition file is saved into a container’s metadata so that there is a record of how the container was built. The --section option may render this metadata inaccurate, compromising reproducibility, and should therefore be used with care.
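For example, to run only the post and environment sections of a definition file, skipping the others:

```shell
# Only execute the %post and %environment sections of the definition file
sudo singularity build --section post,environment lolcow.sif lolcow.def
```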

--update

You can build into the same sandbox container multiple times (though the results may be unpredictable, and under most circumstances, it would be preferable to delete your container and start from scratch).

By default, if you build into an existing sandbox container, the build command will prompt you to decide whether or not to overwrite existing container data. Instead of this behavior, you can use the --update option to build into an existing container. This will cause SingularityCE to skip the definition file’s header, and build any sections that are in the definition file into the existing container.

The --update option is only valid when used with sandbox containers.
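For example, to rebuild the sections of a definition file into an existing sandbox without being prompted:

```shell
# Build into an existing sandbox, skipping the header and the overwrite prompt
sudo singularity build --update lolcow/ lolcow.def
```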

--nv

This flag allows you to mount the NVIDIA CUDA libraries from your host environment into your build environment. Libraries are mounted during the execution of post and test sections.

--rocm

This flag allows you to mount the AMD ROCm libraries from your host environment into your build environment. Libraries are mounted during the execution of post and test sections.

--bind

This flag allows you to mount a directory, file or image during build. It works the same way as --bind for the shell, exec and run subcommands of SingularityCE, and can be specified multiple times. See user defined bind paths. Bind mounts occur during the execution of post and test sections.
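For example, to make a host directory visible while the %post and %test sections run (the paths are illustrative):

```shell
# Mount /data/inputs from the host at /mnt inside the build environment
sudo singularity build --bind /data/inputs:/mnt lolcow.sif lolcow.def
```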

--writable-tmpfs

This flag will run the %test section of the build with a writable tmpfs overlay filesystem in place. This allows the tests to create files, which will be discarded at the end of the build. Other portions of the build do not use this temporary filesystem.

More Build topics

  • If you want to customize the cache location (where Docker layers are downloaded on your system), specify Docker credentials, or apply other custom tweaks to your build environment, see build environment.

  • If you want to make internally modular containers, check out the Getting Started guide here.

  • If you want to build your containers on the Remote Builder, (because you don’t have root access on a Linux machine, or you want to host your container on the cloud), check out this site.

  • If you want to build a container with an encrypted file system consult the SingularityCE documentation on encryption here.