SCS Library


The Singularity Container Services (SCS) Library is the place to push your containers to the cloud so other users can pull, verify, and use them.

SCS also provides a Remote Builder, allowing you to build containers on a secure remote service. This is convenient so that you can build containers on systems where you do not have root privileges.

Make an Account

Making an account is easy and straightforward:

  1. Go to:

  2. Click “Sign in to Sylabs” (top right corner).

  3. Select your sign-in method: Google, GitHub, GitLab, or Microsoft.

  4. Enter your credentials, and that’s it!

Creating an Access Token

An access token is required to push containers and to use the Remote Builder.

To generate an access token, follow these steps:

  1. Go to:

  2. Click “Sign In” and follow the sign in steps.

  3. Click on your login ID (the button that replaced “Sign in”).

  4. Select “Access Tokens” from the drop down menu.

  5. Enter a name for your new access token, such as “test token”.

  6. Click the “Create a New Access Token” button.

  7. Click “Copy token to Clipboard” from the “New API Token” page.

  8. Run singularity remote login and paste the access token at the prompt.

Now that you have your token, you are ready to push your container!
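Instead of pasting the token at an interactive prompt, you can store it in a file and log in non-interactively with the `--tokenfile` flag of `singularity remote login`. A minimal sketch, assuming a hypothetical token file location (the token value below is a placeholder you must replace with your real token):

```shell
# Hypothetical location for the token file; any path you control works.
TOKEN_FILE="$HOME/.scs-token"

# Restrict permissions before writing: the token grants access to your account.
rm -f "$TOKEN_FILE"
umask 077                                      # new files readable by owner only
printf '%s\n' "PASTE-YOUR-TOKEN-HERE" > "$TOKEN_FILE"

# Only attempt the login when the singularity CLI is actually installed.
if command -v singularity >/dev/null 2>&1; then
    singularity remote login --tokenfile "$TOKEN_FILE"
fi
```

Keeping the token in a mode-600 file avoids it showing up in your shell history.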

Pushing a Container

The singularity push command will push a container to the container library with the given URL. Here’s an example of a typical push command:

$ singularity push my-container.sif library://your-name/project-dir/my-container:latest

The :latest is the container tag. Tags are used to maintain different versions of the same container.


When pushing your container, there’s no need to add the .sif (Singularity Image Format) extension to the container name, as you would on your local machine, because all containers in the library are SIF containers.
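In practice, the library reference is usually just the local file name with its .sif suffix stripped. A quick sketch using shell parameter expansion (`your-name` and `project-dir` are placeholders, as above):

```shell
file="my-container.sif"
name="${file%.sif}"    # remove a trailing '.sif' suffix -> my-container

echo "library://your-name/project-dir/$name:latest"
```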

Let’s assume you have a new version of your container (v1.0.1) and want to push it without overwriting your :latest container. You can add a version tag to that container, like so:

$ singularity push my-container.sif library://your-name/project-dir/my-container:1.0.1
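The tag is everything after the final colon of the reference. As a sketch, plain shell string operations can split the example reference above into its repository path and tag (this assumes a tag is present; without one, the colon in `library://` would be matched instead):

```shell
uri="library://your-name/project-dir/my-container:1.0.1"

path="${uri%:*}"     # drop the shortest suffix starting at the last ':'
tag="${uri##*:}"     # keep only what follows the last ':'

echo "$path"         # library://your-name/project-dir/my-container
echo "$tag"          # 1.0.1
```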

You can download a container with a specific tag by replacing :latest with the tag you want.

To set a description for the container image as you push it, use the -D flag, introduced in SingularityCE 3.7. This provides an alternative to setting the description via the web interface:

$ singularity push -D "My alpine 3.11 container" alpine_3.11.sif library://myuser/examples/alpine:3.11
2.7MiB / 2.7MiB [=========================================================================] 100 % 1.1 MiB/s 0s

Library storage: using 13.24 MiB out of 11.00 GiB quota (0.1% used)
Container URL:

Note that when you push to a library that supports it, SingularityCE 3.7 and above will report your quota usage and the direct URL to view the container in your web browser.

OCI-SIF Images

If you are using SingularityCE 4’s new OCI-mode, you may wish to push OCI-SIF images to a library:// destination. The standard push command can be used, and SingularityCE will perform the push as an OCI image.

Instead of uploading the container as a single SIF file, the OCI configuration and layer blobs that are encapsulated in the OCI-SIF will be uploaded to the OCI registry that sits behind the SCS / Singularity Enterprise Library.


The OCI image specification doesn’t support SIF signatures, or any additional partitions that can be added to SIF (including OCI-SIF) files by SingularityCE.

If you have signed an OCI-SIF image locally, the signature(s) will not be pushed to the library. You may wish to push the OCI-SIF, as a single file, to an OCI registry using the oras:// protocol instead.

Pushing OCI-SIF containers to the library in this manner means that they can be accessed by other OCI tools. For example, you can use the Skopeo CLI tool to examine the image in the registry after it has been pushed. First, push an OCI-SIF to the SCS library://. The -U option is needed because the image is unsigned.

$ singularity push -U alpine_latest.oci.sif library://example/userdoc/alpine:latest
WARNING: Skipping container verification
INFO:    Pushing an OCI-SIF to the library OCI registry. Use `--oci` to pull this image.

Now use skopeo to access the image in the library. This requires authentication, which is handled automatically when you use singularity push. For other tools, SingularityCE provides the command singularity remote get-login-password, which prints a token that can be used to log in to the OCI registry backing the SCS library.

$ singularity remote get-login-password | \
    skopeo login -u example --password-stdin
Login Succeeded!

Finally, use skopeo inspect to examine the image pushed earlier:

$ skopeo inspect docker://
{
    "Name": "",
    "Digest": "sha256:d08ad9745675812310727c0a99a4472b82fb1cc81e5c42ceda023f1bc35ca34a",
    "RepoTags": [ ... ],
    "Created": "2023-08-07T20:16:26.309461618Z",
    "DockerVersion": "",
    "Labels": null,
    "Architecture": "amd64",
    "Os": "linux",
    "Layers": [ ... ],
    "LayersData": [
        {
            "MIMEType": "application/vnd.sylabs.image.layer.v1.squashfs",
            "Digest": "sha256:a0c5ced3a57bd1d0d71aaf4a0ea6131d5f163a4a8c5355468c18d4ef006f5d7d",
            "Size": 3248128,
            "Annotations": null
        }
    ],
    "Env": [ ... ]
}

Because the OCI-SIF was pushed as an OCI image, skopeo inspect is able to show the image configuration. This is not possible for non-OCI-SIF images:

$ skopeo inspect docker://
FATA[0001] unsupported image-specific operation on artifact with type "application/vnd.sylabs.sif.config.v1+json"

Pulling a container

The singularity pull command will download a container from the Library (library://), Docker Hub (docker://), or Singularity Hub (shub://).


When pulling from Docker, the container will automatically be converted to a SIF (Singularity Image Format) container.

Here’s a typical pull command:

$ singularity pull file-out.sif library://alpine:latest

# or pull from docker:

$ singularity pull file-out.sif docker://alpine:latest


If there’s no tag after the container name, SingularityCE will automatically pull the container with the :latest tag.

To pull a container with a specific tag, just add the tag to the library URL:

$ singularity pull file-out.sif library://alpine:3.8
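The defaulting behaviour described above can be sketched as a small hypothetical shell helper (not part of SingularityCE) that appends :latest when a reference carries no explicit tag:

```shell
# Append ':latest' to a library URI that has no tag, mimicking the default.
normalize_tag() {
    uri="$1"
    ref="${uri#library://}"                 # drop the scheme before testing
    case "$ref" in
        *:*) printf '%s\n' "$uri" ;;        # tag already present: leave as-is
        *)   printf '%s:latest\n' "$uri" ;; # no tag: default to :latest
    esac
}

normalize_tag "library://alpine"       # -> library://alpine:latest
normalize_tag "library://alpine:3.8"   # -> library://alpine:3.8
```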

Of course, you can pull your own containers. Here’s what that will look like:

Pulling your own container

Pulling your own container is just like pulling from GitHub, Docker, etc.

$ singularity pull out-file.sif library://your-name/project-dir/my-container:latest

# or use a different tag:

$ singularity pull out-file.sif library://your-name/project-dir/my-container:1.0.1


You don’t have to specify an output file; one will be created automatically. However, it’s good practice to always specify your output file.

OCI-SIF Images

If you are using SingularityCE 4’s new OCI-mode and have pushed OCI-SIF containers to the SCS library, they are stored as OCI images in the OCI registry that backs the library. You can pull these images with the standard pull command:

$ singularity pull library://sylabs/test/alpine-oci-sif:latest
INFO:    sylabs/test/alpine-oci-sif:latest is an OCI image, attempting to fetch as an OCI-SIF
Getting image source signatures
Copying blob af32528d4445 done
Copying config a5d222bd0d done
Writing manifest to image destination
INFO:    Writing OCI-SIF image
INFO:    Cleaning up.
WARNING: integrity: signature not found for object group 1
WARNING: Skipping container verification

Note that SingularityCE detects that the image is an OCI image, and automatically retrieves it as an OCI-SIF file.

If the image was a non-OCI-SIF, built for SingularityCE’s default native mode, then it would be retrieved as-is. To ensure that an image retrieved from a library:// URI is an OCI-SIF, use the --oci flag. This will produce an error if a non-OCI-SIF is pulled:

$ singularity pull --oci library://sylabs/examples/ruby
Getting image source signatures
Copying blob a21814eefb7f done
Copying config 5211e7986c done
Writing manifest to image destination
INFO:    Cleaning up.
FATAL:   While pulling library image: error fetching image: while creating OCI-SIF: while checking OCI image: json: cannot unmarshal string into Go struct field ConfigFile.rootfs of type v1.RootFS

Specifying a platform / architecture

By default, singularity pull from a library:// URI will attempt to fetch a container that matches the architecture of your host system. If you need to retrieve a container that does not have the same architecture as your host (e.g. an arm64 container on an amd64 host), you can use the --platform or --arch options.

The --arch option accepts a CPU architecture only. For example, to pull an Ubuntu image for a 64-bit ARM system:

$ singularity pull --arch arm64 library://ubuntu

The --platform option accepts an OCI platform string. This has two or three parts, separated by forward slashes (/):

  • An OS value. Only linux is supported by SingularityCE.

  • A CPU architecture value, e.g. arm64.

  • An optional CPU variant, e.g. v8.

Note that the library does not support CPU variants. Any CPU variant provided will be ignored.

To pull an Ubuntu image for a 64-bit ARM system from the library, using the --platform option:

$ singularity pull --platform linux/arm64 library://ubuntu
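To illustrate the two- or three-part structure of a platform string, here is a sketch that splits it on / using the shell's IFS; the variant is left empty when the string has only two parts:

```shell
# Split an OCI platform string into OS / architecture / optional variant.
platform="linux/arm64/v8"

IFS=/ read -r os arch variant <<EOF
$platform
EOF

echo "OS:      $os"
echo "Arch:    $arch"
echo "Variant: ${variant:-(none)}"
```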

Verify/Sign your Container

Verify containers that you pull from the library to ensure they are bit-for-bit reproductions of the original image.

Check out this page to learn how to verify a container, create a PGP key, and sign your own containers.

Searching the Library for Containers

To find interesting or useful containers in the library, you can open the library in your browser and search through the web GUI.

Alternatively, from the CLI you can use singularity search <query>. This will search the library for container images matching <query>.

Remote Builder

The remote builder service can build your container in the cloud, removing the requirement for root access.

Here’s a typical remote build command:

$ singularity build --remote file-out.sif docker://ubuntu:22.04

Building from a definition file:

This is our definition file. Let’s call it ubuntu.def:

Bootstrap: library
From: ubuntu:22.04

%runscript
    echo "hello world from ubuntu container!"

Now, build the container using the --remote flag. Note that sudo is not required:

$ singularity build --remote ubuntu.sif ubuntu.def


Make sure you have an access token; otherwise the build will fail.
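A defensive wrapper can catch the most common failure modes before the build starts. `remote_build` below is a hypothetical helper, not part of SingularityCE; it only checks the arguments, the definition file, and CLI availability before invoking `singularity build --remote`:

```shell
# Hypothetical wrapper: basic sanity checks before launching a remote build.
remote_build() {
    if [ "$#" -ne 2 ]; then
        echo "usage: remote_build <output.sif> <definition.def>" >&2
        return 2
    fi
    out="$1"; def="$2"
    if [ ! -f "$def" ]; then
        echo "error: definition file '$def' not found" >&2
        return 1
    fi
    if ! command -v singularity >/dev/null 2>&1; then
        echo "error: singularity CLI not found in PATH" >&2
        return 127
    fi
    singularity build --remote "$out" "$def"
}
```

Usage: `remote_build ubuntu.sif ubuntu.def`.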

After building, you can test your container like so:

$ ./ubuntu.sif
hello world from ubuntu container!

You can also use the web GUI to build containers remotely. First, go to the builder page (make sure you are signed in). Then you can copy and paste, upload, or type your definition file. When you are finished, click “Build”. You can then download the container using the URL provided.