
nix2container's Introduction

nix2container

nix2container provides an efficient container development workflow with images built by Nix: it doesn't write tarballs to the Nix store and it allows skipping already pushed layers (without having to rebuild them).

This is based on ideas developed in this blog post.

Getting started

{
  inputs.nix2container.url = "github:nlewo/nix2container";

  outputs = { self, nixpkgs, nix2container }: let
    pkgs = import nixpkgs { system = "x86_64-linux"; };
    nix2containerPkgs = nix2container.packages.x86_64-linux;
  in {
    packages.x86_64-linux.hello = nix2containerPkgs.nix2container.buildImage {
      name = "hello";
      config = {
        entrypoint = ["${pkgs.hello}/bin/hello"];
      };
    };
  };
}

This image can then be loaded into Docker with

$ nix run .#hello.copyToDockerDaemon
$ docker run hello:latest
Hello, world!
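Images built with buildImage also expose similar copy scripts for other targets. As a sketch (note that copyToRegistry pushes to the registry encoded in the image name, so a bare name like hello goes to the default registry):

$ nix run .#hello.copyToRegistry
$ nix run .#hello.copyToPodman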

More Examples

To load the bash example image into Podman and run it:

$ nix run github:nlewo/nix2container#examples.bash.copyToPodman
$ podman run -it bash

Functions documentation

nix2container.buildImage

Function arguments are:

  • name (required): the name of the image.

  • tag (defaults to the image output hash): the tag of the image.

  • config (defaults to {}): an attribute set describing an image configuration as defined in the OCI image specification.

  • copyToRoot (defaults to null): a derivation (or list of derivations) copied into the image root directory (store path prefixes /nix/store/hash-path are removed, so their contents are relocated to the image /).

    pkgs.buildEnv can be used to build a derivation to copy to the image root. For instance, to get bash and coreutils in the image /bin:

    copyToRoot = pkgs.buildEnv {
      name = "root";
      paths = [ pkgs.bashInteractive pkgs.coreutils ];
      pathsToLink = [ "/bin" ];
    };
    
  • fromImage (defaults to null): an image that is used as base image of this image; use pullImage or pullImageFromManifest to supply this.

  • maxLayers (defaults to 1): the maximum number of layers to create. Layers are split based on store path "popularity" as described in this blog post. Note this applies to the image layers and not to layers added with the buildImage.layers attribute.

  • perms (defaults to []): a list of file permissions which are set when the tar layer is created; these permissions are not written to the Nix store.

    Each element of this permission list is an attribute set such as

    { path = "a store path";
      regex = ".*";
      mode = "0664";
    }

    The mode is applied under a specific path: within that path's subtree, the mode is set on all files matching the regex.

  • initializeNixDatabase (defaults to false): whether to initialize the Nix database with all store paths added to the image. Note this is only useful for running nix commands from the image, for instance to build an image used by a CI to run Nix builds.

  • layers (defaults to []): a list of layers built with the buildLayer function: if a store path in deps or contents belongs to one of these layers, this store path is skipped. This is useful to isolate store paths that are often updated from more stable ones, to speed up build and push times. A sketch combining several of these arguments follows.
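The following is an illustrative sketch only (the image name and contents are not from the project's examples), combining tag, copyToRoot, maxLayers, and perms:

pkgs.nix2container.buildImage {
  name = "shell";
  # Override the default output-hash tag:
  tag = "v1";
  copyToRoot = pkgs.buildEnv {
    name = "root";
    paths = [ pkgs.bashInteractive pkgs.coreutils ];
    pathsToLink = [ "/bin" ];
  };
  maxLayers = 10;
  perms = [{
    # In the tar layer, make the files under the bash store path mode 0775.
    path = pkgs.bashInteractive;
    regex = ".*";
    mode = "0775";
  }];
  config.entrypoint = [ "/bin/bash" ];
}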

nix2container.pullImage

Pull an image from a container registry by name and tag/digest, storing the entirety of the image (manifest and layer tarballs) in a single store path. The supplied sha256 is the NAR hash of that store path. A usage sketch follows the argument list.

Function arguments are:

  • imageName (required): the name of the image to pull.

  • imageDigest (required): the digest of the image to pull.

  • sha256 (required): the sha256 of the resulting fixed output derivation.

  • os (defaults to linux)

  • arch (defaults to x86_64)

  • tlsVerify (defaults to true)
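For example, a base image pulled by digest might look like this sketch (digest and hash values elided; starting from a fake sha256 makes Nix report the expected one on the first build):

fromImage = nix2container.pullImage {
  imageName = "library/alpine";
  imageDigest = "sha256:<digest reported by the registry>";
  sha256 = "<NAR hash of the fetched store path>";
};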

nix2container.pullImageFromManifest

Pull a base image from a container registry using a supplied manifest file, and the hashes contained within it. The advantages of this over the basic pullImage:

  • Each layer archive is in its own store path, which means each will download just once and naturally deduplicate for multiple base images that share layers.
  • There is no Nix-specific hash, so it's possible to update the base image by simply re-fetching the manifest.json from the registry; there is no need to pull the whole image just to compute a new NAR hash for it.

With this function, the manifest.json acts as a lockfile meant to be stored in source control alongside the Nix container definitions. As a convenience, the manifest can be fetched/updated using the supplied passthru script, e.g.:

nix run .#examples.fromImageManifest.fromImage.getManifest > examples/alpine-manifest.json

Function arguments are:

  • imageName (required): the name of the image to pull.

  • imageManifest (required): the manifest file of the image to pull.

  • imageTag (defaults to latest)

  • os (defaults to linux)

  • arch (defaults to x86_64)

  • tlsVerify (defaults to true)

  • registryUrl (defaults to registry.hub.docker.com)

Note that imageTag, os, and arch do not affect the pulled image; that is governed entirely by the supplied manifest.json file. These arguments are used for the manifest-selection logic in the included getManifest script.
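A sketch of wiring a checked-in manifest as the base image (the file path is hypothetical):

fromImage = nix2container.pullImageFromManifest {
  imageName = "library/alpine";
  # Lockfile fetched with the getManifest passthru script shown above:
  imageManifest = ./alpine-manifest.json;
};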

Authentication

If the Nix daemon is used for building, here is how to set up registry authentication.

  1. Run docker login against the registry you want to authenticate to.
  2. Copy ~/.docker/config.json to /etc/nix/skopeo/auth.json.
  3. Make the directory and all the files readable to the nixbld group:
    sudo chmod -R g+rx /etc/nix/skopeo
    sudo chgrp -R nixbld /etc/nix/skopeo
    
  4. Bind-mount the file into the Nix build sandbox by adding the following to the Nix daemon configuration:
    extra-sandbox-paths = /etc/skopeo/auth.json=/etc/nix/skopeo/auth.json
    

Every time a new registry authentication has to be added, update the /etc/nix/skopeo/auth.json file.

nix2container.buildLayer

For most use cases, this function is not required. However, it can be useful to explicitly isolate some parts of the image in dedicated layers, either for caching (see the "Isolate dependencies in dedicated layers" section) or for non-reproducibility (see the reproducible argument).

Function arguments are:

  • deps (defaults to []): a list of store paths to include in the layer.

  • copyToRoot (defaults to null): a derivation (or list of derivations) copied into the image root directory (store path prefixes /nix/store/hash-path are removed, so their contents are relocated to the image /).

    pkgs.buildEnv can be used to build a derivation to copy to the image root. For instance, to get bash and coreutils in the image /bin:

    copyToRoot = pkgs.buildEnv {
      name = "root";
      paths = [ pkgs.bashInteractive pkgs.coreutils ];
      pathsToLink = [ "/bin" ];
    };
    
  • reproducible (defaults to true): if false, the layer tarball is stored in the store path. This is useful when the layer dependencies are not bit-reproducible: it keeps the layer tarball and its hash in the same store path (see the sketch after this list).

  • maxLayers (defaults to 1): the maximum number of layers to create. Layers are split based on store path "popularity" as described in this blog post. Note this applies to the generated layers and not to layers added with the buildLayer.layers attribute.

  • perms (defaults to []): a list of file permissions which are set when the tar layer is created; these permissions are not written to the Nix store.

    Each element of this permission list is an attribute set such as

    { path = "a store path";
      regex = ".*";
      mode = "0664";
    }

    The mode is applied under a specific path: within that path's subtree, the mode is set on all files matching the regex.

  • layers (defaults to []): a list of layers built with the buildLayer function: if a store path in deps or contents belongs to one of these layers, this store path is skipped. This is useful to isolate store paths that are often updated from more stable ones, to speed up build and push times.

  • ignore (defaults to null): a store path to ignore when building the layer. This is mainly useful to exclude the image configuration file from the layer.
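As a sketch, here is a dedicated layer for a dependency whose build output is not bit-reproducible (the package name is a placeholder):

nix2container.buildLayer {
  deps = [ pkgs.someUnreproduciblePackage ];  # hypothetical package
  # Store the layer tarball next to its hash in the same store path:
  reproducible = false;
}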

Isolate dependencies in dedicated layers

It is possible to isolate application dependencies in a dedicated layer. This layer is built by its own derivation: if the store paths composing it don't change, the layer is not rebuilt. Moreover, Skopeo can avoid pushing this layer if it has already been pushed.

Let's consider an application printing a conversation. This script depends on bash and the hello binary. Because most changes concern the script itself, it would be nice to isolate the script's dependencies in a dedicated layer: when we modify the script, we only need to rebuild and push the layer containing the script. The layer containing the dependencies won't be rebuilt and pushed.

As shown below, the buildImage layers attribute allows a set of dependencies to be explicitly isolated.

{ pkgs }:
let
  application = pkgs.writeScript "conversation" ''
    ${pkgs.hello}/bin/hello
    echo "Haaa aa... I'm dying!!!"
  '';
in
pkgs.nix2container.buildImage {
  name = "hello";
  config = {
    entrypoint = ["${pkgs.bash}/bin/bash" application];
  };
  layers = [
    (pkgs.nix2container.buildLayer { deps = [pkgs.bash pkgs.hello]; })
  ];
}

This image contains two layers: one with the bash and hello closures, and a second containing only the script.

In practice, the isolated layer could contain a Python environment or Node modules.
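For instance, a sketch of such a dependency layer for a Python environment (the package set is illustrative):

layers = [
  (pkgs.nix2container.buildLayer {
    deps = [ (pkgs.python3.withPackages (ps: [ ps.flask ])) ];
  })
];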

See Nix & Docker: Layer explicitly without duplicate packages! for learning how to avoid duplicate store paths in your explicitly layered images.

Quick and dirty benchmarks

The main goal of nix2container is to provide fast rebuild/push container cycles. In the following, we provide an order of magnitude of rebuild and repush time, for the uwsgi image.

warning: these are quick and dirty benchmarks which only provide an order of magnitude

We build and push the container. We then make a small change in the hello.py file to trigger a rebuild and a push.

Method                           Rebuild/repush time   Executed command
nix2container.buildImage         ~1.8s                 nix run .#example.uwsgi.copyToRegistry
dockerTools.streamLayeredImage   ~7.5s                 nix build .#example.uwsgi | docker load
dockerTools.buildImage           ~10s                  nix build .#example.uwsgi; skopeo copy docker-archive://./result docker://localhost:5000/uwsgi:latest

Note that we could not use the same distribution mechanism for all rows because

  • Skopeo is not able to skip layers already loaded by the Docker daemon, and
  • Skopeo failed to push to the registry an image streamed to stdin.

Run the tests

nix run .#tests.all

This builds several example images with Nix, loads them with Skopeo, runs them with Podman, and tests the output logs.

Note that, unfortunately, these tests are not executed in the Nix sandbox because it is currently not possible to run a container in the Nix sandbox.

It is also possible to run a specific test:

nix run .#tests.basic

The nix2container Go library

This library is currently used by the Skopeo nix transport available in this branch.

For more information, refer to the Go documentation.


nix2container's Issues

Layer digest mismatch bug

Thank you for this wonderful tool!

I ran into a weird digest mismatch bug when a layer (transitively) includes the ncurses package. Here's a reproducer:

In a fresh flake-based project that depends on nix2container, given this flake.nix file

{
  description = "Reproducer";

  inputs = {
    nix2container.url = "github:nlewo/nix2container";
    nixpkgs.follows = "nix2container/nixpkgs";
    flakeUtils.follows = "nix2container/flake-utils";
  };

  outputs = { self, nixpkgs, flakeUtils, nix2container }:
    flakeUtils.lib.eachDefaultSystem
      (system:
        let
          pkgs = import nixpkgs { inherit system; };
          nix2containerPkgs = nix2container.packages.${system};
        in
        rec {
          devShell = pkgs.mkShellNoCC rec {
            buildInputs = builtins.attrValues {
              inherit (nix2containerPkgs)
                skopeo-nix2container;
            };
          };
          packages = {
            reproducer = nix2containerPkgs.nix2container.buildImage {
              name = "reproducer";
              config = {
                entrypoint = [ "${pkgs.ncurses}/bin/clear" ];
              };
            };
          };
        }
      );
}

nix build ... generates the image config with no issues:

 nix build -L -v '.#packages.x86_64-linux.reproducer'

However, skopeo copy ... will fail:

nix develop -c skopeo copy --insecure-policy nix:./result docker-daemon:reproducer:latest  

with the following error:

Getting image source signatures
Copying blob ba1cde528580 done  
FATA[0000] writing blob: writing to temporary on-disk layer: happened during read: Digest did not match, expected sha256:ba1cde52858099b4cfdde3e4503eab88300593d4bab604a85a2d04f418b6f39b, got sha256:758480d0c9fd87ccdd36a577e8333b5dd3642b62260e85ffbfb12c13e94d0dd6

In a more complex image with lots of other layers, the digests would match perfectly fine, as long as a layer doesn't include that ncurses package, for whatever reason :(

expose `deps` in `buildImage`

Assume a container that needs healthchecks packaged.

  • With contents ? [] you are forced to invoke in a non-fully qualified manner:
    /bin/cardano-wallet-network-sync-check

  • With deps ? [] you would be able to invoke it in a fully qualified manner:
    ${healthChecks.wallet-network-sync}/bin/cardano-wallet-network-sync-check

Empty history can cause issues with certain tools

Some tools need a history entry per layer (Artifactory, in particular). An empty history works, but if you try to create an image via a traditional Dockerfile using a nix2container-built image with n layers as a base, this will produce an image with n+1 layers but only 1 history entry, thus leading to "manifest invalid" errors on push. This issue is better documented here and in the eventual fix here. Perhaps we can populate the image history with simple created_by entries of "nix2container" or similar as a quick fix and potentially allow for custom entries later on.

option to keep derivations

There are a variety of Nix commands that can be run on derivation files themselves to avoid additional evaluation.

For example, if one calls nix print-dev-env on an outpath whose derivation exists in the store, then that derivation will be used as the basis for the shell that is loaded. If the derivation does not exist in the store the command fails. I would like to use this command to enter a devshell in a container without any extra evaluation (the devshell outpath already exists in the image), but I cannot do this because the derivation path is stripped out.

I also cannot reference drvPath directly as it adds over 20G of dependencies to the images, which is obviously not cool. Instead, it'd be nice to give users the option to copy over the derivation files for the outpaths included in the image for these types of use cases, similar to Nix's own keep-derivations nix.conf setting.

Use sudo/doas if permission denied while trying to connect to the Docker daemon socket

I think some users do not give themselves direct access to the Docker daemon, so a feature like this could be handy.

Could maybe be implemented by wrapping the scripts here

copyToDockerDaemon = image: pkgs.writeShellScriptBin "copy-to-docker-daemon" ''
  echo "Copy to Docker daemon image ${image.imageName}:${image.imageTag}"
  ${skopeo-nix2container}/bin/skopeo --insecure-policy copy nix:${image} docker-daemon:${image.imageName}:${image.imageTag}
  ${skopeo-nix2container}/bin/skopeo --insecure-policy inspect docker-daemon:${image.imageName}:${image.imageTag}
'';
copyToRegistry = image: pkgs.writeShellScriptBin "copy-to-registry" ''
  echo "Copy to Docker registry image ${image.imageName}:${image.imageTag}"
  ${skopeo-nix2container}/bin/skopeo --insecure-policy copy nix:${image} docker://${image.imageName}:${image.imageTag} $@
'';
copyTo = image: pkgs.writeShellScriptBin "copy-to" ''
  echo Running skopeo --insecure-policy copy nix:${image} $@
  ${skopeo-nix2container}/bin/skopeo --insecure-policy copy nix:${image} $@
'';
copyToPodman = image: pkgs.writeShellScriptBin "copy-to-podman" ''
  echo "Copy to podman image ${image.imageName}:${image.imageTag}"
  ${skopeo-nix2container}/bin/skopeo --insecure-policy copy nix:${image} containers-storage:${image.imageName}:${image.imageTag}
  ${skopeo-nix2container}/bin/skopeo --insecure-policy inspect containers-storage:${image.imageName}:${image.imageTag}
'';

In a script such as this:

#!/bin/sh
# shellcheck disable=SC2068
# ^ Intentionally unquoted arguments

permission() {
  # "Server" is only present if we have docker permissions
  if docker version 2> /dev/null | grep -q Server > /dev/null 2>&1; then
    return 0
  else
    return 1
  fi
}

if permission; then
  $@
elif command -v sudo > /dev/null 2>&1; then
  sudo $@
elif command -v doas > /dev/null 2>&1; then
  doas $@
fi

I'm sure there is a better way to check for permission than the above, and I'm not sure whether the highlighted scripts from default.nix are the only places where using sudo/doas may be relevant. Perhaps it could be interesting to explore? :)

images are created with 0600 /nix and /nix/store

This breaks any images that need to run as non-root.

bash-5.1# ls -hal /nix
total 12K
drw-------   3 0 0   19 Mar 18 08:51 .
drwxr-xr-x  14 0 0  175 Mar 18 08:52 ..
drw------- 107 0 0 8.0K Mar 18 08:52 store

`./result/bin/copy-to-podman` results in duplicated content

I'm not sure if it's a bug in your nix: transport feature or in upstream Skopeo, but if I build an image (with, say, 5 layers) and then do copy-to-podman and then look in ${HOME}/.local/share/containers/storage/vfs/dir (doesn't need Podman on the system), I see the files inside there duplicated for each layer.

This inflates the image in my case from ~500MB (per docker image ls <my image name>) to 2.4GB.

Can be reproduced by doing:

# 3 Layers image
nix build .\#examples.layered.copyToPodman
HOME=/tmp/ ./result/bin/copy-to-podman
nix run nixpkgs\#jdupes -- -r "/tmp/.local/share/containers/storage/vfs/dir" | grep libunistring.so
# file duplicated once for each layer
# libunistring-1.0/lib/libunistring.so.2.2.0
# libunistring-1.0/lib/libunistring.so.2.2.0
# libunistring-1.0/lib/libunistring.so.2.2.0
sudo rm -rf "/tmp/.local"

Issue doesn't appear to happen for docker images.

How to get credentials to `pullImage`

nixpkgs does some hackery, but I think we also need an escape hatch for nix2container.

Do you have any idea on this?

Here is the nixpkgs hackery, that I found: https://github.com/NixOS/nixpkgs/blob/cc8750c8ce9b3759020ec385c7d473a4889c2218/pkgs/build-support/fetchdocker/generic-fetcher.nix#L37-L82

Another option via impureEnvVars: https://github.com/NixOS/nixpkgs/blob/cc8750c8ce9b3759020ec385c7d473a4889c2218/pkgs/build-support/fetchdocker/generic-fetcher.nix#L92

Allow changing ownership

It appears that all paths are set to root as the owner by default, with no way to override this behavior. Since we can already modify the file permissions, it would be nice to also control the file ownership.

In a lot of containers, a default (non-root) user is created with a working directory that they have ownership over. As far as I know, it's not currently possible to emulate this with nix2container. Note that Nix's dockerTools has fakeRootCommands to more or less emulate this.

Is this something we can add to nix2container?

podman example in README doesn't quite work

$ nix --extra-experimental-features nix-command --extra-experimental-features flakes run github:nlewo/nix2container#examples.bash.copyToPodman
                                                                                                                                                                             
Copy to podman image bash:k8g30kq8xhw50pd9ch321md9n2cylppw
Getting image source signatures
Copying blob a546c70d96cd skipped: already exists  
Copying config 7202af4391 done  
Writing manifest to image destination
Storing signatures
{
    "Name": "docker.io/library/bash",
    "Digest": "sha256:517f33e35f1c73b81c94d231e363155d10ad24d9f8f06f47071b271315db4f9b",
    "RepoTags": [],
    "Created": null,
    "DockerVersion": "",
    "Labels": null,
    "Architecture": "amd64",
    "Os": "linux",
    "Layers": [
        "sha256:a546c70d96cd7c28b43e0effee39c7b1e102e5d52dda968a14975972f85304f5"
    ],
    "LayersData": [
        {
            "MIMEType": "application/vnd.oci.image.layer.v1.tar",
            "Digest": "sha256:a546c70d96cd7c28b43e0effee39c7b1e102e5d52dda968a14975972f85304f5",
            "Size": 56502272,
            "Annotations": null
        }
    ],
    "Env": null
}

$ podman run -it --rm bash                  
Error: short-name "bash" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"

$ podman run -it --rm docker.io/library/bash
Trying to pull docker.io/library/bash:latest...
^C
$ podman run -it --rm docker.io/library/bash@sha256:517f33e35f1c73b81c94d231e363155d10ad24d9f8f06f47071b271315db4f9b
bash-5.2#                                                                                                                                          

Sorry, I'm not very familiar with nix, but I don't even see how to get the @sha256:... hash automatically. I think something like docker build's --iidfile would be nice?

Support multiple Tags

Per the title, is it currently possible to provide multiple tags for an image? For example, tag it with the output hash and the git commit SHA simultaneously? Currently, it seems like you need two different derivations (identical at the binary level but configured with different tags at the manifest level).

My current theory is to call copyToDockerDaemon and manually tag/publish the image myself, but it would be nice to be able to use copyToRegistry with multiple tags defined. Thoughts?
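A workaround sketch (not a built-in feature; the registry and variable names are hypothetical): push once with copyToRegistry, then let skopeo add further tags with a registry-to-registry copy, which skips the already pushed layers:

$ nix run .#myimage.copyToRegistry
$ skopeo copy \
    docker://registry.example.com/myimage:$(nix eval --raw .#myimage.imageTag) \
    docker://registry.example.com/myimage:$GIT_SHA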

Example for creating users

I came across this thread recently where I noticed @nlewo had responded. The perms argument is great for modifying file permissions, especially combined with runCommand.

However, I have a use case that still requires a local user to be created. This is primarily because I'm running binaries that assume $HOME is set up and available for the user running the container. Can we get an example of how to go about configuring a new user and using it as the default user for the container?

It looks like the nix Docker tools have a runAsRoot option which can be used with something like shadowSetup to create the appropriate files. What's the alternative option for nix2container?

largest closures should get their own layers first

After trying to get very clever with the layer creation, I realized that nix2container can do this much better on its own. I found that by just setting maxLayers to a very large value, I could basically get a 1:1 drv:layer mapping. However, for large images this isn't always desirable. For one image in particular this resulted in an OCI image with 542 layers, which podman didn't like very much and refused to launch (at least with the overlay driver).

It would be pretty nice if the automatic layer creation function could somehow sort the derivation closures from largest to smallest so that, when setting maxLayers to say 25, the 24 largest derivation closures get their own layers and the 25th layer is a collection of everything that's left.

Hopefully this wouldn't add undue complexity.

Improve image introspection to let tools report CVEs & SBOM

Hi Team,
Is there anything that can be done to introspect the built images? I was interested to check whether existing SBOM tools & CVE scanners can derive some insights from the images generated via nix2container.

In addition, I wanted to understand the reason for the current size of the images. I believe they are not minimal and can perhaps be customised further so that the final image is smaller.

E.g., I was doing a comparative study against the nginx image from Chainguard.

Note: nginx:x7fh59yyzwlrfsc98rw4bpkm4hcmg4dz was generated using nix2container

docker images
REPOSITORY                 TAG                                IMAGE ID       CREATED        SIZE
cgr.dev/chainguard/nginx   latest                             8506f9922bea   13 hours ago   21.3MB
nginx                      x7fh59yyzwlrfsc98rw4bpkm4hcmg4dz   5d209e8e196b   N/A            122MB
grype cgr.dev/chainguard/nginx:latest
 ✔ Vulnerability DB        [no update available]
 ✔ Loaded image            
 ✔ Parsed image            
 ✔ Cataloged packages      [30 packages]
 ✔ Scanning image...       [0 vulnerabilities]
   ├── 0 critical, 0 high, 0 medium, 0 low, 0 negligible
   └── 0 fixed
No vulnerabilities found
syft cgr.dev/chainguard/nginx:latest
 ✔ Loaded image            
 ✔ Parsed image            
 ✔ Cataloged packages      [30 packages]
NAME                    VERSION      TYPE 
ca-certificates-bundle  20230106-r0  apk   
execline                2.9.2.1-r1   apk   
glibc                   2.37-r6      apk   
glibc-locale-posix      2.37-r6      apk   
ld-linux                2.37-r6      apk   
libcrypto3              3.1.0-r2     apk   
libgcc                  12.2.0-r9    apk   
libssl3                 3.1.0-r2     apk   
libstdc++               12.2.0-r9    apk   
nginx                   1.23.3-r1    apk   
pcre                    8.45-r0      apk   
s6                      2.11.3.0-r0  apk   
skalibs                 2.13.1.0-r1  apk   
wolfi-baselayout        20230201-r0  apk   
zlib                    1.2.13-r3    apk   

versus:

grype nginx:x7fh59yyzwlrfsc98rw4bpkm4hcmg4dz
 ✔ Vulnerability DB        [no update available]
 ✔ Loaded image            
 ✔ Parsed image            
 ✔ Cataloged packages      [0 packages]
 ✔ Scanning image...       [0 vulnerabilities]
   ├── 0 critical, 0 high, 0 medium, 0 low, 0 negligible
   └── 0 fixed
syft nginx:x7fh59yyzwlrfsc98rw4bpkm4hcmg4dz
 ✔ Loaded image            
 ✔ Parsed image            
 ✔ Cataloged packages      [0 packages]
No packages discovered

`/nix` & `/nix/store` default permissions in the tar

For some reason, just /nix & /nix/store have unfortunate default permissions, so that when using su-exec, this is required:

chmod 755 /nix /nix/store
su-exec postgres patroni "$@"

From my recollection, it was something like 700 or 600, owned by root (0).

Improve compiling containers on non-Linux systems

This issue documents a usability issue but does not currently offer a solution.

As it stands, executing copyToDockerDaemon will build the container locally and then push it to whatever Docker daemon is configured on the local system. On non-Linux systems, this presents a problem because the Docker daemon is running inside a Linux VM. The result is that the container is typically built for macOS/Windows and then uploaded to a Docker daemon running under Linux (this is true for any runtime, including podman, containerd, etc.).

I'm not sure if this problem can be solved entirely within nix2container or if some documentation can be generated with a workaround for users in this context. The most obvious solution is to just run everything in a Linux VM and forward the Docker socket to the local machine (this is what I'm currently doing using lima).

Another potential solution that I think is worth exploring is using remote builders to build the OCI image. The main problem I see here is that the image isn't actually stored in the nix store (which I understand is a good thing), so figuring out how to actually get the image built remotely is the primary challenge.

There may be other solutions that I'm not aware of. I figured this issue is a good starting place to discuss potential solutions and their pros/cons.

Layers contains duplicate dependencies

Using layers results in layers containing multiple identical packages.
Example:

{
  inputs.nix2container.url = "github:nlewo/nix2container";
  inputs.nixpkgs.follows = "nix2container/nixpkgs";

  outputs = { self, nixpkgs, nix2container }:
    let
      pkgs = import nixpkgs { system = "x86_64-linux"; };
      n2c = nix2container.packages.x86_64-linux.nix2container;
    in
    {
      packages.x86_64-linux.default = n2c.buildImage {
        maxLayers = 100;
        name = "hello";
        config = {
          entrypoint = [ "${pkgs.hello}/bin/hello" ];
        };
        layers = [
          (n2c.buildLayer {
            deps = [ pkgs.bash ];
          })
          (n2c.buildLayer {
            deps = [ pkgs.zsh ];
          })
        ];
      };
    };
}

Result:
glibc (and a few other deps) exists in both the bash and zsh layers.

I noticed there is an ignore option, but it seems to accept only a single store path. Is there a way to deduplicate derivations?
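One approach suggested by the layers argument documented above (store paths already belonging to a referenced layer are skipped) is to chain the layers; a sketch:

layers = let
  bashLayer = n2c.buildLayer { deps = [ pkgs.bash ]; };
in [
  bashLayer
  (n2c.buildLayer {
    deps = [ pkgs.zsh ];
    # Skip store paths (e.g. glibc) already present in the bash layer:
    layers = [ bashLayer ];
  })
];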

s/deamon/daemon/

You currently have copyToDockerDeamon, but the usual spelling is "daemon" (that's ae rather than ea).

Layers with symlinks differ depending on OS

I've noticed that when I run the patched skopeo on Linux, it succeeds, but when I run it with the same JSON on Darwin, I get

FATA[0001] writing blob: writing to temporary on-disk layer: happened during read: Digest did not match, expected sha256:..., got sha256:...

And the layer in question is a .so library that has symlinks in it.

It looks like nix2container produces different hashes on different OSes if the layer contains symlinks. To check this theory, I added a symlink to the test data and ran it on two different systems. On Darwin I got:

$ ln -s file1 data/tar-directory/symlink; go test ./nix -run TestTar
--- FAIL: TestTar (0.00s)
    tar_test.go:19: Digest is sha256:9a9402c601020cb24b71d4287b143d7bda875869970a1ed35792caa1702fdb69 while it should be sha256:efccbbe35209d59cfeebd8e73785258d3679fa258f72a7dfbc2eec65695fd5c8
FAIL
FAIL    github.com/nlewo/nix2container/nix      0.244s
FAIL

And on Linux:

$ ln -s file1 data/tar-directory/symlink; go test ./nix -run TestTar
--- FAIL: TestTar (0.00s)
    tar_test.go:19: Digest is sha256:077af73ad0fb226436e92a272318b777b6976b85c3a05d86183274818dd634f8 while it should be sha256:efccbbe35209d59cfeebd8e73785258d3679fa258f72a7dfbc2eec65695fd5c8
FAIL
FAIL    github.com/nlewo/nix2container/nix      0.013s
FAIL

Note that the digests are different in both cases, even though they should match. Without this symlink, the test passes, so the digests are the same.

Note that this prevents me from using nix2container on macOS with Linux builder.

Pull image with only the API manifest hashes

It's a bummer right now that pullImage requires giving the fixed-output derivation hash, since anything that updates an image specified in Nix source (whether manually or with automation) has to pull the entire image in order to compute that hash.

Would it be possible to implement a pull in Nix that only requires the content hashes already in the image manifest supplied by a registry? A sketch of this would be:

  • Individually pull each layer archive into a fixed-output-derivation, validated by the manifest-supplied hashes.
  • In the sandbox, combine those layers into the final image, in a new store path that can be used as a fromImage for further nix2container operations.

Not sure if the second step would have to be implemented from scratch or could just use the skopeo JSON manifests that nix2container already uses for building; if the layers had to be unpacked anyway, it might be possible to hardlink the files from the final derivation to avoid paying the disk cost twice. A bunch of cp -Rl actions is pretty cheap relative to assembling another JSON and calling a separate tool.


Just to put a bit more flesh on these bones, here's how to list the tags for a particular image, the latest of which at the time of writing is 251a921be086aa489705e31fa5bd59f2dadfa0824aa7f362728dfe264eb6a3d2:

https://hub.docker.com/v2/repositories/nixos/nix/tags/latest

The manifest and blob APIs require a bearer token:

export repo=nixos/nix
export token=$(curl -sSL "https://auth.docker.io/token?service=registry.docker.io&scope=repository:${repo}:pull" | jq --raw-output .token)

# Gets us the list of all the layers in the image specified by the digest
curl -H "Authorization: Bearer ${token}" "https://registry.hub.docker.com/v2/${repo}/manifests/sha256:251a921be086aa489705e31fa5bd59f2dadfa0824aa7f362728dfe264eb6a3d2"

# Pull a specific layer's archive
curl -H "Authorization: Bearer ${token}" "https://registry.hub.docker.com/v2/${repo}/blobs/sha256:7eec37a2649230139dc4534fd8b5cec45986588eee6bb30625c5b2bcf87d368c" -L --output layer.bin

We can validate that the layer matches the given content hash:

$ sha256sum layer.bin
7eec37a2649230139dc4534fd8b5cec45986588eee6bb30625c5b2bcf87d368c  layer.bin

And also verify that it's just a normal tarball of files:

$ tar -tvzf layer.bin
tar: Removing leading `/' from member names
dr-xr-xr-x root/root         0 1969-12-31 19:00 /nix/
dr-xr-xr-x root/root         0 1969-12-31 19:00 /nix/store/
dr-xr-xr-x root/root         0 1969-12-31 19:00 /nix/store/9iy1ng7h1l6jdmjk157jra8n4hkrfdj1-brotli-1.0.9-lib/
dr-xr-xr-x root/root         0 1969-12-31 19:00 /nix/store/9iy1ng7h1l6jdmjk157jra8n4hkrfdj1-brotli-1.0.9-lib/lib/
-r--r--r-- root/root    134932 1969-12-31 19:00 /nix/store/9iy1ng7h1l6jdmjk157jra8n4hkrfdj1-brotli-1.0.9-lib/lib/libbrotlicommon-static.a
lrwxrwxrwx root/root         0 1969-12-31 19:00 /nix/store/9iy1ng7h1l6jdmjk157jra8n4hkrfdj1-brotli-1.0.9-lib/lib/libbrotlicommon.so -> libbrotlicommon.so.1
lrwxrwxrwx root/root         0 1969-12-31 19:00 /nix/store/9iy1ng7h1l6jdmjk157jra8n4hkrfdj1-brotli-1.0.9-lib/lib/libbrotlicommon.so.1 -> libbrotlicommon.so.1.0.9
-r-xr-xr-x root/root    142960 1969-12-31 19:00 /nix/store/9iy1ng7h1l6jdmjk157jra8n4hkrfdj1-brotli-1.0.9-lib/lib/libbrotlicommon.so.1.0.9
-r--r--r-- root/root     57752 1969-12-31 19:00 /nix/store/9iy1ng7h1l6jdmjk157jra8n4hkrfdj1-brotli-1.0.9-lib/lib/libbrotlidec-static.a
lrwxrwxrwx root/root         0 1969-12-31 19:00 /nix/store/9iy1ng7h1l6jdmjk157jra8n4hkrfdj1-brotli-1.0.9-lib/lib/libbrotlidec.so -> libbrotlidec.so.1
lrwxrwxrwx root/root         0 1969-12-31 19:00 /nix/store/9iy1ng7h1l6jdmjk157jra8n4hkrfdj1-brotli-1.0.9-lib/lib/libbrotlidec.so.1 -> libbrotlidec.so.1.0.9
-r-xr-xr-x root/root     55152 1969-12-31 19:00 /nix/store/9iy1ng7h1l6jdmjk157jra8n4hkrfdj1-brotli-1.0.9-lib/lib/libbrotlidec.so.1.0.9
-r--r--r-- root/root    741648 1969-12-31 19:00 /nix/store/9iy1ng7h1l6jdmjk157jra8n4hkrfdj1-brotli-1.0.9-lib/lib/libbrotlienc-static.a
lrwxrwxrwx root/root         0 1969-12-31 19:00 /nix/store/9iy1ng7h1l6jdmjk157jra8n4hkrfdj1-brotli-1.0.9-lib/lib/libbrotlienc.so -> libbrotlienc.so.1
lrwxrwxrwx root/root         0 1969-12-31 19:00 /nix/store/9iy1ng7h1l6jdmjk157jra8n4hkrfdj1-brotli-1.0.9-lib/lib/libbrotlienc.so.1 -> libbrotlienc.so.1.0.9
-r-xr-xr-x root/root    664120 1969-12-31 19:00 /nix/store/9iy1ng7h1l6jdmjk157jra8n4hkrfdj1-brotli-1.0.9-lib/lib/libbrotlienc.so.1.0.9

So it definitely looks doable to download and unpack the layers into the Nix store using just the manifest information. My main questions then would be:

  • Is this something that has already been implemented somewhere and I just haven't found it?
  • If it hasn't been done, would there be interest in having it as part of nix2container?

If it's a new implementation, I'm assuming it would make most sense to just have Nix directly parse the manifest JSON, similar to how poetry2nix works. Then the manifest can be stored in the repo, and it's trivial for users to update it with:

skopeo inspect docker://docker.io/nixos/nix@sha256:251a921be086aa489705e31fa5bd59f2dadfa0824aa7f362728dfe264eb6a3d2 --raw > manifest.json

Looks like the format of a nix2container.pullImage is really just a modified manifest pointing to the layer archives anyway, e.g.:

{
        "version": 1,
        "image-config": {},
        "layers": [
                {
                        "digest": "sha256:9123ac7c32f74759e6283f04dbf571f18246abe5bb2c779efcb32cd50f3ff13c",
                        "size": 0,
                        "diff_ids": "sha256:39db6acceed35328fae0746f9125ee85ea6e3600ed2c35b81fff757783b30209",
                        "mediatype": "application/vnd.oci.image.layer.v1.tar+gzip",
                        "layer-path": "/nix/store/bq9m85r2ylfx1fqmqgjxw9gb9c35nxlf-docker-image-alpine/9123ac7c32f74759e6283f04dbf571f18246abe5bb2c779efcb32cd50f3ff13c"
                }
        ],
        "arch": ""
}

So there's really no need to bother with actually unpacking anything; just pull the archives and make up a similar manifest file pointing at them in their individual store paths.

Incorrect permissions on /nix/var/nix/profiles/per-user

I'm creating a container with Nix initialized. I mounted a local flake into it and ran nix flake show as a test, but it failed with the following error:

error: could not set permissions on '/nix/var/nix/profiles/per-user' to 755: Operation not permitted

Note that I'm not running my container as the root user.

Debugging the permissions on the folder, I see:

bash-5.1$ ls -la /nix/var/nix/profiles/         
total 12
dr-xr-xr-x 1 0 0 4096 Jan  1  1970 .
dr-xr-xr-x 1 0 0 4096 Jan  1  1970 ..
dr-xr-xr-x 1 0 0 4096 Jan  1  1970 per-user

I cannot manually set the permissions on this folder because whatever I do gets overridden by the Nix initialization code. I'm not sure what the best path forward is here.

Example defaults to amd64 architecture

Running the example in the README:

> nix run github:nlewo/nix2container#examples.bash.copyToDockerDaemon  
> docker run -it bash:2bq5nc0g665z2ixfv1lzypxki5cqjf42 
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
standard_init_linux.go:228: exec user process caused: exec format error

This is running on a Macbook M1 Pro. I've tried parsing through some of the logic and I'm not familiar enough with the backend to understand why it's defaulting to this architecture. Any ideas?

Weirdness with fromImage

I'm trying to use nix2container to build a google cloudshell container, and I'm running into multiple issues :D

My basic configuration looks something like this:

nix2containerPkgs.nix2container.buildImage {
      name = "gcr.io/myproject/mycloudshell";
      tag = "latest";
      fromImage = nix2containerPkgs.nix2container.pullImage {
        imageName = "gcr.io/cloudshell-images/cloudshell";
        imageDigest = "sha256:68f5f1a01574bd795192098d676ac4150610ab89d4c0c23e72f9a0f7ec2cf1db";
        sha256 = "sha256-i++Camqmzugr3aq56UvGujuFqDo8Cj6gD9IY/a/0HpI=";
      };
      maxLayers = 200;
       config.env = [
         "DEBIAN_FRONTEND=noninteractive"
         "PATH=/opt/gradle/bin:/opt/maven/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin:/usr/local/nvm/versions/node/v16.4.0/bin:/usr/local/rvm/bin:/google/go_appengine:/google/google_appengine"
         "PORT=8080"
         "CLOUD_SDK=/usr/lib/google-cloud-sdk"
         "GCLOUD_CONTAINER_SERVER=gcr.io"
         "LOG=file"
         "DOCKER_HOST=unix:///var/run/docker.sock"
         "NODEJS_VERSION=16.4.0"
         "DEVSHELL_CLIENTS_DIR=/var/run/google/devshell"
         "CREDENTIALS_SERVICE_PORT=8998"
         "KUBECTX_VERSION=0.9.4"
      ];
      config.entrypoint = [
            "/bin/bash"
            "/google/scripts/onrun.sh"
      ];
      config.volumes."/var/lib/docker" = {};
    };
  };

Funnily enough, despite it being public (per gcr), Cloud Shell complains it can't find the image.

If I instead construct an empty image (using the Google base image) following Google Cloud Shell's guide (which uses docker, I believe), I get a working image. If I then follow Google Cloud Shell's guide but use my image (produced as above) as a base image, Google Cloud Shell can find it but fails to load it.

What I've found so far is that the date tags are different (of course, nix is at unix:0), but a bit more unexpectedly, the layer hashes are all different between the two images. E.g., using skopeo inspect, the layers all have different hashes; the nix2container one also misses all the history items, though I doubt that's much of an issue.

The nix2container-generated image also doesn't set the

    "Created": "2022-06-04T08:28:10.227221947Z",
    "DockerVersion": "18.09.0",

values. And gcr also seems to be unable to compute the size of the image.

`imageRefUnsafe` is undocumented

I noticed that this is a really useful feature to automate certain workflows after the image has been built. However, I had to read the code in order to figure out how to use it. Is this stable enough to document? I would gladly write a PR if so.

Make `nix flake check` work

Currently, nix flake check fails with:

$ nix flake check 
error: flake attribute 'packages.aarch64-linux.nix2container' is not a derivation

How could we expose the nix2container attribute set?

Avoid pulling dependencies from entrypoint

Take the following example:

{
  inputs.nix2container.url = "github:nlewo/nix2container";
  inputs.nixpkgs.follows = "nix2container/nixpkgs";

  outputs = { self, nixpkgs, nix2container }:
    let
      pkgs = import nixpkgs { system = "aarch64-linux"; };
      n2c = nix2container.packages.aarch64-linux.nix2container;
      entrypoint = pkgs.writeShellApplication {
        name = "entrypoint";
        runtimeInputs = [ pkgs.git ];
        text = ''
          git --help
        '';
      };
    in
    {
      packages.aarch64-linux.hello = n2c.buildImage {
        maxLayers = 100;
        name = "hello";
        config = {
          entrypoint = [ "${entrypoint}/bin/entrypoint" ];
        };
      };
    };
}

This will build an image whose layers follow this rough pattern:

  1. git dependencies
  2. Entrypoint derivation

If I were to set maxLayers to the default value, then git and the entrypoint derivation would be put in the same layer. The problem is that increasing maxLayers does not seem to guarantee that the entrypoint derivation will be in its own layer (it really depends on how large its dependencies are). Is there a way I can avoid pulling in the dependencies of the entrypoint so that I can put them in a separate layer?

I tried something like this:

{
  inputs.nix2container.url = "github:nlewo/nix2container";
  inputs.nixpkgs.follows = "nix2container/nixpkgs";

  outputs = { self, nixpkgs, nix2container }:
    let
      pkgs = import nixpkgs { system = "aarch64-linux"; };
      n2c = nix2container.packages.aarch64-linux.nix2container;
      entrypoint = pkgs.writeShellApplication {
        name = "entrypoint";
        runtimeInputs = [ pkgs.git ];
        text = ''
          git --help
        '';
      };
    in
    {
      packages.aarch64-linux.hello = n2c.buildImage {
        maxLayers = 100;
        name = "hello";
        config = {
          entrypoint = [ "${entrypoint}/bin/entrypoint" ];
        };
        layers = [
          (n2c.buildLayer {
            deps = [ pkgs.git ];
          })
        ];
      };
    };
}

It does isolate the dependencies into a single layer, but it puts the entrypoint layer before the dependency layer. I'm new to making Docker containers with Nix; does the order of layers even matter when using nix2container?

Any help is appreciated, thanks!

How to use `meta`?

I just came across this project when trying to migrate from a custom Dockerfile to building my containers via pure nix. Thank you for the effort, the project looks really capable!

In #65, you introduced the option to pass metadata. Sadly, I was not able to find any documentation on how to use it. More specifically, I would like to use the GitHub action docker/metadata-action to generate the relevant metadata for my containers (e.g., multiple tags like the name of the branch as well as different versions like 1, 1.2, and 1.2.3). Here is my current code:

As you can see in the workflow, I also output the generated metadata so that I can debug it locally. Here is the output:

{
  "tags": [
    "ghcr.io/mirkolenz/makejinja:1.1.5-beta.1",
    "ghcr.io/mirkolenz/makejinja:beta"
  ],
  "labels": {
    "org.opencontainers.image.title": "makejinja",
    "org.opencontainers.image.description": "Automatically generate files based on Jinja templates. Use it to easily generate complex Home Assistant dashboards!",
    "org.opencontainers.image.url": "https://github.com/mirkolenz/makejinja",
    "org.opencontainers.image.source": "https://github.com/mirkolenz/makejinja",
    "org.opencontainers.image.version": "1.1.5-beta.1",
    "org.opencontainers.image.created": "2023-05-08T10:06:29.527Z",
    "org.opencontainers.image.revision": "acd7609c4cdfaf546e4d28eee14642a8e9f580e5",
    "org.opencontainers.image.licenses": "MIT"
  }
}

I tried running it locally with

DOCKER_METADATA_OUTPUT_JSON='{"tags":["ghcr.io/mirkolenz/makejinja:1.1.5-beta.1","ghcr.io/mirkolenz/makejinja:beta"],"labels":{"org.opencontainers.image.title":"makejinja","org.opencontainers.image.description":"Automatically generate fil
es based on Jinja templates. Use it to easily generate complex Home Assistant dashboards!","org.opencontainers.image.url":"https://github.com/mirkolenz/makejinja","org.opencontainers.image.source":"https://github.com/mirkolenz/makejinja","org.openconta
iners.image.version":"1.1.5-beta.1","org.opencontainers.image.created":"2023-05-08T10:06:29.527Z","org.opencontainers.image.revision":"acd7609c4cdfaf546e4d28eee14642a8e9f580e5","org.opencontainers.image.licenses":"MIT"}}' nix build .#docker

but the result has no information about my custom tags:

{
	"version": 1,
	"image-config": {
		"Entrypoint": [
			"/nix/store/6z467z6z1shdrrfgxx1gr1gd8dd9zbxg-python3.11-makejinja-1.1.5b1/bin/makejinja"
		],
		"Cmd": [
			"--help"
		]
	},
	"layers": [
		{
			"digest": "sha256:92511c4791db62cc79e1a681d338aee00724dd14084a4b8926d217b08c109584",
			"size": 194385408,
			"diff_ids": "sha256:92511c4791db62cc79e1a681d338aee00724dd14084a4b8926d217b08c109584",
			"paths": [
				{
					"path": "/nix/store/34xlpp3j3vy7ksn09zh44f1c04w77khf-libunistring-1.0"
				},
				{
					"path": "/nix/store/5mh5019jigj0k14rdnjam1xwk5avn1id-libidn2-2.3.2"
				},
				{
					"path": "/nix/store/vnwdak3n1w2jjil119j65k8mw1z23p84-glibc-2.35-224"
				},
				{
					"path": "/nix/store/iqfkdkydx77zcivgkwxhhmz76d0zmm4m-ncurses-6.3-p20220507"
				},
				{
					"path": "/nix/store/xbm6sj00r5kxvpwf34vysiij5zn3i3mw-zlib-1.2.13"
				},
				{
					"path": "/nix/store/kga2r02rmyxl14sg96nxbdhifq3rb8lc-bash-5.1-p16"
				},
				{
					"path": "/nix/store/z0kg1c0f8fx6r4rgg5bdy01lb2b9izqg-tzdata-2023a"
				},
				{
					"path": "/nix/store/yl0shr26syxmdncnmfbbgy5vf2yaa66h-bzip2-1.0.8"
				},
				{
					"path": "/nix/store/vd4x8xqaw4dygaajjkn19yq3kxxlyyl9-openssl-3.0.8"
				},
				{
					"path": "/nix/store/rc25n5hl56yw5dwifphwm08vkply4nyi-libffi-3.4.4"
				},
				{
					"path": "/nix/store/khd1c8kyxx0600wy8syjz76amqbxbh1r-readline-6.3p08"
				},
				{
					"path": "/nix/store/g7sy7nkx25z1000fbrkgxxq9imgk1zc0-gdbm-1.23"
				},
				{
					"path": "/nix/store/64vcf3x48sjfckvwzbdsr3a9yjf0jcbq-libxcrypt-4.4.30"
				},
				{
					"path": "/nix/store/5sfp3cdgxha9qdqc0pmiar1ql0bgk8g7-xz-5.2.7"
				},
				{
					"path": "/nix/store/5n9zk19mv4daf178ihigmianx5iq5r99-mailcap-2.1.53"
				},
				{
					"path": "/nix/store/3hnkf4972i1555cbr0l7r73wlf0xf36j-expat-2.5.0"
				},
				{
					"path": "/nix/store/1rjl52i7gh0wasc7a1p82xk3lc79icww-sqlite-3.39.4"
				},
				{
					"path": "/nix/store/536aws8kkljng4ggdakk8i1nfdqb4f4x-python3-3.11.3"
				},
				{
					"path": "/nix/store/i6hczyl3s12630nhibd120sb5frfz61w-python3.11-mdurl-0.1.2"
				},
				{
					"path": "/nix/store/69fyphng2164d9z9jz0ang4j2m9c61ya-python3.11-attrs-23.1.0"
				},
				{
					"path": "/nix/store/fwrr56ajwhgzc8l1fqypvj03j6xr5w8n-python3.11-markdown-it-py-2.2.0"
				},
				{
					"path": "/nix/store/5mr4xgpzljkwrpzfy84y1j8hkaq3gv88-python3.11-pygments-2.15.1"
				},
				{
					"path": "/nix/store/jy9gi2938ab7y5lszvr33r72avbqr4w6-python3.11-cattrs-22.2.0"
				},
				{
					"path": "/nix/store/4pyv984b34ydbp1vskvrjpaxbah0v8d8-python3.11-markupsafe-2.1.2"
				},
				{
					"path": "/nix/store/7rdhl9gi0lx2iaz635ajpkmsag7m27ry-python3.11-click-8.1.3"
				},
				{
					"path": "/nix/store/4xdp4s8wfs67aj96jyd0qy98w4pxi9hi-python3.11-rich-13.3.5"
				},
				{
					"path": "/nix/store/zmcjlzx89q4kfs0na0fyz6ldvl3ws8sr-python3.11-pyyaml-6.0"
				},
				{
					"path": "/nix/store/k5k8qy8kp89zdydqmljrbmgwfnybg934-python3.11-typed-settings-23.0.0"
				},
				{
					"path": "/nix/store/7nkziny8q1c0hq44cmry2fm59vykbwak-python3.11-jinja2-3.1.2"
				},
				{
					"path": "/nix/store/4ayvgyis9kqhg82q4parfvh3ks1jkyg4-python3.11-rich-click-1.6.1"
				},
				{
					"path": "/nix/store/6z467z6z1shdrrfgxx1gr1gd8dd9zbxg-python3.11-makejinja-1.1.5b1"
				}
			],
			"mediatype": "application/vnd.oci.image.layer.v1.tar"
		}
	],
	"arch": "amd64"
}

I also tried copying the resulting container to my Docker daemon, but it also has no tag assigned to it. Do you have an idea what I am doing wrong?

skopeo copy failure on RedHat EL 8

Hi, thanks for your work on nix2container.
I stumbled upon your work when experimenting with devenv.sh.
Copying the generated image spec JSON to containers-storage using the bundled/patched skopeo doesn't work for me on RHEL. It fails with a rather unspecific error similar* to "can't determine user: unknown, userid:123456789".
The same error is generated when using a 'standard' docker transport instead of nix.

However, with the stock skopeo copying to containers-storage works as expected.

Any pointers?

  • This happened on my work machine; I don't have the exact error message right now.

Impure layer?

thanks for this awesome project @nlewo :)

I wonder if it would be possible to have a layer that's built impurely on top of the previous layers?

Something like running my-build-command that has access to the internet?

Unknown media type error for `crazymax/alpine-s6:3.17-edge`

When trying to use the following base image:

s6Base = pullImage {
  imageName   = "crazymax/alpine-s6";
  imageDigest = "sha256:74a0a9b44234ac35ce33eb2ccda7888e1060d6c352aa47cce43c369a692ed5d1"; # 3.17-edge
  arch        = "amd64";
  sha256      = "sha256-PqKYounBiJhjzOSESDuJSpqe3kZkt/q7C1VAOKpC/Lo=";
};

the build fails with this error:

Unknown media type: "application/vnd.oci.image.layer.v1.tar+gzip"

conflicting `/nix-support/setup-hook` from multiple packages

When I specify two packages that both provide /nix-support/setup-hook in contents = [nixpkgs.iana-etc nixpkgs.cacert];, I eventually get this error:

       > The file /nix-support/setup-hook overrides a file with different attributes (previous: &tar.Header{Typeflag:0x30, Name:"/nix-support/setup-hook", Linkname:"", Size:396, Mode:292, Uid:0, Gid:0, Uname:"root", Gname:"root", ModTime:time.Time{wall:0x0, ext:62135596800, loc:(*time.Location)(nil)}, AccessTime:time.Time{wall:0x0, ext:62135596800, loc:(*time.Location)(nil)}, ChangeTime:time.Time{wall:0x0, ext:62135596800, loc:(*time.Location)(nil)}, Devmajor:0, Devminor:0, Xattrs:map[string]string(nil), PAXRecords:map[string]string(nil), Format:0} current: &tar.Header{Typeflag:0x30, Name:"/nix-support/setup-hook", Linkname:"", Size:200, Mode:292, Uid:0, Gid:0, Uname:"root", Gname:"root", ModTime:time.Time{wall:0x0, ext:62135596800, loc:(*time.Location)(nil)}, AccessTime:time.Time{wall:0x0, ext:62135596800, loc:(*time.Location)(nil)}, ChangeTime:time.Time{wall:0x0, ext:62135596800, loc:(*time.Location)(nil)}, Devmajor:0, Devminor:0, Xattrs:map[string]string(nil), PAXRecords:map[string]string(nil), Format:0})
       For full logs, run 'nix log /nix/store/kgp2sn9pknhp5nbf3kjryxpyv6w0djzi-layers.json.drv'.

... while I'm trying to figure out a workaround, I wanted to drop a bug report here already. Maybe it's already known?
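A possible workaround sketch (my assumption, relying on pkgs.buildEnv's ignoreCollisions flag rather than on anything nix2container-specific): merge the colliding packages into a single buildEnv first, so only one /nix-support/setup-hook ends up in the layer:

contents = [
  (nixpkgs.buildEnv {
    name = "root";
    paths = [ nixpkgs.iana-etc nixpkgs.cacert ];
    ignoreCollisions = true;  # keep the first colliding file, drop the rest
  })
];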

Integration point with `arion`

https://github.com/hercules-ci/arion is docker-compose for Nix.

I've recently prototyped a workflow but didn't discover an immediate integration point between n2c and arion.

arion may assume derivations that represent the final tar image to load, while n2c tries to avoid exactly that, for very good reasons.

Now, regardless, how would you imagine an integration point between the two?

Happy to work on something if we have consensus.

cc @roberth

Regex matching for permissions is broken

{
  inputs.nix2container.url = "github:nlewo/nix2container";
  inputs.nixpkgs.follows = "nix2container/nixpkgs";

  outputs = { self, nixpkgs, nix2container }:
    let
      pkgs = import nixpkgs { system = "aarch64-linux"; };
      n2c = nix2container.packages.aarch64-linux.nix2container;
      entrypoint = pkgs.writeShellApplication {
        name = "entrypoint";
        text = ''
          echo "Hello, world!"
        '';
      };
      test = pkgs.runCommand "test" { } ''
        mkdir -p $out/tmp
        touch $out/tmp/test1.txt
        touch $out/tmp/test2.txt
      '';
    in
    {
      packages.aarch64-linux.hello = n2c.buildImage {
        maxLayers = 100;
        name = "hello";
        config = {
          entrypoint = [ "${entrypoint}/bin/entrypoint" ];
        };
        copyToRoot = [ test pkgs.bash pkgs.coreutils ];
        perms = [
          {
            path = test;
            regex = "foobar";
            mode = "0777";
          }
        ];
      };
    };
}

Inside the resulting container:

bash-5.1# ls -la /tmp
total 8
drwxrwxrwx 2 0 0 4096 Jan  1  1970 .
drwxr-xr-x 1 0 0 4096 Sep 21 17:20 ..
-rwxrwxrwx 1 0 0    0 Jan  1  1970 test1.txt
-rwxrwxrwx 1 0 0    0 Jan  1  1970 test2.txt

The regex isn't being evaluated; the current behavior seems to apply the permissions to every path in the closure.

Cannot connect to the docker daemon when running on rootless installation

Hi,

When trying to build a docker image using nix2container I am always getting:

 Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

I think that this is because I have installed docker as a non-root user (https://docs.docker.com/engine/security/rootless/)

Skopeo under the hood uses unix:///var/run/docker.sock as the default location for the socket, but on my machine the socket is at:

unix:///run/user/1000/docker.sock

This location can, however, be overridden by providing the --src-daemon-host=... option when interacting with skopeo.

Now, I could implement that, but I am not sure what the best way to approach it in your project is.
We would have to somehow pass it into:

  copyToDockerDaemon = image: pkgs.writeShellScriptBin "copy-to-docker-daemon" ''
    echo "Copy to Docker daemon image ${image.imageName}:${image.imageTag}"
    ${skopeo-nix2container}/bin/skopeo --insecure-policy copy nix:${image} docker-daemon:${image.imageName}:${image.imageTag}
    ${skopeo-nix2container}/bin/skopeo --insecure-policy inspect docker-daemon:${image.imageName}:${image.imageTag}
  '';

This means either adding another parameter or extracting the information from an environment variable. I am not a fan of the latter and only mention it for the sake of completeness.

I am afraid that there is no option to add another parameter without introducing a breaking change :(

Here we have a few options again:

  1. Add it to copyToDockerDaemon and friends as a second argument:
    copyToDockerDaemon = image: daemon: ...

  2. Convert the function parameter to an attribute set and add it there:
    copyToDockerDaemon = { image, daemon }: ...

  3. Since the parameter is common to multiple commands, provide it earlier via a wrapper:
    someWrapper = daemon:
      copyToDockerDaemon = image: ...

I am not a Nix expert, so I am not sure which one is the most suitable. Let me know what you think.
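
To make option 3 concrete, here is a minimal sketch; the withDaemon wrapper and its wiring are my assumptions, not existing nix2container code, and it relies on skopeo's --dest-daemon-host flag:

# Sketch only: bake the daemon host in once and reuse it for every command.
# daemonHost = null keeps skopeo's default socket.
withDaemon = daemonHost: {
  copyToDockerDaemon = image: pkgs.writeShellScriptBin "copy-to-docker-daemon" ''
    echo "Copy to Docker daemon image ${image.imageName}:${image.imageTag}"
    ${skopeo-nix2container}/bin/skopeo --insecure-policy copy \
      ${pkgs.lib.optionalString (daemonHost != null) "--dest-daemon-host=${daemonHost}"} \
      nix:${image} docker-daemon:${image.imageName}:${image.imageTag}
  '';
};

Callers would then write (withDaemon "unix:///run/user/1000/docker.sock").copyToDockerDaemon image, while withDaemon null keeps today's behavior.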

`nix-user-flakes`: `nix run` results in SSL connect error (35)

The nix-user.nix example seems to be missing some setup step needed to make HTTPS work in the container.

I assume that "something" is missing, e.g. in /etc.

Reproducing the problem

If I use a slightly modified version of the container so that I can use flakes and have bash in the entry point, I get:

❯ nix run git+https://gitlab.com/nexxiot-labs/nix-user-container#curl.copyToDockerDaemon \
  && docker run -it nix-user-flakes
Copy to Docker daemon image curl:latest
Getting image source signatures
Copying blob b11cc9fe06f2 done  
Copying config 667d3ab77c done  
Writing manifest to image destination
Running checks against store uri: local
[PASS] PATH contains only one nix version.
[PASS] All profiles are gcroots.
[PASS] Client protocol matches store protocol.
[INFO] You are trusted by store uri: local
total 24
drwxr-xr-x  1 user user 4096 Jan  1  1970 .
drwxr-xr-x  1    0    0 4096 Nov  3 12:24 ..
drwxr-xr-x 56 user user 4096 Jan  1  1970 store
drwxr-xr-x  1 user user 4096 Jan  1  1970 var
bash-5.2$ nix run nixpkgs#hello
warning: error: unable to download 'https://channels.nixos.org/flake-registry.json': SSL connect error (35); retrying in 315 ms
warning: error: unable to download 'https://channels.nixos.org/flake-registry.json': SSL connect error (35); retrying in 513 ms
warning: error: unable to download 'https://channels.nixos.org/flake-registry.json': SSL connect error (35); retrying in 1202 ms
warning: error: unable to download 'https://channels.nixos.org/flake-registry.json': SSL connect error (35); retrying in 2146 ms
error: unable to download 'https://channels.nixos.org/flake-registry.json': SSL connect error (35)
bash-5.2$ 

Minimal curl example

One can reproduce the same error just using a container with curl:

❯ nix run git+https://gitlab.com/nexxiot-labs/nix-user-container#curl.copyToDockerDaemon \
  && docker run -it curl https://nixos.org

Copy to Docker daemon image curl:latest
Getting image source signatures
Copying blob b11cc9fe06f2 done  
Copying config 667d3ab77c done  
Writing manifest to image destination
curl: (35) OpenSSL/3.0.11: error:16000069:STORE routines::unregistered scheme

What I tested

I originally tested a more elaborate setup that creates the Nix config in /etc
and also tries to link cacert into /etc... It doesn't make a difference as far as I can see.

  let
    user = "user";
    group = "user";
    uid = "1000";
    gid = "1000";

    mkUser = nixpkgs.runCommand "mkUser" {} ''
      mkdir -p $out/etc/{pam.d,nix}

      echo "${user}:x:${uid}:${gid}::" > $out/etc/passwd
      echo "${user}:!x:::::::" > $out/etc/shadow

      echo "${group}:x:${gid}:" > $out/etc/group
      echo "${group}:x::" > $out/etc/gshadow

      cat > $out/etc/pam.d/other <<EOF
      account sufficient pam_unix.so
      auth sufficient pam_rootok.so
      password requisite pam_unix.so nullok sha512
      session required pam_unix.so
      EOF

      cat > $out/etc/nix/nix.conf <<EOF
      experimental-features = nix-command flakes repl-flake
      max-jobs = auto
      extra-nix-path = nixpkgs=flake:nixpkgs
      max-silent-time = 10
      fsync-metadata = false
      EOF

      touch $out/etc/login.defs
      mkdir -p $out/home/${user}
    '';

    mkHome = nixpkgs.runCommand "mkHome" {} ''
      mkdir -p $out/home/${user}
    '';
  in
    nix2container.buildImage {
      name = "nix-user";
      tag = "latest";

      initializeNixDatabase = true;
      nixUid = l.toInt uid;
      nixGid = l.toInt gid;

      copyToRoot = [
        (nixpkgs.buildEnv {
          name = "root";
          paths = with nixpkgs; [coreutils nix openssl curl mkUser strace cacert];
          pathsToLink = ["/bin" "/etc" "/lib"];
        })
        mkHome
      ];

      perms = [
        {
          path = mkHome;
          regex = "/home/${user}";
          mode = "0744";
          uid = l.toInt uid;
          gid = l.toInt gid;
          uname = user;
          gname = group;
        }
      ];

      config = {
        Entrypoint = ["${nixpkgs.bash}/bin/bash"];
        User = "user";
        WorkingDir = "/home/user";
        Env = [
          "HOME=/home/user"
          "NIX_PAGER=cat"
          "USER=user"
        ];
      };
    };
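
A possible workaround I have not verified: pointing curl and Nix at the CA bundle explicitly through environment variables (SSL_CERT_FILE is honored by OpenSSL, NIX_SSL_CERT_FILE by Nix), since having cacert in the closure alone doesn't seem to be enough:

config = {
  Entrypoint = ["${nixpkgs.bash}/bin/bash"];
  User = "user";
  WorkingDir = "/home/user";
  Env = [
    "HOME=/home/user"
    "NIX_PAGER=cat"
    "USER=user"
    # guess, not a confirmed fix: point OpenSSL/curl and nix at the CA bundle
    "SSL_CERT_FILE=${nixpkgs.cacert}/etc/ssl/certs/ca-bundle.crt"
    "NIX_SSL_CERT_FILE=${nixpkgs.cacert}/etc/ssl/certs/ca-bundle.crt"
  ];
};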

Support emulating a single-user system

When using initializeNixDatabase, the container is set up in such a way that the nix CLI can only be used by the root user. This is because everything in /nix is owned by root and attempting to use any other user results in cascading errors. In order to overcome this, we need the ability to configure the container much like Nix would configure a single-user system. In other words, everything under /nix should be owned by the user.

Unfortunately, accomplishing this is non-trivial. I was able to get most of the way there with some exploratory hacking:

diff --git a/default.nix b/default.nix
index 730a479..618a150 100644
--- a/default.nix
+++ b/default.nix
@@ -288,10 +288,28 @@ let
       # This layer contains all config dependencies. We ignore the
       # configFile because it is already part of the image, as a
       # specific blob.
+
+      perms' = perms ++ [
+        {
+          path = nixDatabase;
+          regex = ".*";
+          uid = 1000;
+          gid = 1000;
+        }
+        {
+          path = nixDatabase;
+          regex = "/nix/var/nix";
+          mode = "0700";
+          uid = 1000;
+          gid = 1000;
+        }
+      ];
+
       customizationLayer = buildLayer {
-        inherit perms maxLayers;
+        inherit maxLayers;
+        perms = perms';
         copyToRoot = if initializeNixDatabase
-                   then copyToRootList ++ [nixDatabase]
+                   then [nixDatabase] ++ copyToRootList
                    else copyToRootList;
         deps = [configFile];
         ignore = configFile;

Here I set the owner of everything to uid 1000 and also fixed some incorrect permissions on /nix/var/nix. This gets us most of the way there; however, Nix still needs to write to /nix/store in order to create lock files. Here is where the problem lies: even if I were to set the owner of /nix/store to uid 1000 in the change above, it would immediately be overridden the first time a path was written to the store (since the default uid/gid is root's).

The only way I can see around this is to somehow always ensure a final layer is written that sets the owner of /nix/store to the desired user. Is this possible with the current implementation? The other solution is a lot more convoluted and would require intercepting writes to /nix/store and dynamically setting the owner.

The goal of this change is to allow running the nix CLI in containers as a non-root user.

cross-copying docker images

Hi! I have a CI environment where I have access to remote Nix builders of both amd64 and aarch64 varieties and would like to make multi-arch images. Because I have the builders available, I can do any build required, but the aarch64 image demands an aarch64 version of skopeo.

This is inconvenient, as it means I have to run a three-build setup: (build x86), (build aarch64), and then (merge their manifests into a multi-arch image), which adds some latency.

I'm wondering if it'd be possible to modify nix2container to use platform-native versions of the tools (e.g. skopeo-nix2container, your Go CLI) but reference an image for a different platform?

Ideally it'd be possible to actually even create a multi-arch image natively, so I don't have to do any manifest rewriting!

Support multiple registries

Similar to #59, an image is limited to a single registry. This is because skopeo uses the image name to determine where to publish the image when copyToRegistry is called (i.e., my-registry.com/image-name). Since the name can only be set once in the call to buildImage, pushing to multiple registries currently requires multiple derivations, which in turn would (I believe) change the image hash, so it is not a suitable approach.

Instead, we could add a registries option to buildImage, which automatically prefixes the image name with each registry and pushes the image accordingly. I don't claim to be an expert on this subject, so correct me if I'm wrong here.
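
As a sketch, with registries being the proposed (not yet existing) argument:

nix2container.buildImage {
  name = "image-name";
  registries = [
    "my-registry.com"
    "other-registry.example.org"
  ];
  # copyToRegistry would then push to both
  # my-registry.com/image-name and other-registry.example.org/image-name
}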

Popularity vs Volatility

The current algorithm is based on popularity with a lump-sum cut-off. This popularity is therefore a "global" popularity that optimizes registry-wide storage for the stably most popular packages.

However, the lump-sum contains the entry points of the directed graph (the least "popular" packages).

These packages are also the most volatile packages over time and within the context of an image name.

Let's devise a pigeon-hole algorithm that optimizes:

  • For popularity in the high-order (most popular) part of the ranking
  • For volatility in the low-order (least popular) part of the ranking
  • With a cut-off crunch sum in between

While certainly not the best statistical algorithm, it is a Pareto-efficient improvement when sum(layers) > maxLayers and a no-op otherwise.

Setting the "Created" Timestamp

Currently, nix2container does not set the Created timestamp. Is there a way to enable nix2container to set it?

I am asking because GitLab's cleanup policies use the Created timestamp to decide which images to keep. In our case, this results in no images being cleaned up, as GitLab eternally interprets them as "Published just now".
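
For comparison, nixpkgs' dockerTools.buildImage exposes a created argument (e.g. created = "now", at the cost of reproducibility). Something analogous is what I am after; the created argument below is hypothetical, not an existing nix2container option:

nix2container.buildImage {
  name = "hello";
  created = "now"; # hypothetical: mirrors dockerTools.buildImage's argument
  config = {
    entrypoint = [ "${pkgs.hello}/bin/hello" ];
  };
}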

Same path added to layer over and over

I'm trying to diagnose an issue I'm seeing with digest mismatches. This is the original buildImage call that I had:

buildImage {
  name = "core";
  # the base image must be pulled manually outside nix as it's from a private registry
  # and nixpkgs.dockerTools.pullImage doesn't support using login credentials
  fromImage = nixpkgs.runCommand "nix2container-alpine.json" { } ''
    ${nix2container-bin}/bin/nix2container image-from-dir $out ${./alpine}
  '';

  maxLayers = 100;
  copyToRoot = [
    (nixpkgs.symlinkJoin {
      name = "root";
      paths = utils ++ [ exe core-setup gui-assets ];
    })
  ];

  # put all the utils in a single layer. they will rarely change and having them in multiple
  # layers puts us over the 100 layer limit we set
  layers = [ (buildLayer { deps = utils; }) ];

  config = {
    WorkingDir = "/main";
  };
}

This worked pretty well as it put all the things in utils (bashInteractive, cacerts, iana-etc, coreutils) in a single layer and created symlinks in /bin et al. My exe and gui-assets were similarly included and linked. Further, the runtime dependencies of my exe appeared in separate layers, which is what I want because they will rarely change too.

To debug the issue I was having, I needed to add reproducible = false;, but buildImage doesn't support that directly. What I wanted to do was simply move everything into a layer, like this:

buildImage {
  name = "core";
  # the base image must be pulled manually outside nix as it's from a private registry
  # and nixpkgs.dockerTools.pullImage doesn't support using login credentials
  fromImage = nixpkgs.runCommand "nix2container-alpine.json" { } ''
    ${nix2container-bin}/bin/nix2container image-from-dir $out ${./alpine}
  '';

  layers = [
    (buildLayer {
      reproducible = false;
      maxLayers = 100;
      copyToRoot = [
        (nixpkgs.symlinkJoin {
          name = "root";
          paths = utils ++ [ exe core-setup gui-assets ];
        })
      ];

      # put all the utils in a single layer. they will rarely change and having them in multiple
      # layers puts us over the 100 layer limit we set
      layers = [ (buildLayer { deps = utils; }) ];
    })
  ];

  config = {
    WorkingDir = "/main";
  };
}

When I do that, I see the same path being added over and over

[2023-08-28 22:28:45] layers.json> INFO[0003] Adding 1 paths to layer (size:834097152 digest:sha256:3d908d03bd87b94fe7d22bfe70b1da714407dce6618a964c1aca1cc8d248b9ab)
[2023-08-28 22:28:51] layers.json> INFO[0009] Adding 1 paths to layer (size:834097152 digest:sha256:3d908d03bd87b94fe7d22bfe70b1da714407dce6618a964c1aca1cc8d248b9ab)
[2023-08-28 22:28:57] layers.json> INFO[0015] Adding 1 paths to layer (size:834097152 digest:sha256:3d908d03bd87b94fe7d22bfe70b1da714407dce6618a964c1aca1cc8d248b9ab)
[2023-08-28 22:29:03] layers.json> INFO[0021] Adding 1 paths to layer (size:834097152 digest:sha256:3d908d03bd87b94fe7d22bfe70b1da714407dce6618a964c1aca1cc8d248b9ab)

Is there a different way of structuring this to achieve the same thing?
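
For instance, is something like the following the intended structure: keep copyToRoot at the image level and mark only the utils layer as non-reproducible? (Whether buildLayer's reproducible flag covers enough of the image for my digest debugging is an open question; this is just a sketch.)

buildImage {
  name = "core";
  fromImage = nixpkgs.runCommand "nix2container-alpine.json" { } ''
    ${nix2container-bin}/bin/nix2container image-from-dir $out ${./alpine}
  '';
  maxLayers = 100;
  copyToRoot = [
    (nixpkgs.symlinkJoin {
      name = "root";
      paths = utils ++ [ exe core-setup gui-assets ];
    })
  ];
  # one explicitly non-reproducible layer for the rarely-changing utils
  layers = [ (buildLayer { deps = utils; reproducible = false; }) ];
  config = {
    WorkingDir = "/main";
  };
}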

Incompatibility with AWS Lambda

Hi, I have been trying to build a container with nix2container to use with AWS Lambda. Unfortunately, attempting to create a Lambda function with such a container results in the error MissingParentDirectory: Parent directory does not exist for file: /nix/store/<some folder in the container>. The same container produced with dockerTools.buildImage loads successfully.

I have created https://github.com/armeenm/n2c-lambda to help reproduce this issue. Please let me know if you need anything else to debug.

Thanks!

Image tag differs depending on OS

Currently, the image tag depends on which OS nix2container is executed on.

For example, with flake.nix:

{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
    nix2container.url = "github:nlewo/nix2container";
    flake-utils.url = "github:numtide/flake-utils";
  };
  outputs = { self, nixpkgs, nix2container, flake-utils }:
    flake-utils.lib.eachDefaultSystem (system: {
      packages.image = nix2container.packages.${system}.nix2container.buildImage {
        name = "registry.example.com";
        contents = [ nixpkgs.legacyPackages.x86_64-linux.bashInteractive ];
      };
    });
}

We get different tags for different systems:

 % nix eval .#packages.x86_64-linux.image.imageTag
"35gn4cdzaky8lvp576msc101djl8xz51"
 % nix eval .#packages.x86_64-darwin.image.imageTag
"bjnx63baqkxwb9317s1908yjvvg1r3l8"

This is caused by nix2container using the outPath of a pkgs.runCommand derivation, which depends on the system the scripts are generated for, not on the actual image inputs.

My use case is: I want to be able to push images from macOS and from Linux in the same way and get the same results.
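
For now, pinning an explicit tag works around the discrepancy, at the cost of losing the content-derived default:

packages.image = nix2container.packages.${system}.nix2container.buildImage {
  name = "registry.example.com";
  tag = "latest"; # an explicit tag sidesteps the system-dependent default
  contents = [ nixpkgs.legacyPackages.x86_64-linux.bashInteractive ];
};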
