I’ve been interested in Nix for a while. A build tool that manages the full dependency graph across languages in a declarative fashion sounds amazing. I came across a note on building Docker images for Rust applications. Soon afterwards, the note’s author also suggested I look into Nix. I tried using it to build images for my personal cluster and infrastructure, but struggled to get it right. I recently revisited the tool for the same Rust application discussed in that post, which refused to produce a statically-linked binary.

Nix, Windows, and Docker

Nix doesn’t run natively on Windows and can only run in single-user mode under WSL, so I thought I’d try it via Docker. After lots of trial and error, I arrived at a functional sequence of steps for building a flake:

  1. Start the Docker container:

    docker run -it --rm -v "$(pwd):/app" nixos/nix
    
  2. (In the container from this point on.) Enable Flakes and whatever nix-command is:

    echo 'experimental-features = nix-command flakes' >> /etc/nix/nix.conf
    
  3. Allow building non-free packages if your application needs it:

    export NIXPKGS_ALLOW_UNFREE=1
    
  4. Enter the app directory:

    cd /app
    
  5. Build the flake (--impure is required to read NIXPKGS_ALLOW_UNFREE, while --print-build-logs shows the full logs instead of hiding them in a single ever-changing line):

    nix build --impure --print-build-logs
    

Or, putting the prep in one command for quick iteration:

echo 'experimental-features = nix-command flakes' >> /etc/nix/nix.conf && export NIXPKGS_ALLOW_UNFREE=1 && cd /app

(I later discovered that NIXPKGS_ALLOW_UNFREE and --impure are unnecessary if I configure nixpkgs in my flake.)
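For the record, the in-flake configuration looks something like this, a sketch based on the standard nixpkgs config.allowUnfree option. It would replace the `pkgs = import nixpkgs ...` line in the flake below, making the environment variable and the --impure flag unnecessary:

```nix
# Sketch: allow unfree packages at evaluation time by configuring nixpkgs
# inside the flake, so NIXPKGS_ALLOW_UNFREE and --impure are not needed.
pkgs = import nixpkgs {
  inherit system overlays;
  config.allowUnfree = true;
};
```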

The biggest problem I encountered throughout is the well-known lack of comprehensive documentation with examples, made worse by the idiosyncratic syntax. I spent a lot of time cross-referencing the documentation (scattered between the Nix language, NixOS, and nixpkgs) with the source code of the packages I was using, search results in the nixpkgs repository, and obscure forum threads. This isn’t an uncommon experience in software development, but the problem seemed particularly severe here.

One thing that would have saved me a lot of time is knowing in advance that, by default, Nix only sees files Git is aware of, so you must at least stage new files if you want them included in the build.
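The behaviour is easy to reproduce with Git alone (a minimal sketch in a throwaway repository; `git ls-files` approximates what the flake source filter will see):

```shell
# Sketch: a flake's source only includes files Git knows about.
# An untracked file silently disappears from the build; staging fixes it.
cd "$(mktemp -d)"
git init -q
echo 'fn main() {}' > main.rs
git ls-files   # prints nothing: main.rs is untracked, so Nix would not see it
git add main.rs
git ls-files   # prints main.rs: staged, so it is now part of the flake source
```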

The build itself

I used this flake.nix for my build:

{
  description = "Package description";

  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
    flake-utils.url = "github:numtide/flake-utils";
    rust-overlay.url = "github:oxalica/rust-overlay";
  };

  outputs = { self, nixpkgs, flake-utils, rust-overlay }:
    flake-utils.lib.eachSystem ["x86_64-linux"] (system:
      let
        overlays = [ (import rust-overlay) ];
        pkgs = import nixpkgs { inherit system overlays; };
        rustVersion = (pkgs.rust-bin.fromRustupToolchainFile ./rust-toolchain.toml);
        rustPlatform = pkgs.makeRustPlatform {
          cargo = rustVersion;
          rustc = rustVersion;
        };

        appName = "myapp";

        appRustBuild = rustPlatform.buildRustPackage {
          pname = appName;
          version = "0.1.0";
          src = ./.;
          cargoLock.lockFile = ./Cargo.lock;
        };

        dockerImage = pkgs.dockerTools.buildImage {
          name = appName;
          config = { Entrypoint = [ "${appRustBuild}/bin/${appName}" ]; };
        };
      in
        { 
          packages = {
            rustPackage = appRustBuild;
            docker = dockerImage;
          };
          defaultPackage = dockerImage;
          devShell = pkgs.mkShell {
            buildInputs =
              [ (rustVersion.override { extensions = [ "rust-src" ]; }) ];
          };
        });
}

(I would love to know how to read the application name and version from Cargo.toml. Note that using Cmd rather than Entrypoint makes it impractical to pass arguments at runtime.)
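One approach that should work, though I haven’t verified it in this flake: Nix can parse the manifest itself at evaluation time with builtins.fromTOML. A sketch of what the let-bindings might look like:

```nix
# Untested sketch: read the name and version from Cargo.toml instead of
# hard-coding them. builtins.fromTOML exists in Nix 2.4+, which flakes
# already require.
cargoToml = builtins.fromTOML (builtins.readFile ./Cargo.toml);
appName = cargoToml.package.name;        # replaces appName = "myapp";
appVersion = cargoToml.package.version;  # would feed version = "0.1.0";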

nix build produces a tarball that can be loaded into Docker, but the result it leaves behind is a symlink to a Nix store path, rendering it unusable outside the Nix container. I used docker cp to copy the actual file out of the container and docker load -i nix-image.tar.gz to load it into Docker.
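The extraction can be sketched as follows, assuming the build container was started with a --name of nix-builder (the name and paths are illustrative, not from the original setup):

```shell
# On the host: resolve the result symlink inside the container, copy the
# real tarball out of the Nix store, then load it into the host's daemon.
docker cp "nix-builder:$(docker exec nix-builder readlink -f /app/result)" nix-image.tar.gz
docker load -i nix-image.tar.gz
```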

The Nix effect

The original image, built from a Dockerfile, is 88 MB. Even though it uses a multi-stage build, the final stage is based on Debian and has to install a bunch of packages. The Grype vulnerability scanner reports 143 vulnerabilities, none of them from my application. This is despite the dive inspection tool reporting an efficiency score of 98%.

In contrast, the Nix-based image is only 42 MB and contains exactly the required dependencies—nothing more. dive gives it an efficiency score of 100%. As you’d expect, Grype finds zero publicly-known vulnerabilities.[1] It works exactly like the Debian variant, and I didn’t have to use Alpine Linux, musl, or cargo-chef. Nor did I have to worry about static and dynamic linking.

On the whole, I’d say this was a big success, enough to convince me to install Nix in single-user mode under WSL instead of using it in Docker. The next step is switching the project’s pipelines from Docker to Nix.

The one limitation I’ve encountered is being unable to apply dynamic labels. I can add Label to the config I pass to buildLayeredImage, but passing external arguments to flakes is currently unimplemented, so there’s no way to use the output of docker/metadata-action with the flake. I’ll have to create a temporary Dockerfile deriving from the same image as part of my pipeline just to add my labels and re-tag it (at least this can be streamlined).
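The workaround amounts to a trivial relabel-only Dockerfile (a sketch; the image tag, build argument, and label are illustrative rather than taken from my pipeline):

```dockerfile
# Hypothetical relabel step: derive from the loaded Nix image unchanged
# and add labels supplied by the pipeline at build time.
FROM myapp:latest
ARG GIT_SHA=unknown
LABEL org.opencontainers.image.revision="${GIT_SHA}"
```

Building it with `docker build --build-arg GIT_SHA=$(git rev-parse HEAD) -t myapp:labelled .` then produces the same image with the labels applied.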

Addendum: CA certificates for TLS

I discovered the application was silently failing despite apparently successful output. It exited with an error code, so Kubernetes told me it had failed, but the output showed no issues.[2] The root cause was missing TLS certificates (No CA certificates found). I had to add the cacert package:

        dockerImage = pkgs.dockerTools.buildLayeredImage {
          name = appName;
          config = { Entrypoint = [ "${appRustBuild}/bin/${appName}" ]; };
          contents = [ appRustBuild pkgs.cacert ];
        };

Since I’m indirectly using the openssl-probe crate, I didn’t need to set the SSL_CERT_FILE variable, but this is how I could have done it:

        dockerImage = pkgs.dockerTools.buildLayeredImage {
          name = appName;
          config = {
            Entrypoint = [ "${appRustBuild}/bin/${appName}" ];
            Env = [ "SSL_CERT_FILE=${pkgs.cacert}/etc/ssl/certs/ca-bundle.crt" ];
          };
          contents = [ appRustBuild pkgs.cacert ];
        };

  1. Which doesn’t preclude vulnerabilities in my own application, obviously, but that’s unrelated to the choice of build tool.
  2. A mystery in its own right.