helm-kubeconform-action: Validate Helm Charts with Kubeconform on GitHub
I just released shivjm/helm-kubeconform-action, a GitHub Action that runs Kubeconform on all Helm charts in a given directory, with support for multiple test values files. As I wrote in the README:
> I needed an action to validate some Helm charts. nlamirault/helm-kubeconform-action doesn’t offer enough flexibility and downloads two Git repositories during execution. It was a good opportunity to try writing some bad Go and dip my toes into the world of writing GitHub Actions—specifically, a Docker container action.
## Switching to a Helm charts monorepo
All this began because I wanted to make it possible to install my webmentiond Helm chart remotely, without cloning the Git repository, like with any ordinary chart. To do that, I needed to upload it to a Helm charts repository. There’s a great tool called chart-releaser that lets you host one yourself from a GitHub repository using GitHub Pages, and a chart-releaser action to automate it. However, chart-releaser is designed to work with a Helm monorepo, i.e. a repository containing many charts.
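The workflow wiring for this can be fairly small. Here is a sketch based on chart-releaser-action’s documented usage (the branch name and version pins are illustrative, not taken from my repository):

```yaml
name: Release Charts
on:
  push:
    branches: [main]
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          # chart-releaser needs the full history to detect changed charts
          fetch-depth: 0
      - name: Configure Git
        run: |
          git config user.name "$GITHUB_ACTOR"
          git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
      - name: Run chart-releaser
        uses: helm/chart-releaser-action@v1.2.1
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
```

On each push, any chart whose version has changed is packaged, attached to a GitHub release, and indexed on the `gh-pages` branch.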
Inspired by the example of mvisonneau/helm-charts, I decided to move to a monorepo too. I renamed the webmentiond-helm repository to shivjm/helm-charts and moved the solitary chart into a subdirectory. I then wanted to use Kubeval to automatically test the charts, but it expects a single set of manifests, which means a single chart and set of values at a time. While I could have written a bit of Bash myself, borrowing from my PR for the gitlab-ci-pipelines-exporter, I wanted something more general and robust, so I looked around for an existing Kubeval GitHub Action. Before I ever came across the promising wiremind/helm-kubeval-action, I found Kubeconform:
> Kubeconform is a Kubernetes manifests validation tool. Build it into your CI to validate your Kubernetes configuration!
>
> It is inspired by, contains code from and is designed to stay close to Kubeval, but with the following improvements:
>
> - high performance: will validate & download manifests over multiple routines, caching downloaded files in memory
> - configurable list of remote, or local schemas locations, enabling validating Kubernetes custom resources (CRDs) and offline validation capabilities
> - uses by default a self-updating fork of the schemas registry maintained by the kubernetes-json-schema project - which guarantees up-to-date schemas for all recent versions of Kubernetes.
I liked the sound of that. However, the only existing GitHub integration I could find was nlamirault/helm-kubeconform-action. As I mentioned before, it doesn’t offer enough flexibility—for example, there’s no way to specify additional schema paths—and downloads two hard-coded Git repositories during every execution. I thought I’d submit a PR to address both issues, and soon forked the repository and adjusted the script. As I was creating the pull request, however, I found myself frustrated by the limitations of Bash scripting. Granted, this is a small script with minimal requirements, but I have an obsession with doing things ‘the right way’, and in this case I felt I could build something more generally useful without too much effort, so I decided to make my own.
Now, while I’m not particularly fond of or even familiar with the language, I think Go is well suited to these small projects. It doesn’t require as much thought and planning as Rust does, but it’s statically typed, fast, and capable of building standalone binaries. Given how popular it is even for major projects, I’m always looking for opportunities to improve my risible skills. This seemed like a good exercise, even if running in Docker would probably erase any performance benefits from compact standalone binaries.
## Writing some bad Go
I spent half a day poring over documentation and piecing together a simple program that essentially passes the output of `helm template` to Kubeconform. Although I wanted to use the Kubeconform API from Go rather than run the binary, doing so would have necessitated reimplementing much of the application’s existing CLI. I might look into it another time, along with using the Helm API instead of its CLI. At any rate, after I had a first working version that delegates to the Helm and Kubeconform binaries, I switched to parsing the configuration with caarlos0/env and added logging with zerolog.
Next, I added a multi-stage Dockerfile that copies Kubeconform, Helm, and the helm-kubeconform-action binary into a distroless image. I initially used the scratch image for maximal minimalism, but although this bundle requires no external libraries, it can’t connect to remote repositories and download schemas without the root CA certificates that distroless very conveniently provides.
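The shape of that Dockerfile is roughly as follows. This is an illustrative sketch, not the action’s actual Dockerfile; the image tags and binary paths are assumptions:

```dockerfile
# Borrow prebuilt binaries from official-ish images (tags illustrative).
FROM alpine/helm:3.7.1 AS helm
FROM ghcr.io/yannh/kubeconform:latest AS kubeconform

FROM golang:1.17 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /helm-kubeconform-action

# distroless/static ships root CA certificates, which scratch lacks,
# so HTTPS downloads of remote schemas work out of the box.
FROM gcr.io/distroless/static
COPY --from=helm /usr/bin/helm /usr/bin/helm
COPY --from=kubeconform /kubeconform /usr/bin/kubeconform
COPY --from=build /helm-kubeconform-action /helm-kubeconform-action
ENTRYPOINT ["/helm-kubeconform-action"]
```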
Speaking of downloading schemas, in my innocence, I thought I’d fetch the Kubernetes JSON schemas during the Docker build to avoid needing to fetch them again at runtime. Not only was this impossible because of the sheer size of the schema repository—the build reliably ran out of disk space both locally and on GitHub—it was also unnecessary, because Kubeconform will only fetch the individual schemas that are required at runtime. It doesn’t need the entire repository in the first place. What’s more, it transpires that GitHub’s treatment of Docker container actions would have rendered it a futile effort in any case; but I’ll come to that.
Once the Dockerfile was ready, I wrote a brief README and published the action again… and again, and again, repeatedly creating and deleting the same tags and releases to allow the updated code to be run. Using the helm-charts monorepo to test it throughout was a tiresome affair. I later refactored it into a `main` function that essentially loads the configuration and a `run` function that does the work so I could test the logic from Go itself, but I’ve yet to actually write any tests due to my unfamiliarity with the language and tooling (not to mention the anticipated tedium of devising test cases).
## Avoiding the Docker rebuild
What I didn’t know before starting is that a Docker container action is rebuilt on every execution, apparently without any caching. Here’s a snippet from the output of a successful run:
```
2021-09-27T11:09:46.5947637Z Download action repository 'firstname.lastname@example.org' (SHA:8342a82924e7fe1229efd8af2ed0b95e9ddaca6d)
2021-09-27T11:09:46.9876161Z ##[group]Build container for action use: '/home/runner/work/_actions/shivjm/helm-kubeconform-action/v0.0.1/Dockerfile'.
2021-09-27T11:09:46.9937496Z ##[command]/usr/bin/docker build -t e1cc51:e608e0dee2a648e39984e9224dabb2a5 -f "/home/runner/work/_actions/shivjm/helm-kubeconform-action/v0.0.1/Dockerfile" "/home/runner/work/_actions/shivjm/helm-kubeconform-action/v0.0.1"
2021-09-27T11:09:47.2402587Z Sending build context to Docker daemon  26.11kB
```
This happens before any steps are run. It didn’t take very long, as GitHub Actions runners are quite fast (compared to GitLab CI, at least) and the Dockerfile is simple, but it was still a nuisance, and it would have rendered my attempts to download the Kubernetes schemas in advance quite useless.
Luckily, when I first found Kubeconform, I had noted its example of using the Docker image directly. It was hard to find more information on the `docker://` syntax (though it’s possible I was looking in the wrong place). Still, with some experimentation, I was able to understand the fundamentals, with the result that I could offer a quicker (but slightly more complex) alternative. In light of this discovery, I may switch to Confita to allow supplying the configuration on the command line as well.
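With that syntax, a workflow step can point straight at a published image instead of the action repository, skipping the per-run Docker build entirely. A sketch of what this looks like (the image path and environment variable name are illustrative):

```yaml
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Validate charts
        # Pulling a prebuilt image avoids the Dockerfile build that
        # `uses: shivjm/helm-kubeconform-action@v0.0.1` would trigger.
        uses: docker://ghcr.io/shivjm/helm-kubeconform-action:v0.0.1
        env:
          CHARTS_DIRECTORY: charts
```

The trade-off is that `docker://` references are opaque to the Marketplace and must be updated by hand, since there is no tag resolution as with ordinary action references.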
## Using the action
You can find the action on the Marketplace as helm-kubeconform-with-schema-support. I’ve released it as version 0.1.0 because I wouldn’t consider it production ready, but I’m already using it for its intended purpose and I hope to improve it with time. If you use it, I’d love to hear about any issues or suggestions on GitHub, including ways to improve the Go.
It would be nice if the GitHub Container Registry didn’t require logging in to access public images, and if Docker container actions weren’t rebuilt on every execution, but this will do. A third alternative I might offer in future is downloading the binary and using it outside Docker, for which I would need to add cross-platform binaries to the releases, at which point I might move the core logic into a separate repository entirely—perhaps even as a Helm plugin. It’s already quite confusing:
- I use Git tags to mark releases, as is usual.
- Docker tags are automatically created along with every Git tag, but those images have their own simultaneously independent and dependent existence as packages under the main repository.
- GitHub releases are created manually from those tags but have their own independent existence.
- The Marketplace listing is based on the GitHub releases, and I don’t believe it cares about the Docker tags at all, since the action is rebuilt on every execution.
Adding the Go binary would only further complicate things.
Anyway, those are all worries for another day. I’ll consider that yak shaved for the moment. Now to get back to releasing ~~those Helm charts~~ that Helm chart.
- A unique name was required.