Once I had set up a few applications managed by Argo CD as part of rebuilding my Kubernetes cluster, I didn’t want to keep waiting for it to poll the repository at its configured interval before it picked up new changes. The solution was to set up a Git webhook, i.e. have the Git server make a request to a configured URL each time the relevant repository and branch are updated.
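Argo CD exposes a single webhook endpoint that understands the push payloads of several Git providers, so a delivery can even be simulated by hand. A rough sketch, where the hostname is a placeholder and the body is a minimal stand-in for GitLab’s push payload:

```shell
# Simulate a GitLab push event against Argo CD's webhook endpoint.
# argocd.example.com and the repository URL are placeholders.
curl -X POST https://argocd.example.com/api/webhook \
  -H 'Content-Type: application/json' \
  -H 'X-Gitlab-Event: Push Hook' \
  -d '{"ref": "refs/heads/main", "project": {"web_url": "https://gitlab.com/me/cluster-config"}}'
```

Argo CD dispatches on the `X-Gitlab-Event` header, so the same endpoint serves GitHub, GitLab, and others.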

This was my old, unnecessarily complicated process for setting up my Argo CD domain:

  1. Change the argocd-server Service to type: LoadBalancer.

  2. Get the external IP once it was up. The Terraform kustomization_resource couldn’t provide that information because it treats manifests as opaque strings that it has no special knowledge of. The kubernetes_service data source tries to retrieve the IP too early and fails.

    To solve this, I’d moved argocd-server out of the Kustomize manifests and installed it separately using the generic kubernetes_manifest resource with wait_for:

    resource "kubernetes_manifest" "argocd_server" {
      manifest = yamldecode(file("manifests/argocd-server.yaml"))
    
      wait_for = {
        fields = { "status.loadBalancer.ingress[0].ip" = "^(\\d+(\\.|$)){4}" }
      }
    }
  3. Update the DNS records via Terraform.
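The IP that step 2 waits for is the same one kubectl can read once the load balancer is provisioned — something like this, assuming the namespace and Service name from Argo’s stock install manifests:

```shell
# Print the external IP of the argocd-server Service once it exists
kubectl -n argocd get service argocd-server \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```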

I switched to automating the whole affair with external-dns. It initially wouldn’t pick up the temporary Ingress resources that cert-manager creates while provisioning certificates, so I had to remove domainFilters. I then added the Contour route.
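With external-dns in place, a DNS record can be driven by an annotation instead of Terraform. A minimal sketch, assuming external-dns is watching Services and `argocd.example.com` stands in for the real domain:

```shell
# Tell external-dns to create a record for the Service's load balancer IP
kubectl -n argocd annotate service argocd-server \
  'external-dns.alpha.kubernetes.io/hostname=argocd.example.com'
```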

I wanted to set up the webhook automatically, which could be done through a GitLab Personal Access Token (PAT). However, I didn’t want to give Terraform access to my entire account, and there was no way to restrict PATs to specific projects. I tried adding a separate user, but couldn’t work out how to make it a Guest in the group while also making it a Maintainer of the appropriate repository. Meanwhile, project access tokens are a paid feature.
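For completeness, the automation itself would have been a single call: GitLab’s project hooks API takes the target URL and the secret token together. A sketch, where the project ID, hostname, and environment variables are all placeholders:

```shell
# Create a push webhook on project 12345 pointing at Argo CD.
# $GITLAB_PAT and $WEBHOOK_SECRET are assumed to be set beforehand.
curl --request POST \
  --header "PRIVATE-TOKEN: $GITLAB_PAT" \
  --data-urlencode 'url=https://argocd.example.com/api/webhook' \
  --data 'push_events=true' \
  --data-urlencode "token=$WEBHOOK_SECRET" \
  'https://gitlab.com/api/v4/projects/12345/hooks'
```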

I decided to do it by hand. Configuring the webhook secret turned out to be problematic. I could create the Secret resource via Terraform, but I learned that Argo places its own data in the same Secret. Re-applying the Terraform configuration would overwrite that data, rendering Argo non-functional. I couldn’t leave the Secret out of Terraform entirely, since it was part of Argo’s manifests, and the provider doesn’t support ignore_changes. Perhaps it would have been useful to be able to configure the webhook’s path instead and treat that as the secret.
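One way to set just the one key without clobbering the rest: Argo CD reads the GitLab webhook secret from the `webhook.gitlab.secret` key of its `argocd-secret` Secret, and a merge patch adds a key while leaving the others intact. A sketch, assuming `$WEBHOOK_SECRET` is set in the environment:

```shell
# Add only the webhook key to argocd-secret, leaving Argo's own keys alone
kubectl -n argocd patch secret argocd-secret \
  --type merge \
  --patch "{\"stringData\": {\"webhook.gitlab.secret\": \"$WEBHOOK_SECRET\"}}"
```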

I explored some alternatives. One possibility was to run kubectl apply in a local-exec provisioner after installing Argo, in order to merge the data fields. That meant generating the YAML: creating a template, substituting the secret from the Terraform variable, and piping the result to kubectl. I got it to produce the file, but the next question was how to express the command so that both PowerShell (on my Windows machine) and Bash (on the server) would understand it.
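The Bash half of that pipeline might have looked like the following — the template name and variable are hypothetical, and `envsubst` comes from GNU gettext:

```shell
# Render the Secret template with the webhook secret, then apply the result.
# argocd-secret.yaml.tpl and TF_VAR_webhook_secret are assumed names.
export WEBHOOK_SECRET="$TF_VAR_webhook_secret"
envsubst '$WEBHOOK_SECRET' < argocd-secret.yaml.tpl | kubectl apply -f -
```

The PowerShell equivalent would need a different substitution mechanism entirely, which is exactly the portability problem described above.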

All of that seemed like too much work just to avoid a small delay when pushing changes to the repository. I ended up dropping the webhook altogether, and while I could revisit the idea now, I can’t say it’s been much of a hindrance in practice. I just refresh Applications on demand in the UI.
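When the UI round-trip gets old, the same refresh is available from the CLI (the application name here is a placeholder):

```shell
# Ask Argo CD to re-read the repository for one application right now
argocd app get my-app --refresh
```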

Next in series: (#10 in The Death and Rebirth of a Cluster)