
Run KinD in GitLab CI - Mon, Apr 25, 2022


Recently, while working on a pull request for the Prometheus Helm charts, I came across the GitHub kind-action, which allows you to spin up a Kubernetes cluster using KinD when a GitHub workflow starts.
In the case of the Helm charts, these were deployed and tested using the chart-testing tool whenever changes were pushed.
I wanted to do the same as part of a GitLab CI/CD pipeline. So here is the job from the .gitlab-ci.yml that does exactly that*:
* Beware though that this approach only works on GitLab runners that have privileged=true set in their config, which might raise security issues.
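For reference, privileged mode is typically enabled when registering a runner with the Docker executor (the URL and token below are placeholders):

gitlab-runner register \
    --non-interactive \
    --url https://gitlab.example.com/ \
    --registration-token <TOKEN> \
    --executor docker \
    --docker-image docker:20.10.14 \
    --docker-privileged

This sets privileged = true in the [runners.docker] section of the runner's config.toml. Now for the job itself: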

test:
  image: tom1299/docker:20.10.14-dind
  services:
    - name: tom1299/docker:20.10.14-dind
  stage: test
  before_script:
    # Create the cluster and wait for it to become ready
    - kind create cluster --wait 5m --config=./config.yaml
    # Attach the runner container to the kind network
    - export runner_name=$(docker ps --format '{{.Names}}' | grep -i runner)
    - docker network connect kind $runner_name
    # Point kubectl at the control plane via its DNS name
    - kubectl config set clusters.kind-kind.server https://kind-control-plane:6443
  script:
    - kubectl cluster-info
  tags:
    - docker_runner
  when: manual
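The config.yaml passed to kind create cluster is part of the gist mentioned below. A minimal single-node configuration could look like this (a sketch using the standard kind config format):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane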

All files used in this example can also be viewed in this gist. The creation of the cluster is done in the before_script part. The job uses GitLab CI with DinD. This means that we get at least two Docker containers: one for the CI job itself and one for each node of the Kubernetes cluster:

docker ps --format '{{.ID}}, {{.Names}}'
11acbebe6cbd, runner-0-93c5f296b4ab3e07
40b4d75e31a7, kind-control-plane

In addition, the runner and the kind containers do not share the same network:

docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
ca47e5e0e86c   bridge    bridge    local
029d1734e277   host      host      local
fbfd76203581   kind      bridge    local
bd5b4148d85d   none      null      local

Note the separate network kind for the cluster. In order to enable communication from the runner to the cluster, I had to connect the runner container to this network:

- export runner_name=$(docker ps --format '{{.Names}}' | grep -i runner)
- docker network connect kind $runner_name

This code gets the name of the runner's container and connects it to the kind network, which was created by the kind create cluster command. To check that the connection worked, you can list the containers attached to the network (a quick sanity check, assuming it is run inside the job):
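docker network inspect kind --format '{{range .Containers}}{{.Name}}{{"\n"}}{{end}}'

Both the runner container and kind-control-plane should show up in the output. The line: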

- kubectl config set clusters.kind-kind.server https://kind-control-plane:6443

alters the API server address of the context from 127.0.0.1 (which is the default) to the DNS name kind-control-plane, which is the hostname of KinD's API server. When everything works out fine, the script part should print something like this:

$ kubectl cluster-info
Kubernetes control plane is running at https://kind-control-plane:6443
CoreDNS is running at https://kind-control-plane:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

The cluster is running: mission accomplished. Beware though that the cluster will only be available as part of this job, so all testing and deployment code needs to be part of this job as well.
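For example, the script section could be extended to deploy and smoke-test a Helm chart (a sketch: the release name and chart path are made up, and helm would have to be installed in the image first):

script:
  - kubectl cluster-info
  - helm install my-release ./chart
  - kubectl wait --for=condition=available --timeout=120s deployment --all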

The Dockerfile

You may have noticed that I did not use the dind image directly but rather a derived one. That image does not contain any surprises. Its only purpose is to have kind and kubectl pre-installed in the dind image:

FROM docker:20.10.14-dind

# Install curl and the kind binary
RUN apk update && apk add curl && \
    curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.12.0/kind-linux-amd64 && \
    chmod +x ./kind && \
    mv ./kind /usr/local/bin/kind

# Install the latest stable kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
    chmod +x kubectl && \
    mv ./kubectl /usr/local/bin/kubectl

Other tools like helm or flux could be installed as well.
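For instance, helm could be added to the Dockerfile with another RUN instruction (a sketch; the version is just an example, pin whichever one you need):

RUN curl -sSL https://get.helm.sh/helm-v3.8.2-linux-amd64.tar.gz | tar -xzf - && \
    mv linux-amd64/helm /usr/local/bin/helm && \
    rm -rf linux-amd64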

Conclusion

Using KinD in a GitLab CI/CD pipeline can be done using Docker-in-Docker running on privileged runners. The advantages of using a cluster in CI/CD include:

  • A clean cluster for each pipeline. There are no leftovers from previous deployments or tests, making side effects less likely.
  • Tests closer to the actual runtime environment. Tests can be run in an environment that is more similar to the actual runtime, making them more reassuring.
  • Earlier testing of deployments. The deployment can be tested earlier in the CI/CD cycle, which is especially helpful for projects containing deployment artifacts like Helm charts.

As a trade-off, CI/CD runs might take longer since spinning up a cluster takes some time. So you should think about the jobs and stages you use this kind of approach for (e.g. only for merge requests).
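In GitLab CI such a restriction can be expressed with rules, for example (a sketch using the standard merge-request pipeline condition):

test:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: manual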
