
The Kubernetes executor

GitLab Runner can use Kubernetes to run builds on a Kubernetes cluster. This is possible with the use of the Kubernetes executor.

The Kubernetes executor, when used with GitLab CI, connects to the Kubernetes API in the cluster and creates a Pod for each GitLab CI job. This Pod is made up of, at the very least, a build container and an additional container for each service defined by the GitLab CI YAML. The names for these containers are as follows:

  - the build container is build
  - the services containers are svc-X, where X is [0-9]+

Note that when services and containers are running in the same Kubernetes Pod, they are all sharing the same localhost address. The following restrictions are then applicable:

  - The services are not accessible via their own hostnames; use localhost instead.
  - You cannot use several services using the same port (for example, you cannot have two mysql services at the same time).

Workflow

The Kubernetes executor divides the build into multiple steps:

  1. Prepare: Create the Pod against the Kubernetes Cluster. This creates the containers required for the build and services to run.
  2. Pre-build: Clone, restore cache and download artifacts from previous stages. This is run on a special container as part of the Pod.
  3. Build: User build.
  4. Post-build: Create cache, upload artifacts to GitLab. This also uses the special container as part of the Pod.

Connecting to the Kubernetes API

The following options are provided, which allow you to connect to the Kubernetes API:

  - host: optional Kubernetes apiserver host URL (auto-discovery attempted if not specified)
  - cert_file: optional Kubernetes apiserver user auth certificate
  - key_file: optional Kubernetes apiserver user auth private key
  - ca_file: optional Kubernetes apiserver ca certificate

The user account provided must have permission to create, list and attach to Pods in the specified namespace in order to function.
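
As a rough illustration, assuming RBAC is enabled in the cluster, a Role granting such permissions might look like the following sketch (the Role name and namespace are placeholders; your cluster may require additional resources or verbs):

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # placeholder name; the namespace should match the one the Runner uses
  name: gitlab-runner
  namespace: gitlab
rules:
  # manage the build Pods themselves
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch", "create", "delete"]
  # attaching/executing is needed to stream the build into the Pod
  - apiGroups: [""]
    resources: ["pods/exec", "pods/attach"]
    verbs: ["create"]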

If you are running the GitLab CI Runner within the Kubernetes cluster, you can omit all of the above fields to have the Runner auto-discover the Kubernetes API. This is the recommended approach.

If you are running it externally to the cluster, then you will need to set each of these keywords and make sure that the Runner has access to the Kubernetes API on the cluster.

The keywords

The following keywords help to define the behaviour of the Runner within Kubernetes:

Configuring executor Service Account

You can set the KUBERNETES_SERVICE_ACCOUNT environment variable or use the --service-account flag.
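
The same account can also be set in the Runner's config.toml via the service_account keyword; a minimal sketch (the account name here is illustrative):

[runners.kubernetes]
  service_account = "gitlab-runner"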

Overwriting Kubernetes Namespace

Additionally, the Kubernetes namespace can be overwritten in the .gitlab-ci.yml file, by using the variable KUBERNETES_NAMESPACE_OVERWRITE.

This approach allows you to create a new isolated namespace dedicated to CI purposes, and deploy a custom set of Pods. The Pods spawned by the Runner will be created in the overwritten namespace, for simple and straightforward access between containers during the CI stages.

variables:
  KUBERNETES_NAMESPACE_OVERWRITE: ci-${CI_COMMIT_REF_NAME}

Furthermore, to ensure only designated namespaces will be used during CI runs, set the namespace_overwrite_allowed configuration option to an appropriate regular expression. When left empty the overwrite behaviour is disabled.
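
For example, to allow only namespaces prefixed with ci-, matching the KUBERNETES_NAMESPACE_OVERWRITE value above, something along these lines could be used:

[runners.kubernetes]
  namespace_overwrite_allowed = "ci-.*"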

Overwriting Kubernetes Default Service Account

Additionally, the Kubernetes service account can be overwritten in the .gitlab-ci.yml file, by using the variable KUBERNETES_SERVICE_ACCOUNT_OVERWRITE.

This approach allows you to specify a service account that is attached to the namespace, which is useful when dealing with complex RBAC configurations.

variables:
  KUBERNETES_SERVICE_ACCOUNT_OVERWRITE: ci-service-account

This is useful when overwriting the namespace and RBAC is set up in the cluster.

To ensure only designated service accounts will be used during CI runs, set the service_account_overwrite_allowed configuration option or the KUBERNETES_SERVICE_ACCOUNT_OVERWRITE_ALLOWED environment variable to an appropriate regular expression. When left empty the overwrite behaviour is disabled.
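
For example, to allow only service accounts prefixed with ci-, a sketch of the relevant setting:

[runners.kubernetes]
  service_account_overwrite_allowed = "ci-.*"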

Setting Bearer Token to be Used When Making Kubernetes API calls

In conjunction with setting the namespace and service account as mentioned above, you may set the bearer token used when making API calls to create the build Pods. This allows project owners to use project secret variables to specify a bearer token. When specifying the bearer token, you must set the host configuration keyword.

variables:
  KUBERNETES_BEARER_TOKEN: thebearertokenfromanothernamespace
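
Note that the corresponding config.toml flag, bearer_token_overwrite_allowed, needs to be enabled for this overwrite to take effect; it also appears in the full example below:

[runners.kubernetes]
  bearer_token_overwrite_allowed = true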

Define keywords in the config.toml

Each of the keywords can be defined in the config.toml for the GitLab Runner.

Here is an example config.toml:

concurrent = 4

[[runners]]
  name = "Kubernetes Runner"
  url = "https://gitlab.com/ci"
  token = "......"
  executor = "kubernetes"
  [runners.kubernetes]
    host = "https://45.67.34.123:4892"
    cert_file = "/etc/ssl/kubernetes/api.crt"
    key_file = "/etc/ssl/kubernetes/api.key"
    ca_file = "/etc/ssl/kubernetes/ca.crt"
    namespace = "gitlab"
    namespace_overwrite_allowed = "ci-.*"
    bearer_token_overwrite_allowed = true
    privileged = true
    cpu_limit = "1"
    memory_limit = "1Gi"
    service_cpu_limit = "1"
    service_memory_limit = "1Gi"
    helper_cpu_limit = "500m"
    helper_memory_limit = "100Mi"
    poll_interval = 5
    poll_timeout = 3600
    [runners.kubernetes.node_selector]
      gitlab = "true"

Using volumes

As described earlier, volumes can be mounted in the build container. At this time hostPath, PVC, configMap, secret, and emptyDir volume types are supported. Users can configure any number of volumes for each of the mentioned types.

Here is an example configuration:

concurrent = 4

[[runners]]
  # usual configuration
  executor = "kubernetes"
  [runners.kubernetes]
    [[runners.kubernetes.volumes.host_path]]
      name = "hostpath-1"
      mount_path = "/path/to/mount/point"
      read_only = true
      host_path = "/path/on/host"
    [[runners.kubernetes.volumes.host_path]]
      name = "hostpath-2"
      mount_path = "/path/to/mount/point_2"
      read_only = true
    [[runners.kubernetes.volumes.pvc]]
      name = "pvc-1"
      mount_path = "/path/to/mount/point1"
    [[runners.kubernetes.volumes.config_map]]
      name = "config-map-1"
      mount_path = "/path/to/directory"
      [runners.kubernetes.volumes.config_map.items]
        "key_1" = "relative/path/to/key_1_file"
        "key_2" = "key_2"
    [[runners.kubernetes.volumes.secret]]
      name = "secrets"
      mount_path = "/path/to/directory1"
      read_only = true
      [runners.kubernetes.volumes.secret.items]
        "secret_1" = "relative/path/to/secret_1_file"
    [[runners.kubernetes.volumes.empty_dir]]
      name = "empty_dir"
      mount_path = "/path/to/empty_dir"
      medium = "Memory"

Host Path volumes

HostPath volume configuration instructs Kubernetes to mount a specified host path inside of the container. The volume can be configured with the following options:

| Option | Type | Required | Description |
|--------|------|----------|-------------|
| name | string | yes | The name of the volume |
| mount_path | string | yes | Path inside of the container where the volume should be mounted |
| host_path | string | no | Host's path that should be mounted as the volume. If not specified, it is set to the same path as mount_path. |
| read_only | boolean | no | Sets the volume in read-only mode (defaults to false) |

PVC volumes

PVC volume configuration instructs Kubernetes to use a PersistentVolumeClaim that is defined in the Kubernetes cluster and mount it inside of the container. The volume can be configured with the following options:

| Option | Type | Required | Description |
|--------|------|----------|-------------|
| name | string | yes | The name of the volume and, at the same time, the name of the PersistentVolumeClaim that should be used |
| mount_path | string | yes | Path inside of the container where the volume should be mounted |
| read_only | boolean | no | Sets the volume in read-only mode (defaults to false) |

Config Map volumes

ConfigMap volume configuration instructs Kubernetes to use a configMap that is defined in the Kubernetes cluster and mount it inside of the container. The volume can be configured with the following options:

| Option | Type | Required | Description |
|--------|------|----------|-------------|
| name | string | yes | The name of the volume and, at the same time, the name of the configMap that should be used |
| mount_path | string | yes | Path inside of the container where the volume should be mounted |
| read_only | boolean | no | Sets the volume in read-only mode (defaults to false) |
| items | map[string]string | no | Key-to-path mapping for keys from the configMap that should be used |

When using a configMap volume, each key from the selected configMap will be turned into a file stored inside of the selected mount path. By default all keys are present, the configMap's key is used as the file's name, and the value is stored as the file's content. The default behavior can be changed with the items option.

The items option defines a mapping between the key that should be used and the path (relative to the volume's mount path) where the configMap's value should be saved. When using the items option, only the selected keys will be added to the volumes and all others will be skipped.

Notice: If a non-existing key is used, the job will fail at the Pod creation stage.

Secret volumes

Secret volume configuration instructs Kubernetes to use a secret that is defined in the Kubernetes cluster and mount it inside of the container. The volume can be configured with the following options:

| Option | Type | Required | Description |
|--------|------|----------|-------------|
| name | string | yes | The name of the volume and, at the same time, the name of the secret that should be used |
| mount_path | string | yes | Path inside of the container where the volume should be mounted |
| read_only | boolean | no | Sets the volume in read-only mode (defaults to false) |
| items | map[string]string | no | Key-to-path mapping for keys from the secret that should be used |

When using a secret volume, each key from the selected secret will be turned into a file stored inside of the selected mount path. By default all keys are present, the secret's key is used as the file's name, and the value is stored as the file's content. The default behavior can be changed with the items option.

The items option defines a mapping between the key that should be used and the path (relative to the volume's mount path) where the secret's value should be saved. When using the items option, only the selected keys will be added to the volumes and all others will be skipped.

Notice: If a non-existing key is used, the job will fail at the Pod creation stage.

Empty Dir volumes

emptyDir volume configuration instructs Kubernetes to mount an empty directory inside of the container. The volume can be configured with the following options:

| Option | Type | Required | Description |
|--------|------|----------|-------------|
| name | string | yes | The name of the volume |
| mount_path | string | yes | Path inside of the container where the volume should be mounted |
| medium | string | no | "Memory" will provide a tmpfs, otherwise it defaults to the node disk storage (defaults to "") |

Using Docker in your builds

There are a couple of caveats when using Docker in your builds while running on a Kubernetes cluster. Most of these issues are already discussed in the Using Docker Build section of the GitLab CI documentation, but it is worth revisiting them here as you might run into slightly different behaviour when running on your cluster.

Exposing /var/run/docker.sock

Exposing your host's /var/run/docker.sock into your build container, using the runners.kubernetes.volumes.host_path option, brings the same risks as it does elsewhere: that node's containers become accessible from the build container, so if you are running builds in the same cluster as your production containers, it might not be wise to do that.
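
For reference, such a mount could be configured along these lines (the volume name is illustrative):

[runners.kubernetes]
  [[runners.kubernetes.volumes.host_path]]
    name = "docker-sock"
    mount_path = "/var/run/docker.sock"
    host_path = "/var/run/docker.sock"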

Using docker:dind

Running docker:dind, also known as the docker-in-docker image, is also possible, but sadly it needs the containers to be run in privileged mode. If you're willing to take that risk, other problems will arise that might not seem as straightforward at first glance. Because the Docker daemon is started as a service, usually in your .gitlab-ci.yml, it will be run as a separate container in your Pod. Basically, containers in Pods only share volumes assigned to them and an IP address, by which they can reach each other using localhost. /var/run/docker.sock is not shared by the docker:dind container, and the docker binary tries to use it by default. To overwrite this and make the client use TCP to contact the Docker daemon in the other container, be sure to include DOCKER_HOST=tcp://localhost:2375 in your environment variables of the build container.
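
Putting this together, a minimal .gitlab-ci.yml sketch might look like the following (the job name, image tag and build command are illustrative, and the Runner must be configured with privileged = true):

build:
  image: docker:latest
  services:
    - docker:dind
  variables:
    # talk to the daemon in the service container over TCP
    DOCKER_HOST: tcp://localhost:2375
  script:
    - docker build -t my-image .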

Not supplying git

Do not try to use an image that doesn't supply git while also setting the GIT_STRATEGY=none environment variable for a job that you think doesn't need to do a fetch or clone. Because Pods are ephemeral and do not keep the state of previously run jobs, your checked-out code will not exist in either the build container or the Docker service container. Errors you might run into are things like could not find git binary, and the Docker service complaining that it cannot follow some symlinks into your build context because of the missing code.

Resource separation

In both the docker:dind and /var/run/docker.sock cases, the Docker daemon has access to the underlying kernel of the host machine. This means that any limits that had been set in the Pod will not work when building Docker images. The Docker daemon will report the full capacity of the node regardless of the limits imposed on the Docker build containers spawned by Kubernetes.

One way to help minimize the exposure of the host's kernel to any build container, when running in privileged mode or when exposing /var/run/docker.sock, is to use the node_selector option to set one or more labels that have to match a node before any containers are deployed to it. For example, build containers may only run on nodes that are labeled with role=ci, while all other production services run on other nodes.
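
A sketch of such a selector in config.toml, using the role=ci label from the example above:

[runners.kubernetes]
  [runners.kubernetes.node_selector]
    role = "ci"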

