Group-level Kubernetes clusters

Introduced in GitLab 11.6.

Warning: Group Cluster integration is currently in Beta.

Overview

Similar to project Kubernetes clusters, Group-level Kubernetes clusters allow you to connect a Kubernetes cluster to your group, enabling you to use the same cluster across multiple projects.

Installing applications

GitLab provides a one-click install for various applications that can be added directly to your cluster.

Note: Applications are installed in a dedicated namespace called gitlab-managed-apps. If you add an existing Kubernetes cluster that already has Tiller installed, be aware that GitLab cannot detect it. Installing Tiller via the applications list will result in the cluster running two copies, which can lead to confusion during deployments.
| Application | GitLab version | Description | Helm Chart |
|-------------|----------------|-------------|------------|
| Helm Tiller | 10.2+ | Helm is a package manager for Kubernetes and is required to install all the other applications. It is installed in its own pod inside the cluster, which can run the helm CLI in a safe environment. | n/a |
| Ingress | 10.2+ | Ingress can provide load balancing, SSL termination, and name-based virtual hosting. It acts as a web proxy for your applications and is useful if you want to use Auto DevOps or deploy your own web apps. | stable/nginx-ingress |

RBAC compatibility

For each project under a group with a Kubernetes cluster, GitLab will create a restricted service account with edit privileges in the project namespace.
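The objects GitLab creates are equivalent to a standard Kubernetes service account bound to the built-in edit ClusterRole, scoped to the project's namespace. The sketch below is illustrative only — the names are hypothetical, and GitLab creates and manages these objects itself:

```yaml
# Illustrative only: GitLab provisions equivalent objects automatically.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-project-service-account   # hypothetical name
  namespace: my-project-namespace    # the project's namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-project-edit
  namespace: my-project-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                         # built-in Kubernetes ClusterRole
subjects:
  - kind: ServiceAccount
    name: my-project-service-account
    namespace: my-project-namespace
```

Because the RoleBinding (rather than a ClusterRoleBinding) is used, the edit privileges apply only inside the project's namespace.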

Note: RBAC support was introduced in GitLab 11.4, and Project namespace restriction was introduced in GitLab 11.5.

Cluster precedence

If a project's cluster is available and not disabled, GitLab will use it before any cluster belonging to the group containing the project.

In the case of sub-groups, GitLab will use the cluster of the closest ancestor group to the project, provided the cluster is not disabled.

Multiple Kubernetes clusters

With GitLab Premium, you can associate more than one Kubernetes cluster with your group. That way you can have different clusters for different environments, like dev, staging, production, and so on.

Add another cluster in the same way as the first one, and make sure to set an environment scope that differentiates the new cluster from the rest.

Note: Auto DevOps is not supported for a group with multiple clusters, as it is not possible to set AUTO_DEVOPS_DOMAIN per environment at the group level. This will be resolved in the future with the following issue.

Environment scopes

When adding more than one Kubernetes cluster to your project, you need to differentiate them with an environment scope. The environment scope associates clusters with environments, similar to how environment-specific variables work.

When evaluating which cluster's environment scope matches an environment, cluster precedence takes effect: the cluster at the project level takes precedence, followed by the closest ancestor group, followed by that group's parent, and so on.

For example, let’s say we have the following Kubernetes clusters:

| Cluster | Environment scope | Where |
|---------|-------------------|-------|
| Project | * | Project |
| Staging | staging/* | Project |
| Production | production/* | Project |
| Test | test | Group |
| Development | * | Group |

And the following environments are set in .gitlab-ci.yml:

stages:
- test
- deploy

test:
  stage: test
  script: sh test

deploy to staging:
  stage: deploy
  script: make deploy
  environment:
    name: staging/$CI_COMMIT_REF_NAME
    url: https://staging.example.com/

deploy to production:
  stage: deploy
  script: make deploy
  environment:
    name: production/$CI_COMMIT_REF_NAME
    url: https://example.com/

The result will then be:

  • The Project cluster will be used for the test job.
  • The Staging cluster will be used for the deploy to staging job.
  • The Production cluster will be used for the deploy to production job.
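The matching above can be sketched in a few lines. This is a simplified illustration of the precedence rules described in this section, not GitLab's actual implementation; the cluster list mirrors the example table, with level 0 for the project and level 1 for the group:

```python
from fnmatch import fnmatchcase

# (name, environment scope, level); level 0 = project, 1 = group.
CLUSTERS = [
    ("Project", "*", 0),
    ("Staging", "staging/*", 0),
    ("Production", "production/*", 0),
    ("Test", "test", 1),
    ("Development", "*", 1),
]

def cluster_for(environment):
    """Return the name of the cluster whose scope matches the environment.

    Lower levels (project before group) take precedence. Within a level,
    an exact scope beats a wildcard, and a longer literal prefix beats a
    bare "*".
    """
    for level in sorted({lvl for _, _, lvl in CLUSTERS}):
        matches = [
            (name, scope)
            for name, scope, lvl in CLUSTERS
            if lvl == level and fnmatchcase(environment, scope)
        ]
        if matches:
            # Most specific scope first: exact match, then longest prefix.
            matches.sort(
                key=lambda m: (m[1] == environment, len(m[1].rstrip("*"))),
                reverse=True,
            )
            return matches[0][0]
    return None
```

For example, `cluster_for("staging/master")` picks the Staging cluster, because at the project level the scope `staging/*` is more specific than `*`.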