- Installing applications
- RBAC compatibility
- Cluster precedence
- Multiple Kubernetes clusters
- GitLab-managed clusters
- Base domain
- Environment scopes
- Cluster environments
- Security of Runners
Introduced in GitLab 11.6.
Similar to project-level and instance-level Kubernetes clusters, group-level Kubernetes clusters allow you to connect a Kubernetes cluster to your group, enabling you to use the same cluster across multiple projects.
## Installing applications

GitLab can install and manage some applications in your group-level cluster. For more information on installing, upgrading, uninstalling, and troubleshooting applications for your group cluster, see GitLab Managed Apps.
## RBAC compatibility

For each project under a group with a Kubernetes cluster, GitLab creates a restricted service account with `edit` privileges in the project namespace.
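For illustration, the resources GitLab creates are roughly equivalent to the following manifests. The names and namespace below are placeholders, not GitLab’s actual naming scheme:

```yaml
# Illustrative sketch only: GitLab generates its own resource names.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: project-service-account   # placeholder
  namespace: project-namespace    # the project namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: project-service-account-binding   # placeholder
  namespace: project-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit          # the built-in Kubernetes "edit" ClusterRole
subjects:
  - kind: ServiceAccount
    name: project-service-account
    namespace: project-namespace
```

Because the RoleBinding is namespaced, the service account’s `edit` access is limited to the project namespace rather than the whole cluster.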
## Cluster precedence

If the project’s cluster is available and not disabled, GitLab uses it before any cluster belonging to the group containing the project. In the case of sub-groups, GitLab uses the cluster of the closest ancestor group to the project, provided that cluster is not disabled.
## Multiple Kubernetes clusters

With GitLab Premium, you can associate more than one Kubernetes cluster with your group. That way you can have different clusters for different environments, such as development, staging, and production.

Add another cluster, similar to the first one, and make sure to set an environment scope that differentiates the new cluster from the rest.
## GitLab-managed clusters

You can choose to allow GitLab to manage your cluster for you. If your cluster is managed by GitLab, resources for your projects are created automatically. See the Access controls section for details on which resources are created.
For clusters not managed by GitLab, project-specific resources are not created automatically. If you are using Auto DevOps for deployments with a cluster not managed by GitLab, you must ensure:

- The project’s deployment service account has permissions to deploy to `KUBE_NAMESPACE`.
- `KUBECONFIG` correctly reflects any changes to `KUBE_NAMESPACE` (this is not automatic). Editing `KUBE_NAMESPACE` directly is discouraged.
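For example, on a cluster not managed by GitLab you might grant the deployment service account `edit` access to the deployment namespace with a RoleBinding like the one below. All names here are placeholders; substitute your own service account and the namespace your `KUBECONFIG` points at:

```yaml
# Illustrative sketch only: adjust names to your cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-deploy-binding     # placeholder
  namespace: my-deploy-namespace  # the namespace used for deployments
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit          # the built-in Kubernetes "edit" ClusterRole
subjects:
  - kind: ServiceAccount
    name: deploy-service-account  # placeholder
    namespace: my-deploy-namespace
```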
### Clearing the cluster cache

Introduced in GitLab 12.6.
If you choose to allow GitLab to manage your cluster for you, GitLab stores a cached version of the namespaces and service accounts it creates for your projects. If you modify these resources in your cluster manually, this cache can fall out of sync with your cluster, which can cause deployment jobs to fail.
To clear the cache:
1. Navigate to your group’s Kubernetes page, and select your cluster.
1. Expand the Advanced settings section.
1. Click Clear cluster cache.
## Base domain

Introduced in GitLab 11.8.
Domains at the cluster level permit support for multiple domains per multiple Kubernetes clusters. When specifying a domain, it is automatically set as an environment variable (`KUBE_INGRESS_BASE_DOMAIN`) during the Auto DevOps stages.
The domain should have a wildcard DNS configured to the Ingress IP address.
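For example, a wildcard record in a BIND-style zone file might look like the following. The domain and IP address are placeholders; point the record at your Ingress controller’s external IP:

```plaintext
*.example.com.  300  IN  A  203.0.113.10
```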
## Environment scopes

When adding more than one Kubernetes cluster to your project, you need to differentiate them with an environment scope. The environment scope associates clusters with environments, similar to how environment-specific variables work.

When evaluating which environment matches the environment scope of a cluster, cluster precedence takes effect: the cluster at the project level takes precedence, followed by the closest ancestor group, followed by that group’s parent, and so on.
For example, let’s say we have the following Kubernetes clusters:

| Cluster name | Environment scope |
|--------------|-------------------|
| Project cluster | `*` |
| Staging cluster | `staging/*` |
| Production cluster | `production/*` |

And the following environments are set in `.gitlab-ci.yml`:
```yaml
stages:
  - test
  - deploy

test:
  stage: test
  script: sh test

deploy to staging:
  stage: deploy
  script: make deploy
  environment:
    name: staging/$CI_COMMIT_REF_NAME
    url: https://staging.example.com/

deploy to production:
  stage: deploy
  script: make deploy
  environment:
    name: production/$CI_COMMIT_REF_NAME
    url: https://example.com/
```
The result will then be:
- The Project cluster will be used for the `test` job.
- The Staging cluster will be used for the `deploy to staging` job.
- The Production cluster will be used for the `deploy to production` job.
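The scope matching above can be sketched in a few lines of Python. This is an illustration only, using `fnmatch`-style wildcards, not GitLab’s actual implementation:

```python
from fnmatch import fnmatchcase

# Cluster scopes from the example above.
scopes = {
    "Project cluster": "*",
    "Staging cluster": "staging/*",
    "Production cluster": "production/*",
}

def matching_clusters(environment):
    """Names of clusters whose environment scope matches the name."""
    return [name for name, scope in scopes.items()
            if fnmatchcase(environment, scope)]

# For a branch "main", the staging job's environment is "staging/main".
print(matching_clusters("staging/main"))  # ['Project cluster', 'Staging cluster']
```

As the result list above shows, when both `*` and `staging/*` match, the more specific Staging cluster is used for the staging deployment rather than the catch-all `*` cluster.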
## Security of Runners

For important information about securely configuring GitLab Runners, see the Security of Runners documentation for project-level clusters.