- Prerequisites
- Installing the GitLab Operator
- Recommended next steps
- Uninstall the GitLab Operator
- Troubleshoot the GitLab Operator
Installation

See the Minimal to Viable Epic for more information.

This document describes how to deploy the GitLab Operator via manifests in your Kubernetes or OpenShift cluster.
If using OpenShift, these steps are typically handled by the Operator Lifecycle Manager (OLM) once an operator bundle is published. However, to test the most recent operator images, users may need to install the operator using the deployment manifests available in the operator repository.
Prerequisites
- Create or use an existing Kubernetes or OpenShift cluster
- Install prerequisite services and software
- Configure Domain Name Services
Cluster
To create a traditional Kubernetes cluster, consider using official tooling or your preferred method of installation.
The GitLab Operator supports the following Kubernetes versions:
- A cluster running Kubernetes 1.20 or newer is required for all components to work.
- 1.26 support is fully tested as of Operator 0.24.0.

Operator Version | Minimum Kubernetes version | Maximum Kubernetes version | Partially tested Kubernetes version(s) |
---|---|---|---|
0.24.0 | 1.20 | 1.26 | 1.27, 1.28 |
The last column lists newer versions of Kubernetes that have undergone initial testing but are not yet fully validated. You can track progress toward support for new Kubernetes versions in Epic 11331.
The GitLab Operator aims to support new minor Kubernetes versions four months after their initial release. We welcome reports of any compatibility issues with releases newer than those listed above in our issue tracker.
Some GitLab features might not work on versions older than the versions listed above.
For some components, like the agent for Kubernetes and GitLab Charts, GitLab might support different cluster versions.
To create an OpenShift cluster, see the OpenShift cluster setup documentation for an example of how to create a development environment.
GitLab Operator supports OpenShift 4.10 through 4.13.
Cluster nodes must use the x86-64 architecture. Support for multiple architectures, including AArch64/ARM64, is under active development. See issue 2899 for more information.
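Before installing, you can quickly confirm that your nodes meet these version and architecture requirements. This is an optional sketch using standard `kubectl` output columns, not a required step:

```shell
# List each node's kubelet version and CPU architecture (expect amd64 / x86-64)
kubectl get nodes -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion,ARCH:.status.nodeInfo.architecture
```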
Ingress controller
An Ingress controller is required to provide external access to the application and secure communication between components.
The GitLab Operator deploys our forked NGINX chart from the GitLab Helm Chart by default.
If you prefer to use an external Ingress controller, use NGINX Ingress by the Kubernetes community to deploy an Ingress controller. Follow the relevant instructions in the link based on your platform and preferred tooling. Take note of the Ingress class value for later (it typically defaults to `nginx`).

When configuring the GitLab CR, be sure to set `nginx-ingress.enabled=false` to disable the NGINX objects from the GitLab Helm Chart.
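For example, a minimal sketch of the relevant values in the GitLab CR when using an external controller. The `global.ingress.class` key and the `nginx` class value are assumptions here; adjust them to match your controller:

```yaml
apiVersion: apps.gitlab.com/v1beta1
kind: GitLab
metadata:
  name: gitlab
spec:
  chart:
    values:
      nginx-ingress:
        enabled: false     # disable the NGINX objects bundled with the GitLab Helm Chart
      global:
        ingress:
          class: nginx     # Ingress class of your external controller (assumed value)
```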
TLS certificates
The operator uses cert-manager to create a certificate for its Kubernetes webhook. You should use cert-manager for the GitLab certificates as well.
Because the operator needs a certificate for the Kubernetes webhook, you can’t use the cert-manager bundled with the GitLab Chart. Instead, install cert-manager before you install the operator.
To install cert-manager, see the installation documentation for your platform and tooling.
Our codebase targets cert-manager 1.6.1.
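As a sketch for a plain Kubernetes cluster, one common approach is to apply the upstream cert-manager release manifest for the targeted version. Follow the cert-manager documentation for your platform if you use different tooling:

```shell
# Install cert-manager 1.6.1 (CRDs, controller, webhook, cainjector) into its own namespace
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.6.1/cert-manager.yaml
```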
Metrics
Install the metrics server so the HorizontalPodAutoscalers can retrieve pod metrics.
OpenShift ships with Prometheus Adapter by default, so there is no manual action required here.
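On a plain Kubernetes cluster, a minimal sketch using the upstream metrics-server manifest (check the metrics-server project for the release appropriate to your cluster version):

```shell
# Install metrics-server so HorizontalPodAutoscalers can read pod CPU and memory metrics
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
```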
Configure Domain Name Services
You need an internet-accessible domain to which you can add a DNS record.
See our networking and DNS documentation for more details on connecting your domain to the GitLab components. You use the configuration mentioned in this section when defining your GitLab custom resource (CR).
Ingress in OpenShift requires extra consideration. See our notes on OpenShift Ingress for more information.
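Once your DNS record is in place, you can confirm that the GitLab hostname resolves before continuing. This sketch assumes the `example.com` domain used later in this guide and the default `gitlab.` host prefix:

```shell
# Confirm the GitLab hostname resolves to your Ingress controller's external address
dig +short gitlab.example.com
```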
Installing the GitLab Operator
- Deploy the GitLab Operator.

```shell
GL_OPERATOR_VERSION=<your_desired_version> # https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/releases
PLATFORM=kubernetes # or "openshift"

kubectl create namespace gitlab-system
kubectl apply -f https://gitlab.com/api/v4/projects/18899486/packages/generic/gitlab-operator/${GL_OPERATOR_VERSION}/gitlab-operator-${PLATFORM}-${GL_OPERATOR_VERSION}.yaml
```
This command first deploys the service accounts, roles and role bindings used by the operator, and then the operator itself.
By default, the Operator watches the namespace where it is deployed. To instead watch at the cluster scope, remove the `WATCH_NAMESPACE` environment variable from the Deployment in the manifest (under `spec.template.spec.containers[0].env`) and re-run the `kubectl apply` command above. Running the Operator at the cluster scope is considered experimental. See issue #100 for more information.

Experimental: Alternatively, deploy the GitLab Operator via Helm.

```shell
helm repo add gitlab-operator https://gitlab.com/api/v4/projects/18899486/packages/helm/stable
helm repo update
helm install gitlab-operator gitlab-operator/gitlab-operator --create-namespace --namespace gitlab-system
```
- Create a GitLab custom resource (CR).

Create a new file named something like `mygitlab.yaml`. Here is an example of the content to put in this file:

```yaml
apiVersion: apps.gitlab.com/v1beta1
kind: GitLab
metadata:
  name: gitlab
spec:
  chart:
    version: "X.Y.Z" # https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/blob/0.8.1/CHART_VERSIONS
    values:
      global:
        hosts:
          domain: example.com # use a real domain here
        ingress:
          configureCertmanager: true
      certmanager-issuer:
        email: youremail@example.com # use your real email address here
```
For more details on configuration options to use under `spec.chart.values`, see the GitLab Helm Chart documentation.

- Deploy a GitLab instance using your new GitLab CR.

```shell
kubectl -n gitlab-system apply -f mygitlab.yaml
```
This command submits your GitLab CR to the cluster for the GitLab Operator to reconcile. You can watch the progress by tailing the logs from the controller pod:

```shell
kubectl -n gitlab-system logs deployment/gitlab-controller-manager -c manager -f
```
You can also list GitLab resources and check their status:
```shell
$ kubectl -n gitlab-system get gitlab
NAME     STATUS    VERSION
gitlab   Running   5.2.4
```
When the CR is reconciled (the status of the GitLab resource is `Running`), you can access GitLab in your browser at `https://gitlab.example.com`.

To log in, you need to retrieve the initial root password for your deployment. See the Helm Chart documentation for further instructions.
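As a sketch, assuming a GitLab CR named `gitlab` as in the example above, the initial password is typically stored in a Secret following the chart's `<name>-gitlab-initial-root-password` naming pattern; confirm the exact name against the Helm Chart documentation:

```shell
# Print the initial root password (Secret name assumes a GitLab CR named "gitlab")
kubectl -n gitlab-system get secret gitlab-gitlab-initial-root-password \
  -o jsonpath='{.data.password}' | base64 --decode
```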
Recommended next steps
After completing your installation, consider taking the recommended next steps, including authentication options and sign-up restrictions.
Uninstall the GitLab Operator
Follow the steps below to remove the GitLab Operator and its associated resources.
Items to note prior to uninstalling the operator:
- The operator does not delete the Persistent Volume Claims or Secrets when a GitLab instance is deleted.
- When deleting the Operator, the namespace where it is installed (`gitlab-system` by default) is not deleted automatically. This ensures that persistent volumes are not lost unintentionally.
Uninstall an instance of GitLab
```shell
kubectl -n gitlab-system delete -f mygitlab.yaml
```
This removes the GitLab instance and all associated objects, except for Persistent Volume Claims (as noted above).
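Because Persistent Volume Claims are retained, you can inspect them separately and, if you are certain the data is no longer needed, delete them yourself. A sketch:

```shell
# List the PVCs left behind by the GitLab instance
kubectl -n gitlab-system get pvc

# Example only: deleting a PVC permanently removes its data
# kubectl -n gitlab-system delete pvc <pvc-name>
```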
Uninstall the GitLab Operator
```shell
GL_OPERATOR_VERSION=<your_installed_version> # https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/releases
PLATFORM=kubernetes # or "openshift"

kubectl delete -f https://gitlab.com/api/v4/projects/18899486/packages/generic/gitlab-operator/${GL_OPERATOR_VERSION}/gitlab-operator-${PLATFORM}-${GL_OPERATOR_VERSION}.yaml
```
This deletes the Operator’s resources, including the running Deployment of the Operator. It does not delete objects associated with a GitLab instance.
Troubleshoot the GitLab Operator
For help troubleshooting the GitLab Operator, see troubleshooting.md.