Before deploying GitLab to your Kubernetes cluster, there are some tools you must have installed locally.
`kubectl` is the tool that talks to the Kubernetes API. `kubectl` 1.13 or higher is required, and it must be compatible with your cluster (within one minor release of your cluster's version).
The server version of `kubectl` cannot be obtained until you connect to a cluster. Proceed with setting up Helm.
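The "within one minor release" rule above can be sketched as a plain shell check. The version numbers below are example values, not read from a real cluster; in practice they would come from `kubectl version`:

```shell
#!/bin/sh
# Hypothetical compatibility check: is the client minor version within
# +/- 1 of the cluster's minor version?
client_minor=14   # example value, e.g. from `kubectl version --client`
server_minor=13   # example value, from the cluster once connected

diff=$((client_minor - server_minor))
if [ "$diff" -ge -1 ] && [ "$diff" -le 1 ]; then
  result="compatible"
else
  result="incompatible"
fi
echo "$result"
```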
Helm is the package manager for Kubernetes. The `gitlab` chart is tested and supported with Helm v2 (2.12 or higher required, excluding 2.15). Starting with version v3.0.0 of the chart, Helm v3 (3.0.2 or higher required) is also fully supported.
Note the following:
- We are not using Helm v3 for testing in CI. If you find issues specific to Helm v3, please create an issue in our issue tracker and start the issue title with the appropriate keyword.
- Helm v2 consists of two parts: the `helm` client installed locally, and `tiller`, the server component installed inside Kubernetes.
- If you need to run Helm v2 and are not able to run Tiller in your cluster, for example on OpenShift, it’s possible to use Tiller locally and avoid deploying it into the cluster. This should only be used when Tiller cannot be normally deployed.
- Helm v2.15.x series contained multiple severe bugs that affect the use of this chart. Do not use these versions!
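The version constraints above can be checked mechanically. A minimal sketch that rejects the known-bad v2.15.x series (the version string is an example value; in practice it would come from `helm version --client --short`):

```shell
#!/bin/sh
# Hypothetical guard against the Helm v2.15.x series, which contained
# multiple severe bugs affecting this chart.
helm_version="v2.16.1"   # example value

case "$helm_version" in
  v2.15.*) result="unsupported: Helm v2.15.x contains severe bugs" ;;
  *)       result="ok" ;;
esac
echo "$result"
```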
Tiller is deployed into the cluster and interacts with the Kubernetes API to deploy your applications. If role-based access control (RBAC) is enabled, Tiller will need to be granted permissions to allow it to talk to the Kubernetes API.
If RBAC is not enabled, skip to initializing Helm.
If you are not sure whether RBAC is enabled in your cluster, or to learn more, read through our RBAC documentation.
Confirm that you have `kubectl` installed and that it's up to date. Older versions do not have support for RBAC and will generate errors.
Helm v3.0 does not install Tiller in the cluster and as such uses the user’s RBAC permissions to perform the deployment of the chart.
Prior versions of Helm do install Tiller on the cluster, and it will need to be granted permissions to perform operations. These instructions grant cluster-wide permissions; however, for more advanced deployments, permissions can be restricted to a single namespace.
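As a sketch of the namespace-restricted alternative mentioned above (the namespace and Role name here are hypothetical, not part of the official examples), a `RoleBinding` in a single namespace could be used instead of a cluster-wide binding:

```yaml
# Hypothetical namespace-scoped alternative to the cluster-wide binding.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: tiller
  namespace: gitlab          # illustrative namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-manager       # a Role you would define with the needed permissions
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: gitlab
```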
To grant access to the cluster, we will create a new `tiller` service account and bind it to the `cluster-admin` role:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```
For ease of use, these instructions will utilize the sample YAML file in this repository. To apply the configuration, we first need to connect to the cluster.
The command to connect to the cluster can be obtained from the Google Cloud Platform Console by the individual cluster, by looking for the Connect button in the clusters list page.
Alternatively, use the command below, filling in your cluster’s information:
```shell
gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>
```
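If you prefer to keep the placeholders in variables, a hypothetical way to assemble the same command (all values below are illustrative):

```shell
#!/bin/sh
# Illustrative only: fill in your own cluster's details.
CLUSTER_NAME=my-cluster
ZONE=us-central1-a
PROJECT_ID=my-project

cmd="gcloud container clusters get-credentials $CLUSTER_NAME --zone $ZONE --project $PROJECT_ID"
echo "$cmd"
```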
For the most up to date instructions, follow the Amazon EKS documentation on connecting to a cluster.
If you are doing local development, you can use `minikube` as your local cluster. If `kubectl cluster-info` is not showing `minikube` as the current cluster, use `kubectl config set-cluster minikube` to set the active cluster.
For GKE, you need to grab the administrator credentials:
```shell
gcloud container clusters describe <cluster-name> --zone <zone> --project <project-id> --format='value(masterAuth.password)'
```
This command will output the administrator password. We need the password to authenticate with `kubectl` and create the role.
We will also create an administrator user for this cluster. Use any name you prefer; for this example, we will include the cluster's name in it:

```shell
CLUSTER_NAME=name-of-cluster
kubectl config set-credentials $CLUSTER_NAME-admin-user --username=admin --password=xxxxxxxxxxxxxx
kubectl --user=$CLUSTER_NAME-admin-user create -f https://gitlab.com/gitlab-org/charts/gitlab/raw/master/doc/installation/examples/rbac-config.yaml
```
For other clusters like Amazon EKS, you can directly upload the RBAC configuration:
```shell
kubectl create -f https://gitlab.com/gitlab-org/charts/gitlab/raw/master/doc/installation/examples/rbac-config.yaml
```
If Helm v3 is being used, there is no longer an `init` sub-command, and the `helm` command is ready to use once installed. Otherwise, if Helm v2 is being used, Helm needs to deploy Tiller with a service account:
```shell
helm init --service-account tiller
```
If your cluster previously had Helm/Tiller installed, run the following command to ensure that the deployed version of Tiller matches the local Helm version:
```shell
helm init --upgrade --service-account tiller
```
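The v2/v3 branching above can be summarized in shell. The version string below is an example value; in practice it would come from `helm version --short`:

```shell
#!/bin/sh
# Hypothetical sketch of the decision above: only Helm v2 needs
# `helm init`; Helm v3 is ready to use once installed.
helm_version="v3.0.2"   # example value

case "$helm_version" in
  v3.*) result="no init needed" ;;
  v2.*) result="run: helm init --service-account tiller" ;;
  *)    result="unknown version" ;;
esac
echo "$result"
```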
Once kubectl and Helm are configured, you can continue to configuring your Kubernetes cluster.
The Distribution Team has a training presentation for Helm Charts.
For information on how the inner workings behave, the Helm repository has some additional information on developing with Helm in its tips and tricks section.
If you are using Helm v2 and are not able to run Tiller in your cluster, a script is included that should allow you to use Helm without running Tiller in your cluster. The script uses your personal Kubernetes credentials and configuration to apply the chart.
To use the script, skip this entire section about initializing Helm. Instead, make sure you have Docker installed locally and run the script. After that, you can substitute `bin/localtiller-helm` anywhere these instructions direct you to run `helm`.
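One hypothetical way to make that substitution convenient is a shell function that shadows `helm` (the command is echoed here rather than executed, since the script lives in the chart repository):

```shell
#!/bin/sh
# Hypothetical convenience wrapper: forward `helm` invocations to the
# local Tiller script so the rest of the instructions work unchanged.
helm() {
  out="would run: bin/localtiller-helm $*"
  echo "$out"
}

helm version
```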