# Required tools

Before deploying GitLab to your Kubernetes cluster, there are some tools you must have installed locally.

## kubectl

kubectl is the tool that talks to the Kubernetes API. kubectl 1.12 or higher is required, and it needs to be compatible with your cluster (within one minor release of your cluster's version, in either direction).
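The version skew rule can be sketched as a small shell check. The `skew_ok` helper and the version numbers below are illustrative only, not part of kubectl; in practice, compare the client and server versions reported by `kubectl version` against this rule.

```shell
# Illustrative helper: is a client minor version within one minor
# release of the server's? (Assumes both are on the 1.x major series.)
skew_ok() {
  diff=$(( $1 - $2 ))
  [ "$diff" -ge -1 ] && [ "$diff" -le 1 ]
}

# A 1.14 client against a 1.13 server is supported:
skew_ok 14 13 && echo "1.14 client / 1.13 server: OK"
# A 1.14 client against a 1.16 server is outside the skew policy:
skew_ok 14 16 || echo "1.14 client / 1.16 server: too far apart"
```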

> Install kubectl locally by following the Kubernetes documentation.

The server version reported by kubectl cannot be obtained until we connect to a cluster. For now, proceed with setting up Helm.

## Helm

Helm is the package manager for Kubernetes. The gitlab chart is tested and supported only with Helm v2; Helm 2.12 or higher is required. Helm v1 is explicitly not supported. Helm v3 is not yet supported; open issues can be found under our Helm 3 issue label.

Helm consists of two parts: the `helm` client, installed locally, and `tiller`, the server component, installed inside the Kubernetes cluster.

> Note: If you are not able to run Tiller in your cluster, for example on OpenShift, it's possible to run Tiller locally and avoid deploying it into the cluster. Do this only when Tiller cannot be deployed normally.

### Getting Helm

You can get Helm from the project's releases page, or follow the other installation options in the official Helm documentation.
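As one option, a release tarball can be fetched directly. The version and platform pinned below are examples only, so check the releases page for the current Helm v2 release and your platform:

```shell
# Example: build the download URL for a pinned Helm v2 release.
# v2.16.9 and linux-amd64 are illustrative -- pick a current v2
# release and your platform from the releases page.
HELM_VERSION="v2.16.9"
HELM_PLATFORM="linux-amd64"
HELM_URL="https://get.helm.sh/helm-${HELM_VERSION}-${HELM_PLATFORM}.tar.gz"
echo "Would download: ${HELM_URL}"

# Then, to install the client:
# curl -fsSL "${HELM_URL}" | tar -xz "${HELM_PLATFORM}/helm"
# sudo mv "${HELM_PLATFORM}/helm" /usr/local/bin/helm
```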

Tiller is deployed into the cluster and interacts with the Kubernetes API to deploy your applications. If role-based access control (RBAC) is enabled, Tiller needs to be granted permissions to talk to the Kubernetes API.

If RBAC is not enabled, skip to initializing Helm.

If you are not sure whether RBAC is enabled in your cluster, or to learn more, read through our RBAC documentation.

### Preparing for Helm with RBAC

> Note: Ensure you have kubectl installed and up to date. Older versions do not support RBAC and will generate errors.

Helm's Tiller needs to be granted permissions to perform operations. These instructions grant cluster-wide permissions; for more advanced deployments, permissions can be restricted to a single namespace.

To grant access to the cluster, we will create a new `tiller` service account and bind it to the `cluster-admin` role:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```
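If you instead want to restrict Tiller to a single namespace, a namespace-scoped sketch looks like the following. The `gitlab` namespace name and the `tiller-manager` role name are examples, not part of the chart; see the Helm RBAC documentation for the full pattern. These instructions continue with the cluster-wide binding above.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: gitlab
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: tiller-manager
  namespace: gitlab
rules:
  - apiGroups: ["", "batch", "extensions", "apps"]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: gitlab
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-manager
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: gitlab
```

With this approach, Tiller is deployed into and manages only that namespace, using `helm init --service-account tiller --tiller-namespace gitlab`.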

For ease of use, these instructions will utilize the sample YAML file in this repository. To apply the configuration, we first need to connect to the cluster.

#### Connecting to the GKE cluster

The command to connect to the cluster can be obtained from the Google Cloud Platform Console: look for the Connect button next to your cluster on the clusters list page.

Alternatively, use the command below, filling in your cluster’s information:

```shell
gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>
```

#### Connecting to an EKS cluster

For the most up to date instructions, follow the Amazon EKS documentation on connecting to a cluster.

#### Connect to a local minikube cluster

If you are doing local development, you can use minikube as your local cluster. If `kubectl cluster-info` does not show minikube as the current cluster, use `kubectl config use-context minikube` to switch to it.

#### Upload the RBAC config

##### Upload the RBAC config in GKE

For GKE, you need to grab the admin credentials:

```shell
gcloud container clusters describe <cluster-name> --zone <zone> --project <project-id> --format='value(masterAuth.password)'

```

This command outputs the admin password, which we need to authenticate with kubectl and create the role.

We will also create an admin user for this cluster. Use any name you prefer; for this example, we include the cluster's name in it:

```shell
CLUSTER_NAME=name-of-cluster
kubectl config set-credentials $CLUSTER_NAME-admin-user --username=admin --password=xxxxxxxxxxxxxx
kubectl --user=$CLUSTER_NAME-admin-user create -f https://gitlab.com/gitlab-org/charts/gitlab/raw/master/doc/installation/examples/rbac-config.yaml
```

##### Upload the RBAC config in non-GKE clusters

For other clusters like Amazon EKS, you can directly upload the RBAC configuration:

```shell
kubectl create -f https://gitlab.com/gitlab-org/charts/gitlab/raw/master/doc/installation/examples/rbac-config.yaml
```

### Initializing Helm

Finally, deploy Helm Tiller with a service account:

```shell
helm init --service-account tiller
```

If your cluster previously had Helm/Tiller installed, run the following command to ensure that the deployed version of Tiller matches the local Helm version:

```shell
helm init --upgrade --service-account tiller
```

## Next steps

Once kubectl and Helm are configured, you can continue on to configuring your Kubernetes cluster.

## Additional information

The Distribution Team has a training presentation for Helm Charts.

### Templates

Templating in Helm is done via Go's text/template package and the Sprig function library. The upstream Helm documentation covers how these inner workings behave.
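As a small illustration, a chart template mixes text/template actions with Sprig functions such as `default` and `upper`. This ConfigMap template is hypothetical, not part of the gitlab chart:

```yaml
# templates/example-configmap.yaml (hypothetical)
apiVersion: v1
kind: ConfigMap
metadata:
  # .Release.Name is injected by Helm at render time.
  name: {{ .Release.Name }}-example
data:
  # `default` supplies a fallback value; `upper` and `quote` are
  # Sprig/template functions applied left to right through the pipeline.
  logLevel: {{ .Values.logLevel | default "info" | upper | quote }}
```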

### Tips and tricks

The Helm repository has additional information on developing with Helm in its tips and tricks section.

## Local Tiller

> Not recommended: This method is not well supported, but should work.

If you are not able to run Tiller in your cluster, this chart includes a script that allows you to use Helm with Tiller running locally instead of inside the cluster. The script uses your personal Kubernetes credentials and configuration to apply the chart.

To use the script, skip this entire section about initializing Helm. Instead, make sure you have Docker installed locally and run:

```shell
bin/localtiller-helm --client-only
```

After that, you can substitute `bin/localtiller-helm` anywhere these instructions direct you to run `helm`.