# Restoring a GitLab installation
To obtain a backup tarball of an existing GitLab instance that used another installation method, such as the Linux package or the GitLab Helm chart, follow the instructions given in the backup documentation.

If you are restoring a backup taken from another instance, you must migrate your existing instance to using object storage before taking the backup. See issue 646.

It is recommended that you restore a backup to the same version of GitLab on which it was created.
GitLab backups are restored by running the `backup-utility` command on the Toolbox pod provided in the chart. Before running the restore for the first time, you should ensure the Toolbox is properly configured for access to object storage.
The backup utility provided by the GitLab Helm chart supports restoring a tarball from any of the following locations:

- The `gitlab-backups` bucket in the object storage service associated with the instance. This is the default scenario.
- A public URL that can be accessed from the pod.
- A local file that you can copy to the Toolbox pod using `kubectl cp` (a sketch follows this list).
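For the local-file case, a minimal sketch, assuming the tarball sits in your current directory and choosing `/tmp` inside the pod as an illustrative destination (not a chart-mandated path):

```shell
# Copy the tarball into the Toolbox pod; /tmp is an illustrative choice.
kubectl cp <timestamp>_gitlab_backup.tar <Toolbox pod name>:/tmp/<timestamp>_gitlab_backup.tar

# Restore from the local file using the file:// URL form described
# in the restore steps below.
kubectl exec <Toolbox pod name> -it -- backup-utility --restore -f file:///tmp/<timestamp>_gitlab_backup.tar
```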
## Restoring the secrets
### Restore the rails secrets
The GitLab chart expects rails secrets to be provided as a Kubernetes Secret with content in YAML. If you are restoring the rails secrets from a Linux package instance, secrets are stored in JSON format in the `/etc/gitlab/gitlab-secrets.json` file. To convert the file and create the secret in YAML format:
1. Copy the file `/etc/gitlab/gitlab-secrets.json` to the workstation where you run `kubectl` commands.

2. Install the yq tool (version 4.21.1 or later) on your workstation.

3. Run the following command to convert your `gitlab-secrets.json` to YAML format:

   ```shell
   yq -P '{"production": .gitlab_rails}' gitlab-secrets.json -o yaml >> gitlab-secrets.yaml
   ```

4. Check that the new `gitlab-secrets.yaml` file has the following contents:

   ```yaml
   production:
     db_key_base: <your key base value>
     secret_key_base: <your secret key base value>
     otp_key_base: <your otp key base value>
     openid_connect_signing_key: <your openid signing key>
     ci_jwt_signing_key: <your ci jwt signing key>
   ```
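As a quick sanity check, a sketch assuming yq v4 is still on your `PATH`, you can list the keys nested under `production` and compare them against the expected contents above:

```shell
# Print the key names under "production" in the converted file.
yq '.production | keys' gitlab-secrets.yaml
```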
To restore the rails secrets from a YAML file:
1. Find the object name for the rails secrets:

   ```shell
   kubectl get secrets | grep rails-secret
   ```

2. Delete the existing secret:

   ```shell
   kubectl delete secret <rails-secret-name>
   ```

3. Create the new secret using the same name as the old, and passing in your local YAML file:

   ```shell
   kubectl create secret generic <rails-secret-name> --from-file=secrets.yml=gitlab-secrets.yaml
   ```
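If you want to confirm what the cluster now stores, the following sketch decodes the secret back out; the `secrets.yml` key name matches the `--from-file` argument above:

```shell
# Decode the stored secrets file; the backslash escapes the literal
# dot in the "secrets.yml" key for jsonpath.
kubectl get secret <rails-secret-name> -o jsonpath='{.data.secrets\.yml}' | base64 --decode
```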
### Restart the pods
In order to use the new secrets, the Webservice, Sidekiq, and Toolbox pods need to be restarted. The safest way to restart those pods is to run:

```shell
kubectl delete pods -lapp=sidekiq,release=<helm release name>
kubectl delete pods -lapp=webservice,release=<helm release name>
kubectl delete pods -lapp=toolbox,release=<helm release name>
```
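Deleting the pods causes their Deployments to recreate them with the new secret mounted. One way to confirm the replacements come up, a sketch using the `release` label the chart applies, is:

```shell
# Watch until the new webservice, sidekiq, and toolbox pods report Running.
kubectl get pods -w -lrelease=<helm release name>
```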
## Restoring the backup file
The steps for restoring a GitLab installation are:

1. Make sure you have a running GitLab instance by deploying the charts. Ensure the Toolbox pod is enabled and running by executing the following command:

   ```shell
   kubectl get pods -lrelease=RELEASE_NAME,app=toolbox
   ```

2. Get the tarball ready in any of the above locations. Make sure it is named in the `<timestamp>_gitlab_backup.tar` format. Read what the backup timestamp is about.

3. Note the current number of replicas for database clients, for the subsequent restart (the sketch after this list shows one way to record them):

   ```shell
   kubectl get deploy -n <namespace> -lapp=sidekiq,release=<helm release name> -o jsonpath='{.items[].spec.replicas}{"\n"}'
   kubectl get deploy -n <namespace> -lapp=webservice,release=<helm release name> -o jsonpath='{.items[].spec.replicas}{"\n"}'
   kubectl get deploy -n <namespace> -lapp=prometheus,release=<helm release name> -o jsonpath='{.items[].spec.replicas}{"\n"}'
   ```
4. Stop the clients of the database to prevent locks interfering with the restore process:

   ```shell
   kubectl scale deploy -lapp=sidekiq,release=<helm release name> -n <namespace> --replicas=0
   kubectl scale deploy -lapp=webservice,release=<helm release name> -n <namespace> --replicas=0
   kubectl scale deploy -lapp=prometheus,release=<helm release name> -n <namespace> --replicas=0
   ```

5. Run the backup utility to restore the tarball:

   ```shell
   kubectl exec <Toolbox pod name> -it -- backup-utility --restore -t <timestamp>
   ```

   Here, `<timestamp>` is from the name of the tarball stored in the `gitlab-backups` bucket. In case you want to provide a public URL, use the following command:

   ```shell
   kubectl exec <Toolbox pod name> -it -- backup-utility --restore -f <URL>
   ```

   You can provide a local path as a URL as long as it's in the format `file:///<path>`.
6. This process will take time depending on the size of the tarball.

7. The restoration process will erase the existing contents of the database, move existing repositories to temporary locations, and extract the contents of the tarball. Repositories will be moved to their corresponding locations on disk, and other data, like artifacts, uploads, and LFS objects, will be uploaded to the corresponding buckets in object storage.

8. Restart the application, scaling each deployment back to the replica count you noted in step 3:

   ```shell
   kubectl scale deploy -lapp=sidekiq,release=<helm release name> -n <namespace> --replicas=<value>
   kubectl scale deploy -lapp=webservice,release=<helm release name> -n <namespace> --replicas=<value>
   kubectl scale deploy -lapp=prometheus,release=<helm release name> -n <namespace> --replicas=<value>
   ```
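If you prefer to script steps 3 and 8, a small Bash sketch like the following records each deployment's replica count before the scale-down and reuses it afterwards. `RELEASE` and `NS` are placeholders for your Helm release name and namespace:

```shell
#!/usr/bin/env bash
# Sketch only: record replica counts before the restore, reuse them after.
RELEASE=<helm release name>
NS=<namespace>

for app in sidekiq webservice prometheus; do
  # Save the current replica count for this deployment.
  replicas=$(kubectl get deploy -n "$NS" -lapp="$app",release="$RELEASE" \
    -o jsonpath='{.items[].spec.replicas}')
  echo "${app}=${replicas}" >> replicas.txt
done

# ... scale down, run the restore, then scale everything back up:
while IFS='=' read -r app replicas; do
  kubectl scale deploy -lapp="$app",release="$RELEASE" -n "$NS" --replicas="$replicas"
done < replicas.txt
```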
### Restore the runner registration token
After restoring, the included runner is not able to register with the instance because it no longer has the correct registration token. Follow these troubleshooting steps to get it updated.
## Enable Kubernetes related settings
If the restored backup was not from an existing installation of the chart, you must also enable some Kubernetes-specific features after the restore, such as incremental CI job logging.
1. Find your Toolbox pod by executing the following command:

   ```shell
   kubectl get pods -lrelease=RELEASE_NAME,app=toolbox
   ```

2. Run the instance setup script to enable the necessary features:

   ```shell
   kubectl exec <Toolbox pod name> -it -- gitlab-rails runner -e production /scripts/custom-instance-setup
   ```
### Restart the pods
In order to use the new changes, the Webservice and Sidekiq pods need to be restarted. The safest way to restart those pods is to run:

```shell
kubectl delete pods -lapp=sidekiq,release=<helm release name>
kubectl delete pods -lapp=webservice,release=<helm release name>
```
## (Optional) Reset the root user’s password
The restoration process does not update the `gitlab-initial-root-password` secret with the value from the backup. For logging in as `root`, use the original password included in the backup. If the password is no longer accessible, follow the steps below to reset it.
1. Attach to the Webservice pod by executing the command:

   ```shell
   kubectl exec <Webservice pod name> -it -- bash
   ```

2. Run the following command to reset the password of the `root` user. Replace `#{password}` with a password of your choice:

   ```shell
   /srv/gitlab/bin/rails runner "user = User.first; user.password='#{password}'; user.password_confirmation='#{password}'; user.save!"
   ```
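Alternatively, a sketch assuming GitLab 13.9 or later, where the `gitlab:password:reset` Rake task is available, you can reset the password from the Toolbox pod without attaching a shell; the task prompts for the new password interactively:

```shell
# Assumes the gitlab:password:reset task exists in your GitLab version.
kubectl exec <Toolbox pod name> -it -- gitlab-rake "gitlab:password:reset[root]"
```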