- UPGRADE FAILED: "$name" has no deployed releases
- Error: this command needs 2 arguments: release name, chart path
- Application containers constantly initializing
- Applying configuration changes
- Included GitLab Runner failing to register
- Too many redirects
- Upgrades fail with Immutable Field Error
- Failed to pull image
- UPGRADE FAILED: "cannot patch ..." after `helm 2to3 convert`
- Restoration failure: `ERROR: cannot drop view pg_stat_statements because extension pg_stat_statements requires it`
- Bundled PostgreSQL pod fails to start: `database files are incompatible with server`
- Bundled NGINX Ingress pod fails to start: `Failed to watch *v1beta1.Ingress`
- Increased load on the `/api/v4/jobs/requests` endpoint
## UPGRADE FAILED: "$name" has no deployed releases

This error occurs on your second install or upgrade if your initial install failed.

If your initial install completely failed and GitLab was never operational, you should first purge the failed install before installing again:

```shell
helm uninstall <release-name>
```

If instead the initial install command timed out but GitLab still came up successfully, you can add the `--force` flag to the `helm upgrade` command to ignore the error and attempt to update the release.
Otherwise, if you received this error after previously having successful deploys of the GitLab chart, then you are encountering a bug. Please open an issue on our issue tracker, and also check issue #630, where we recovered our CI server from this problem.
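The decision above can be sketched as a small helper. The release and chart names are placeholders, and the status strings are assumptions for illustration; in practice the status comes from `helm status <release-name>`:

```shell
# Sketch: pick a recovery path from the status Helm reports for the release.
# "failed"/"deployed" and the command arguments are illustrative placeholders.
recovery_action() {
  case "$1" in
    failed)
      # Initial install never came up: purge before reinstalling
      echo "helm uninstall <release-name>" ;;
    deployed)
      # Install timed out but GitLab is operational: force the upgrade
      echo "helm upgrade --force <release-name> <chart-path>" ;;
    *)
      # Anything else: inspect the release history first
      echo "helm history <release-name>" ;;
  esac
}

recovery_action failed
```

This is only a decision aid; review the printed command before running it against a live cluster.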
## Error: this command needs 2 arguments: release name, chart path

An error like this can occur when you run `helm upgrade` and there are spaces in the parameters. In the following example, `Test Username` is the culprit:
```shell
helm upgrade gitlab gitlab/gitlab --timeout 600s --set global.email.display_name=Test Username ...
```
To fix it, pass the parameters in single quotes:
```shell
helm upgrade gitlab gitlab/gitlab --timeout 600s --set global.email.display_name='Test Username' ...
```
## Application containers constantly initializing

If you experience Sidekiq, Webservice, or other Rails-based containers in a constant state of `Initializing`, you're likely waiting on the `dependencies` container to pass.

If you check the logs of a given Pod, specifically for the `dependencies` container, you may see the following repeated:
```plaintext
Checking database connection and schema version
WARNING: This version of GitLab depends on gitlab-shell 8.7.1, ...
Database Schema
Current version: 0
Codebase version: 20190301182457
```
This is an indication that the `migrations` Job has not yet completed. The purpose of this Job is to ensure that the database is seeded, and that all relevant migrations are in place. The application containers are waiting for the database to be at or above their expected database version. This ensures that the application does not malfunction due to the schema not matching the expectations of the codebase.
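The check described above can be illustrated with a small helper — a simplified stand-in for the comparison the `dependencies` container performs, using the version numbers from the sample log:

```shell
# Sketch: compare the schema version in the database against the version the
# codebase expects. The containers stay in Initializing until this passes.
schema_ready() {
  local current="$1" codebase="$2"
  if [ "$current" -ge "$codebase" ]; then
    echo "ready"
  else
    echo "waiting for migrations"
  fi
}

# Values from the sample log above: Current version 0, Codebase version 20190301182457
schema_ready 0 20190301182457
```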
- Find the `migrations` Job:

  ```shell
  kubectl get job -lapp=migrations
  ```
- Find the Pod being run by the Job:

  ```shell
  kubectl get pod -ljob-name=<job-name>
  ```
- Examine the output, checking the `STATUS` column. If the `STATUS` is `Running`, continue. If the `STATUS` is `Completed`, the application containers should start shortly after the next check passes.
- Examine the logs from this Pod:

  ```shell
  kubectl logs <pod-name>
  ```
Any failures during the run of this job should be addressed. These will block the use of the application until resolved. Possible problems are:
- Unreachable or failed authentication to the configured PostgreSQL database
- Unreachable or failed authentication to the configured Redis services
- Failure to reach a Gitaly instance
## Applying configuration changes

The following command performs the necessary operations to apply any updates made to `gitlab.yaml`:

```shell
helm upgrade <release name> <chart path> -f gitlab.yaml
```
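For illustration, `gitlab.yaml` might hold overrides such as the following. The keys are real chart values, but the values shown are examples, not defaults:

```yaml
# gitlab.yaml - example overrides; adjust keys and values for your installation
global:
  hosts:
    domain: example.com
  email:
    display_name: 'GitLab Notifications'
```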
## Included GitLab Runner failing to register

This can happen when the runner registration token has been changed in GitLab. (This often happens after you have restored a backup.)

- Find the new shared runner token located on the `admin/runners` webpage of your GitLab installation.
- Find the name of the existing runner token Secret stored in Kubernetes:

  ```shell
  kubectl get secrets | grep gitlab-runner-secret
  ```
- Delete the existing secret:

  ```shell
  kubectl delete secret <runner-secret-name>
  ```
- Create the new secret with two keys: `runner-registration-token` with your shared token, and an empty `runner-token`:

  ```shell
  kubectl create secret generic <runner-secret-name> --from-literal=runner-registration-token=<new-shared-runner-token> --from-literal=runner-token=""
  ```
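The delete-and-recreate steps above can be sketched as one dry-run helper that prints the `kubectl` commands for review instead of executing them. The secret and token names are placeholders:

```shell
# Sketch: print (do not run) the commands that rotate the runner token Secret.
# Pipe the output to "sh" only after reviewing it.
rotate_runner_secret() {
  local secret="$1" token="$2"
  printf 'kubectl delete secret %s\n' "$secret"
  printf 'kubectl create secret generic %s --from-literal=runner-registration-token=%s --from-literal=runner-token=""\n' "$secret" "$token"
}

rotate_runner_secret '<runner-secret-name>' '<new-shared-runner-token>'
```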
## Too many redirects

This can happen when you have TLS termination before the NGINX Ingress and the `tls-secrets` are specified in the configuration.
Update your values to set the `nginx.ingress.kubernetes.io/ssl-redirect` annotation to `"false"`.
Via a values file:
```yaml
# values.yaml
global:
  ingress:
    annotations:
      "nginx.ingress.kubernetes.io/ssl-redirect": "false"
```
Via the Helm CLI:
```shell
helm ... --set-string global.ingress.annotations."nginx.ingress.kubernetes.io/ssl-redirect"=false
```
Apply the change.
## Upgrades fail with Immutable Field Error

### spec.clusterIP

Prior to the 3.0.0 release of these charts, the `spec.clusterIP` property had been populated into several Services despite having no actual value (`""`). This was a bug, and it causes problems with Helm 3's three-way merge of properties. Once the chart was deployed with Helm 3, there would be no possible upgrade path unless one collected the `clusterIP` properties from the various Services and populated those into the values provided to Helm, or the affected Services were removed from Kubernetes. The 3.0.0 release of this chart corrected this error, but it requires manual correction.
This can be solved by removing all of the affected Services.

- Remove all affected Services:

  ```shell
  kubectl delete services -lrelease=RELEASE_NAME
  ```

- Perform an upgrade via Helm.
- Future upgrades will not face this error.
This will change the `LoadBalancer` for the NGINX Ingress from this chart, if in use. See the global Ingress settings documentation for more details regarding `externalIP`. You may be required to update your DNS records!
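If you would rather populate the `clusterIP` values into your Helm values instead of deleting the Services, a small helper can record them first. The `jsonpath` expression and the "name ip" input format are assumptions for this sketch:

```shell
# Sketch: save each affected Service's name and clusterIP before any deletion.
# Input is assumed to be "name ip" pairs, one per line, e.g. from:
#   kubectl get services -lrelease=RELEASE_NAME \
#     -o jsonpath='{range .items[*]}{.metadata.name} {.spec.clusterIP}{"\n"}{end}'
record_cluster_ips() {
  while read -r name ip; do
    [ -n "$name" ] && printf '%s: %s\n' "$name" "$ip"
  done
}

# Example with a stubbed line instead of live cluster output:
printf 'gitlab-webservice-default 10.96.0.12\n' | record_cluster_ips
```

Redirect the output to a file (for example `clusterips.txt`) so the values survive the Service deletion.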
### spec.selector

Sidekiq pods did not receive a unique selector prior to chart release 3.0.0. The problems with this were documented in an upstream issue. Upgrades to 3.0.0 using Helm will automatically delete the old Sidekiq deployments and create new ones by appending `-v1` to the name of the Sidekiq deployments.

If you continue to run into this error on the Sidekiq deployment when installing 3.0.0, resolve it with the following steps:
- Remove the old Sidekiq deployments:

  ```shell
  kubectl delete deployment --cascade -lrelease=RELEASE_NAME,app=sidekiq
  ```

- Perform an upgrade via Helm.
## CertManager Custom Resource Definitions

Upgrading CertManager from version 0.10 introduced a number of breaking changes. The old Custom Resource Definitions must be uninstalled and removed from Helm's tracking, and then re-installed.

The Helm chart attempts to do this by default, but if you encounter this error you may need to take manual action. Upgrading then requires one more step than normal, to ensure the new Custom Resource Definitions are actually applied to the deployment.

- Remove the old CertManager Deployment:

  ```shell
  kubectl delete deployments -l app=cert-manager --cascade
  ```

- Run the upgrade again. This time, install the new Custom Resource Definitions:

  ```shell
  helm upgrade --install --values - YOUR-RELEASE-NAME gitlab/gitlab < <(helm get values YOUR-RELEASE-NAME)
  ```
If you are using it, start by removing that property. Check the version mappings between the chart and GitLab, and specify a compatible version of the `gitlab/gitlab` chart in your command.
## UPGRADE FAILED: "cannot patch ..." after helm 2to3 convert

This is a known issue. After migrating a Helm 2 release to Helm 3, subsequent upgrades may fail. You can find the full explanation and workaround in Migrating from Helm v2 to Helm v3.
## Restoration failure: ERROR: cannot drop view pg_stat_statements because extension pg_stat_statements requires it

You may face this error when restoring a backup on your Helm chart instance. Use the following steps as a workaround:
- In the `task-runner` pod, open the DB console:

  ```shell
  /srv/gitlab/bin/rails dbconsole -p
  ```
- Drop the extension:

  ```sql
  DROP EXTENSION pg_stat_statements;
  ```
- Perform the restoration process.
- After the restoration is complete, re-create the extension in the DB console:

  ```sql
  CREATE EXTENSION pg_stat_statements;
  ```
If you encounter the same issue with another extension, follow the same steps above to drop and re-create it.
You can find more details about this error in issue #2469.
## Bundled PostgreSQL pod fails to start: database files are incompatible with server

The following error message may appear in the bundled PostgreSQL pod after upgrading to a new version of the GitLab Helm chart:
```plaintext
gitlab-postgresql FATAL:  database files are incompatible with server
gitlab-postgresql DETAIL:  The data directory was initialized by PostgreSQL version 11, which is not compatible with this version 12.7.
```
To address this, perform a Helm rollback to the previous version of the chart and then follow the steps in the upgrade guide to upgrade the bundled PostgreSQL version. Once PostgreSQL is properly upgraded, try the GitLab Helm chart upgrade again.
## Bundled NGINX Ingress pod fails to start: Failed to watch *v1beta1.Ingress

The following error message may appear in the bundled NGINX Ingress controller pod if running Kubernetes version 1.22 or later:
```plaintext
Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: the server could not find the requested resource
```
To address this, ensure the Kubernetes version is 1.21 or older. See #2852 for more information regarding NGINX Ingress support for Kubernetes 1.22 or later.
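A quick gate for this constraint might look like the following. The version-string format, including the `21+` suffix some providers report, is an assumption; in practice the version can be read with `kubectl version`:

```shell
# Sketch: check whether a cluster version can run the bundled NGINX Ingress
# controller, which (per the issue above) requires Kubernetes 1.21 or older.
supports_bundled_ingress() {
  local major="${1%%.*}" minor="${1#*.}"
  minor="${minor%%[!0-9]*}"   # strip suffixes like "21+" some providers report
  if [ "$major" -eq 1 ] && [ "$minor" -le 21 ]; then
    echo "supported"
  else
    echo "not supported"
  fi
}

supports_bundled_ingress "1.22"
```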
## Increased load on the /api/v4/jobs/requests endpoint

You may face this issue if the option `workhorse.keywatcher` was set to `false` for the deployment servicing `/api/*`. Use the following steps to verify:
- Access the `gitlab-workhorse` container in the pod serving `/api/*`:

  ```shell
  kubectl exec -it --container=gitlab-workhorse <gitlab_api_pod> -- /bin/bash
  ```
- Inspect the file `/srv/gitlab/config/workhorse-config.toml`; the `[redis]` configuration might be missing:

  ```shell
  cat /srv/gitlab/config/workhorse-config.toml | grep '\[redis\]'
  ```
If the `[redis]` configuration is not present, the `workhorse.keywatcher` flag was set to `false` during the deployment, thus causing the extra load on the `/api/v4/jobs/requests` endpoint. To fix this, enable the `keywatcher` in the `webservice` chart values:

```yaml
workhorse:
  keywatcher: true
```
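When configuring this from the umbrella `gitlab/gitlab` chart's values instead of the `webservice` subchart directly, the same flag would plausibly nest under the `gitlab.webservice` key (an assumption worth verifying against your chart version):

```yaml
# Assumed nesting when set from the top-level gitlab/gitlab chart values
gitlab:
  webservice:
    workhorse:
      keywatcher: true
```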