Upgrade GitLab Helm chart instances
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
Upgrade a GitLab Helm chart instance to a later version of GitLab.
Prerequisites
Before upgrading a GitLab Helm chart instance:
- Review the information you need before you upgrade.
- Because GitLab Helm chart versions don’t follow the same numbering as GitLab versions, see version mappings to find the GitLab Helm chart version you need.
- See the CHANGELOG corresponding to the specific release you want to upgrade to.
- If you’re upgrading from a GitLab Helm chart version earlier than 8.x, see the GitLab documentation archives to access older versions of the documentation.
- Perform a backup.
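For example, if the Toolbox pod is deployed (the chart default), a backup can be created from inside it. This is a minimal sketch; the pod name and namespace are placeholders:

```shell
# Find the Toolbox pod, then run the backup utility inside it.
kubectl get pods -lrelease=gitlab,app=toolbox -n <namespace>
kubectl exec <toolbox pod name> -it -n <namespace> -- backup-utility
```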
Upgrade a GitLab Helm chart instance
To upgrade a GitLab Helm chart instance:

1. Consider turning on maintenance mode during the upgrade to restrict write operations and avoid disrupting user workflows.
2. Upgrade GitLab Runner to the same version as your target GitLab version.
3. Extract your previously provided values:

   ```shell
   helm get values gitlab > gitlab.yaml
   ```

   Decide on all the values you need to carry through as you upgrade. Keep only a minimal set of values that you want to explicitly set, pass those during the upgrade process, and otherwise rely on the GitLab default values.
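To help decide which values to keep, you can compare your extracted values against the chart defaults for the version you are targeting. A sketch, with illustrative file names:

```shell
# Default values for the target chart version
helm show values gitlab/gitlab --version <GitLab Helm chart version> > defaults.yaml

# Compare the defaults with the values currently applied to your release
diff defaults.yaml gitlab.yaml
```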
Upgrade with zero downtime
Upgrade a live GitLab environment without taking it offline.
Requirements
The zero-downtime upgrade process requires:
- A multi-node GitLab Helm chart deployment with multiple replicas configured for Webservice and Sidekiq (see the sketch after this list).
- Upgrading one minor release at a time. For example, go from 18.0 to 18.1, not directly to 18.2. If you skip releases, database modifications might run in the wrong sequence and leave the database schema in a broken state.
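One way to satisfy the multi-replica requirement is through the chart's autoscaling settings for Webservice and Sidekiq. A minimal sketch; the replica counts are illustrative only and should be sized for your workload:

```yaml
gitlab:
  webservice:
    minReplicas: 2   # illustrative values
    maxReplicas: 4
  sidekiq:
    minReplicas: 2   # illustrative values
    maxReplicas: 4
```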
Considerations
When considering a zero-downtime upgrade, be aware that:
- Gitaly in Kubernetes does not support zero-downtime upgrades and requires downtime.
- Most of the time, you can safely upgrade from a patch release to the next minor release if the patch release is not the latest. For example, upgrading from 18.0.5 to 18.1.0 should be safe even if 18.0.6 exists. We do recommend you check the version-specific upgrade notes for the version you are upgrading to.
- Ensure your deployment has sufficient resources to run both old and new pods simultaneously during the rolling update. The amount of additional resources required depends on your maxSurge settings. For example, with maxSurge: 10%, you need 10% additional capacity for the new pods to use.
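To check whether the cluster has headroom for the extra surge pods, you can inspect current node usage and allocation, for example (`kubectl top` requires the metrics server):

```shell
# Current CPU and memory usage per node (needs metrics-server)
kubectl top nodes

# Requested versus allocatable resources on a specific node
kubectl describe node <node name> | grep -A 8 "Allocated resources"
```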
Recommended deployment settings
To ensure smooth rolling updates, the settings below are required to control the upgrade process and achieve zero downtime.
These settings are baseline recommendations. You will need to adjust them based on your deployment’s resource availability, replica counts, and performance requirements. Ensure you have sufficient cluster resources to support the maxSurge setting, which temporarily creates additional pods during an upgrade.
If you have an existing GitLab deployment without these rolling update settings configured, you must apply them before attempting a zero-downtime upgrade. Applying these settings for the first time triggers a rolling restart of your pods, which may cause brief service interruptions.
To minimize impact, apply these settings during a maintenance window before your planned upgrade. After they are configured, future upgrades can be performed with zero downtime.
```yaml
global:
  extraEnv:
    BYPASS_SCHEMA_VERSION: true
gitlab:
  webservice:
    deployment:
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: "10%"
          maxUnavailable: 0
      terminationGracePeriodSeconds: 60
  sidekiq:
    deployment:
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: "10%"
          maxUnavailable: 0
      terminationGracePeriodSeconds: 600
  gitlab-shell:
    deployment:
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: "10%"
          maxUnavailable: 0
      terminationGracePeriodSeconds: 60
registry:
  deployment:
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: "10%"
        maxUnavailable: 0
    terminationGracePeriodSeconds: 60
nginx-ingress:
  controller:
    deployment:
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: "10%"
          maxUnavailable: 0
    terminationGracePeriodSeconds: 300
    minReadySeconds: 10
```

When configuring the `terminationGracePeriodSeconds` for Sidekiq, consider your longest-running jobs to ensure that they have enough time to complete before the grace period expires.
These settings ensure:
- At least one pod is always available during updates.
- New pods are brought up before old ones are terminated.
- Pods have time to gracefully shut down and drain connections.
- Pods are stable before being considered ready.
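To apply these rolling update settings during a maintenance window ahead of the actual upgrade, you can run a regular `helm upgrade` pinned to the chart version you are already running. A sketch:

```shell
# Re-apply your values (now including the rolling update settings) at the
# chart version you currently run; this triggers the one-time rolling
# restart mentioned above.
helm upgrade gitlab gitlab/gitlab \
  --version <current GitLab Helm chart version> \
  -f values.yaml
```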
Upgrade process
The deployment names used below are examples based on a default GitLab Helm chart installation. Deployment names may vary depending on your configuration, such as when deploying multiple Sidekiq queues.
To find the correct deployment names for your installation:
```shell
kubectl get deployments -lapp=webservice -n <namespace>
kubectl get deployments -lapp=sidekiq -n <namespace>
```

To upgrade GitLab:
1. Pause deployments:

   ```shell
   kubectl rollout pause deployment/gitlab-webservice-default
   kubectl rollout pause deployment/gitlab-sidekiq-all-in-1-v2
   ```

2. Begin the upgrade to the new version:

   ```shell
   helm upgrade gitlab gitlab/gitlab \
     --version <GitLab Helm chart version> \
     -f values.yaml \
     --set gitlab.migrations.extraEnv.SKIP_POST_DEPLOYMENT_MIGRATIONS=true
   ```

3. Wait for pre-migrations and upgrades to complete:

   ```shell
   kubectl get jobs -lrelease=gitlab,chart=migrations-<GitLab version> -n <namespace>
   kubectl wait --for=condition=complete job/<job name> --timeout=600s
   ```

4. Unpause deployments for Sidekiq:

   ```shell
   kubectl rollout resume deployment/gitlab-sidekiq-all-in-1-v2
   kubectl rollout status deployment/gitlab-sidekiq-all-in-1-v2 --timeout=15m
   ```

5. Unpause deployments for Webservice:

   ```shell
   kubectl rollout resume deployment/gitlab-webservice-default
   kubectl rollout status deployment/gitlab-webservice-default --timeout=15m
   ```

6. Run post-migrations:

   ```shell
   helm upgrade gitlab gitlab/gitlab \
     --version <GitLab Helm chart version> \
     -f values.yaml
   ```

7. Wait for post-migrations to complete:

   ```shell
   kubectl get jobs -lrelease=gitlab,chart=migrations-<GitLab version> -n <namespace>
   kubectl wait --for=condition=complete job/<job name> --timeout=600s
   ```

Depending on your deployment, a `600s` wait time for the migrations to complete might not be enough. You can increase this timeout to fit your needs, or periodically check on the job to ensure it is complete before moving on to the next step.
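For example, to allow a longer wait, or to watch the job instead of waiting, something along these lines can be used:

```shell
# Wait up to 30 minutes instead of 600 seconds
kubectl wait --for=condition=complete job/<job name> -n <namespace> --timeout=30m

# Or watch the migrations job until its COMPLETIONS column shows 1/1
kubectl get jobs -lrelease=gitlab,chart=migrations-<GitLab version> -n <namespace> --watch
```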
Upgrade with downtime
Perform the upgrade, with values extracted and reviewed in previous steps:
```shell
helm upgrade gitlab gitlab/gitlab \
  --version <new version> \
  -f gitlab.yaml \
  --set gitlab.migrations.enabled=true \
  --set ...
```

During a major database upgrade, you should set `gitlab.migrations.enabled` to `false`. Ensure that you explicitly set it back to `true` for future updates.
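For example, during a major database upgrade the same command would be run with migrations disabled, then re-enabled on the next upgrade:

```shell
# Major database upgrade: run the chart upgrade without migrations
helm upgrade gitlab gitlab/gitlab \
  --version <new version> \
  -f gitlab.yaml \
  --set gitlab.migrations.enabled=false

# For future updates, explicitly set gitlab.migrations.enabled=true again.
```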
After you upgrade
- If enabled, turn off maintenance mode, as shown in the example after this list.
- Run upgrade health checks.
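For example, maintenance mode can be turned off through the application settings API; the hostname and access token below are placeholders, and the token must have administrator rights:

```shell
curl --request PUT \
  --header "PRIVATE-TOKEN: <your admin access token>" \
  "https://gitlab.example.com/api/v4/application/settings?maintenance_mode=false"
```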