Upgrade a multi-node instance with downtime

  • Tier: Free, Premium, Ultimate
  • Offering: GitLab Self-Managed

To upgrade a multi-node GitLab instance with downtime:

  1. Shut down the GitLab application.
  2. Upgrade Consul servers.
  3. Upgrade Gitaly, Rails, PostgreSQL, Redis, and PgBouncer in any order. If you use PostgreSQL or Redis from your cloud platform and upgrades are required, follow your cloud provider’s instructions for those services instead of the instructions on this page.
  4. Upgrade the GitLab application (Sidekiq, Puma) and start the application.

Before you begin an upgrade with downtime, consider your downtime options.
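
Several steps on this page say to upgrade GitLab on a node. For Linux package nodes, the following is a minimal sketch of what that typically involves, assuming a Debian or Ubuntu host running the gitlab-ee package; substitute your package manager, edition (gitlab-ce or gitlab-ee), and target version:

# Example only: pin the same target version on every node you upgrade
sudo apt-get update
sudo apt-get install gitlab-ee=<target version>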

Shut down the GitLab application

Before upgrading, you must stop writes to the database by shutting down the GitLab application. The process is different depending on your installation method.

For Linux package instances, shut down Puma and Sidekiq on all servers running these processes:

sudo gitlab-ctl stop sidekiq
sudo gitlab-ctl stop puma
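
As an optional check before moving on, you can confirm both services report as down:

# Both should report "down"; a running service reports "run"
sudo gitlab-ctl status puma
sudo gitlab-ctl status sidekiq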

For Helm chart instances:

  1. Note the current number of replicas for database clients so that you can restore them after the upgrade (for a way to save these values, see the example after this list):

    kubectl get deploy -n <namespace> -l release=<helm release name> -l 'app in (prometheus,webservice,sidekiq)' -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.replicas}{"\n"}{end}'
  2. Stop the database clients:

    kubectl scale deploy -n <namespace> -l release=<helm release name> -l 'app in (prometheus,webservice,sidekiq)' --replicas=0
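
For step 1, one way to keep the replica counts for later is to redirect the same command to a file. The file name here is only an example:

# Sketch: save "<deployment name><TAB><replica count>" lines to restore after the upgrade
kubectl get deploy -n <namespace> -l release=<helm release name> \
  -l 'app in (prometheus,webservice,sidekiq)' \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.replicas}{"\n"}{end}' \
  > replicas-before-upgrade.txt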

Upgrade the Consul nodes

Follow instructions for upgrading the Consul nodes. In summary:

  1. Check the Consul nodes are all healthy.

  2. Upgrade GitLab on all Consul servers.

  3. Restart all GitLab services one node at a time:

    sudo gitlab-ctl restart

Your Consul cluster processes might not run on dedicated servers, and might instead share servers with another service such as Redis HA or Patroni. In this case, when upgrading those servers:

  • Restart services on only one server at a time.
  • Check the Consul cluster is healthy before upgrading or restarting services.
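
To check Consul cluster health from any Consul node, you can use the bundled Consul binary. The path shown assumes the Linux package install location:

# Every Consul server should be listed with a status of "alive"
sudo /opt/gitlab/embedded/bin/consul members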

Upgrade Gitaly and Gitaly Cluster (Praefect)

For Gitaly servers that are not part of Gitaly Cluster (Praefect), upgrade GitLab. If you have multiple Gitaly shards, you can upgrade the Gitaly servers in any order.

If you’re running Gitaly Cluster (Praefect), follow the zero-downtime upgrade process for Gitaly Cluster (Praefect).

When using Amazon Machine Images

If you are using Amazon Machine Images (AMIs) on AWS, you can upgrade the Gitaly nodes using an AMI redeployment process. To use this process, you must use Elastic network interfaces (ENIs). Gitaly Cluster (Praefect) tracks replicas of Git repositories by the server hostname. ENIs can ensure the private DNS name stays the same when the instance is redeployed. If the nodes are redeployed with new hostnames, even if the storage is the same, Gitaly Cluster (Praefect) cannot work.

If you are not using ENIs, you must upgrade the Gitaly nodes by using the Linux package.
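
Because Gitaly Cluster (Praefect) tracks repositories by server hostname, it is worth confirming after each redeployment that the node kept its private DNS name. A simple check on the redeployed Gitaly node:

# The output must match the hostname Praefect has configured for this storage
hostname -f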

To upgrade Gitaly Cluster (Praefect) nodes by using an AMI redeployment process:

  1. The AMI redeployment process must include gitlab-ctl reconfigure. Set praefect['auto_migrate'] = false on the AMI so that every redeployed node starts with this setting (see the example after this list). This setting prevents reconfigure from automatically running database migrations.
  2. Redeploy the first node with the upgraded image. This node is your deploy node.
  3. After it’s deployed, set praefect['auto_migrate'] = true in gitlab.rb on the deploy node and apply the change with gitlab-ctl reconfigure. The reconfigure run performs the database migrations.
  4. Redeploy your other Gitaly Cluster (Praefect) nodes.
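
For illustration, the gitlab.rb settings described in the steps above look like this. The comments indicate which nodes carry which value:

# /etc/gitlab/gitlab.rb baked into the AMI, so every redeployed node starts with:
praefect['auto_migrate'] = false

# /etc/gitlab/gitlab.rb on the deploy node only, changed after it is redeployed
# and applied with gitlab-ctl reconfigure, which runs the database migrations:
praefect['auto_migrate'] = true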

Upgrade the PostgreSQL nodes

For non-clustered PostgreSQL servers:

  1. Upgrade GitLab.

  2. Because the upgrade process does not restart PostgreSQL when the binaries are upgraded, restart to load the new version:

    sudo gitlab-ctl restart
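
To confirm the restarted instance is serving the expected PostgreSQL version, you can query it through the bundled client. A minimal check, assuming Linux package defaults:

# Prints the PostgreSQL server version now accepting connections
sudo gitlab-psql -c 'SELECT version();'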

Upgrade Patroni nodes

Patroni is used to achieve high availability with PostgreSQL.

If a PostgreSQL major version upgrade is required, follow the major version process.

The upgrade process for all other versions is performed on all replicas first. After the replicas are upgraded, a cluster failover occurs from the leader to one of the upgraded replicas. This process ensures that only one failover is needed and that, when the failover completes, the node serving as the new leader is already running the upgraded version.

To upgrade Patroni nodes:

  1. Identify the leader and replica nodes, and verify that the cluster is healthy. On a database node, run:

    sudo gitlab-ctl patroni members
  2. Upgrade GitLab on one of the replica nodes.

  3. Restart to load the new version:

    sudo gitlab-ctl restart
  4. Verify that the cluster is healthy.

  5. Repeat the upgrade, restart, and health check steps for the other replicas.

  6. Upgrade the leader node following the same Linux package upgrade as the replicas.

  7. Restart all services on the leader node to load the new version and also trigger a cluster failover:

    sudo gitlab-ctl restart
  8. Check that the cluster is healthy.
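
For the health checks in these steps, the member list is usually enough: a healthy cluster shows exactly one leader and every replica in a running or streaming state. For example:

# Run on any database node before and after each upgrade step
sudo gitlab-ctl patroni members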

Upgrade the PgBouncer nodes

If you run PgBouncer on your GitLab application (Rails) nodes, then PgBouncer is upgraded as part of the application server upgrade. Otherwise, upgrade GitLab on the PgBouncer nodes.

Upgrade the Redis node

Upgrade a standalone Redis server by upgrading GitLab on the Redis node.

Upgrade Redis HA (using Sentinel)

  • Tier: Premium, Ultimate
  • Offering: GitLab Self-Managed

If you use Redis HA, follow the zero-downtime instructions for upgrading your Redis HA cluster.

Upgrade the GitLab application components

The process for upgrading the GitLab application depends on your installation method.

For Linux package instances, all the Puma and Sidekiq processes were shut down earlier. On each GitLab application node:

  1. Ensure /etc/gitlab/skip-auto-reconfigure does not exist.

  2. Check that Puma and Sidekiq are shut down:

    ps -ef | egrep 'puma: | puma | sidekiq '

Select one node that runs Puma as your deploy node. The deploy node is responsible for running all database migrations. On the deploy node:

  1. Ensure the server is configured to permit regular migrations. Check that /etc/gitlab/gitlab.rb does not contain gitlab_rails['auto_migrate'] = false. Either set the value explicitly to gitlab_rails['auto_migrate'] = true, or omit the setting to use the default behavior (true).

  2. If you’re using PgBouncer, you must bypass PgBouncer and connect directly to PostgreSQL before running migrations (for an illustration of this change, see the example after this list).

    Rails uses an advisory lock when attempting to run a migration to prevent concurrent migrations from running on the same database. These locks are not shared across transactions, resulting in ActiveRecord::ConcurrentMigrationError errors and other issues when running database migrations using PgBouncer in transaction pooling mode.

    1. If you’re running Patroni, find the leader node. Run on a database node:

      sudo gitlab-ctl patroni members
    2. Update gitlab.rb on the deploy node. Change gitlab_rails['db_host'] and gitlab_rails['db_port'] to either:

      • The host and port for your database server (non-clustered PostgreSQL).
      • The host and port for your cluster leader if you’re running Patroni.
    3. Apply the changes:

      sudo gitlab-ctl reconfigure
  3. Upgrade GitLab.

  4. If you modified gitlab.rb on the deploy node to bypass PgBouncer:

    1. Update gitlab.rb on the deploy node. Change gitlab_rails['db_host'] and gitlab_rails['db_port'] back to your PgBouncer settings.

    2. Apply the changes:

      sudo gitlab-ctl reconfigure
  5. To ensure all services are running the upgraded version, and (if applicable) accessing the database using PgBouncer, restart all services on the deploy node:

    sudo gitlab-ctl restart
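
As an illustration of the PgBouncer bypass in step 2, the temporary gitlab.rb change on the deploy node might look like the following. The host and port are placeholders for your environment:

# Point Rails directly at PostgreSQL (the Patroni leader, if clustered) for migrations
gitlab_rails['db_host'] = '<postgresql or patroni leader host>'
gitlab_rails['db_port'] = 5432

# After the upgrade, revert both settings to your PgBouncer host and port
# and run sudo gitlab-ctl reconfigure again (step 4).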

Next, upgrade all the other Puma and Sidekiq nodes. On these nodes, the gitlab_rails['auto_migrate'] setting in gitlab.rb can be set to any value.

They can be upgraded in parallel:

  1. Upgrade GitLab.

  2. Ensure all services are restarted:

    sudo gitlab-ctl restart

For Helm chart instances, after all stateful components are upgraded, follow the GitLab chart upgrade steps to upgrade the stateless components (Webservice, Sidekiq, and other supporting services).

After you perform the GitLab chart upgrade, resume the database clients:

kubectl scale deploy -lapp=sidekiq,release=<helm release name> -n <namespace> --replicas=<value>
kubectl scale deploy -lapp=webservice,release=<helm release name> -n <namespace> --replicas=<value>
kubectl scale deploy -lapp=prometheus,release=<helm release name> -n <namespace> --replicas=<value>
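
If you saved the replica counts to a file before the upgrade, as in the earlier sketch, you can restore them in one pass. The file name and loop are illustrative only:

# Restore each deployment to its pre-upgrade replica count
while IFS=$'\t' read -r deployment replicas; do
  kubectl scale deploy -n <namespace> "$deployment" --replicas="$replicas"
done < replicas-before-upgrade.txt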

Upgrade the monitor node

You might have configured Prometheus to act as a standalone monitoring node, for example as part of configuring a 60 RPS or 3,000 users reference architecture.

To upgrade the monitor node, upgrade GitLab on the node.