Scaling and High Availability

GitLab supports a number of scaling options to ensure that your self-managed instance is able to scale out to meet your organization’s needs when scaling up is no longer practical or feasible.

GitLab also offers high availability options for organizations that require the fault tolerance and redundancy necessary to maintain high-uptime operations.

Scaling and high availability can be tackled separately as GitLab comprises modular components which can be individually scaled or made highly available depending on your organization’s needs and resources.

On this page, we present examples of self-managed instances which demonstrate how GitLab can be scaled out and made highly available. These examples progress from simple to complex as scaling or highly-available components are added.

For larger setups serving 2,000 or more users, we provide reference architectures based on GitLab’s experience with GitLab.com and internal scale testing that aim to achieve the right balance of scalability and availability.

For detailed insight into how GitLab scales and configures GitLab.com, you can watch this one-hour Q&A with John Northrup, which includes live questions from some of our customers.

Scaling examples

Single-node Omnibus installation

This solution is appropriate for many teams that have a single server at their disposal. With automatic backup of the GitLab repositories, configuration, and the database, this can be an optimal solution if you don’t have strict availability requirements.

You can also optionally configure GitLab to use an external PostgreSQL service or an external object storage service for added performance and reliability at a relatively low complexity cost.
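
As a minimal sketch of this option, both services are pointed to from `/etc/gitlab/gitlab.rb`. The host names, bucket name, and credentials below are hypothetical placeholders, not defaults:

```ruby
# /etc/gitlab/gitlab.rb -- illustrative sketch; hosts, bucket, and
# credentials are placeholders.

# Use an external PostgreSQL service instead of the bundled one
postgresql['enable'] = false
gitlab_rails['db_host'] = 'postgres.example.com'
gitlab_rails['db_port'] = 5432
gitlab_rails['db_password'] = 'REDACTED'

# Store LFS objects in an S3-compatible object storage service
gitlab_rails['lfs_object_store_enabled'] = true
gitlab_rails['lfs_object_store_remote_directory'] = 'gitlab-lfs'
gitlab_rails['lfs_object_store_connection'] = {
  'provider' => 'AWS',
  'region' => 'us-east-1',
  'aws_access_key_id' => 'REDACTED',
  'aws_secret_access_key' => 'REDACTED'
}
```

Run `gitlab-ctl reconfigure` after changing these settings for them to take effect.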

References:

Omnibus installation with multiple application servers

This solution is appropriate for teams that are starting to scale out when scaling up is no longer meeting their needs. In this configuration, additional application nodes will handle frontend traffic, with a load balancer in front to distribute traffic across those nodes. Meanwhile, each application node connects to a shared file server and PostgreSQL and Redis services on the back end.

The additional application servers add limited fault tolerance to your GitLab instance. As long as one application node is online and capable of handling the instance’s usage load, your team’s productivity will not be interrupted. Having multiple application nodes also enables zero-downtime updates.
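
As a minimal sketch of this layout, each application node’s `/etc/gitlab/gitlab.rb` disables the bundled back-end services and points at the shared ones. All host names and the mount path below are hypothetical:

```ruby
# /etc/gitlab/gitlab.rb on one application node -- hosts and paths are
# placeholders for illustration, not defaults.
external_url 'https://gitlab.example.com'   # URL served by the load balancer

# Disable services that run on the dedicated back-end nodes
postgresql['enable'] = false
redis['enable'] = false

# Connect to the shared back-end services
gitlab_rails['db_host'] = 'postgres.example.com'
gitlab_rails['redis_host'] = 'redis.example.com'

# Repositories live on the shared file server, mounted at this path
git_data_dirs({ 'default' => { 'path' => '/mnt/gitlab-data' } })
```

After editing, `gitlab-ctl reconfigure` applies the change on that node.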

References:

High-availability examples

Omnibus installation with automatic database failover

By adding automatic failover for database systems, we can enable higher uptime with an additional layer of complexity.

  • For PostgreSQL, we provide repmgr for server cluster management and failover, and a combination of PgBouncer and Consul for database client cutover.
  • For Redis, we use Redis Sentinel for server failover and client cutover.

You can also optionally run additional Sidekiq processes on dedicated hardware and configure individual Sidekiq processes to process specific background job queues if you need to scale out background job processing.
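
A hedged sketch of this on a dedicated Sidekiq node, assuming the `sidekiq_cluster` settings available in Omnibus; the queue names are examples chosen for illustration, not a recommendation:

```ruby
# /etc/gitlab/gitlab.rb on a dedicated Sidekiq node -- queue names are
# illustrative only. Each entry in queue_groups starts one Sidekiq
# process handling the listed queue(s).
sidekiq_cluster['enable'] = true
sidekiq_cluster['queue_groups'] = [
  'mailers',                      # one process dedicated to mailers
  'post_receive,process_commit'   # a second process for these two queues
]
```
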

GitLab Geo

GitLab Geo allows you to replicate your GitLab instance to other geographical locations as read-only, fully operational instances that can also be promoted in case of disaster.

This configuration is supported in GitLab Premium and Ultimate.
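
In Omnibus installations, Geo is enabled per node through role settings in `gitlab.rb`; a minimal sketch:

```ruby
# /etc/gitlab/gitlab.rb
# On the primary node:
roles ['geo_primary_role']

# On each secondary (read-only) node, use instead:
# roles ['geo_secondary_role']
```
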

References:

GitLab components and configuration instructions

The GitLab application depends on the following components. It can also depend on several third-party services, depending on your environment setup. Below, we detail both in the order in which you would typically configure them, along with our recommendations for their use and configuration.

Third party services

Here are details of several third-party services that a typical environment will depend on. Each service can be provided by numerous applications or providers, and further advice can be given on how best to select them. These should be configured first, before the GitLab components.

| Component | Description | Configuration instructions |
|-----------|-------------|----------------------------|
| Load Balancer(s) [1] | Handles load balancing for the GitLab nodes where required | Load balancer HA configuration |
| Cloud Object Storage service [2] | Recommended store for shared data objects | Cloud Object Storage configuration |
| NFS [3][4] | Shared disk storage service. Can be used as an alternative for Gitaly or Object Storage. Required for GitLab Pages | NFS configuration |

GitLab components

Next are all of the components provided directly by GitLab. As mentioned earlier, they are presented in the typical order you would configure them.

| Component | Description | Configuration instructions |
|-----------|-------------|----------------------------|
| Consul [5] | Service discovery and health checks/failover | Consul HA configuration |
| PostgreSQL | Database | Database HA configuration |
| PgBouncer | Database pool manager | PgBouncer HA configuration |
| Redis [5] with Redis Sentinel | Key/value store for shared data, with HA watcher service | Redis HA configuration |
| Gitaly [6][3][4] | Recommended high-level storage for Git repository data | Gitaly HA configuration |
| Sidekiq | Asynchronous/background jobs | Sidekiq configuration |
| GitLab application nodes [7] | (Unicorn / Puma, Workhorse) - Web requests (UI, API, Git over HTTP) | GitLab app HA/scaling configuration |
| Prometheus and Grafana | GitLab environment monitoring | Monitoring node for scaling/HA |

In some cases, components can also be combined on the same nodes to reduce complexity.

  • 1 - 1000 Users: A single-node Omnibus setup with frequent backups. Refer to the requirements page for further details of the specs you will require.
  • 1000 - 10000 Users: A scaled environment based on one of our Reference Architectures, without the HA components applied. This can be a reasonable step towards a fully HA environment.
  • 2000 - 50000+ Users: A scaled HA environment based on one of our Reference Architectures below.

Reference architectures

In this section we’ll detail the Reference Architectures that can support large numbers of users. These were built, tested and verified by our Quality and Support teams.

Testing was done with our GitLab Performance Tool at specific coded workloads, and the throughputs used for testing were calculated based on sample customer data. We test each endpoint type with the following number of requests per second (RPS) per 1000 users:

  • API: 20 RPS
  • Web: 2 RPS
  • Git: 2 RPS
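
These per-1,000-user rates scale linearly with the user count, so the target throughput for any architecture size is a simple multiplication. A small illustrative calculation (the helper name is ours, not part of GitLab):

```ruby
# Per-1,000-user test rates from the reference architectures.
RPS_PER_1000_USERS = { api: 20, web: 2, git: 2 }.freeze

# Scale the per-1,000-user rates to a given user count.
def target_rps(users)
  RPS_PER_1000_USERS.transform_values { |rps| rps * users / 1000 }
end

target_rps(5000)   # => {:api=>100, :web=>10, :git=>10}
```
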

Note: Depending on your workflow, the recommended reference architectures below may need to be adapted accordingly. Your workload is influenced by factors such as (but not limited to) how active your users are, how much automation you use, mirroring, and repository/change sizes. Additionally, the memory values shown are those of the GCP machine types; on other cloud vendors, a best-effort like-for-like match can be used.

2,000 user configuration

| Service | Nodes | Configuration [8] | GCP type |
|---------|-------|-------------------|----------|
| GitLab Rails [7] | 3 | 8 vCPU, 7.2GB Memory | n1-highcpu-8 |
| PostgreSQL | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 |
| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| Gitaly [6][3][4] | X | 4 vCPU, 15GB Memory | n1-standard-4 |
| Redis [5] | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 |
| Consul + Sentinel [5] | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| Sidekiq | 4 | 2 vCPU, 7.5GB Memory | n1-standard-2 |
| Cloud Object Storage [2] | - | - | - |
| NFS Server [3][4] | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 |
| Monitoring node | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| External load balancing node [1] | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| Internal load balancing node [1] | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |

5,000 user configuration

| Service | Nodes | Configuration [8] | GCP type |
|---------|-------|-------------------|----------|
| GitLab Rails [7] | 3 | 16 vCPU, 14.4GB Memory | n1-highcpu-16 |
| PostgreSQL | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 |
| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| Gitaly [6][3][4] | X | 8 vCPU, 30GB Memory | n1-standard-8 |
| Redis [5] | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 |
| Consul + Sentinel [5] | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| Sidekiq | 4 | 2 vCPU, 7.5GB Memory | n1-standard-2 |
| Cloud Object Storage [2] | - | - | - |
| NFS Server [3][4] | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 |
| Monitoring node | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| External load balancing node [1] | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| Internal load balancing node [1] | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |

10,000 user configuration

| Service | Nodes | Configuration [8] | GCP type |
|---------|-------|-------------------|----------|
| GitLab Rails [7] | 3 | 32 vCPU, 28.8GB Memory | n1-highcpu-32 |
| PostgreSQL | 3 | 4 vCPU, 15GB Memory | n1-standard-4 |
| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| Gitaly [6][3][4] | X | 16 vCPU, 60GB Memory | n1-standard-16 |
| Redis [5] - Cache | 3 | 4 vCPU, 15GB Memory | n1-standard-4 |
| Redis [5] - Queues / Shared State | 3 | 4 vCPU, 15GB Memory | n1-standard-4 |
| Redis Sentinel [5] - Cache | 3 | 1 vCPU, 1.7GB Memory | g1-small |
| Redis Sentinel [5] - Queues / Shared State | 3 | 1 vCPU, 1.7GB Memory | g1-small |
| Consul | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| Sidekiq | 4 | 4 vCPU, 15GB Memory | n1-standard-4 |
| Cloud Object Storage [2] | - | - | - |
| NFS Server [3][4] | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 |
| Monitoring node | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 |
| External load balancing node [1] | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| Internal load balancing node [1] | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |

25,000 user configuration

| Service | Nodes | Configuration [8] | GCP type |
|---------|-------|-------------------|----------|
| GitLab Rails [7] | 5 | 32 vCPU, 28.8GB Memory | n1-highcpu-32 |
| PostgreSQL | 3 | 8 vCPU, 30GB Memory | n1-standard-8 |
| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| Gitaly [6][3][4] | X | 32 vCPU, 120GB Memory | n1-standard-32 |
| Redis [5] - Cache | 3 | 4 vCPU, 15GB Memory | n1-standard-4 |
| Redis [5] - Queues / Shared State | 3 | 4 vCPU, 15GB Memory | n1-standard-4 |
| Redis Sentinel [5] - Cache | 3 | 1 vCPU, 1.7GB Memory | g1-small |
| Redis Sentinel [5] - Queues / Shared State | 3 | 1 vCPU, 1.7GB Memory | g1-small |
| Consul | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| Sidekiq | 4 | 4 vCPU, 15GB Memory | n1-standard-4 |
| Cloud Object Storage [2] | - | - | - |
| NFS Server [3][4] | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 |
| Monitoring node | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 |
| External load balancing node [1] | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| Internal load balancing node [1] | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 |

50,000 user configuration

| Service | Nodes | Configuration [8] | GCP type |
|---------|-------|-------------------|----------|
| GitLab Rails [7] | 12 | 32 vCPU, 28.8GB Memory | n1-highcpu-32 |
| PostgreSQL | 3 | 16 vCPU, 60GB Memory | n1-standard-16 |
| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| Gitaly [6][3][4] | X | 64 vCPU, 240GB Memory | n1-standard-64 |
| Redis [5] - Cache | 3 | 4 vCPU, 15GB Memory | n1-standard-4 |
| Redis [5] - Queues / Shared State | 3 | 4 vCPU, 15GB Memory | n1-standard-4 |
| Redis Sentinel [5] - Cache | 3 | 1 vCPU, 1.7GB Memory | g1-small |
| Redis Sentinel [5] - Queues / Shared State | 3 | 1 vCPU, 1.7GB Memory | g1-small |
| Consul | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| Sidekiq | 4 | 4 vCPU, 15GB Memory | n1-standard-4 |
| NFS Server [3][4] | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 |
| Cloud Object Storage [2] | - | - | - |
| Monitoring node | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 |
| External load balancing node [1] | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| Internal load balancing node [1] | 1 | 8 vCPU, 7.2GB Memory | n1-highcpu-8 |
  1. Our architectures have been tested and validated with HAProxy as the load balancer. However, other reputable load balancers with similar feature sets should also work; be aware that these aren’t validated.

  2. For data objects such as LFS, Uploads, Artifacts, and so on, we recommend a Cloud Object Storage service over NFS where possible, due to better performance and availability.

  3. NFS can be used as an alternative for both repository data (replacing Gitaly) and object storage, but this isn’t typically recommended for performance reasons. Note, however, that it is required for GitLab Pages.

  4. We strongly recommend that any Gitaly and/or NFS nodes are set up with SSD disks rather than HDD, with a throughput of at least 8,000 IOPS for read operations and 2,000 IOPS for write operations, as these components have heavy I/O. These IOPS values are recommended only as a starting point; over time they may be adjusted higher or lower depending on the scale of your environment’s workload. If you’re running the environment on a cloud provider, you may need to refer to their documentation on how to configure IOPS correctly.

  5. The recommended Redis setup differs depending on the size of the architecture. For smaller architectures (up to 5,000 users), we suggest one Redis cluster for all classes, with Redis Sentinel hosted alongside Consul. For larger architectures (10,000 users or more), we suggest running a separate Redis cluster for the Cache class and another for the Queues and Shared State classes. We also recommend running a separate Redis Sentinel cluster for each Redis cluster.

  6. Gitaly node requirements depend on customer data, specifically the number of projects and their sizes. We recommend two nodes as an absolute minimum for HA environments, and at least four nodes when supporting 50,000 or more users. We also recommend that each Gitaly node store no more than 5TB of data and have the number of gitaly-ruby workers set to 20% of available CPUs. Additional nodes should be considered in conjunction with a review of expected data size and spread, based on the recommendations above.

  7. In our architectures, we run each GitLab Rails node using the Puma web server, with its number of workers set to 90% of available CPUs and 4 threads.

  8. The architectures were built and tested with the Intel Xeon E5 v3 (Haswell) CPU platform on GCP. On different hardware, you may find that adjustments, either lower or higher, are required for your CPU or node counts. For more information, a Sysbench benchmark of the CPU can be found here.