Reference architecture: up to 1,000 users

Tier: Free, Premium, Ultimate Offering: Self-managed

This page describes the GitLab reference architecture designed for a load of up to 1,000 users, with notable headroom (standalone, non-HA).

For a full list of reference architectures, see Available reference architectures.

| Users       | Configuration          | GCP            | AWS          | Azure    |
|-------------|------------------------|----------------|--------------|----------|
| Up to 1,000 | 8 vCPU, 7.2 GB memory  | `n1-highcpu-8` | `c5.2xlarge` | `F8s v2` |

The diagram above shows that while GitLab can be installed on a single server, it is internally composed of multiple services. As a GitLab instance is scaled, each of these services is broken out and independently scaled according to the demands placed on it. In some cases, PaaS solutions can be leveraged for certain services (for example, Cloud Object Storage for some file systems). For the sake of redundancy, some of the services become clusters of nodes storing the same data. In a horizontal configuration of GitLab, various ancillary services are required to coordinate clusters or discover resources (for example, PgBouncer for PostgreSQL connection management, Consul for Prometheus endpoint discovery).
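On a standalone node like the one this architecture describes, the Linux package bundles and supervises these services with `gitlab-ctl`. As a quick illustration on an existing Linux package installation (output varies with your configuration):

```shell
# List the bundled services (Puma, Sidekiq, Gitaly, PostgreSQL, Redis,
# NGINX, Prometheus exporters, and so on) running on this single node.
sudo gitlab-ctl status

# Show which services the Linux package manages on this node.
sudo gitlab-ctl service-list
```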

Requirements

Before starting, see the requirements for reference architectures.

caution
The node's specifications are based on high percentiles of both usage patterns and repository sizes in good health. However, if you have large monorepos (several gigabytes or larger) or additional workloads, these can significantly impact the performance of the environment, and further adjustments may be required. If this applies to you, we strongly recommend referring to the linked documentation as well as reaching out to your Customer Success Manager or our Support team for further guidance.

Testing methodology

The 1k architecture is designed to cover a large majority of workflows and is regularly smoke and performance tested by the Quality Engineering team against the following endpoint throughput targets:

  • API: 20 RPS
  • Web: 2 RPS
  • Git (Pull): 2 RPS
  • Git (Push): 1 RPS

The above targets were selected based on real customer data of total environmental loads corresponding to the user count, including CI and other workloads, along with substantial additional headroom.

If your metrics suggest regularly higher throughput than the above endpoint targets, or if you have large monorepos or notable additional workloads, these can significantly impact the performance of the environment, and further adjustments may be required. If this applies to you, we strongly recommend referring to the linked documentation as well as reaching out to your Customer Success Manager or our Support team for further guidance.

Testing is done regularly via our GitLab Performance Tool (GPT) and its dataset, which is available for anyone to use. The results of this testing are available publicly on the GPT wiki. For more information on our testing strategy, refer to this section of the documentation.

Setup instructions

To install GitLab for this default reference architecture, use the standard installation instructions.
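For orientation, a minimal sketch of a Linux package installation on a Debian/Ubuntu host is shown below. The external URL is a placeholder; follow the linked installation instructions for your operating system and the current repository setup steps.

```shell
# Add the GitLab package repository (Debian/Ubuntu example).
curl "https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.deb.sh" | sudo bash

# Install GitLab EE, setting the external URL for this instance
# (gitlab.example.com is a placeholder).
sudo EXTERNAL_URL="https://gitlab.example.com" apt-get install gitlab-ee
```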

You can also optionally configure GitLab to use an external PostgreSQL service or an external object storage service for added performance and reliability, at the cost of increased complexity.
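As a rough sketch of what the external PostgreSQL option involves on a Linux package installation, the example below points the instance at an externally managed database and then reconfigures. The host, username, and password values are placeholders; the linked external PostgreSQL documentation remains the authoritative reference.

```shell
# Sketch only: disable the bundled PostgreSQL and point GitLab at an
# external database by editing /etc/gitlab/gitlab.rb, for example:
#
#   postgresql['enable'] = false
#   gitlab_rails['db_adapter'] = "postgresql"
#   gitlab_rails['db_host'] = "postgres.example.com"   # placeholder
#   gitlab_rails['db_username'] = "gitlab"             # placeholder
#   gitlab_rails['db_password'] = "CHANGE_ME"          # placeholder
sudo editor /etc/gitlab/gitlab.rb

# Apply the configuration change.
sudo gitlab-ctl reconfigure
```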

Tier: Premium, Ultimate Offering: Self-managed

You can leverage Elasticsearch and enable advanced search for faster, more advanced code search across your entire GitLab instance.

Elasticsearch cluster design and requirements are dependent on your specific data. For recommended best practices about how to set up your Elasticsearch cluster alongside your instance, read how to choose the optimal cluster configuration.
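Once a suitable cluster is available, advanced search is enabled and pointed at that cluster in the Admin Area settings, and existing data is indexed with the bundled Rake task. The command below is a minimal sketch for a Linux package installation; see the advanced search documentation for the full procedure.

```shell
# Sketch only: after connecting GitLab to your Elasticsearch cluster in the
# Admin Area settings, index the existing data.
sudo gitlab-rake gitlab:elastic:index
```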

Cloud Native Hybrid reference architecture with Helm Charts

Cloud Native Hybrid Reference Architecture is an alternative approach where select stateless components are deployed in Kubernetes via our official Helm Charts, and stateful components are deployed in compute VMs with the Linux package.

The 2k GitLab Cloud Native Hybrid (non-HA) and 3k GitLab Cloud Native Hybrid (HA) reference architectures are the smallest we recommend in Kubernetes. For environments that serve fewer users, you can lower the suggested node specs as desired, depending on your user count. However, it's recommended that you don't go lower than the general requirements.
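For orientation, a minimal sketch of installing the GitLab Helm chart is shown below. The domain, email, and release name are placeholders, and the chart documentation covers the values needed to match a specific Cloud Native Hybrid reference architecture.

```shell
# Add the official GitLab Helm repository and install the chart
# (example.com, the email, and the release name "gitlab" are placeholders).
helm repo add gitlab https://charts.gitlab.io/
helm repo update
helm upgrade --install gitlab gitlab/gitlab \
  --namespace gitlab --create-namespace \
  --set global.hosts.domain=example.com \
  --set certmanager-issuer.email=admin@example.com
```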