A Geo data type is a specific class of data that is required by one or more GitLab features to store relevant information.
To replicate the data these features produce, Geo uses several strategies to access, transfer, and verify it.
We currently distinguish between three different data types.
The table below lists each feature or component we replicate, its corresponding data type, and its replication and verification methods:
| Type     | Feature / component                           | Replication method                  | Verification method    |
|----------|-----------------------------------------------|-------------------------------------|------------------------|
| Database | Application data in PostgreSQL                | Native                              | Native                 |
| Database | Personal snippets                             | PostgreSQL Replication              | PostgreSQL Replication |
| Database | Project snippets                              | PostgreSQL Replication              | PostgreSQL Replication |
| Database | SSH public keys                               | PostgreSQL Replication              | PostgreSQL Replication |
| Git      | Project repository                            | Geo with Gitaly                     | Gitaly Checksum        |
| Git      | Project wiki repository                       | Geo with Gitaly                     | Gitaly Checksum        |
| Git      | Project designs repository                    | Geo with Gitaly                     | Gitaly Checksum        |
| Git      | Object pools for forked project deduplication | Geo with Gitaly                     | Not implemented        |
| Blobs    | User uploads (filesystem)                     | Geo with API                        | Not implemented        |
| Blobs    | User uploads (object storage)                 | Geo with API/Managed (2)            | Not implemented        |
| Blobs    | LFS objects (filesystem)                      | Geo with API                        | Not implemented        |
| Blobs    | LFS objects (object storage)                  | Geo with API/Managed (2)            | Not implemented        |
| Blobs    | CI job artifacts (filesystem)                 | Geo with API                        | Not implemented        |
| Blobs    | CI job artifacts (object storage)             | Geo with API/Managed (2)            | Not implemented        |
| Blobs    | Archived CI build traces (filesystem)         | Geo with API                        | Not implemented        |
| Blobs    | Archived CI build traces (object storage)     | Geo with API/Managed (2)            | Not implemented        |
| Blobs    | Container registry (filesystem)               | Geo with API/Docker API             | Not implemented        |
| Blobs    | Container registry (object storage)           | Geo with API/Managed/Docker API (2) | Not implemented        |
- (1): Redis replication can be used as part of HA with Redis sentinel. It’s not used between Geo nodes.
- (2): Object storage replication can be performed by Geo or by your object storage provider/appliance native replication feature.
A GitLab instance can have one or more repository shards. Each shard has a Gitaly instance responsible for access to and operations on the locally stored Git repositories. A shard can run on a machine with a single disk, with multiple disks mounted as a single mount point (for example, with a RAID array), or using LVM.
No special filesystem is required, and NFS or a mounted storage appliance also work (though there may be performance limitations when using a remote filesystem).
Communication happens over Gitaly's own gRPC API. There are three ways to synchronize repositories:
- A regular Git clone/fetch from one Geo node to another (with special authentication).
- Repository snapshots (used when the first method fails or the repository is corrupt).
- A manual trigger from the Admin UI (a combination of both of the above).
Each project can have up to three different repositories:
- A project repository, where the source code is stored.
- A wiki repository, where the wiki content is stored.
- A design repository, where design artifacts are indexed (assets are actually in LFS).
They all live in the same shard and share the same base name, with a `.wiki.git` or `.design.git` suffix in the wiki and design repository cases.
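To illustrate how the three repositories share one base name, here is a minimal Python sketch. It assumes GitLab's hashed-storage layout (a SHA-256 of the project ID under `@hashed/`) and the `.wiki.git` / `.design.git` suffixes; the helper name is hypothetical:

```python
import hashlib

def repository_paths(project_id: int) -> dict:
    """Hypothetical helper: derive the three repository paths for a project.

    Assumes the hashed-storage convention of storing repositories under
    @hashed/<h[0:2]>/<h[2:4]>/<h> where h = sha256(project_id).
    """
    digest = hashlib.sha256(str(project_id).encode()).hexdigest()
    base = f"@hashed/{digest[0:2]}/{digest[2:4]}/{digest}"
    return {
        "project": f"{base}.git",        # source code
        "wiki": f"{base}.wiki.git",      # wiki content
        "design": f"{base}.design.git",  # design artifact index
    }

paths = repository_paths(42)
```

All three paths differ only in their suffix, which is what lets them live side by side in the same shard.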
GitLab stores files and blobs such as Issue attachments or LFS objects into either:
- The filesystem in a specific location.
- An Object Storage solution. Object Storage solutions can be:
  - Cloud based, like Amazon S3 or Google Cloud Storage.
  - Hosted by you (like MinIO).
  - A storage appliance that exposes an Object Storage-compatible API.
When using the filesystem store instead of Object Storage, you must use network-mounted filesystems to run GitLab on more than one server (for example, in a High Availability setup).
With respect to replication and verification:
- We transfer files and blobs using an internal API request.
- With Object Storage, you can either:
  - Use your cloud provider's replication functionality.
  - Have GitLab replicate it for you.
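Whichever side performs the replication, confirming that a blob arrived intact reduces to comparing a digest computed on each node. A minimal sketch of that idea (the helper names are hypothetical, and SHA-256 is assumed here purely for illustration):

```python
import hashlib

def file_checksum(data: bytes) -> str:
    # Hypothetical helper: digest the blob's bytes so two nodes can
    # compare a short fingerprint instead of the full content.
    return hashlib.sha256(data).hexdigest()

def in_sync(primary_blob: bytes, secondary_blob: bytes) -> bool:
    # A blob is considered replicated correctly when both nodes
    # compute the same checksum for it.
    return file_checksum(primary_blob) == file_checksum(secondary_blob)
```

In practice the two digests would be computed on different nodes and exchanged, rather than compared in one process as shown here.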
GitLab relies on data stored in multiple databases for different use cases. PostgreSQL is the single source of truth for user-generated content in the Web interface, such as issue content and comments, as well as permissions and credentials.
PostgreSQL can also hold some cached data, such as HTML-rendered Markdown and cached merge request diffs (the latter can also be configured to be offloaded to object storage).
We use PostgreSQL’s own replication functionality to replicate data from the primary to secondary nodes.
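PostgreSQL reports replication positions as `pg_lsn` values, written as two hexadecimal halves of a 64-bit offset (for example `16/B374D848`). As a sketch of what monitoring that replication involves, the lag in bytes between the primary's current LSN and a secondary's replay LSN can be computed like this (the sample values are made up):

```python
def lsn_to_bytes(lsn: str) -> int:
    # A pg_lsn is a 64-bit byte position printed as "<hi>/<lo>",
    # where both halves are hexadecimal and lo covers the low 32 bits.
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) + int(lo, 16)

def replication_lag_bytes(primary_lsn: str, replica_replay_lsn: str) -> int:
    # How far (in bytes of WAL) the secondary's replay position
    # trails the primary's current write position.
    return lsn_to_bytes(primary_lsn) - lsn_to_bytes(replica_replay_lsn)

lag = replication_lag_bytes("16/B374D848", "16/B374D000")  # 0x848 bytes behind
```

On a real primary these positions come from `pg_stat_replication`; the arithmetic above is the same regardless of where the values are read.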
We use Redis both as a cache store and to hold persistent data for our background jobs system. Because both use cases hold data that is exclusive to a single Geo node, we don't replicate it between nodes.
Elasticsearch is an optional database that enables advanced search capabilities, such as improved Global Search across both source code and user-generated content in issues, merge requests, and discussions. It is currently not supported in Geo.
The following table lists the GitLab features along with their replication and verification status on a secondary node.
You can keep track of the progress to implement the missing items in these epics/issues:
| Feature                                       | Replicated | Verified | Notes |
|-----------------------------------------------|------------|----------|-------|
| Application data in PostgreSQL                | Yes | Yes | |
| Project wiki repository                       | Yes | Yes | |
| Project designs repository                    | Yes | No  | |
| Uploads                                       | Yes | No  | Verified only on transfer, or manually (1) |
| LFS objects                                   | Yes | No  | Verified only on transfer, or manually (1). Unavailable for new LFS objects in 11.11.x and 12.0.x (2). |
| CI job artifacts (other than traces)          | Yes | No  | Verified only manually (1) |
| Archived traces                               | Yes | No  | Verified only on transfer, or manually (1) |
| Object pools for forked project deduplication | Yes | No  | |
| Server-side Git Hooks                         | No  | No  | |
| External merge request diffs                  | No  | No  | |
| Content in object storage                     | Yes | No  | |
- (1): The integrity can be verified manually by running the Integrity Check Rake task on both nodes and comparing the output between them.
- (2): GitLab versions 11.11.x and 12.0.x are affected by a bug that prevents any new LFS objects from replicating.
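As note (1) above suggests, manual verification amounts to diffing the check output from both nodes. A minimal sketch of that comparison, assuming each node's output has been reduced to a `{path: checksum}` map (that format is an assumption for illustration, not the Rake task's actual output):

```python
def diverged(primary: dict, secondary: dict) -> dict:
    """Report files whose state on the secondary does not match the primary.

    Both arguments are assumed to be {path: checksum} maps collected
    on each node; the return value maps each out-of-sync path to a reason.
    """
    report = {}
    for path, checksum in primary.items():
        if path not in secondary:
            report[path] = "missing on secondary"
        elif secondary[path] != checksum:
            report[path] = "checksum mismatch"
    return report
```

An empty report means every file present on the primary was found on the secondary with a matching checksum.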