Container registry for a secondary site

Tier: Premium, Ultimate | Offering: Self-managed

You can set up a container registry on your secondary Geo site that mirrors the one on the primary Geo site.

note
The container registry replication is used only for disaster recovery purposes. We do not recommend pulling container registry data from the secondary site. For a feature proposal to implement this in the future, see Geo: Accelerate container images by serving read request from secondary site. You or your GitLab representative are encouraged to upvote this feature to register your interest.

Supported container registries

Geo supports the following types of container registries:

  • Docker
  • OCI

Supported image formats

The following container image formats are supported by Geo:

  • Docker V2, schema 1
  • Docker V2, schema 2
  • OCI

Geo also supports BuildKit cache images.

Supported storage

Docker

For more information on supported registry storage drivers, see Docker registry storage drivers.

Read the Load balancing considerations when deploying the Registry, and how to set up the storage driver for the GitLab integrated container registry.
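As an illustration, an S3 storage driver for the GitLab-integrated registry is configured in /etc/gitlab/gitlab.rb roughly as follows. This is a sketch: the bucket name and credentials are placeholders, and the full option list is in the registry storage documentation linked above.

```ruby
# /etc/gitlab/gitlab.rb — illustrative S3 storage driver configuration
# for the GitLab-integrated container registry. Replace the placeholder
# credentials, bucket, and region with your own values.
registry['storage'] = {
  's3' => {
    'accesskey' => '<AWS_ACCESS_KEY>',
    'secretkey' => '<AWS_SECRET_KEY>',
    'bucket'    => '<registry-bucket-name>',
    'region'    => 'us-east-1'
  }
}
```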

Registries that support OCI artifacts

The following registries support OCI artifacts:

  • CNCF Distribution - local/offline verification
  • Azure Container Registry (ACR)
  • Amazon Elastic Container Registry (ECR)
  • Google Artifact Registry (GAR)
  • GitHub Packages container registry (GHCR)
  • Bundle Bar

For more information, see the OCI Distribution Specification.

Configure container registry replication

Container registry replication is storage-agnostic, so you can use it with cloud or local storage. Whenever a new image is pushed to the primary site, each secondary site pulls it into its own container repository.

To configure container registry replication:

  1. Configure the primary site.
  2. Configure the secondary site.
  3. Verify container registry replication.

Configure primary site

Make sure the container registry is set up and working on the primary site before following the next steps.

To be able to replicate new container images, the container registry must send notification events to the primary site for every push. The token shared between the container registry and the web nodes on the primary is used to make communication more secure.

  1. SSH into your GitLab primary server and sign in as root (for GitLab HA, you only need a Registry node):

    sudo -i
    
  2. Edit /etc/gitlab/gitlab.rb:

    registry['notifications'] = [
      {
        'name' => 'geo_event',
        'url' => 'https://<example.com>/api/v4/container_registry_event/events',
        'timeout' => '500ms',
        'threshold' => 5,
        'backoff' => '1s',
        'headers' => {
          'Authorization' => ['<replace_with_a_secret_token>']
        }
      }
    ]
    
    note
    Replace <example.com> with the external_url defined in your primary site’s /etc/gitlab/gitlab.rb file, and replace <replace_with_a_secret_token> with a case-sensitive alphanumeric string that starts with a letter. You can generate one with: < /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c 32 | sed "s/^[0-9]*//"; echo
    note
    If you use an external Registry (not the one integrated with GitLab), you only need to specify the notification secret (registry['notification_secret']) in the /etc/gitlab/gitlab.rb file.
  3. For GitLab HA only. Edit /etc/gitlab/gitlab.rb on every web node:

    registry['notification_secret'] = '<replace_with_a_secret_token_generated_above>'
    
  4. Reconfigure each node you just updated:

    gitlab-ctl reconfigure
    
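If you prefer Ruby over the shell one-liner in the note above, a token with the same constraints (alphanumeric, starting with a letter) can be generated with SecureRandom from the standard library. The helper name is ours, not part of GitLab:

```ruby
require "securerandom"

# Generate a 32-character alphanumeric secret token that starts with a
# letter, as required for the registry notification 'Authorization' header.
def generate_registry_secret(length = 32)
  token = SecureRandom.alphanumeric(length)
  # SecureRandom.alphanumeric can start with a digit; regenerate until
  # the first character is a letter.
  token = SecureRandom.alphanumeric(length) until token.match?(/\A[A-Za-z]/)
  token
end

puts generate_registry_secret
```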

Configure secondary site

Make sure the container registry is set up and working on the secondary site before following the next steps.

The following steps should be done on each secondary site where you expect container images to be replicated.

Because the secondary site must communicate securely with the primary site’s container registry, a single key pair is shared across all sites. The secondary site uses this key to generate a short-lived, pull-only JWT to access the primary site’s container registry.

For each application and Sidekiq node on the secondary site:

  1. SSH into the node and sign in as the root user:

    sudo -i
    
  2. Copy /var/opt/gitlab/gitlab-rails/etc/gitlab-registry.key from the primary to the node.

  3. Edit /etc/gitlab/gitlab.rb and add:

    gitlab_rails['geo_registry_replication_enabled'] = true
    
    # The primary registry's hostname and port. The secondary site uses
    # this URL to communicate directly with the primary registry.
    gitlab_rails['geo_registry_replication_primary_api_url'] = 'https://primary.example.com:5050/'
    
  4. Reconfigure the node for the change to take effect:

    gitlab-ctl reconfigure
    
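Conceptually, the key pair works like this: the secondary site signs a short-lived, pull-only JWT with the shared registry key, and the registry verifies it with the matching public key. The sketch below is illustrative only; the claim names and values are simplified, and GitLab builds the real token internally.

```ruby
require "openssl"
require "json"
require "base64"

# Sketch of minting a short-lived, pull-only registry JWT. Claim names
# and values are simplified stand-ins, not GitLab's actual token code.
def registry_pull_token(private_key, issuer:, repository:)
  header  = { "typ" => "JWT", "alg" => "RS256" }
  now     = Time.now.to_i
  payload = {
    "iss" => issuer,           # must match the issuer the registry trusts
    "aud" => "container_registry",
    "iat" => now,
    "exp" => now + 60,         # short-lived: expires in 60 seconds
    "access" => [
      { "type" => "repository", "name" => repository, "actions" => ["pull"] }
    ]
  }

  encode        = ->(h) { Base64.urlsafe_encode64(JSON.generate(h), padding: false) }
  signing_input = "#{encode.call(header)}.#{encode.call(payload)}"
  signature     = private_key.sign(OpenSSL::Digest::SHA256.new, signing_input)
  "#{signing_input}.#{Base64.urlsafe_encode64(signature, padding: false)}"
end

# Stand-in for /var/opt/gitlab/gitlab-rails/etc/gitlab-registry.key:
key   = OpenSSL::PKey::RSA.new(2048)
token = registry_pull_token(key, issuer: "gitlab-issuer", repository: "group/project")
puts token
```

If the issuer in the token does not match the issuer the registry trusts, the registry rejects the token — which is the failure mode described in the troubleshooting section below.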

Verify replication

To verify container registry replication is working, on the secondary site:

  1. On the left sidebar, at the bottom, select Admin.
  2. Select Geo > Sites. The initial replication, or “backfill”, is probably still in progress.

You can monitor the synchronization process for each Geo site from the primary site’s Geo Sites dashboard in your browser.

Troubleshooting

Confirm that container registry replication is enabled

You can check this in the Rails console:

Geo::ContainerRepositoryRegistry.replication_enabled?

Missing container registry notification event

  1. When an image is pushed to the primary site’s container registry, it triggers a container registry notification.
  2. The primary site’s container registry calls the primary site’s API at https://<example.com>/api/v4/container_registry_event/events.
  3. The primary site inserts a record into the geo_events table with replicable_name: 'container_repository' and model_record_id: <ID of the container repository>.
  4. PostgreSQL replicates the record to the secondary site’s database.
  5. The Geo Log Cursor service processes the new event and enqueues a Geo::EventWorker Sidekiq job.

To verify this is working correctly, push an image to the registry on the primary site, and run the following command on the Rails console to verify that the notification was received, and processed into an event:

Geo::Event.where(replicable_name: 'container_repository')

You can further verify this by checking geo.log for entries from Geo::ContainerRepositorySyncService.
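To look at just the newest event instead of the whole relation, a possible console query (standard ActiveRecord ordering, assuming the same Geo::Event model as above):

```ruby
# Inspect the most recent container repository event on the primary site:
Geo::Event.where(replicable_name: 'container_repository').order(:id).last
```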

Registry events log response status 401 Unauthorized

401 Unauthorized errors indicate that the primary site’s container registry notification is not accepted by the Rails application, preventing it from notifying GitLab that something was pushed.

To fix this, make sure that the Authorization header sent with the registry notification matches the secret token configured on the primary site, as described in Configure primary site.

Registry error: token from untrusted issuer: "<token>"

To replicate a container image, Sidekiq uses a JWT to authenticate to the container registry. Geo replication assumes that the container registry has already been configured correctly.

Make sure that both sites share a single signing key pair, as described in Configure secondary site, and that the container registries and the primary and secondary sites are all configured to use the same token issuer.

On multinode deployments, make sure that the issuer configured on the Sidekiq node matches the value configured on the registries.

Manually trigger a container registry sync event

To help with troubleshooting, you can manually trigger the container registry replication process:

  1. On the left sidebar, at the bottom, select Admin.
  2. Select Geo > Sites.
  3. In Replication Details for a Secondary Site, select Container Repositories.
  4. Select Resync for one row, or Resync all.

You can also manually trigger a resync by running the following commands on the secondary’s Rails console:

registry = Geo::ContainerRepositoryRegistry.first # Choose a Geo registry entry
registry.replicator.sync # Resync the container repository
pp registry.reload # Look at replication state fields

#<Geo::ContainerRepositoryRegistry:0x00007f54c2a36060
 id: 1,
 container_repository_id: 1,
 state: "2",
 retry_count: 0,
 last_sync_failure: nil,
 retry_at: nil,
 last_synced_at: Thu, 28 Sep 2023 19:38:05.823680000 UTC +00:00,
 created_at: Mon, 11 Sep 2023 15:38:06.262490000 UTC +00:00>

The state field represents sync state:

  • "0": pending sync (usually means it was never synced)
  • "1": started sync (a sync job is currently running)
  • "2": successfully synced
  • "3": failed to sync
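For scripting against these values, a small illustrative mapping can be handy. The string codes come from the list above; the constant and helper names here are ours, not part of GitLab:

```ruby
# Map of Geo::ContainerRepositoryRegistry `state` values (stored as
# strings) to human-readable sync states, per the list above.
SYNC_STATES = {
  "0" => :pending,
  "1" => :started,
  "2" => :synced,
  "3" => :failed
}.freeze

# Returns the symbolic name for a state value, or :unknown.
def sync_state_name(state)
  SYNC_STATES.fetch(state.to_s, :unknown)
end

puts sync_state_name("2") # prints "synced"
```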