Back up and restore large reference architectures

Tier: Free, Premium, Ultimate
Offering: Self-managed

This document describes how to:

  • Configure daily backups of PostgreSQL data, object storage data, Git repositories, and configuration files.
  • Restore that data to a destination GitLab instance.

This document is intended for environments using:

  • The Linux package (Omnibus) or cloud-native hybrid reference architectures.
  • Cloud-managed services for PostgreSQL and object storage, such as Amazon RDS and S3, or Google Cloud SQL and Cloud Storage.

Configure daily backups

Configure backup of PostgreSQL data

The backup command uses pg_dump, which is not appropriate for databases over 100 GB. You must choose a PostgreSQL solution that has native, robust backup capabilities.

AWS
  1. Configure AWS Backup to back up RDS (and S3) data. For maximum protection, configure continuous backups as well as snapshot backups.
  2. Configure AWS Backup to copy backups to a separate region. When AWS takes a backup, the backup can only be restored in the region where the backup is stored.
  3. After AWS Backup has run at least one scheduled backup, you can create an on-demand backup as needed.
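
For example, an on-demand backup of the RDS instance could be started with the AWS CLI as sketched below; the vault name, resource ARN, account ID, and IAM role are hypothetical placeholders:

    # Start an on-demand backup of the RDS instance into an existing backup vault.
    aws backup start-backup-job \
      --backup-vault-name gitlab-backup-vault \
      --resource-arn arn:aws:rds:us-east-1:123456789012:db:gitlab-postgresql \
      --iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole
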
Google

Schedule automated daily backups of Google Cloud SQL data. Daily backups can be retained for up to one year, and transaction logs for point-in-time recovery are retained for seven days by default.
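
A minimal sketch of these settings with the gcloud CLI, assuming a Cloud SQL instance named gitlab-db (a hypothetical name); verify the flags against your gcloud version:

    # Enable automated daily backups at 02:00 UTC, retain up to 365 daily backups,
    # and keep 7 days of transaction logs for point-in-time recovery.
    gcloud sql instances patch gitlab-db \
      --backup-start-time=02:00 \
      --retained-backups-count=365 \
      --enable-point-in-time-recovery \
      --retained-transaction-log-days=7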

Configure backup of object storage data

Object storage (not NFS) is recommended for storing GitLab data, including blobs and the container registry.

AWS

Configure AWS Backup to back up S3 data. This can be done at the same time as configuring the backup of PostgreSQL data.
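
For example, the GitLab S3 buckets can be assigned to an existing backup plan with the AWS CLI as sketched below; the plan ID, IAM role ARN, and bucket names are hypothetical placeholders:

    # Add the GitLab object storage buckets to an existing AWS Backup plan.
    aws backup create-backup-selection \
      --backup-plan-id 1a2b3c4d-5678-90ab-cdef-example11111 \
      --backup-selection '{
        "SelectionName": "gitlab-object-storage",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": [
          "arn:aws:s3:::gitlab-bucket-artifacts",
          "arn:aws:s3:::gitlab-bucket-lfs",
          "arn:aws:s3:::gitlab-bucket-uploads"
        ]
      }'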

Google
  1. Create a backup bucket in GCS.
  2. Create Storage Transfer Service jobs that copy each GitLab object storage bucket to a backup bucket (see the scheduled-job sketch after this list). You can create these jobs once and schedule them to run daily. However, this approach mixes new and old object storage data, so files that were deleted in GitLab still exist in the backup. This wastes storage after a restore, but it is otherwise not a problem: these files are inaccessible to GitLab users because they do not exist in the GitLab database. You can delete some of these orphaned files after restore, but this cleanup Rake task only operates on a subset of files.
    1. For When to overwrite, choose Never. GitLab object-stored files are intended to be immutable, so this selection could be helpful if a malicious actor succeeded in mutating GitLab files.
    2. For When to delete, choose Never. If you sync the backup bucket to the source, you cannot recover files that are accidentally or maliciously deleted from the source.
  3. Alternatively, you can back up object storage into buckets or subdirectories segregated by day. This avoids the problem of orphaned files after restore, and supports backup of file versions if needed, but it greatly increases backup storage costs. This can be done with a Cloud Function triggered by Cloud Scheduler, or with a script run by a cron job. A partial example:

    # Set GCP project so you don't have to specify it in every command
    gcloud config set project example-gcp-project-name
    
    # Grant the Storage Transfer Service's hidden service account permission to write to the backup bucket. The integer 123456789012 is the GCP project's ID.
    gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.objectAdmin gs://backup-bucket
    
    # Grant the Storage Transfer Service's hidden service account permission to list and read objects in the source buckets. The integer 123456789012 is the GCP project's ID.
    gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-artifacts
    gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-ci-secure-files
    gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-dependency-proxy
    gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-lfs
    gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-mr-diffs
    gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-packages
    gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-pages
    gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-registry
    gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-terraform-state
    gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-uploads
    
    # Create transfer jobs for each bucket, targeting a subdirectory in the backup bucket.
    today=$(date +%F)
    gcloud transfer jobs create gs://gitlab-bucket-artifacts/ gs://backup-bucket/$today/artifacts/ --name "$today-backup-artifacts"
    gcloud transfer jobs create gs://gitlab-bucket-ci-secure-files/ gs://backup-bucket/$today/ci-secure-files/ --name "$today-backup-ci-secure-files"
    gcloud transfer jobs create gs://gitlab-bucket-dependency-proxy/ gs://backup-bucket/$today/dependency-proxy/ --name "$today-backup-dependency-proxy"
    gcloud transfer jobs create gs://gitlab-bucket-lfs/ gs://backup-bucket/$today/lfs/ --name "$today-backup-lfs"
    gcloud transfer jobs create gs://gitlab-bucket-mr-diffs/ gs://backup-bucket/$today/mr-diffs/ --name "$today-backup-mr-diffs"
    gcloud transfer jobs create gs://gitlab-bucket-packages/ gs://backup-bucket/$today/packages/ --name "$today-backup-packages"
    gcloud transfer jobs create gs://gitlab-bucket-pages/ gs://backup-bucket/$today/pages/ --name "$today-backup-pages"
    gcloud transfer jobs create gs://gitlab-bucket-registry/ gs://backup-bucket/$today/registry/ --name "$today-backup-registry"
    gcloud transfer jobs create gs://gitlab-bucket-terraform-state/ gs://backup-bucket/$today/terraform-state/ --name "$today-backup-terraform-state"
    gcloud transfer jobs create gs://gitlab-bucket-uploads/ gs://backup-bucket/$today/uploads/ --name "$today-backup-uploads"
    
    • These transfer jobs are not automatically deleted after running. You could implement cleanup of old jobs in the script.
    • The example script does not delete old backups. You could implement cleanup of old backups according to your desired retention policy.
  4. Ensure that object storage backups are performed at the same time as, or later than, the Cloud SQL backups, to reduce data inconsistencies.
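
A minimal sketch of the scheduled daily jobs described in step 2, shown for a single bucket and assuming the same hypothetical bucket names as the script above; check the flags against your gcloud version:

    # Copy the artifacts bucket to the backup bucket once a day, never overwriting
    # objects that already exist in the backup. Deletions are not propagated by default.
    gcloud transfer jobs create gs://gitlab-bucket-artifacts/ gs://backup-bucket/artifacts/ \
      --name=daily-backup-artifacts \
      --schedule-repeats-every=1d \
      --overwrite-when=never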

Configure backup of Git repositories

If you use a cloud-native hybrid reference architecture, provision a VM running the GitLab Linux package:

  1. Spin up a VM with 8 vCPU and 7.2 GB memory; an example creation command appears after this list. This node is used to back up Git repositories. Adding support for Gitaly server-side backups to backup-utility is proposed in issue 438393, which would remove the need to provision a VM.
  2. Configure the node as another GitLab Rails node as defined in your reference architecture. As with other GitLab Rails nodes, this node must have access to your main PostgreSQL database, Redis, object storage, and Gitaly Cluster. Find your reference architecture and see the Configure GitLab Rails section for an example of how to set the server up. You might need to translate some Helm chart values to the equivalent ones for Linux package installations. Note that a Praefect node cannot be used to back up Git data. It must be a GitLab Rails node.
  3. Ensure the GitLab application isn’t running on this node by disabling most services:

    1. Edit /etc/gitlab/gitlab.rb to ensure the following services are disabled. roles(['application_role']) disables Redis, PostgreSQL, and Consul, and is the basis of the reference architecture Rails node definition.

      roles(['application_role'])
      gitlab_workhorse['enable'] = false
      puma['enable'] = false
      sidekiq['enable'] = false
      gitlab_kas['enable'] = false
      gitaly['enable'] = false
      prometheus_monitoring['enable'] = false
      
    2. Reconfigure GitLab:

      sudo gitlab-ctl reconfigure
      
    3. The only service that should be left is logrotate. To verify that logrotate is the only remaining service, run:

      gitlab-ctl status
      

    Issue 6823 proposes to add a role in the Linux package that meets these requirements.
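
As referenced in step 1, a sketch of creating the backup VM, assuming Google Cloud; the instance name, zone, image, and disk size are hypothetical, and the n1-highcpu-8 machine type provides 8 vCPU and 7.2 GB memory:

    gcloud compute instances create gitlab-backup-node \
      --zone=us-central1-a \
      --machine-type=n1-highcpu-8 \
      --image-family=ubuntu-2204-lts \
      --image-project=ubuntu-os-cloud \
      --boot-disk-size=200GB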

To back up the Git repositories:

  1. Configure a server-side backup destination in all Gitaly nodes.
  2. Add the destination bucket to your backups of object storage data.
  3. Take a full backup of your Git data. Use the REPOSITORIES_SERVER_SIDE variable, and skip PostgreSQL data:

    sudo gitlab-backup create REPOSITORIES_SERVER_SIDE=true SKIP=db
    

    This causes Gitaly nodes to upload the Git data and some metadata to remote storage. Blobs such as uploads, artifacts, and LFS do not need to be explicitly skipped, because the gitlab-backup command does not back up object storage by default.

  4. Note the backup ID of the backup, which is needed for the next step. For example, if the backup command outputs 2024-02-22 02:17:47 UTC -- Backup 1708568263_2024_02_22_16.9.0-ce is done., then the backup ID is 1708568263_2024_02_22_16.9.0-ce.
  5. Check that the full backup created data in the backup bucket.
  6. Run the backup command again, this time specifying incremental backup of Git repositories, and a backup ID. Using the example ID from the previous step, the command is:

    sudo gitlab-backup create REPOSITORIES_SERVER_SIDE=true SKIP=db INCREMENTAL=yes PREVIOUS_BACKUP=1708568263_2024_02_22_16.9.0-ce
    

    The value of PREVIOUS_BACKUP is not used by this command, but the option is required. Issue 429141 proposes removing this unnecessary requirement.

  7. Check that the incremental backup succeeded, and added data to object storage.
  8. Configure cron to make daily backups. Edit the crontab for the root user:

    sudo su -
    crontab -e
    
  9. There, add the following lines to schedule the backup every day at 2 AM. To limit the number of increments needed to restore a backup, a full backup of Git repositories is taken on the first day of each month, and an incremental backup is taken on all other days:

    0 2 1 * * /opt/gitlab/bin/gitlab-backup create REPOSITORIES_SERVER_SIDE=true SKIP=db CRON=1
    0 2 2-31 * * /opt/gitlab/bin/gitlab-backup create REPOSITORIES_SERVER_SIDE=true SKIP=db INCREMENTAL=yes PREVIOUS_BACKUP=1708568263_2024_02_22_16.9.0-ce CRON=1
    

Configure backup of configuration files

If your configuration and secrets are defined outside of your deployment and then deployed into it, then the implementation of the backup strategy depends on your specific setup and requirements. As an example, you can store secrets in AWS Secrets Manager with replication to multiple regions and configure a script to back up secrets automatically.
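
A hedged sketch of that example using the AWS CLI, storing a secrets file in AWS Secrets Manager with a replica in a second region; the secret name and region are hypothetical:

    # Store the GitLab secrets file as a secret, replicated to a second region.
    aws secretsmanager create-secret \
      --name gitlab/gitlab-secrets \
      --secret-string file:///etc/gitlab/gitlab-secrets.json \
      --add-replica-regions Region=us-west-2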

If your configuration and secrets are only defined inside your deployment:

  1. Storing configuration files describes how to extract configuration and secrets files.
  2. Upload these files to a separate, more restrictive object storage account.
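
For example, a minimal sketch that archives the extracted files and uploads them to a restricted bucket; the bucket name is hypothetical, and the equivalent AWS CLI command can be used for S3:

    # Archive the configuration and secrets, then upload them to a restricted backup bucket.
    sudo tar czf /tmp/gitlab-config-$(date +%F).tar.gz /etc/gitlab/gitlab.rb /etc/gitlab/gitlab-secrets.json
    gsutil cp /tmp/gitlab-config-$(date +%F).tar.gz gs://gitlab-config-backup-bucket/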

Restore a backup

Restore a backup of a GitLab instance.

Prerequisites

Before restoring a backup:

  1. Choose a working destination GitLab instance.
  2. Ensure the destination GitLab instance is in a region where your AWS backups are stored.
  3. Check that the destination GitLab instance uses exactly the same GitLab version and type (CE or EE) as the instance on which the backup data was created. For example, CE 15.1.4.
  4. Restore backed up secrets to the destination GitLab instance.
  5. Ensure that the destination GitLab instance has the same repository storages configured. Additional storages are fine.
  6. If the backed up GitLab instance had any blobs stored in object storage, ensure that object storage is configured for those kinds of blobs.
  7. If the backed up GitLab instance had any blobs stored on the file system, ensure that NFS is configured.
  8. To use new secrets or configuration, and to avoid unexpected configuration changes during restore:

    • Linux package installations on all nodes:
      1. Reconfigure the destination GitLab instance.
      2. Restart the destination GitLab instance.
    • Helm chart (Kubernetes) installations:

      1. On all GitLab Linux package nodes, run:

        sudo gitlab-ctl reconfigure
        sudo gitlab-ctl start
        
      2. Make sure you have a running GitLab instance by deploying the charts. Ensure the Toolbox pod is enabled and running by executing the following command:

        kubectl get pods -lrelease=RELEASE_NAME,app=toolbox
        
      3. The Webservice, Sidekiq and Toolbox pods must be restarted. The safest way to restart those pods is to run:

        kubectl delete pods -lapp=sidekiq,release=<helm release name>
        kubectl delete pods -lapp=webservice,release=<helm release name>
        kubectl delete pods -lapp=toolbox,release=<helm release name>
        
  9. Confirm the destination GitLab instance still works. For example, sign in and browse a project.

  10. Stop GitLab services which connect to the PostgreSQL database.

    • For Linux package installations, on all nodes running Puma or Sidekiq, run:

      sudo gitlab-ctl stop
      
    • Helm chart (Kubernetes) installations:

      1. Note the current number of replicas for database clients for subsequent restart:

        kubectl get deploy -n <namespace> -lapp=sidekiq,release=<helm release name> -o jsonpath='{.items[].spec.replicas}{"\n"}'
        kubectl get deploy -n <namespace> -lapp=webservice,release=<helm release name> -o jsonpath='{.items[].spec.replicas}{"\n"}'
        kubectl get deploy -n <namespace> -lapp=prometheus,release=<helm release name> -o jsonpath='{.items[].spec.replicas}{"\n"}'
        
      2. Stop the clients of the database to prevent locks interfering with the restore process:

        kubectl scale deploy -lapp=sidekiq,release=<helm release name> -n <namespace> --replicas=0
        kubectl scale deploy -lapp=webservice,release=<helm release name> -n <namespace> --replicas=0
        kubectl scale deploy -lapp=prometheus,release=<helm release name> -n <namespace> --replicas=0
        

Restore object storage data

AWS

Each bucket exists as a separate backup within AWS, and each backup can be restored to an existing or new bucket.

  1. To restore buckets, an IAM role with the correct permissions is required:

    • AWSBackupServiceRolePolicyForBackup
    • AWSBackupServiceRolePolicyForRestores
    • AWSBackupServiceRolePolicyForS3Restore
    • AWSBackupServiceRolePolicyForS3Backup
  2. If existing buckets are being used, they must have Access Control Lists enabled.
  3. Restore the S3 buckets using built-in tooling.
  4. You can move on to Restore PostgreSQL data while the restore job is running.
Google
  1. Create Storage Transfer Service jobs to transfer backed up data to the GitLab buckets.
  2. You can move on to Restore PostgreSQL data while the transfer jobs are running.
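
A minimal sketch of a restore transfer job for step 1, shown for a single bucket and assuming the backup layout created by the example backup script (the bucket names and date are hypothetical):

    # Copy one day's backup of the artifacts bucket back into the GitLab bucket.
    gcloud transfer jobs create gs://backup-bucket/2024-02-22/artifacts/ gs://gitlab-bucket-artifacts/ \
      --name=restore-artifacts-2024-02-22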

Restore PostgreSQL data

AWS
  1. Restore the AWS RDS database using built-in tooling, which creates a new RDS instance.
  2. Because the new RDS instance has a different endpoint, you must reconfigure the destination GitLab instance to point to the new database.

  3. Before moving on, wait until the new RDS instance is created and ready to use.
Google
  1. Restore the Google Cloud SQL database using built-in tooling.
  2. If you restore to a new database instance, reconfigure GitLab to point to the new database.

  3. Before moving on, wait until the Cloud SQL instance is ready to use.
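
For step 1, a hedged sketch using the gcloud CLI, assuming a Cloud SQL instance named gitlab-db (hypothetical):

    # List available backups, then restore one into the target instance.
    gcloud sql backups list --instance=gitlab-db
    gcloud sql backups restore BACKUP_ID --restore-instance=gitlab-db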

Restore Git repositories

Select or create a GitLab Rails node from which to run the restore. For cloud-native hybrid environments, you can reuse the node provisioned for backing up Git repositories.

To restore Git repositories:

  1. Ensure the node has enough attached storage to store both the .tar file of Git repositories and its extracted data.
  2. SSH into the GitLab Rails node.
  3. As part of Restore object storage data, you should have restored a bucket containing the GitLab backup .tar file of Git repositories.
  4. Download the backup .tar file from its bucket into the backup directory described in the gitlab.rb configuration gitlab_rails['backup_path']. The default is /var/opt/gitlab/backups. The backup file must be owned by the git user.

    sudo cp 11493107454_2018_04_25_10.6.4-ce_gitlab_backup.tar /var/opt/gitlab/backups/
    sudo chown git:git /var/opt/gitlab/backups/11493107454_2018_04_25_10.6.4-ce_gitlab_backup.tar
    
  5. Restore the backup, specifying the ID of the backup you wish to restore:

    Caution: The restore command requires additional parameters when your installation is using PgBouncer, for either performance reasons or when using it with a Patroni cluster.

    # This command will overwrite the contents of your GitLab database!
    # NOTE: "_gitlab_backup.tar" is omitted from the name
    sudo gitlab-backup restore BACKUP=11493107454_2018_04_25_10.6.4-ce
    

    If there’s a GitLab version mismatch between your backup tar file and the installed version of GitLab, the restore command aborts with an error message. Install the correct GitLab version, and then try again.

  6. Restart and check GitLab:

    • Linux package installations:

      1. In all Puma or Sidekiq nodes, run:

        sudo gitlab-ctl restart
        
      2. In one Puma or Sidekiq node, run:

        sudo gitlab-rake gitlab:check SANITIZE=true
        
    • Helm chart (Kubernetes) installations:

      1. Start the stopped deployments, using the number of replicas noted in Prerequisites:

        kubectl scale deploy -lapp=sidekiq,release=<helm release name> -n <namespace> --replicas=<original value>
        kubectl scale deploy -lapp=webservice,release=<helm release name> -n <namespace> --replicas=<original value>
        kubectl scale deploy -lapp=prometheus,release=<helm release name> -n <namespace> --replicas=<original value>
        
      2. In the Toolbox pod, run:

        sudo gitlab-rake gitlab:check SANITIZE=true
        
  7. Check that database values can be decrypted, especially if /etc/gitlab/gitlab-secrets.json was restored, or if a different server is the target for the restore:

    • For Linux package installations, in a Puma or Sidekiq node, run:

      sudo gitlab-rake gitlab:doctor:secrets
      
    • For Helm chart (Kubernetes) installations, in the Toolbox pod, run:

      sudo gitlab-rake gitlab:doctor:secrets
      
  8. For added assurance, you can perform an integrity check on the uploaded files:

    • For Linux package installations, in a Puma or Sidekiq node, run:

      sudo gitlab-rake gitlab:artifacts:check
      sudo gitlab-rake gitlab:lfs:check
      sudo gitlab-rake gitlab:uploads:check
      
    • For Helm chart (Kubernetes) installations, these commands can take a long time because they iterate over all rows. Run the following commands on the GitLab Rails node, rather than in a Toolbox pod:

      sudo gitlab-rake gitlab:artifacts:check
      sudo gitlab-rake gitlab:lfs:check
      sudo gitlab-rake gitlab:uploads:check
      

    If missing or corrupted files are found, it does not always mean the backup and restore process failed. For example, the files might be missing or corrupted on the source GitLab instance. You might need to cross-reference prior backups. If you are migrating GitLab to a new environment, you can run the same checks on the source GitLab instance to determine whether the integrity check result is preexisting or related to the backup and restore process.

The restoration should be complete.