GitLab 18 upgrade notes
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
This page contains upgrade information for minor and patch versions of GitLab 18. Ensure you review these instructions for:
- Your installation type.
- All versions between your current version and your target version.
For additional information for Helm chart installations, see the Helm chart 9.0 upgrade notes.
Required upgrade stops
To provide a predictable upgrade schedule for instance administrators, required upgrade stops occur at versions:
- 18.2
- 18.5
- 18.8
- 18.11
Issues to be aware of when upgrading from 17.11
PostgreSQL 14 is not supported starting from GitLab 18. Upgrade PostgreSQL to at least version 16.5 before upgrading to GitLab 18.0 or later. For more information, see installation requirements.
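To confirm which PostgreSQL version your instance currently runs before you start, one option is to query it from the Rails console. This is a minimal sketch using a plain `SELECT version()`; it works for both bundled and external databases, and you can run the same query through `sudo gitlab-psql` instead:

```ruby
# In a Rails console (sudo gitlab-rails console):
# print the version string reported by the connected PostgreSQL server.
puts ApplicationRecord.connection.select_value("SELECT version()")
```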
Automatic database version upgrades only apply to single node instances when using the Linux package. In all other cases, like Geo instances, PostgreSQL with high availability using the Linux package, or using an external PostgreSQL database (like Amazon RDS), you must upgrade PostgreSQL manually. See upgrading a Geo instance for detailed steps.
From September 29, 2025, Bitnami will stop providing tagged PostgreSQL and Redis images. If you deploy GitLab 17.11 or earlier using the GitLab chart with bundled Redis or PostgreSQL, you must manually update your values to use the legacy repository to prevent unexpected downtime. For more information, see issue 6089.
Known issue: The feature flag `ci_only_one_persistent_ref_creation` causes pipeline failures during zero-downtime upgrades when Rails is upgraded but Sidekiq remains on version 17.11 (see details in issue 558808).

Prevention: Open the Rails console and enable the feature flag before upgrading:

```shell
sudo gitlab-rails console
```

```ruby
Feature.enable(:ci_only_one_persistent_ref_creation)
```

If already affected: Run this command in the Rails console and retry the failed pipelines:

```ruby
Rails.cache.delete_matched("pipeline:*:create_persistent_ref_service")
```
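To confirm the flag is enabled before you start the upgrade, you can check its state from the same Rails console. A quick sketch:

```ruby
# Returns true when the feature flag is enabled for the instance.
Feature.enabled?(:ci_only_one_persistent_ref_creation)
```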
18.8.0
Batched background migration for merge request merge data
A batched background migration copies merge request merge-related
data from the merge_requests table to a new dedicated merge_requests_merge_data table.
This migration is part of a database schema optimization initiative to normalize merge-specific attributes into a separate table, improving query performance and maintainability.
For more details about what data is migrated and how to estimate migration duration, see Merge request merge data migration details.
18.7.0
- A post deployment migration schedules batched background migrations to copy CI builds metadata to new optimized tables (`p_ci_job_definitions`). This migration is part of an initiative to ultimately reduce CI database size (see epic 13886). If you have an instance with millions of jobs and want to speed up the migration, you can select what data is migrated.
Geo installations 18.7.0
- Added a new `action_cable_allowed_origins` setting to configure allowed origins for ActionCable WebSocket requests. Specify the allowed URLs when configuring the primary site to ensure proper cross-site WebSocket connectivity.
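The exact configuration syntax depends on your installation method. For a Linux package installation the setting is expected to be exposed through `/etc/gitlab/gitlab.rb`; the key name and URLs below are assumptions for illustration, so verify them against the `gitlab.rb` template shipped with 18.7 before applying:

```ruby
# /etc/gitlab/gitlab.rb on the primary site (illustrative sketch).
# The key name mirrors the action_cable_allowed_origins setting described
# above; confirm the exact name in your gitlab.rb template.
gitlab_rails['action_cable_allowed_origins'] = [
  'https://gitlab-primary.example.com',
  'https://gitlab-secondary.example.com',
]
```

Run `sudo gitlab-ctl reconfigure` afterward for the change to take effect.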
18.6.2
GitLab 18.6.2, 18.5.4, and 18.4.6 introduced size and rate limits on requests made to the following endpoints:
- `POST /projects/:id/repository/commits` - Create a commit with multiple files and actions
- `POST /projects/:id/repository/files/:file_path` - Create new file in repository
- `PUT /projects/:id/repository/files/:file_path` - Update existing file in repository
GitLab responds to requests that exceed the size limit with a `413 Entity Too Large` status, and requests that exceed the rate limit with a `429 Too Many Requests` status. For more information, see Commits and Files API limits.
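If automation or scripts drive these endpoints, they should handle the new status codes. The following is a minimal sketch rather than an official client: the host, project ID, token, and payload are placeholders, and the `Retry-After` header may or may not be present depending on your configuration:

```ruby
require 'net/http'
require 'json'
require 'uri'

# Illustrative values only: adjust host, project ID, token, and payload.
uri = URI('https://gitlab.example.com/api/v4/projects/42/repository/commits')

request = Net::HTTP::Post.new(uri)
request['PRIVATE-TOKEN'] = ENV['GITLAB_TOKEN']
request['Content-Type'] = 'application/json'
request.body = {
  branch: 'main',
  commit_message: 'Bulk update',
  actions: [
    { action: 'update', file_path: 'README.md', content: 'updated content' }
  ]
}.to_json

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }

case response.code.to_i
when 413
  warn 'Request exceeds the size limit: split the commit into smaller batches of actions.'
when 429
  warn "Rate limited: back off and retry (Retry-After header: #{response['Retry-After'].inspect})."
end
```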
Duo Agent Platform
- Restrictions have been introduced on which runners can be used with Duo Agent Platform.
Geo installations 18.5.2
- The missing Geo migration that prevented the Geo log cursor on the secondary site from starting is fixed.
18.5.0
A post deployment migration `20250922202128_finalize_correct_design_management_designs_backfill` finalizes a batched background migration that was scheduled in 18.4. If you skipped 18.4 in the upgrade path, the migration is fully executed when post deployment migrations are run. Execution time is directly related to the size of your `design_management_designs` table. For most instances the migration should not take longer than 2 minutes, but for some larger instances, it could take up to 10 minutes. Please be patient and don't interrupt the migration process. To gauge the table size in advance, see the Rails console sketch at the end of this section.

NGINX routing changes introduced in GitLab 18.5.0 can cause services to become inaccessible when using non-matching hostnames such as `localhost` or alternative domain names. This issue causes:

- Health check endpoints such as `/-/health` to return `404` errors instead of proper responses.
- The GitLab web interface to show `404` error pages when accessed with hostnames other than the configured FQDN.
- GitLab Pages to potentially receive traffic intended for other services.
- Problems with any requests using alternative hostnames that previously worked.

This issue is resolved in the Linux package by merge request 8805, and the fix is available in GitLab 18.5.2 and 18.6.0.

Git operations such as clone, push, and pull are unaffected by this issue.
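As noted for the `design_management_designs` finalization above, execution time scales with the size of that table. To gauge it in advance, you can count the rows from a Rails console; this is a rough sketch, and the row count is only a proxy for duration:

```ruby
# In a Rails console (sudo gitlab-rails console):
# DesignManagement::Design is backed by the design_management_designs table.
DesignManagement::Design.count
```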
Geo installations 18.4.4
- The missing Geo migration that prevented the Geo log cursor on the secondary site from starting is fixed.
18.4.2
Upgrades to 18.4.2 or 18.4.3 might fail with a `no implicit conversion of nil into String` error for these batched background migrations:

- `FixIncompleteInstanceExternalAuditDestinations`
- `FinalizeAuditEventDestinationMigrations`
To resolve this issue, upgrade to the latest patch release or use the workaround in issue 578938.
Geo installations 18.4.2
- The Geo bug that causes replication events to fail with the error message `no implicit conversion of String into Array (TypeError)` is fixed.
18.4.1
GitLab 18.4.1, 18.3.3, and 18.2.7 introduced limits on JSON inputs to prevent denial of service attacks.
GitLab responds to HTTP requests that exceed these limits with a 400 Bad Request status.
For more information, see HTTP request limits.
18.4.0
- In secondary Geo sites, a bug causes replication events to fail with the error message `no implicit conversion of String into Array (TypeError)`. Redundancies such as re-verification ensure eventual consistency, but RPO is significantly increased. Versions affected: 18.4.0 and 18.4.1.
18.3.0
GitLab Duo
- A new worker, `LdapAddOnSeatSyncWorker`, was introduced, which could unintentionally remove all users from GitLab Duo seats nightly when LDAP is enabled. This was fixed in GitLab 18.4.0 and 18.3.2. See issue 565064 for details.
Geo installations 18.3.0
- The issue that caused `rake gitlab:geo:check` to incorrectly report a failure when installing a Geo secondary site has been fixed in 18.3.0.
- GitLab 18.3.0 includes a fix for issue 559196 where Geo verification could fail for Pages deployments with long filenames. The fix prevents filename trimming on Geo secondary sites to maintain consistency during replication and verification.
18.2.0
Zero-downtime upgrades
- Upgrades between 18.1.x and 18.2.x are affected by known issue 567543, which causes errors with pushing code to existing projects during an upgrade. To ensure no downtime during the upgrade between versions 18.1.x and 18.2.x, upgrade directly to version 18.2.6, which includes a fix.
Geo installations 18.2.0
- This version has a known issue that happens when `VerificationStateBackfillService` runs due to changes in the primary key of `ci_job_artifact_states`. To resolve, upgrade to GitLab 18.2.2 or later.
- GitLab 18.2.0 includes a fix for issue 559196 where Geo verification could fail for Pages deployments with long filenames. The fix prevents filename trimming on Geo secondary sites to maintain consistency during replication and verification.
18.1.0
- Elasticsearch indexing might fail with `strict_dynamic_mapping_exception` errors for Elasticsearch version 7. To resolve, see the “Possible fixes” section in issue 566413.
- GitLab versions 18.1.0 and 18.1.1 show errors in PostgreSQL logs such as `ERROR: relation "ci_job_artifacts" does not exist at ...`. These errors in the logs can be safely ignored but could trigger monitoring alerts, including on Geo sites. To resolve this issue, update to GitLab 18.1.2 or later.
Geo installations 18.1.0
- GitLab version 18.1.0 has a known issue where Git operations that are proxied from a secondary Geo site fail with HTTP 500 errors. To resolve, upgrade to GitLab 18.1.1 or later.
- This version has a known issue that happens when `VerificationStateBackfillService` runs due to changes in the primary key of `ci_job_artifact_states`. To resolve, upgrade to GitLab 18.1.4.
- GitLab 18.1.0 includes a fix for issue 559196 where Geo verification could fail for Pages deployments with long filenames. The fix prevents filename trimming on Geo secondary sites to maintain consistency during replication and verification.
18.0.0
Migrate Gitaly configuration from git_data_dirs to storage
In GitLab 18.0 and later, you can no longer use the `git_data_dirs` setting to configure Gitaly storage locations. If you are still using `git_data_dirs`, you must migrate your Gitaly configuration before upgrading to GitLab 18.0.
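For a single-storage Linux package installation, the change in `/etc/gitlab/gitlab.rb` generally has the following shape. The paths are illustrative, and the new `path` value points at the `repositories` subdirectory that `git_data_dirs` previously appended implicitly; review the Gitaly configuration documentation for your exact topology before applying it:

```ruby
# /etc/gitlab/gitlab.rb

# Before: legacy git_data_dirs configuration (no longer supported in 18.0).
# git_data_dirs({
#   "default" => { "path" => "/var/opt/gitlab/git-data" },
# })

# After: equivalent Gitaly storage configuration. Note the explicit
# /repositories suffix, which git_data_dirs used to add automatically.
gitaly['configuration'] = {
  storage: [
    {
      name: 'default',
      path: '/var/opt/gitlab/git-data/repositories',
    },
  ],
}
```

Run `sudo gitlab-ctl reconfigure` after updating the file.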
Geo installations 18.0.0
If you deployed GitLab Enterprise Edition and then reverted to GitLab Community Edition, your database schema may deviate from the schema that the GitLab application expects, leading to migration errors. Four particular errors can be encountered on upgrade to 18.0.0 because a migration added in that version changes the defaults of the affected columns.
The errors are:
- `No such column: geo_nodes.verification_max_capacity`
- `No such column: geo_nodes.minimum_reverification_interval`
- `No such column: geo_nodes.repos_max_capacity`
- `No such column: geo_nodes.container_repositories_max_capacity`
This migration was patched in GitLab 18.0.2 to add those columns if they are missing. See issue #543146.
Affected releases:
| Affected minor releases | Affected patch releases | Fixed in |
|-------------------------|-------------------------|----------|
| 18.0                    | 18.0.0 - 18.0.1         | 18.0.2   |

GitLab versions 18.0 through 18.0.2 have a known issue where Git operations that are proxied from a secondary Geo site fail with HTTP 500 errors. To resolve, upgrade to GitLab 18.0.3 or later.
This version has a known issue that happens when `VerificationStateBackfillService` runs due to changes in the primary key of `ci_job_artifact_states`. To resolve, upgrade to GitLab 18.0.6.
PRNG is not seeded error on Docker installations
If you run GitLab on a Docker installation with a FIPS-enabled host, you
may see that SSH key generation or the OpenSSH server (sshd) fails to
start with the error message:
```plaintext
PRNG is not seeded
```

GitLab 18.0 updated the base image from Ubuntu 22.04 to 24.04. This error occurs because Ubuntu 24.04 no longer allows a FIPS host to use a non-FIPS OpenSSL provider.
To fix this issue, you have a few options:
- Disable FIPS on the host system.
- Disable the auto-detection of a FIPS-based kernel in the GitLab Docker container. This can be done by setting the `OPENSSL_FORCE_FIPS_MODE=0` environment variable with GitLab 18.0.2 or higher.
- Instead of using the GitLab Docker image, install a native FIPS package on the host.
The last option is the recommended one to meet FIPS requirements. For legacy installations, the first two options can be used as a stopgap.
CI builds metadata migration details
Since GitLab 18.6, new pipelines write data exclusively to the new format (see issue 552065). This migration only copies existing data from the old format to the new one. No data is deleted.
Data not migrated will be removed in a future release (see epic 18271).
The migration duration is directly proportional to the total number of CI jobs in your instance. Jobs are processed from newest to oldest partitions to prioritize recent data.
You can reduce the number of jobs to migrate by enabling automatic pipeline cleanup on larger projects to delete old pipelines before upgrading.
The migration copies two types of data:
- Jobs processing data: Job execution configuration from `.gitlab-ci.yml` (such as `script`, `variables`) needed only for runners when executing jobs, not for the UI or API.
- Job data visible to users: Of all the job data, this migration only impacts the job timeout value, job exit code values, exposed artifacts, and environment associations.
For GitLab Self-Managed and GitLab Dedicated instances with large CI datasets, you can speed up the migration by reducing the scope of data to migrate. To control the scope, use the settings described below.
Controlling the scope for jobs processing data
By default, the migration copies processing data for all existing jobs. You can cut down the scope by using one of the settings described below.
The value of the setting controls how much of the jobs processing data you want to retain.
For example, set it to `6mo` if you only expect jobs created in the last 6 months to be executed
(through retries,
execution of manual jobs,
environment auto-stop).
GitLab looks for the setting in this order of precedence:

1. Pipeline archival setting (recommended best practice). Archived pipelines signal that jobs cannot be manually retried or re-run. If this setting is enabled, processing data for archived jobs doesn't need to be migrated. If the pipeline archival range is later extended, jobs without processing data will remain unexecutable.
2. `GITLAB_DB_CI_JOBS_PROCESSING_DATA_CUTOFF` environment variable, if pipeline archival is not configured or needs to be overridden for this migration. It accepts duration strings like `1y` (1 year), `6mo` (6 months), `90d` (90 days).
3. `GITLAB_DB_CI_JOBS_MIGRATION_CUTOFF` environment variable, if neither of the above is set. It accepts duration strings like `1y` (1 year), `6mo` (6 months), `90d` (90 days). See Controlling the scope for job data visible to users.

All data is copied if no configuration is found.
Controlling the scope for job data visible to users
The environment variable `GITLAB_DB_CI_JOBS_MIGRATION_CUTOFF` controls which jobs will have
their visible data migrated.
For example, `GITLAB_DB_CI_JOBS_MIGRATION_CUTOFF=1y` copies affected visible data
(timeout value, environment, exit codes, and metadata for exposed artifacts)
for jobs from the most recent year.
By default, there is no cutoff date and data for all jobs is migrated.
Estimating migration impact
For reference, for GitLab.com we expect to migrate 400 million rows in about 2 months.
To estimate the migration impact on your instance, you can run the following queries in the PostgreSQL console:
```sql
SELECT n.nspname AS schema_name, c.relname AS partition_name,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_inherits i
JOIN pg_class c ON c.oid = i.inhrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_class p ON p.oid = i.inhparent
WHERE p.relname = 'p_ci_builds_metadata'
ORDER BY pg_total_relation_size(c.oid) DESC;
```

The new tables require approximately 20% of this space.
The following query estimates the total number of jobs, based on the PostgreSQL statistics tables:
```sql
SELECT SUM(c.reltuples)::bigint AS estimated_jobs_count
FROM pg_class c
JOIN pg_inherits i ON c.oid = i.inhrelid
WHERE i.inhparent = 'p_ci_builds'::regclass;
```

To find the number of jobs created in a specific time frame, we need to query the tables:
```sql
SELECT COUNT(*) FROM p_ci_builds WHERE created_at >= now() - '1 year'::interval;
```

If the query times out, use the Rails console to batch over the data:
```ruby
counts = []
CommitStatus.each_batch(of: 25000) do |batch|
  counts << batch.where(created_at: 1.year.ago...).count
end
counts.sum
```

Merge request merge data migration details
What data is migrated
The migration copies the following columns from `merge_requests` to `merge_requests_merge_data`:

- `merge_commit_sha`
- `merged_commit_sha`
- `merge_ref_sha`
- `squash_commit_sha`
- `in_progress_merge_commit_sha`
- `merge_status`
- `auto_merge_enabled`
- `squash`
- `merge_user_id`
- `merge_params`
- `merge_error`
- `merge_jid`
The migration processes the merge_requests table, copying data only for merge requests that don’t
already have corresponding entries in merge_requests_merge_data.
Since GitLab 18.7, new merge requests write data to both tables through dual-write mechanisms at the application level (see issue). This migration only copies existing data that has not been created or touched after the dual-write was implemented.
No data is deleted from the merge_requests table during this migration.
The migration is planned to be finalized in GitLab 18.9. For more information, see issue.
Estimating migration duration
The migration duration is directly proportional to the number of merge requests in your instance.
To estimate the impact:
PostgreSQL query:
```sql
-- Count total merge requests
SELECT COUNT(*) FROM merge_requests;

-- Estimate table size
SELECT pg_size_pretty(pg_total_relation_size('merge_requests')) AS table_size;
```

Rails console:
```ruby
# Count total merge requests
MergeRequest.count

# Count remaining merge requests to migrate
MergeRequest.left_joins(:merge_data)
  .where(merge_requests_merge_data: { merge_request_id: nil })
  .count
```

The migration processes merge requests in batches and should complete within hours to days for most instances.