Troubleshooting
Database migrations failing because of batched background migration not finished
When updating to GitLab version 14.2 or later, database migrations might fail with a message like:
StandardError: An error has occurred, all later migrations canceled:
Expected batched background migration for the given configuration to be marked as 'finished', but it is 'active':
{:job_class_name=>"CopyColumnUsingBackgroundMigrationJob",
:table_name=>"push_event_payloads",
:column_name=>"event_id",
:job_arguments=>[["event_id"],
["event_id_convert_to_bigint"]]
}
First, check if you have followed the version-specific upgrade instructions for 14.2. If you have, you can manually finish the batched background migration. If you haven’t, choose one of the following methods:
- Roll back and upgrade through one of the required versions before updating to 14.2+.
- Roll forward, staying on the current version and manually ensuring that the batched migrations complete successfully.
Roll back and follow the required upgrade path
- Roll back and restore the previously installed version.
- Update to either 14.0.5 or 14.1 before updating to 14.2+.
- Check the status of the batched background migrations and make sure they are all marked as finished before attempting to upgrade again. If any remain marked as active, you can manually finish them.
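For example, the migration named in the error above can be finished manually with the batched background migration finalize Rake task. This is only a sketch for a Linux package installation; the task and its argument format can differ between GitLab versions, so verify the exact syntax against the documentation for your version:
# Finish the batched migration from the error message. The arguments are the
# job class name, table name, column name, and job arguments; the comma inside
# the job arguments is escaped so Rake does not split it.
sudo gitlab-rake gitlab:background_migrations:finalize[CopyColumnUsingBackgroundMigrationJob,push_event_payloads,event_id,'[["event_id"]\, ["event_id_convert_to_bigint"]]']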
Roll forward and finish the migrations on the upgraded version
For a deployment with downtime
Running all the batched background migrations can take a significant amount of time depending on the size of your GitLab installation.
- Check the status of the batched background migrations in the database, and manually run them with the appropriate arguments until the status query returns no rows (see the example query after these steps).
- When the status of all of them is marked as finished, re-run migrations for your installation.
- Complete the database migrations from your GitLab upgrade:
sudo gitlab-rake db:migrate
- Run a reconfigure:
sudo gitlab-ctl reconfigure
- Finish the upgrade for your installation.
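For the status check in the first step, a query like the following can be used. This is a sketch for a Linux package installation and assumes the schema used by GitLab 14.x, where a status of 3 means the batched migration is finished; adjust it for your version:
# List batched background migrations that are not yet finished
sudo gitlab-psql -c "SELECT job_class_name, table_name, column_name, job_arguments FROM batched_background_migrations WHERE status NOT IN (3);"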
For a no-downtime deployment
As the failing migrations are post-deployment migrations, you can remain on a running instance of the upgraded version and wait for the batched background migrations to finish.
- Check the status of the batched background migration from the error message, and make sure it is listed as finished. If it is still active, either wait until it is done, or manually finish it.
- Re-run migrations for your installation, so the remaining post-deployment migrations finish.
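For example, the specific migration from the error message can be checked from the Rails console before re-running the migrations. This is a sketch that assumes the Gitlab::Database::BackgroundMigration::BatchedMigration model available in recent GitLab versions and the configuration from the example error above:
# Start the rails console
sudo gitlab-rails c
# In the console, look up the batched migration from the error message
migration = Gitlab::Database::BackgroundMigration::BatchedMigration.find_by(
  job_class_name: 'CopyColumnUsingBackgroundMigrationJob',
  table_name: 'push_event_payloads',
  column_name: 'event_id'
)
puts migration&.status
# After it reports finished, exit the console and re-run the migrations
sudo gitlab-rake db:migrate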
Background migrations remain in the Sidekiq queue
Run the following check. If it returns non-zero and the count does not decrease over time, follow the rest of the steps in this section.
# For Linux package installations:
sudo gitlab-rails runner -e production 'puts Gitlab::BackgroundMigration.remaining'
# For self-compiled installations:
cd /home/git/gitlab
sudo -u git -H bundle exec rails runner -e production 'puts Gitlab::BackgroundMigration.remaining'
It is safe to re-execute the following commands, especially if you have 1000+ pending jobs, which would likely overflow your runtime memory.
# For Linux package installations, start the rails console
sudo gitlab-rails c
# Execute the following in the rails console
scheduled_queue = Sidekiq::ScheduledSet.new
pending_job_classes = scheduled_queue.select { |job| job["class"] == "BackgroundMigrationWorker" }.map { |job| job["args"].first }.uniq
pending_job_classes.each { |job_class| Gitlab::BackgroundMigration.steal(job_class) }
# For self-compiled installations, start the rails console
cd /home/git/gitlab
sudo -u git -H bundle exec rails console -e production
# Execute the following in the rails console
scheduled_queue = Sidekiq::ScheduledSet.new
pending_job_classes = scheduled_queue.select { |job| job["class"] == "BackgroundMigrationWorker" }.map { |job| job["args"].first }.uniq
pending_job_classes.each { |job_class| Gitlab::BackgroundMigration.steal(job_class) }
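After the jobs have been processed, re-run the check from the beginning of this section (using the variant for your installation type) to confirm the remaining count drops to zero, for example:
# For Linux package installations:
sudo gitlab-rails runner -e production 'puts Gitlab::BackgroundMigration.remaining'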
Background migrations stuck in ‘pending’ state
For background migrations stuck in pending, run the following check. If it returns non-zero and the count does not decrease over time, follow the rest of the steps in this section.
# For Linux package installations:
sudo gitlab-rails runner -e production 'puts Gitlab::Database::BackgroundMigrationJob.pending.count'
# For self-compiled installations:
cd /home/git/gitlab
sudo -u git -H bundle exec rails runner -e production 'puts Gitlab::Database::BackgroundMigrationJob.pending.count'
It is safe to re-attempt these migrations to clear them out from a pending status:
# For Linux package installations, start the rails console
sudo gitlab-rails c
# Execute the following in the rails console
Gitlab::Database::BackgroundMigrationJob.pending.find_each do |job|
puts "Running pending job '#{job.class_name}' with arguments #{job.arguments}"
result = Gitlab::BackgroundMigration.perform(job.class_name, job.arguments)
puts "Result: #{result}"
end
# For self-compiled installations, start the rails console
cd /home/git/gitlab
sudo -u git -H bundle exec rails console -e production
# Execute the following in the rails console
Gitlab::Database::BackgroundMigrationJob.pending.find_each do |job|
puts "Running pending job '#{job.class_name}' with arguments #{job.arguments}"
result = Gitlab::BackgroundMigration.perform(job.class_name, job.arguments)
puts "Result: #{result}"
end
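Afterwards, re-run the check from the beginning of this section to confirm that no jobs remain in the pending state, for example:
# For Linux package installations:
sudo gitlab-rails runner -e production 'puts Gitlab::Database::BackgroundMigrationJob.pending.count'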
What do you do if your advanced search migrations are stuck?
In GitLab 15.0, an advanced search migration named DeleteOrphanedCommit can be permanently stuck in a pending state across upgrades. This issue is corrected in GitLab 15.1.
If you are a self-managed customer who uses GitLab 15.0 with advanced search, you will experience performance degradation. To clean up the migration, upgrade to 15.1 or later.
For other advanced search migrations stuck in pending, see how to retry a halted migration.
If you upgrade GitLab before all pending advanced search migrations are completed, any pending migrations that have been removed in the new version cannot be executed or retried. In this case, you must re-create your index from scratch.
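As a sketch, on a Linux package installation the index can be rebuilt with the indexing Rake task shown below. This assumes the gitlab:elastic:index task in your GitLab version; it re-creates the index and re-indexes all data, which can take a long time on large instances:
# Re-create the advanced search index and re-index all data
sudo gitlab-rake gitlab:elastic:index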
What do you do for the error Elasticsearch version not compatible
Confirm that your version of Elasticsearch or OpenSearch is compatible with your version of GitLab.
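For example, you can check which version your search cluster reports. The URL below assumes a cluster listening on localhost:9200 and must be adjusted for your setup:
# The root endpoint of Elasticsearch and OpenSearch returns a "version" object
curl "http://localhost:9200"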