Troubleshooting Geo PostgreSQL replication
The following sections outline troubleshooting steps for fixing replication error messages (indicated by Database replication working? ... no in the geo:check output).
The instructions presented here mostly assume a single-node Geo Linux package deployment, and might need to be adapted to different environments.
Removing an inactive replication slot
Replication slots are marked as ‘inactive’ when the replication client (a secondary site) connected to the slot disconnects. Inactive replication slots cause WAL files to be retained, because they are sent to the client when it reconnects and the slot becomes active once more. If the secondary site is not able to reconnect, use the following steps to remove its corresponding inactive replication slot:
- Start a PostgreSQL console session on the Geo primary site’s database node:
sudo gitlab-psql -d gitlabhq_production
Using gitlab-rails dbconsole does not work, because managing replication slots requires superuser permissions.
- View the replication slots and remove them if they are inactive (an example query that filters for inactive slots follows this list):
SELECT * FROM pg_replication_slots;
Slots where active is f are inactive. If a slot should be active because you have a secondary site configured to use it, check the PostgreSQL logs on the secondary site to find out why replication is not running.
- If you are no longer using the slot (for example, you no longer have Geo enabled), or the secondary site is no longer able to reconnect, you should remove it using the PostgreSQL console session:
SELECT pg_drop_replication_slot('<name_of_inactive_slot>');
- Follow either the steps to remove that Geo site if it’s no longer required, or re-initiate the replication process, which recreates the replication slot correctly.
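For example, to list only the inactive slots from a shell instead of an interactive psql session, you can run a query like the following on the primary site’s database node. This is a sketch; adjust the database name if yours differs:
sudo gitlab-psql -d gitlabhq_production -c "SELECT slot_name, slot_type, active FROM pg_replication_slots WHERE NOT active;"
Any slot returned by this query is a candidate for removal with pg_drop_replication_slot, as described above.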
Message: "Error during verification","error":"File is not checksummable"
If you encounter these errors in your primary site geo.log, they’re also reflected in the UI under Admin > Geo > Sites. To remove those errors, you can identify the particular blob that generates the message so that you can inspect it.
- On a Puma or Sidekiq node in the primary site, open a Rails console.
- Run the following snippet to find the affected artifacts containing the File is not checksummable message. The snippet uses the JobArtifact blob type; however, the same solution applies to any blob type that Geo uses:
artifacts = Ci::JobArtifact.verification_failed.where("verification_failure like '%File is not checksummable%'");1
puts "Found #{artifacts.count} artifacts that failed verification with 'File is not checksummable'. The first one:"
pp artifacts.first
If you determine that the affected files need to be recovered, then you can explore these options (non-exhaustive) to recover the missing files:
- Check if the secondary site has the objects and manually copy them to the primary site.
- Look through old backups and manually copy the objects back into the primary site.
- Spot check some of them to determine whether it’s acceptable to destroy the records. For example, if they are all very old artifacts, they might not be critical data.
Often, these kinds of errors happen when a file is checksummed by Geo, and then goes missing from the primary site. After you identify the affected files, you should check the projects that the files belong to from the UI to decide if it’s acceptable to delete the file reference. If so, you can destroy the references with the following irreversible snippet:
def destroy_artifacts_not_checksummable
artifacts = Ci::JobArtifact.verification_failed.where("verification_failure like '%File is not checksummable%'");1
puts "Found #{artifacts.count} artifacts that failed verification with 'File is not checksummable'."
puts "Enter 'y' to continue: "
prompt = STDIN.gets.chomp
if prompt != 'y'
puts "Exiting without action..."
return
end
puts "Destroying all..."
artifacts.destroy_all
end
destroy_artifacts_not_checksummable
Message: WARNING: oldest xmin is far in the past and pg_wal size growing
If a replication slot is inactive, the pg_wal logs corresponding to the slot are reserved forever (or until the slot is active again). This causes continuous disk usage growth, and the following messages appear repeatedly in the PostgreSQL logs:
WARNING: oldest xmin is far in the past
HINT: Close open transactions soon to avoid wraparound problems.
You might also need to commit or roll back old prepared transactions, or drop stale replication slots.
To fix this, you should remove the inactive replication slot and re-initiate the replication.
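Before removing a slot, you can check how much WAL each slot is retaining by comparing its restart_lsn with the current WAL position. This is a sketch using standard PostgreSQL functions, assuming PostgreSQL 10 or later (where pg_current_wal_lsn and pg_wal_lsn_diff are available), run on the primary site’s database node:
sudo gitlab-psql -d gitlabhq_production -c "SELECT slot_name, active, pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal FROM pg_replication_slots;"
Slots showing a large retained_wal value with active set to f are the most likely cause of the disk growth.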
Message: ERROR: replication slots can only be used if max_replication_slots > 0?
This means that the max_replication_slots PostgreSQL variable needs to be set on the primary database. This setting defaults to 1. You may need to increase this value if you have more secondary sites.
Be sure to restart PostgreSQL for this to take effect. See the PostgreSQL replication setup guide for more details.
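In a Linux package installation, this is controlled from /etc/gitlab/gitlab.rb on the primary site’s database nodes. A minimal sketch, assuming one secondary site (adjust the value to the number of secondary sites you run):
postgresql['max_replication_slots'] = 1
Then apply the change and restart PostgreSQL:
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart postgresql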
Message: replication slot "geo_secondary_my_domain_com" does not exist
This error occurs when PostgreSQL does not have a replication slot for the secondary site by that name:
FATAL: could not start WAL streaming: ERROR: replication slot "geo_secondary_my_domain_com" does not exist
You may want to rerun the replication process on the secondary site.
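Before re-running replication, you can rule out a simple name mismatch by listing the slots that actually exist on the primary site and comparing them with the slot name configured on the secondary site. A sketch, run on the primary site’s database node:
sudo gitlab-psql -d gitlabhq_production -c "SELECT slot_name, active FROM pg_replication_slots;"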
Message: “Command exceeded allowed execution time” when setting up replication?
This may happen while initiating the replication process on the secondary site, and indicates your initial dataset is too large to be replicated in the default timeout (30 minutes).
Re-run gitlab-ctl replicate-geo-database, but include a larger value for --backup-timeout:
sudo gitlab-ctl \
replicate-geo-database \
--host=<primary_node_hostname> \
--slot-name=<secondary_slot_name> \
--backup-timeout=21600
This gives the initial replication up to six hours to complete, rather than the default 30 minutes. Adjust as required for your installation.
Message: “PANIC: could not write to file pg_xlog/xlogtemp.123: No space left on device”
Determine if you have any unused replication slots in the primary database. Unused slots can cause large amounts of log data to build up in pg_xlog. Removing the inactive slots reduces the amount of space used in pg_xlog.
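To confirm that WAL files are what is consuming the disk, and that space is reclaimed after the inactive slots are removed, check the size of the WAL directory. This is a sketch assuming the default Linux package data directory; on PostgreSQL 10 and later the directory is named pg_wal instead of pg_xlog:
sudo du -sh /var/opt/gitlab/postgresql/data/pg_wal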
Message: “ERROR: canceling statement due to conflict with recovery”
This error message occurs infrequently under typical usage, and the system is resilient enough to recover.
However, under certain conditions, some database queries on secondaries may run excessively long, which increases the frequency of this error message. This can lead to a situation where some queries never complete due to being canceled on every replication.
These long-running queries are planned to be removed in the future, but as a workaround, we recommend enabling hot_standby_feedback. This increases the likelihood of bloat on the primary site as it prevents VACUUM from removing recently-dead rows. However, it has been used successfully in production on GitLab.com.
To enable hot_standby_feedback, add the following to /etc/gitlab/gitlab.rb on the secondary site:
postgresql['hot_standby_feedback'] = 'on'
Then reconfigure GitLab:
sudo gitlab-ctl reconfigure
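To verify the setting took effect, you can ask PostgreSQL directly on the secondary site’s database node. A sketch:
sudo gitlab-psql -xc "SHOW hot_standby_feedback;"
The command should report on once the reconfigure has completed.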
To help us resolve this problem, consider commenting on the issue.
Message: server certificate for "PostgreSQL" does not match host name
If you see this error:
FATAL: could not connect to the primary server: server certificate for "PostgreSQL" does not match host name
This happens because the PostgreSQL certificate that the Linux package automatically creates contains the Common Name PostgreSQL, but the replication is connecting to a different host and GitLab attempts to use the verify-full SSL mode by default.
To fix this issue, you can either:
- Use the --sslmode=verify-ca argument with the replicate-geo-database command.
- For an already replicated database, change sslmode=verify-full to sslmode=verify-ca in /var/opt/gitlab/postgresql/data/gitlab-geo.conf and run gitlab-ctl restart postgresql (see the sketch after this list).
- Configure SSL for PostgreSQL with a custom certificate (including the host name that’s used to connect to the database in the CN or SAN) instead of using the automatically generated certificate.
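For the second option, a sketch of the check-and-restart steps on the secondary site’s database node, assuming the file path shown above:
sudo grep sslmode /var/opt/gitlab/postgresql/data/gitlab-geo.conf
# edit the file and change verify-full to verify-ca, then:
sudo gitlab-ctl restart postgresql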
Message: LOG: invalid CIDR mask in address
This happens on wrongly-formatted addresses in postgresql['md5_auth_cidr_addresses'].
2020-03-20_23:59:57.60499 LOG: invalid CIDR mask in address "***"
2020-03-20_23:59:57.60501 CONTEXT: line 74 of configuration file "/var/opt/gitlab/postgresql/data/pg_hba.conf"
To fix this, update the IP addresses in /etc/gitlab/gitlab.rb under postgresql['md5_auth_cidr_addresses'] to respect the CIDR format (for example, 10.0.0.1/32).
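As a sketch, a correctly formatted entry in /etc/gitlab/gitlab.rb looks like this (the addresses below are placeholders; use your own node addresses), followed by a reconfigure to regenerate pg_hba.conf:
postgresql['md5_auth_cidr_addresses'] = ['10.0.0.1/32', '10.0.0.2/32']
sudo gitlab-ctl reconfigure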
Message: LOG: invalid IP mask "md5": Name or service not known
This happens when you have added IP addresses without a subnet mask in postgresql['md5_auth_cidr_addresses'].
2020-03-21_00:23:01.97353 LOG: invalid IP mask "md5": Name or service not known
2020-03-21_00:23:01.97354 CONTEXT: line 75 of configuration file "/var/opt/gitlab/postgresql/data/pg_hba.conf"
To fix this, add the subnet mask in /etc/gitlab/gitlab.rb under postgresql['md5_auth_cidr_addresses'] to respect the CIDR format (for example, 10.0.0.1/32).
Message: Found data in the gitlabhq_production database
If you receive the error Found data in the gitlabhq_production database! when running gitlab-ctl replicate-geo-database, data was detected in the projects table. When one or more projects are detected, the operation is aborted to prevent accidental data loss. To bypass this message, pass the --force option to the command.
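For example, the command from the earlier section with the bypass flag added. This is a sketch; keep whatever other arguments you were already using:
sudo gitlab-ctl \
replicate-geo-database \
--host=<primary_node_hostname> \
--slot-name=<secondary_slot_name> \
--force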
Message: FATAL: could not map anonymous shared memory: Cannot allocate memory
If you see this message, it means that the secondary site’s PostgreSQL requests more memory than is available on the node. There is an issue that tracks this problem.
Example error message in Patroni logs (located at /var/log/gitlab/patroni/current for Linux package installations):
2023-11-21_23:55:18.63727 FATAL: could not map anonymous shared memory: Cannot allocate memory
2023-11-21_23:55:18.63729 HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 17035526144 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
The workaround is to increase the memory available to the secondary site’s PostgreSQL nodes to match the memory requirements of the primary site’s PostgreSQL nodes.
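To compare PostgreSQL’s configured shared memory request with what the node actually has, you can check shared_buffers and available memory on the secondary site’s database node. A sketch, assuming the default Linux package data directory:
sudo grep shared_buffers /var/opt/gitlab/postgresql/data/postgresql.conf
free -h
If the configured shared_buffers value is close to or larger than the node’s total memory, the node needs more memory to match the primary site’s PostgreSQL settings.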
Investigate causes of database replication lag
If the output of sudo gitlab-rake geo:status shows that Database replication lag remains significantly high over time, check the primary database node to determine the status of lag for different parts of the database replication process. These values are known as write_lag, flush_lag, and replay_lag. For more information, see the official PostgreSQL documentation.
Run the following command on the Geo primary site’s database node to provide relevant output:
gitlab-psql -xc 'SELECT write_lag,flush_lag,replay_lag FROM pg_stat_replication;'
-[ RECORD 1 ]---------------
write_lag | 00:00:00.072392
flush_lag | 00:00:00.108168
replay_lag | 00:00:00.108283
If one or more of these values is significantly high, this could indicate a problem and should be investigated further. When determining the cause, consider that:
- write_lag indicates the time elapsed since WAL bytes were sent by the primary and received by the secondary, but not yet flushed or applied.
- A high write_lag value may indicate degraded network performance or insufficient network speed between the primary and secondary nodes.
- A high flush_lag value may indicate degraded or sub-optimal disk I/O performance with the secondary node’s storage device.
- A high replay_lag value may indicate long running transactions in PostgreSQL, or the saturation of a needed resource like the CPU (see the example query after this list).
- The difference in time between write_lag and flush_lag indicates that WAL bytes have been sent to the underlying storage system, but it has not reported that they were flushed. This data is most likely not fully written to persistent storage, and is likely held in some kind of volatile write cache.
- The difference between flush_lag and replay_lag indicates WAL bytes that have been successfully persisted to storage, but could not be replayed by the database system.
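If replay_lag is the value that stands out, one way to look for the long-running transactions mentioned above is to list the oldest non-idle sessions on the secondary site’s database node. A sketch using pg_stat_activity:
gitlab-psql -xc "SELECT pid, now() - xact_start AS xact_age, state, left(query, 60) AS query FROM pg_stat_activity WHERE state <> 'idle' ORDER BY xact_age DESC NULLS LAST LIMIT 5;"
Sessions with a large xact_age are candidates for causing recovery conflicts and replay lag.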