# Using the Praefect chart
The Praefect chart is used to manage a Gitaly cluster inside a GitLab installation deployed with the Helm charts.
## Known limitations and issues
- The database has to be manually created.
- The cluster size is fixed: Gitaly Cluster does not currently support autoscaling.
- Using a Praefect instance in the cluster to manage Gitaly instances outside the cluster is not supported.
## Requirements
This chart consumes the Gitaly chart. Settings from `global.gitaly` are used to configure the instances created by this chart. Documentation of these settings can be found in the Gitaly chart documentation.
Important: `global.gitaly.tls` is independent of `global.praefect.tls`. They are configured separately.
By default, this chart creates 3 Gitaly replicas.
## Configuration
The chart is disabled by default. To enable it as part of a chart deploy, set `global.praefect.enabled=true`.
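For example, a minimal sketch of enabling the chart at deploy time, assuming a release named `gitlab` installed from the `gitlab/gitlab` chart (both names are assumptions; substitute your own release and chart reference):

```shell
# Enable Praefect as part of the deployment (release and chart names are assumptions)
helm upgrade --install gitlab gitlab/gitlab \
  --set global.praefect.enabled=true
```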
### Replicas
The default number of replicas to deploy is 3. This can be changed by setting `global.praefect.virtualStorages[].gitalyReplicas` with the desired number of replicas. For example:
```yaml
global:
  praefect:
    enabled: true
    virtualStorages:
      - name: default
        gitalyReplicas: 4
        maxUnavailable: 1
```
### Multiple virtual storages
Multiple virtual storages can be configured (see the Gitaly Cluster documentation). For example:
```yaml
global:
  praefect:
    enabled: true
    virtualStorages:
      - name: default
        gitalyReplicas: 4
        maxUnavailable: 1
      - name: vs2
        gitalyReplicas: 5
        maxUnavailable: 2
```
This will create two sets of resources for Gitaly, including two Gitaly StatefulSets (one per virtual storage). Administrators can then configure where new repositories are stored.
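For instance, new-repository placement can be weighted per storage through the application settings API. A sketch, assuming an instance at `gitlab.example.com`, an admin access token, and the two virtual storages from the example above:

```shell
# Split new repositories evenly between the two virtual storages
# (hostname and token are placeholders)
curl --request PUT \
  --header "PRIVATE-TOKEN: <your_access_token>" \
  --header "Content-Type: application/json" \
  --data '{"repository_storages_weighted": {"default": 50, "vs2": 50}}' \
  "https://gitlab.example.com/api/v4/application/settings"
```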
### Persistence
It is possible to provide persistence configuration per virtual storage. For example:
```yaml
global:
  praefect:
    enabled: true
    virtualStorages:
      - name: default
        gitalyReplicas: 4
        maxUnavailable: 1
        persistence:
          enabled: true
          size: 50Gi
          accessMode: ReadWriteOnce
          storageClass: storageclass1
      - name: vs2
        gitalyReplicas: 5
        maxUnavailable: 2
        persistence:
          enabled: true
          size: 100Gi
          accessMode: ReadWriteOnce
          storageClass: storageclass2
```
### defaultReplicationFactor
`defaultReplicationFactor` can be configured on each virtual storage (see the configure replication factor documentation). For example:
```yaml
global:
  praefect:
    enabled: true
    virtualStorages:
      - name: default
        gitalyReplicas: 5
        maxUnavailable: 2
        defaultReplicationFactor: 3
      - name: secondary
        gitalyReplicas: 4
        maxUnavailable: 1
        defaultReplicationFactor: 2
```
## Migrating to Praefect
When migrating from standalone Gitaly instances to a Praefect setup, `global.praefect.replaceInternalGitaly` can be set to `false`. This ensures that the existing Gitaly instances are preserved while the new Praefect-managed Gitaly instances are created.
```yaml
global:
  praefect:
    enabled: true
    replaceInternalGitaly: false
    virtualStorages:
      - name: virtualStorage2
        gitalyReplicas: 5
        maxUnavailable: 2
```
In this configuration, none of the virtual storages can be named `default`. This is because there must be at least one storage named `default` at all times, so that name is already taken by the non-Praefect configuration.

The instructions to migrate to Gitaly Cluster can then be followed to move data from the `default` storage to `virtualStorage2`. If additional storages were defined under `global.gitaly.internal.names`, be sure to migrate repositories from those storages as well.
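As an illustration of that migration step, all projects on a storage can be scheduled to move to the new virtual storage through the repository storage moves API. A sketch, assuming an instance at `gitlab.example.com` and an admin access token:

```shell
# Schedule moves for all projects from the old storage to the new virtual storage
# (hostname and token are placeholders)
curl --request POST \
  --header "PRIVATE-TOKEN: <your_access_token>" \
  --header "Content-Type: application/json" \
  --data '{"source_storage_name": "default", "destination_storage_name": "virtualStorage2"}' \
  "https://gitlab.example.com/api/v4/project_repository_storage_moves"
```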
After the repositories have been migrated to `virtualStorage2`, `replaceInternalGitaly` can be set back to `true` if a storage named `default` is added in the Praefect configuration.
```yaml
global:
  praefect:
    enabled: true
    replaceInternalGitaly: true
    virtualStorages:
      - name: default
        gitalyReplicas: 4
        maxUnavailable: 1
      - name: virtualStorage2
        gitalyReplicas: 5
        maxUnavailable: 2
```
The instructions to migrate to Gitaly Cluster can be followed again to move data from `virtualStorage2` to the newly added `default` storage if desired.
Finally, see the repository storage paths documentation to configure where new repositories are stored.
## Creating the database
Praefect uses its own database to track its state. This database must be created manually for Praefect to be functional.
1. Log into your database instance:

   ```shell
   kubectl exec -it $(kubectl get pods -l app.kubernetes.io/name=postgresql -o custom-columns=NAME:.metadata.name --no-headers) -- bash
   PGPASSWORD=$(echo $POSTGRES_POSTGRES_PASSWORD) psql -U postgres -d template1
   ```

1. Create the database user:

   ```sql
   CREATE ROLE praefect WITH LOGIN;
   ```

1. Set the database user password. By default, the `shared-secrets` Job will generate a secret for you.

   1. Fetch the password:

      ```shell
      kubectl get secret RELEASE_NAME-praefect-dbsecret -o jsonpath="{.data.secret}" | base64 --decode
      ```

   1. Set the password in the `psql` prompt:

      ```sql
      \password praefect
      ```

1. Create the database:

   ```sql
   CREATE DATABASE praefect WITH OWNER praefect;
   ```
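As an optional sanity check (an assumption, not part of the steps above), the role and database can be confirmed from the same `psql` session:

```sql
-- Confirm the praefect role and database exist
\du praefect
\l praefect
```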
## Running Praefect over TLS
Praefect supports communicating with client and Gitaly nodes over TLS. This is controlled by the settings `global.praefect.tls.enabled` and `global.praefect.tls.secretName`.

To run Praefect over TLS, follow these steps:
1. The Helm chart expects a certificate to be provided for communicating over TLS with Praefect. This certificate should apply to all the Praefect nodes that are present. Hence, all hostnames of each of these nodes should be added as a Subject Alternate Name (SAN) to the certificate, or alternatively, you can use wildcards.

   To know the hostnames to use, check the file `/srv/gitlab/config/gitlab.yml` in the Toolbox Pod and check the various `gitaly_address` fields specified under the `repositories.storages` key within it.

   ```shell
   kubectl exec -it <Toolbox Pod> -- grep gitaly_address /srv/gitlab/config/gitlab.yml
   ```

1. Create a TLS Secret using the certificate created.

   ```shell
   kubectl create secret tls <secret name> --cert=praefect.crt --key=praefect.key
   ```

1. Redeploy the Helm chart by passing `--set global.praefect.tls.enabled=true`.
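A sketch of that redeploy, assuming a release named `gitlab` installed from the `gitlab/gitlab` chart (both names are assumptions) and the Secret created in the previous step:

```shell
# Re-apply the existing values and turn on Praefect TLS
helm upgrade gitlab gitlab/gitlab \
  --reuse-values \
  --set global.praefect.tls.enabled=true \
  --set global.praefect.tls.secretName=<secret name>
```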
When running Gitaly over TLS, a secret name must be provided for each virtual storage.
```yaml
global:
  gitaly:
    tls:
      enabled: true
  praefect:
    enabled: true
    tls:
      enabled: true
      secretName: praefect-tls
    virtualStorages:
      - name: default
        gitalyReplicas: 4
        maxUnavailable: 1
        tlsSecretName: default-tls
      - name: vs2
        gitalyReplicas: 5
        maxUnavailable: 2
        tlsSecretName: vs2-tls
```
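Those per-storage Secrets can be created the same way as the Praefect Secret. A sketch, assuming per-storage certificate and key files with these hypothetical names already exist locally:

```shell
# Create one TLS Secret per virtual storage (file names are placeholders)
kubectl create secret tls default-tls --cert=gitaly-default.crt --key=gitaly-default.key
kubectl create secret tls vs2-tls --cert=gitaly-vs2.crt --key=gitaly-vs2.key
```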
## Installation command line options
The table below contains all the possible chart configurations that can be supplied to the `helm install` command using the `--set` flags.
| Parameter | Default | Description |
|---|---|---|
| `common.labels` | `{}` | Supplemental labels that are applied to all objects created by this chart. |
| `failover.enabled` | `true` | Whether Praefect should perform failover on node failure |
| `failover.readonlyAfter` | `false` | Whether the nodes should be in read-only mode after failover |
| `autoMigrate` | `true` | Automatically run migrations on startup |
| `image.repository` | `registry.gitlab.com/gitlab-org/build/cng/gitaly` | The default image repository to use. Praefect is bundled as part of the Gitaly image |
| `podLabels` | `{}` | Supplemental Pod labels. Will not be used for selectors. |
| `ntpHost` | `pool.ntp.org` | Configure the NTP server Praefect should ask for the current time. |
| `service.name` | `praefect` | The name of the service to create |
| `service.type` | `ClusterIP` | The type of service to create |
| `service.internalPort` | `8075` | The internal port number that the Praefect pod will be listening on |
| `service.externalPort` | `8075` | The port number the Praefect service should expose in the cluster |
| `init.resources` | | |
| `init.image` | | |
| `init.containerSecurityContext.allowPrivilegeEscalation` | `false` | initContainer specific: Controls whether a process can gain more privileges than its parent process |
| `init.containerSecurityContext.runAsNonRoot` | `true` | initContainer specific: Controls whether the container runs with a non-root user |
| `init.containerSecurityContext.capabilities.drop` | `[ "ALL" ]` | initContainer specific: Removes Linux capabilities for the container |
| `extraEnvFrom` | | List of extra environment variables from other data sources to expose |
| `logging.level` | | Log level |
| `logging.format` | `json` | Log format |
| `logging.sentryDsn` | | Sentry DSN URL - Exceptions from Go server |
| `logging.sentryEnvironment` | | Sentry environment to be used for logging |
| `metrics.enabled` | `true` | If a metrics endpoint should be made available for scraping |
| `metrics.port` | `9236` | Metrics endpoint port |
| `metrics.separate_database_metrics` | `true` | If true, then metrics scrapes will not perform database queries; setting to false may cause performance problems |
| `metrics.path` | `/metrics` | Metrics endpoint path |
| `metrics.serviceMonitor.enabled` | `false` | If a ServiceMonitor should be created to enable Prometheus Operator to manage the metrics scraping; note that enabling this removes the `prometheus.io` scrape annotations |
| `affinity` | `{}` | Affinity rules for pod assignment |
| `metrics.serviceMonitor.additionalLabels` | `{}` | Additional labels to add to the ServiceMonitor |
| `metrics.serviceMonitor.endpointConfig` | `{}` | Additional endpoint configuration for the ServiceMonitor |
| `securityContext.runAsUser` | `1000` | |
| `securityContext.fsGroup` | `1000` | |
| `securityContext.fsGroupChangePolicy` | | Policy for changing ownership and permission of the volume (requires Kubernetes 1.23) |
| `securityContext.seccompProfile.type` | `RuntimeDefault` | Seccomp profile to use |
| `containerSecurityContext.allowPrivilegeEscalation` | `false` | Controls whether a process of the container can gain more privileges than its parent process |
| `containerSecurityContext.runAsNonRoot` | `true` | Controls whether the container runs with a non-root user |
| `containerSecurityContext.capabilities.drop` | `[ "ALL" ]` | Removes Linux capabilities for the Gitaly container |
| `serviceAccount.annotations` | `{}` | ServiceAccount annotations |
| `serviceAccount.automountServiceAccountToken` | `false` | Indicates whether or not the default ServiceAccount access token should be mounted in pods |
| `serviceAccount.create` | `false` | Indicates whether or not a ServiceAccount should be created |
| `serviceAccount.enabled` | `false` | Indicates whether or not to use a ServiceAccount |
| `serviceAccount.name` | | Name of the ServiceAccount. If not set, the full chart name is used |
| `serviceLabels` | `{}` | Supplemental service labels |
| `statefulset.strategy` | `{}` | Allows one to configure the update strategy utilized by the statefulset |
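As a usage sketch, a couple of these options set at deploy time (the `gitlab.praefect` prefix assumes this chart is deployed as part of the umbrella GitLab chart; release and chart names are also assumptions):

```shell
# Enable a ServiceMonitor and raise the log level for Praefect
helm upgrade --install gitlab gitlab/gitlab \
  --set global.praefect.enabled=true \
  --set gitlab.praefect.metrics.serviceMonitor.enabled=true \
  --set gitlab.praefect.logging.level=debug
```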
### serviceAccount
This section controls if a ServiceAccount should be created and if the default access token should be mounted in pods.
| Name | Type | Default | Description |
|---|---|---|---|
| `annotations` | Map | `{}` | ServiceAccount annotations. |
| `automountServiceAccountToken` | Boolean | `false` | Controls if the default ServiceAccount access token should be mounted in pods. You should not enable this unless it is required by certain sidecars to work properly (for example, Istio). |
| `create` | Boolean | `false` | Indicates whether or not a ServiceAccount should be created. |
| `enabled` | Boolean | `false` | Indicates whether or not to use a ServiceAccount. |
| `name` | String | | Name of the ServiceAccount. If not set, the full chart name is used. |
### affinity

For more information, see `affinity`.