- Requirements
- Design Choices
- Configuration
- Chart configuration examples
- External Services
- Chart settings
# Using the GitLab-Gitaly chart

The `gitaly` sub-chart provides a configurable deployment of Gitaly servers.
## Requirements

This chart depends on access to the Workhorse service, either as part of the complete GitLab chart or provided as an external service reachable from the Kubernetes cluster this chart is deployed onto.
## Design Choices

The Gitaly container used in this chart contains the GitLab Shell codebase in order to perform the actions on the Git repositories that have not yet been ported into Gitaly. Because the Gitaly container includes a copy of GitLab Shell, GitLab Shell must also be configured within this chart.
## Configuration

The `gitaly` chart is configured in two parts: external services and chart settings.

Gitaly is deployed as a component by default when deploying the GitLab chart. If deploying Gitaly separately, `global.gitaly.enabled` needs to be set to `false` and additional configuration must be performed as described in the external Gitaly documentation.
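As a sketch, values for disabling the bundled Gitaly and pointing the chart at external Gitaly nodes might look like the following (the hostname is a placeholder, and the storage name must match the storage configured in GitLab):

```yaml
global:
  gitaly:
    enabled: false
    external:
      - name: default                # storage name, placeholder
        hostname: gitaly.example.com # placeholder hostname
        port: 8075
```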
### Installation command line options

The table below contains all the possible chart configurations that can be supplied to the `helm install` command using the `--set` flags.
| Parameter | Default | Description |
|---|---|---|
| `annotations` | | Pod annotations |
| `backup.goCloudUrl` | | Object storage URL for server-side Gitaly backups. |
| `common.labels` | `{}` | Supplemental labels that are applied to all objects created by this chart. |
| `podLabels` | | Supplemental Pod labels. Will not be used for selectors. |
| `external[].hostname` | `- ""` | Hostname of the external node |
| `external[].name` | `- ""` | Name of the external node storage |
| `external[].port` | `- ""` | Port of the external node |
| `extraContainers` | | List of extra containers to include |
| `extraInitContainers` | | List of extra init containers to include |
| `extraVolumeMounts` | | List of extra volume mounts to perform |
| `extraVolumes` | | List of extra volumes to create |
| `extraEnv` | | List of extra environment variables to expose |
| `extraEnvFrom` | | List of extra environment variables from other data sources to expose |
| `gitaly.serviceName` | | The name of the generated Gitaly service. Overrides `global.gitaly.serviceName`, and defaults to `<RELEASE-NAME>-gitaly` |
| `gpgSigning.enabled` | `false` | If Gitaly GPG signing should be used. |
| `gpgSigning.secret` | | The name of the secret used for Gitaly GPG signing. |
| `gpgSigning.key` | | The key in the GPG secret containing Gitaly's GPG signing key. |
| `image.pullPolicy` | `Always` | Gitaly image pull policy |
| `image.pullSecrets` | | Secrets for the image repository |
| `image.repository` | `registry.gitlab.com/gitlab-org/build/cng/gitaly` | Gitaly image repository |
| `image.tag` | `master` | Gitaly image tag |
| `init.image.repository` | | initContainer image |
| `init.image.tag` | | initContainer image tag |
| `init.containerSecurityContext` | | initContainer specific securityContext |
| `internal.names[]` | `- default` | Ordered names of the StatefulSet storages |
| `serviceLabels` | `{}` | Supplemental service labels |
| `service.externalPort` | `8075` | Gitaly service exposed port |
| `service.internalPort` | `8075` | Gitaly internal port |
| `service.name` | `gitaly` | The name of the Service port that Gitaly is behind in the Service object. |
| `service.type` | `ClusterIP` | Gitaly service type |
| `securityContext.fsGroup` | `1000` | Group ID under which the pod should be started |
| `securityContext.fsGroupChangePolicy` | | Policy for changing ownership and permission of the volume (requires Kubernetes 1.23) |
| `securityContext.runAsUser` | `1000` | User ID under which the pod should be started |
| `containerSecurityContext` | | Override container securityContext under which the Gitaly container is started |
| `containerSecurityContext.runAsUser` | `1000` | Allows overwriting of the specific security context under which the Gitaly container is started |
| `tolerations` | `[]` | Toleration labels for pod assignment |
| `affinity` | `{}` | Affinity rules for pod assignment |
| `persistence.accessMode` | `ReadWriteOnce` | Gitaly persistence access mode |
| `persistence.annotations` | | Gitaly persistence annotations |
| `persistence.enabled` | `true` | Gitaly enable persistence flag |
| `persistence.labels` | | Gitaly persistence labels |
| `persistence.matchExpressions` | | Label-expression matches to bind |
| `persistence.matchLabels` | | Label-value matches to bind |
| `persistence.size` | `50Gi` | Gitaly persistence volume size |
| `persistence.storageClass` | | storageClassName for provisioning |
| `persistence.subPath` | | Gitaly persistence volume mount path |
| `priorityClassName` | | Gitaly StatefulSet priorityClassName |
| `logging.level` | | Log level |
| `logging.format` | `json` | Log format |
| `logging.sentryDsn` | | Sentry DSN URL for exceptions from the Go server |
| `logging.sentryEnvironment` | | Sentry environment to be used for logging |
| `shell.concurrency[]` | | Concurrency of each RPC endpoint. Specified using keys `rpc` and `maxPerRepo` |
| `packObjectsCache.enabled` | `false` | Enable the Gitaly pack-objects cache |
| `packObjectsCache.dir` | `/home/git/repositories/+gitaly/PackObjectsCache` | Directory where cache files get stored |
| `packObjectsCache.max_age` | `5m` | Cache entries lifespan |
| `packObjectsCache.min_occurrences` | `1` | Minimum count a key must reach before a cache entry is created |
| `git.catFileCacheSize` | | Cache size used by the Git cat-file process |
| `git.config[]` | `[]` | Git configuration that Gitaly should set when spawning Git commands |
| `prometheus.grpcLatencyBuckets` | | Buckets corresponding to histogram latencies on gRPC method calls to be recorded by Gitaly. A string form of the array (for example, `"[1.0, 1.5, 2.0]"`) is required as input |
| `statefulset.strategy` | `{}` | Allows one to configure the update strategy used by the StatefulSet |
| `statefulset.livenessProbe.initialDelaySeconds` | `30` | Delay before the liveness probe is initiated. If startupProbe is enabled, this is set to 0. |
| `statefulset.livenessProbe.periodSeconds` | `10` | How often to perform the liveness probe |
| `statefulset.livenessProbe.timeoutSeconds` | `3` | When the liveness probe times out |
| `statefulset.livenessProbe.successThreshold` | `1` | Minimum consecutive successes for the liveness probe to be considered successful after having failed |
| `statefulset.livenessProbe.failureThreshold` | `3` | Minimum consecutive failures for the liveness probe to be considered failed after having succeeded |
| `statefulset.readinessProbe.initialDelaySeconds` | `10` | Delay before the readiness probe is initiated. If startupProbe is enabled, this is set to 0. |
| `statefulset.readinessProbe.periodSeconds` | `10` | How often to perform the readiness probe |
| `statefulset.readinessProbe.timeoutSeconds` | `3` | When the readiness probe times out |
| `statefulset.readinessProbe.successThreshold` | `1` | Minimum consecutive successes for the readiness probe to be considered successful after having failed |
| `statefulset.readinessProbe.failureThreshold` | `3` | Minimum consecutive failures for the readiness probe to be considered failed after having succeeded |
| `statefulset.startupProbe.enabled` | `false` | Whether a startup probe is enabled. |
| `statefulset.startupProbe.initialDelaySeconds` | `1` | Delay before the startup probe is initiated |
| `statefulset.startupProbe.periodSeconds` | `2` | How often to perform the startup probe |
| `statefulset.startupProbe.timeoutSeconds` | `1` | When the startup probe times out |
| `statefulset.startupProbe.successThreshold` | `1` | Minimum consecutive successes for the startup probe to be considered successful after having failed |
| `statefulset.startupProbe.failureThreshold` | `30` | Minimum consecutive failures for the startup probe to be considered failed after having succeeded |
| `metrics.enabled` | `false` | If a metrics endpoint should be made available for scraping |
| `metrics.port` | `9236` | Metrics endpoint port |
| `metrics.path` | `/metrics` | Metrics endpoint path |
| `metrics.serviceMonitor.enabled` | `false` | If a ServiceMonitor should be created to enable Prometheus Operator to manage the metrics scraping. Note that enabling this removes the `prometheus.io` scrape annotations |
| `metrics.serviceMonitor.additionalLabels` | `{}` | Additional labels to add to the ServiceMonitor |
| `metrics.serviceMonitor.endpointConfig` | `{}` | Additional endpoint configuration for the ServiceMonitor |
| `metrics.metricsPort` | | DEPRECATED: Use `metrics.port` |
| `gomemlimit.enabled` | `true` | Automatically sets the `GOMEMLIMIT` environment variable for the Gitaly container to `resources.limits.memory`, if that limit is also set. Users can override this behavior by setting this value to `false` and setting `GOMEMLIMIT` in `extraEnv`. The value must meet the documented format criteria. |
| `cgroups.enabled` | `false` | Gitaly has built-in cgroups control. When configured, Gitaly assigns Git processes to a cgroup based on the repository the Git command is operating in. This parameter enables repository cgroups. Note that only cgroups v2 is supported. |
| `cgroups.initContainer.image.repository` | `registry.gitlab.com/gitlab-org/build/cng/gitaly-init-cgroups` | Gitaly init-cgroups image repository |
| `cgroups.initContainer.image.tag` | `master` | Gitaly init-cgroups image tag |
| `cgroups.initContainer.image.pullPolicy` | `IfNotPresent` | Gitaly init-cgroups image pull policy |
| `cgroups.mountpoint` | `/etc/gitlab-secrets/gitaly-pod-cgroup` | Where the parent cgroup directory is mounted. |
| `cgroups.hierarchyRoot` | `gitaly` | Parent cgroup under which Gitaly creates groups, expected to be owned by the user and group Gitaly runs as. |
| `cgroups.memoryBytes` | | The total memory limit imposed collectively on all Git processes that Gitaly spawns. 0 implies no limit. |
| `cgroups.cpuShares` | | The CPU limit imposed collectively on all Git processes that Gitaly spawns. 0 implies no limit. The maximum is 1024 shares, which represents 100% of CPU. |
| `cgroups.cpuQuotaUs` | | Used to throttle the cgroups' processes if they exceed this quota value. We set `cpuQuotaUs` to 100ms, so 1 core is 100000. 0 implies no limit. |
| `cgroups.repositories.count` | | The number of cgroups in the cgroups pool. Each time a new Git command is spawned, Gitaly assigns it to one of these cgroups based on the repository the command is for. A circular hashing algorithm assigns Git commands to these cgroups, so a Git command for a repository is always assigned to the same cgroup. |
| `cgroups.repositories.memoryBytes` | | The total memory limit imposed on all Git processes contained in a repository cgroup. 0 implies no limit. This value cannot exceed that of the top-level `memoryBytes`. |
| `cgroups.repositories.cpuShares` | | The CPU limit imposed on all Git processes contained in a repository cgroup. 0 implies no limit. The maximum is 1024 shares, which represents 100% of CPU. This value cannot exceed that of the top-level `cpuShares`. |
| `cgroups.repositories.cpuQuotaUs` | | The `cpuQuotaUs` imposed on all Git processes contained in a repository cgroup. A Git process can't use more than the given quota. We set `cpuQuotaUs` to 100ms, so 1 core is 100000. 0 implies no limit. |
| `gracefulRestartTimeout` | `25` | Gitaly shutdown grace period: how long to wait for in-flight requests to complete, in seconds. Pod `terminationGracePeriodSeconds` is set to this value + 5 seconds. |
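As an alternative to repeated `--set` flags, the same settings can be collected in a values file passed to `helm install -f`. A minimal sketch (the values below are illustrative only, not recommendations):

```yaml
image:
  pullPolicy: IfNotPresent
logging:
  level: warn
  format: json
metrics:
  enabled: true
  port: 9236
persistence:
  enabled: true
  size: 100Gi
```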
## Chart configuration examples
### extraEnv

`extraEnv` allows you to expose additional environment variables in all containers in the pods.

Below is an example use of `extraEnv`:

```yaml
extraEnv:
  SOME_KEY: some_value
  SOME_OTHER_KEY: some_other_value
```

When the container is started, you can confirm that the environment variables are exposed:

```shell
env | grep SOME
SOME_KEY=some_value
SOME_OTHER_KEY=some_other_value
```
### extraEnvFrom

`extraEnvFrom` allows you to expose additional environment variables from other data sources in all containers in the pods.

Below is an example use of `extraEnvFrom`:

```yaml
extraEnvFrom:
  MY_NODE_NAME:
    fieldRef:
      fieldPath: spec.nodeName
  MY_CPU_REQUEST:
    resourceFieldRef:
      containerName: test-container
      resource: requests.cpu
  SECRET_THING:
    secretKeyRef:
      name: special-secret
      key: special_token
      # optional: boolean
  CONFIG_STRING:
    configMapKeyRef:
      name: useful-config
      key: some-string
      # optional: boolean
```
### image.pullSecrets

`pullSecrets` allows you to authenticate to a private registry to pull images for a pod.

Additional details about private registries and their authentication methods can be found in the Kubernetes documentation.

Below is an example use of `pullSecrets`:

```yaml
image:
  repository: my.gitaly.repository
  tag: latest
  pullPolicy: Always
  pullSecrets:
    - name: my-secret-name
    - name: my-secondary-secret-name
```
### tolerations

`tolerations` allow you to schedule pods on tainted worker nodes.

Below is an example use of `tolerations`:

```yaml
tolerations:
  - key: "node_label"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  - key: "node_label"
    operator: "Equal"
    value: "true"
    effect: "NoExecute"
```
### affinity

For more information, see `affinity`.
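As a sketch, a node affinity rule for the Gitaly pods could look like the following (the label key and value are placeholders, chosen only for illustration):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-type          # placeholder node label key
              operator: In
              values:
                - storage-optimized   # placeholder node label value
```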
### annotations

`annotations` allows you to add annotations to the Gitaly pods.

Below is an example use of `annotations`:

```yaml
annotations:
  kubernetes.io/example-annotation: annotation-value
```
### priorityClassName

`priorityClassName` allows you to assign a `PriorityClass` to the Gitaly pods.

Below is an example use of `priorityClassName`:

```yaml
priorityClassName: persistence-enabled
```
### git.config

`git.config` allows you to add configuration to all Git commands spawned by Gitaly. It accepts configuration as documented in `git-config(1)`, in `key` / `value` pairs, as shown below.

```yaml
git:
  config:
    - key: "pack.threads"
      value: 4
    - key: "fsck.missingSpaceBeforeDate"
      value: ignore
```
### cgroups

To prevent resource exhaustion, Gitaly uses cgroups to assign Git processes to a cgroup based on the repository being operated on. Each cgroup has memory and CPU limits, ensuring system stability and preventing resource saturation.

The `initContainer` that runs before Gitaly starts must be executed as root. This container configures the permissions so that Gitaly can manage cgroups, and therefore mounts a volume on the filesystem to have write access to `/sys/fs/cgroup`.

```yaml
cgroups:
  enabled: true
  # Total limit across all repository cgroups
  memoryBytes: 64424509440 # 60GiB
  cpuShares: 1024
  cpuQuotaUs: 1200000 # 12 cores
  # Per repository limits, 1000 repository cgroups
  repositories:
    count: 1000
    memoryBytes: 32212254720 # 30GiB
    cpuShares: 512
    cpuQuotaUs: 400000 # 4 cores
```
## External Services

This chart should be attached to the Workhorse service.

### Workhorse

```yaml
workhorse:
  host: workhorse.example.com
  serviceName: webservice
  port: 8181
```
| Name | Type | Default | Description |
|---|---|---|---|
| `host` | String | | The hostname of the Workhorse server. This can be omitted in lieu of `serviceName`. |
| `port` | Integer | `8181` | The port on which to connect to the Workhorse server. |
| `serviceName` | String | `webservice` | The name of the service which is operating the Workhorse server. If this is present, and `host` is not, the chart will template the hostname of the service (and current `.Release.Name`) in place of the `host` value. This is convenient when using Workhorse as a part of the overall GitLab chart. |
## Chart settings

The following values are used to configure the Gitaly Pods.

The Gitaly `authToken` is sourced from the `global.gitaly.authToken` value. Additionally, the Gitaly container has a copy of GitLab Shell, which has some configuration that can be set. The Shell `authToken` is sourced from the `global.shell.authToken` values.

### Git Repository Persistence

This chart provisions a PersistentVolumeClaim and mounts a corresponding persistent volume for the Git repository data. You'll need physical storage available in the Kubernetes cluster for this to work. If you'd rather use `emptyDir`, disable the PersistentVolumeClaim with `persistence.enabled: false`.
The persistence settings are used in a `volumeClaimTemplate` that applies to all Gitaly pods, so they should not reference a single specific volume (such as `volumeName`). If you want to reference a specific volume, you need to manually create the PersistentVolumeClaim. These settings cannot be changed after the initial deployment, because a StatefulSet's `VolumeClaimTemplate` is immutable.

```yaml
persistence:
  enabled: true
  storageClass: standard
  accessMode: ReadWriteOnce
  size: 50Gi
  matchLabels: {}
  matchExpressions: []
  subPath: "data"
  annotations: {}
```
| Name | Type | Default | Description |
|---|---|---|---|
| `accessMode` | String | `ReadWriteOnce` | Sets the accessMode requested in the PersistentVolumeClaim. See the Kubernetes Access Modes documentation for details. |
| `enabled` | Boolean | `true` | Sets whether or not to use a PersistentVolumeClaim for the repository data. If `false`, an `emptyDir` volume is used. |
| `matchExpressions` | Array | | Accepts an array of label condition objects to match against when choosing a volume to bind. This is used in the PersistentVolumeClaim `selector` section. See the volumes documentation. |
| `matchLabels` | Map | | Accepts a Map of label names and label values to match against when choosing a volume to bind. This is used in the PersistentVolumeClaim `selector` section. See the volumes documentation. |
| `size` | String | `50Gi` | The minimum volume size to request for the data persistence. |
| `storageClass` | String | | Sets the storageClassName on the Volume Claim for dynamic provisioning. When unset or null, the default provisioner will be used. If set to a hyphen, dynamic provisioning is disabled. |
| `subPath` | String | | Sets the path within the volume to mount, rather than the volume root. The root is used if the subPath is empty. |
| `annotations` | Map | | Sets the annotations on the Volume Claim for dynamic provisioning. See the Kubernetes Annotations documentation for details. |
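As a sketch, the `matchLabels` setting can bind the claim to pre-provisioned PersistentVolumes carrying a specific label (the label below is a placeholder; `storageClass: "-"` disables dynamic provisioning as described in the table above):

```yaml
persistence:
  enabled: true
  storageClass: "-"          # a hyphen disables dynamic provisioning
  matchLabels:
    app: gitaly-repo-data    # placeholder label set on the pre-created PVs
```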
## Running Gitaly over TLS

Gitaly supports communicating with other components over TLS. This is controlled by the settings `global.gitaly.tls.enabled` and `global.gitaly.tls.secretName`.

Follow these steps to run Gitaly over TLS:

1. The Helm chart expects a certificate to be provided for communicating over TLS with Gitaly. This certificate should apply to all the Gitaly nodes that are present, so the hostnames of each of these Gitaly nodes should be added as Subject Alternative Names (SANs) to the certificate.

   To know the hostnames to use, check the `/srv/gitlab/config/gitlab.yml` file in the Toolbox pod and check the various `gitaly_address` fields specified under the `repositories.storages` key within it.

   ```shell
   kubectl exec -it <Toolbox pod> -- grep gitaly_address /srv/gitlab/config/gitlab.yml
   ```

1. Create a Kubernetes TLS secret using the certificate created.

   ```shell
   kubectl create secret tls gitaly-server-tls --cert=gitaly.crt --key=gitaly.key
   ```

1. Redeploy the Helm chart by passing `--set global.gitaly.tls.enabled=true`.
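The equivalent values-file form of the TLS settings, referencing the secret created in the steps above, can be sketched as:

```yaml
global:
  gitaly:
    tls:
      enabled: true
      secretName: gitaly-server-tls
```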
## Global server hooks

The Gitaly StatefulSet has support for global server hooks. The hook scripts run on the Gitaly pod, and are therefore limited to the tools available in the Gitaly container.

The hooks are populated using ConfigMaps, and can be used by setting the following values as appropriate:

- `global.gitaly.hooks.preReceive.configmap`
- `global.gitaly.hooks.postReceive.configmap`
- `global.gitaly.hooks.update.configmap`

To populate the ConfigMap, you can point `kubectl` to a directory of scripts:

```shell
kubectl create configmap MAP_NAME --from-file /PATH/TO/SCRIPT/DIR
```
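Assuming a ConfigMap named `pre-receive-hooks` was created with the command above (the name is a placeholder), referencing it in the values could look like:

```yaml
global:
  gitaly:
    hooks:
      preReceive:
        configmap: pre-receive-hooks  # placeholder ConfigMap name
```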
## GPG signing commits created by GitLab

Gitaly has the ability to GPG sign all commits created via the GitLab UI (for example, the Web IDE), as well as commits created by GitLab, such as merge commits and squashes.

1. Create a Kubernetes secret using your GPG private key.

   ```shell
   kubectl create secret generic gitaly-gpg-signing-key --from-file=signing_key=/path/to/gpg_signing_key.gpg
   ```

1. Enable GPG signing in your `values.yaml`.

   ```yaml
   gitlab:
     gitaly:
       gpgSigning:
         enabled: true
         secret: gitaly-gpg-signing-key
         key: signing_key
   ```
## Server-side backups

The chart supports Gitaly server-side backups. To use them:

1. Create a bucket to store the backups.

1. Configure the object store credentials and the storage URL.

   ```yaml
   gitlab:
     gitaly:
       extraEnvFrom:
         # Mount the existing object store secret to the expected environment variables.
         AWS_ACCESS_KEY_ID:
           secretKeyRef:
             name: <Rails object store secret>
             key: aws_access_key_id
         AWS_SECRET_ACCESS_KEY:
           secretKeyRef:
             name: <Rails object store secret>
             key: aws_secret_access_key
       backup:
         # This is the connection string for Gitaly server-side backups.
         goCloudUrl: <object store connection URL>
   ```

   For the expected environment variables and storage URL format for your object storage backend, see the Gitaly documentation.

1. Enable server-side backups with `backup-utility`.
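As a sketch, running `backup-utility` from the Toolbox pod with its server-side repositories flag might look like the following (the pod name is a placeholder; check the Toolbox backup documentation for the exact flag supported by your chart version):

```shell
kubectl exec <Toolbox pod name> -it -- backup-utility --repositories-server-side
```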