- Requirements
- Design Choices
- Configuration
- Installation command line options
- Chart configuration examples
- Using the Community Edition of this chart
- External Services
- Metrics
- Chart-wide defaults
- Per-pod Settings
- Configuring the networkpolicy
# Using the GitLab-Sidekiq chart

The `sidekiq` sub-chart provides configurable deployment of Sidekiq workers, explicitly
designed to provide separation of queues across multiple `Deployment`s with individual
scalability and configuration.

While this chart provides a default `pods:` declaration, if you provide an empty definition,
you will have no workers.
## Requirements

This chart depends on access to Redis, PostgreSQL, and Gitaly services, either as part of the
complete GitLab chart or provided as external services reachable from the Kubernetes cluster
this chart is deployed onto.
## Design Choices

This chart creates multiple `Deployment`s and associated `ConfigMap`s. It was decided
that it would be clearer to make use of `ConfigMap` behaviours instead of using `environment`
attributes or additional arguments to the `command` for the containers, in order to
avoid any concerns about command length. This choice results in a large number of
`ConfigMap`s, but provides very clear definitions of what each pod should be doing.
## Configuration

The `sidekiq` chart is configured in three parts: chart-wide external services,
chart-wide defaults, and per-pod definitions.
Installation command line options
The table below contains all the possible charts configurations that can be supplied
to the helm install
command using the --set
flags:
| Parameter | Default | Description |
|---|---|---|
| `annotations` | | Pod annotations |
| `concurrency` | `25` | Sidekiq default concurrency |
| `cluster` | `false` | See below |
| `enabled` | `true` | Sidekiq enabled flag |
| `extraContainers` | | List of extra containers to include |
| `extraInitContainers` | | List of extra init containers to include |
| `extraVolumeMounts` | | List of extra volume mounts to add |
| `extraVolumes` | | List of extra volumes to create |
| `gitaly.serviceName` | `gitaly` | Gitaly service name |
| `hpa.targetAverageValue` | `350m` | Set the autoscaling target value |
| `minReplicas` | `2` | Minimum number of replicas |
| `maxReplicas` | `10` | Maximum number of replicas |
| `image.pullPolicy` | `Always` | Sidekiq image pull policy |
| `image.pullSecrets` | | Secrets for the image repository |
| `image.repository` | `registry.gitlab.com/gitlab-org/build/cng/gitlab-sidekiq-ee` | Sidekiq image repository |
| `image.tag` | | Sidekiq image tag |
| `init.image.repository` | | initContainer image |
| `init.image.tag` | | initContainer image tag |
| `logging.format` | `default` | Set to `json` for JSON structured logs |
| `metrics.enabled` | `true` | Toggle Prometheus metrics exporter |
| `psql.password.key` | `psql-password` | Key to psql password in psql secret |
| `psql.password.secret` | `gitlab-postgres` | psql password secret |
| `psql.port` | | Set PostgreSQL server port. Takes precedence over `global.psql.port` |
| `redis.serviceName` | `redis` | Redis service name |
| `resources.requests.cpu` | `100m` | Sidekiq minimum needed CPU |
| `resources.requests.memory` | `600M` | Sidekiq minimum needed memory |
| `timeout` | `5` | Sidekiq job timeout |
| `tolerations` | `[]` | Toleration labels for pod assignment |
| `memoryKiller.daemonMode` | `false` | If `true`, enables daemon memory killer mode |
| `memoryKiller.maxRss` | `2000000` | Maximum RSS before delayed shutdown is triggered, expressed in kilobytes |
| `memoryKiller.graceTime` | `900` | Time to wait before a triggered shutdown, expressed in seconds |
| `memoryKiller.shutdownWait` | `30` | Amount of time after a triggered shutdown for existing jobs to finish, expressed in seconds |
| `memoryKiller.hardLimitRss` | | Maximum RSS before immediate shutdown is triggered in daemon mode, expressed in kilobytes |
| `memoryKiller.checkInterval` | `3` | Amount of time between memory checks in daemon mode |
| `livenessProbe.initialDelaySeconds` | `20` | Delay before liveness probe is initiated |
| `livenessProbe.periodSeconds` | `60` | How often to perform the liveness probe |
| `livenessProbe.timeoutSeconds` | `30` | When the liveness probe times out |
| `livenessProbe.successThreshold` | `1` | Minimum consecutive successes for the liveness probe to be considered successful after having failed |
| `livenessProbe.failureThreshold` | `3` | Minimum consecutive failures for the liveness probe to be considered failed after having succeeded |
| `readinessProbe.initialDelaySeconds` | `0` | Delay before readiness probe is initiated |
| `readinessProbe.periodSeconds` | `10` | How often to perform the readiness probe |
| `readinessProbe.timeoutSeconds` | `2` | When the readiness probe times out |
| `readinessProbe.successThreshold` | `1` | Minimum consecutive successes for the readiness probe to be considered successful after having failed |
| `readinessProbe.failureThreshold` | `3` | Minimum consecutive failures for the readiness probe to be considered failed after having succeeded |
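These options can also be collected into a values file rather than passed as individual
`--set` flags. Below is a minimal sketch using a few keys from the table above:

```yaml
concurrency: 25
timeout: 5
logging:
  format: json
hpa:
  targetAverageValue: 350m
resources:
  requests:
    cpu: 100m
    memory: 600M
```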
## Chart configuration examples

### image.pullSecrets

`pullSecrets` allows you to authenticate to a private registry to pull images for a pod.

Additional details about private registries and their authentication methods can be found
in the Kubernetes documentation.

Below is an example use of `pullSecrets`:
```yaml
image:
  repository: my.sidekiq.repository
  pullPolicy: Always
  pullSecrets:
    - name: my-secret-name
    - name: my-secondary-secret-name
```
### tolerations

`tolerations` allow you to schedule pods on tainted worker nodes.

Below is an example use of `tolerations`:
```yaml
tolerations:
  - key: "node_label"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  - key: "node_label"
    operator: "Equal"
    value: "true"
    effect: "NoExecute"
```
### annotations

`annotations` allows you to add annotations to the Sidekiq pods.

Below is an example use of `annotations`:
```yaml
annotations:
  kubernetes.io/example-annotation: annotation-value
```
## Using the Community Edition of this chart

By default, the Helm charts use the Enterprise Edition of GitLab. If desired, you can use
the Community Edition instead. Learn more about the differences between the two.

In order to use the Community Edition, set `image.repository` to
`registry.gitlab.com/gitlab-org/build/cng/gitlab-sidekiq-ce`.
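In a values file, that looks like the following:

```yaml
# Switch the Sidekiq image to the Community Edition build
image:
  repository: registry.gitlab.com/gitlab-org/build/cng/gitlab-sidekiq-ce
```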
## External Services

This chart should be attached to the same Redis, PostgreSQL, and Gitaly instances
as the Unicorn chart. The values of external services will be populated into a `ConfigMap`
that is shared across all Sidekiq pods.
### Redis

```yaml
redis:
  host: rank-racoon-redis
  port: 6379
  sentinels:
    - host: sentinel1.example.com
      port: 26379
  password:
    secret: gitlab-redis
    key: redis-password
```
| Name | Type | Default | Description |
|---|---|---|---|
| `host` | String | | The hostname of the Redis server with the database to use. This can be omitted in lieu of `serviceName`. If using Redis Sentinels, the `host` attribute needs to be set to the cluster name as specified in the `sentinel.conf`. |
| `password.key` | String | | The `password.key` attribute for Redis defines the name of the key in the secret (below) that contains the password. |
| `password.secret` | String | | The `password.secret` attribute for Redis defines the name of the Kubernetes `Secret` to pull from. |
| `port` | Integer | `6379` | The port on which to connect to the Redis server. |
| `serviceName` | String | `redis` | The name of the service which is operating the Redis database. If this is present, and `host` is not, the chart will template the hostname of the service (and current `.Release.Name`) in place of the `host` value. This is convenient when using Redis as a part of the overall GitLab chart. |
| `sentinels.[].host` | String | | The hostname of Redis Sentinel server for a Redis HA setup. |
| `sentinels.[].port` | Integer | `26379` | The port on which to connect to the Redis Sentinel server. |
Note: The current Redis Sentinel support only supports Sentinels that have
been deployed separately from the GitLab chart. As a result, the Redis
deployment through the GitLab chart should be disabled with `redis.install=false`.
The Secret containing the Redis password will need to be manually created
before deploying the GitLab chart.
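In values form, disabling the bundled Redis happens in the parent GitLab chart's values,
not in this sub-chart. A minimal sketch of the flag named in the note above:

```yaml
# Parent GitLab chart values: skip deploying the bundled Redis
redis:
  install: false
```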
### PostgreSQL

```yaml
psql:
  host: rank-racoon-psql
  serviceName: pgbouncer
  port: 5432
  database: gitlabhq_production
  username: gitlab
  preparedStatements: false
  password:
    secret: gitlab-postgres
    key: psql-password
```
| Name | Type | Default | Description |
|---|---|---|---|
| `host` | String | | The hostname of the PostgreSQL server with the database to use. This can be omitted if `postgresql.install=true` (default non-production). |
| `serviceName` | String | | The name of the service which is operating the PostgreSQL database. If this is present, and `host` is not, the chart will template the hostname of the service in place of the `host` value. |
| `database` | String | `gitlabhq_production` | The name of the database to use on the PostgreSQL server. |
| `password.key` | String | | The `password.key` attribute for PostgreSQL defines the name of the key in the secret (below) that contains the password. |
| `password.secret` | String | | The `password.secret` attribute for PostgreSQL defines the name of the Kubernetes `Secret` to pull from. |
| `port` | Integer | `5432` | The port on which to connect to the PostgreSQL server. |
| `username` | String | `gitlab` | The username with which to authenticate to the database. |
| `preparedStatements` | Bool | `false` | If prepared statements should be used when communicating with the PostgreSQL server. |
### Gitaly

```yaml
gitaly:
  internal:
    names:
      - default
      - default2
  external:
    - name: node1
      hostname: node1.example.com
      port: 8079
  authToken:
    secret: gitaly-secret
    key: token
```
| Name | Type | Default | Description |
|---|---|---|---|
| `host` | String | | The hostname of the Gitaly server to use. This can be omitted in lieu of `serviceName`. |
| `serviceName` | String | `gitaly` | The name of the service which is operating the Gitaly server. If this is present, and `host` is not, the chart will template the hostname of the service (and current `.Release.Name`) in place of the `host` value. This is convenient when using Gitaly as a part of the overall GitLab chart. |
| `port` | Integer | `8075` | The port on which to connect to the Gitaly server. |
| `authToken.key` | String | | The name of the key in the secret below that contains the authToken. |
| `authToken.secret` | String | | The name of the Kubernetes `Secret` to pull from. |
## Metrics

By default, a Prometheus metrics exporter is enabled per pod. Metrics are only available
when GitLab Prometheus metrics are enabled in the Admin area. The exporter exposes a
`/metrics` endpoint on port `3807`. When metrics are enabled, annotations are added to
each pod allowing a Prometheus server to discover and scrape the exposed metrics.
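For example, the exporter can be switched off chart-wide with a values snippet (the key
comes from the options table above):

```yaml
# Disable the per-pod Prometheus metrics exporter
metrics:
  enabled: false
```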
## Chart-wide defaults

The following values will be used chart-wide, in the event that a value is not presented
on a per-pod basis.

| Name | Type | Default | Description |
|---|---|---|---|
| `concurrency` | Integer | `25` | The number of tasks to process simultaneously. |
| `cluster` | Bool | `false` | See below. Overridden by per-pod value, if present. |
| `timeout` | Integer | `4` | The Sidekiq shutdown timeout. The number of seconds after Sidekiq gets the TERM signal before it forcefully shuts down its processes. |
| `memoryKiller.maxRss` | Integer | `2000000` | Maximum RSS before delayed shutdown is triggered, expressed in kilobytes |
| `memoryKiller.graceTime` | Integer | `900` | Time to wait before a triggered shutdown, expressed in seconds |
| `memoryKiller.shutdownWait` | Integer | `30` | Amount of time after a triggered shutdown for existing jobs to finish, expressed in seconds |
| `minReplicas` | Integer | `2` | Minimum number of replicas |
| `maxReplicas` | Integer | `10` | Maximum number of replicas |
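Expressed as a values file, these chart-wide defaults are equivalent to the following
(all keys and values taken from the table above):

```yaml
concurrency: 25
cluster: false
timeout: 4
memoryKiller:
  maxRss: 2000000
  graceTime: 900
  shutdownWait: 30
minReplicas: 2
maxReplicas: 10
```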
## Per-pod Settings

The `pods` declaration provides for the declaration of all attributes for a worker
pod. These will be templated to `Deployment`s, with individual `ConfigMap`s for their
Sidekiq instances.
| Name | Type | Default | Description |
|---|---|---|---|
| `concurrency` | Integer | | The number of tasks to process simultaneously. If not provided, it will be pulled from the chart-wide default. |
| `cluster` | Bool | `false` | See below. |
| `name` | String | | Used to name the `Deployment` and `ConfigMap` for this pod. It should be kept short, and should not be duplicated between any two entries. |
| `queues` | | | See below. |
| `negateQueues` | | | See below. |
| `experimentalQueueSelector` | Bool | `false` | Use the experimental queue selector. Only valid when `cluster` is enabled. |
| `timeout` | Integer | | The Sidekiq shutdown timeout. The number of seconds after Sidekiq gets the TERM signal before it forcefully shuts down its processes. If not provided, it will be pulled from the chart-wide default. |
| `resources` | | | Each pod can present its own `resources` requirements, which will be added to the `Deployment` created for it, if present. These match the Kubernetes documentation. |
| `nodeSelector` | | | Each pod can be configured with a `nodeSelector` attribute, which will be added to the `Deployment` created for it, if present. These definitions match the Kubernetes documentation. |
| `minReplicas` | Integer | `2` | Minimum number of replicas |
| `maxReplicas` | Integer | `10` | Maximum number of replicas |
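For instance, the `nodeSelector` attribute can pin a pod's workers to labeled nodes.
A hypothetical sketch (the pod name and node label are illustrative):

```yaml
pods:
  - name: pinned-workers
    nodeSelector:
      # Illustrative label: schedule these Sidekiq pods only on matching nodes
      workload: sidekiq
```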
### queues

The `queues` value will be directly templated into the Sidekiq configuration file.
As such, you may follow the documentation from Sidekiq for the value of `:queues:`.
If this is not provided, the upstream defaults will be used, resulting in the handling
of all queues.

In summary, provide a list of queue names to process. Each item in the list may be
a queue name (`merge`) or an array of queue names with priorities (`[merge, 5]`).

Any queue to which jobs are added but is not represented as a part of at least one
pod item will not be processed. See `config/sidekiq_queues.yml` in the GitLab source
for a complete list of all queues.
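Both item forms can be mixed in a single pod's list. A minimal sketch (the pod name is
illustrative; the queue names are taken from the example entry further below):

```yaml
pods:
  - name: mixed
    queues:
      - post_receive   # a bare queue name
      - [merge, 5]     # a queue name with a priority
```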
### negateQueues

`negateQueues` is a list of queue names (strings) which will be filtered from the
default list of Sidekiq queues. Unlike `queues` above, which will replace
the default list, `negateQueues` will consume the defaults, remove those named
here, and populate the rest for consumption.
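For example, a pod could consume the default queue list minus a few named queues.
A hypothetical sketch (the pod name and queue choices are illustrative):

```yaml
pods:
  - name: catchall
    negateQueues:
      # Process every default queue except these two
      - new_note
      - new_issue
```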
`negateQueues` should not be provided alongside `queues`, as it will have no
effect.

### cluster
`cluster` is a boolean, used to opt into the use of Sidekiq Cluster to start the
Sidekiq process. If a non-boolean is provided, then the value is ignored.
Currently defaults to `false`.
When using Sidekiq Cluster, `queues` (or `negateQueues`) must be a
string, not an array.
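As an illustration, a hypothetical pod entry opting into Sidekiq Cluster might look like
the following (the pod name and the comma-separated form of the queue string are
assumptions for this sketch):

```yaml
pods:
  - name: clustered
    cluster: true
    # With cluster enabled, queues must be a single string, not an array
    queues: post_receive,merge
```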
`cluster` will never start more than one Sidekiq process inside a pod. To run
additional Sidekiq processes, run additional pods.

### Example `pod` entry
```yaml
pods:
  - name: immediate
    concurrency: 10
    minReplicas: 2  # defaults to inherited value
    maxReplicas: 10 # defaults to inherited value
    queues:
      - [post_receive, 5]
      - [merge, 5]
      - [update_merge_requests, 3]
      - [process_commit, 3]
      - [new_note, 2]
      - [new_issue, 2]
    resources:
      limits:
        cpu: 800m
        memory: 2Gi
```
## Configuring the networkpolicy

This section controls the NetworkPolicy. This configuration is optional and is used to
limit Egress and Ingress of the Pods to specific endpoints.
| Name | Type | Default | Description |
|---|---|---|---|
| `enabled` | Boolean | `false` | This setting enables the networkpolicy |
| `ingress.enabled` | Boolean | `false` | When set to `true`, the Ingress network policy will be activated. This will block all Ingress connections unless rules are specified. |
| `ingress.rules` | Array | `[]` | Rules for the Ingress policy, for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below |
| `egress.enabled` | Boolean | `false` | When set to `true`, the Egress network policy will be activated. This will block all Egress connections unless rules are specified. |
| `egress.rules` | Array | `[]` | Rules for the Egress policy, for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below |
### Example Network Policy

The Sidekiq service requires Ingress connections for only the Prometheus exporter if
enabled, and normally requires Egress connections to various places. This example adds
the following network policy:

- All Ingress requests from the network on TCP `10.0.0.0/8` port 3807 are allowed for metrics exporting
- All Egress requests to the network on UDP `10.0.0.0/8` port 53 are allowed for DNS
- All Egress requests to the network on TCP `10.0.0.0/8` port 5432 are allowed for PostgreSQL
- All Egress requests to the network on TCP `10.0.0.0/8` port 6379 are allowed for Redis
- Other Egress requests to the local network on `10.0.0.0/8` are restricted
- Egress requests outside of the `10.0.0.0/8` are allowed

Note that the example provided is only an example and may not be complete.

Note that the Sidekiq service requires outbound connectivity to the public internet for
images on external object storage.
```yaml
networkpolicy:
  enabled: true
  ingress:
    enabled: true
    rules:
      - from:
          - ipBlock:
              cidr: 10.0.0.0/8
        ports:
          - port: 3807
  egress:
    enabled: true
    rules:
      - to:
          - ipBlock:
              cidr: 10.0.0.0/8
        ports:
          - port: 53
            protocol: UDP
      - to:
          - ipBlock:
              cidr: 10.0.0.0/8
        ports:
          - port: 5432
            protocol: TCP
      - to:
          - ipBlock:
              cidr: 10.0.0.0/8
        ports:
          - port: 6379
            protocol: TCP
      - to:
          - ipBlock:
              cidr: 0.0.0.0/0
              except:
                - 10.0.0.0/8
```