Using the GitLab-Sidekiq chart

The sidekiq sub-chart provides configurable deployment of Sidekiq workers, explicitly designed to provide separation of queues across multiple Deployments with individual scalability and configuration.

While this chart provides a default pods: declaration, if you provide an empty definition, you will have no workers.

Requirements

This chart depends on access to Redis, PostgreSQL, and Gitaly services, either as part of the complete GitLab chart or provided as external services reachable from the Kubernetes cluster this chart is deployed onto.

Design Choices

This chart creates multiple Deployments and associated ConfigMaps. It was decided that it would be clearer to make use of ConfigMap behaviours instead of using environment attributes or additional arguments to the command for the containers, in order to avoid any concerns about command length. This choice results in a large number of ConfigMaps, but provides very clear definitions of what each pod should be doing.

Configuration

The sidekiq chart is configured in three parts: chart-wide external services, chart-wide defaults, and per-pod definitions.

Installation command line options

The table below contains all the possible chart configurations that can be supplied to the helm install command using the --set flags:

| Parameter | Default | Description |
|-----------|---------|-------------|
| annotations | | Pod annotations |
| concurrency | 10 | Sidekiq default concurrency |
| enabled | true | Sidekiq enabled flag |
| extraContainers | | List of extra containers to include |
| extraInitContainers | | List of extra init containers to include |
| extraVolumeMounts | | List of extra volume mounts to add |
| extraVolumes | | List of extra volumes to create |
| gitaly.serviceName | gitaly | Gitaly service name |
| hpa.targetAverageValue | 350m | Set the autoscaling target value |
| image.pullPolicy | Always | Sidekiq image pull policy |
| image.pullSecrets | | Secrets for the image repository |
| image.repository | registry.gitlab.com/gitlab-org/build/cng/gitlab-sidekiq-ee | Sidekiq image repository |
| image.tag | | Sidekiq image tag |
| init.image | busybox | initContainer image |
| init.tag | latest | initContainer image tag |
| metrics.enabled | true | Toggle Prometheus metrics exporter |
| psql.password.key | psql-password | Key to the psql password in the psql secret |
| psql.password.secret | gitlab-postgres | psql password secret |
| redis.serviceName | redis | Redis service name |
| replicas | 1 | Sidekiq replicas |
| resources.requests.cpu | 100m | Sidekiq minimum needed CPU |
| resources.requests.memory | 600M | Sidekiq minimum needed memory |
| timeout | 5 | Sidekiq job timeout |
| tolerations | [] | Toleration labels for pod assignment |
| memoryKiller.daemonMode | false | If true, enables daemon memory killer mode |
| memoryKiller.maxRss | 2000000 | Maximum RSS before a delayed shutdown is triggered, expressed in kilobytes |
| memoryKiller.graceTime | 900 | Time to wait before a triggered shutdown, expressed in seconds |
| memoryKiller.shutdownWait | 30 | Amount of time after a triggered shutdown for existing jobs to finish, expressed in seconds |
| memoryKiller.hardLimitRss | | Maximum RSS before an immediate shutdown is triggered in daemon mode, expressed in kilobytes |
| memoryKiller.checkInterval | 3 | Amount of time between memory checks in daemon mode |
| livenessProbe.initialDelaySeconds | 20 | Delay before the liveness probe is initiated |
| livenessProbe.periodSeconds | 60 | How often to perform the liveness probe |
| livenessProbe.timeoutSeconds | 30 | When the liveness probe times out |
| livenessProbe.successThreshold | 1 | Minimum consecutive successes for the liveness probe to be considered successful after having failed |
| livenessProbe.failureThreshold | 3 | Minimum consecutive failures for the liveness probe to be considered failed after having succeeded |
| readinessProbe.initialDelaySeconds | 0 | Delay before the readiness probe is initiated |
| readinessProbe.periodSeconds | 10 | How often to perform the readiness probe |
| readinessProbe.timeoutSeconds | 2 | When the readiness probe times out |
| readinessProbe.successThreshold | 1 | Minimum consecutive successes for the readiness probe to be considered successful after having failed |
| readinessProbe.failureThreshold | 3 | Minimum consecutive failures for the readiness probe to be considered failed after having succeeded |
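
For example, a few of these options could be set at install time. The command below is only a sketch: the release name and the values chosen are illustrative, and it assumes the GitLab chart repository has been added under the alias gitlab and that the umbrella GitLab chart is being installed, where the Sidekiq keys are prefixed with gitlab.sidekiq.:

helm upgrade --install gitlab gitlab/gitlab \
  --set gitlab.sidekiq.concurrency=15 \
  --set gitlab.sidekiq.resources.requests.memory=1G \
  --set gitlab.sidekiq.hpa.targetAverageValue=400m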

Chart configuration examples

image.pullSecrets

pullSecrets allows you to authenticate to a private registry to pull images for a pod.

Additional details about private registries and their authentication methods can be found in the Kubernetes documentation.

Below is an example use of pullSecrets:

image:
  repository: my.sidekiq.repository
  pullPolicy: Always
  pullSecrets:
  - name: my-secret-name
  - name: my-secondary-secret-name

tolerations

tolerations allow you to schedule pods on tainted worker nodes.

Below is an example use of tolerations:

tolerations:
- key: "node_label"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"
- key: "node_label"
  operator: "Equal"
  value: "true"
  effect: "NoExecute"

annotations

annotations allows you to add annotations to the Sidekiq pods.

Below is an example use of annotations:

annotations:
  kubernetes.io/example-annotation: annotation-value

Using the Community Edition of this chart

By default, the Helm charts use the Enterprise Edition of GitLab. If desired, you can use the Community Edition instead. Learn more about the differences between the two.

In order to use the Community Edition, set image.repository to registry.gitlab.com/gitlab-org/build/cng/gitlab-sidekiq-ce.
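
For example, in your values file:

image:
  repository: registry.gitlab.com/gitlab-org/build/cng/gitlab-sidekiq-ce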

External Services

This chart should be attached to the same Redis, PostgreSQL, and Gitaly instances as the Unicorn chart. The values of external services will be populated into a ConfigMap that is shared across all Sidekiq pods.

Redis

redis:
  host: rank-racoon-redis
  port: 6379
  sentinels:
    - host: sentinel1.example.com
      port: 26379
  password:
    secret: gitlab-redis
    key: redis-password

| Name | Type | Default | Description |
|------|------|---------|-------------|
| host | String | | The hostname of the Redis server with the database to use. This can be omitted in lieu of serviceName. If using Redis Sentinels, the host attribute needs to be set to the cluster name as specified in the sentinel.conf. |
| password.key | String | | The password.key attribute for Redis defines the name of the key in the secret (below) that contains the password. |
| password.secret | String | | The password.secret attribute for Redis defines the name of the Kubernetes Secret to pull from. |
| port | Integer | 6379 | The port on which to connect to the Redis server. |
| serviceName | String | redis | The name of the service which is operating the Redis database. If this is present, and host is not, the chart will template the hostname of the service (and current .Release.Name) in place of the host value. This is convenient when using Redis as a part of the overall GitLab chart. |
| sentinels.[].host | String | | The hostname of the Redis Sentinel server for a Redis HA setup. |
| sentinels.[].port | Integer | 26379 | The port on which to connect to the Redis Sentinel server. |

Note: Redis Sentinel support currently requires the Sentinels to be deployed separately from the GitLab chart. As a result, the Redis deployment through the GitLab chart should be disabled with redis.enabled=false and redis-ha.enabled=false. The Secret containing the Redis password will need to be manually created before deploying the GitLab chart.
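
That Secret can be created with kubectl before installing the chart. The secret and key names below match the example configuration above, and the password value is a placeholder:

kubectl create secret generic gitlab-redis \
  --from-literal=redis-password=<your-redis-password>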

PostgreSQL

psql:
  host: rank-racoon-psql
  serviceName: pgbouncer
  port: 5432
  database: gitlabhq_production
  username: gitlab
  preparedStatements: false
  password:
    secret: gitlab-postgres
    key: psql-password

| Name | Type | Default | Description |
|------|------|---------|-------------|
| host | String | | The hostname of the PostgreSQL server with the database to use. This can be omitted if postgresql.install=true (default non-production). |
| serviceName | String | | The name of the service which is operating the PostgreSQL database. If this is present, and host is not, the chart will template the hostname of the service in place of the host value. |
| database | String | gitlabhq_production | The name of the database to use on the PostgreSQL server. |
| password.key | String | | The password.key attribute for PostgreSQL defines the name of the key in the secret (below) that contains the password. |
| password.secret | String | | The password.secret attribute for PostgreSQL defines the name of the Kubernetes Secret to pull from. |
| port | Integer | 5432 | The port on which to connect to the PostgreSQL server. |
| username | String | gitlab | The username with which to authenticate to the database. |
| preparedStatements | Bool | false | Whether prepared statements should be used when communicating with the PostgreSQL server. |

Gitaly

gitaly:
  internal:
    names:
      - default
      - default2
  external:
    - name: node1
      hostname: node1.example.com
      port: 8079
  authToken:
    secret: gitaly-secret
    key: token

| Name | Type | Default | Description |
|------|------|---------|-------------|
| host | String | | The hostname of the Gitaly server to use. This can be omitted in lieu of serviceName. |
| serviceName | String | gitaly | The name of the service which is operating the Gitaly server. If this is present, and host is not, the chart will template the hostname of the service (and current .Release.Name) in place of the host value. This is convenient when using Gitaly as a part of the overall GitLab chart. |
| port | Integer | 8075 | The port on which to connect to the Gitaly server. |
| authToken.key | String | | The name of the key in the secret below that contains the authToken. |
| authToken.secret | String | | The name of the Kubernetes Secret to pull from. |

Metrics

By default, a Prometheus metrics exporter is enabled per pod. Metrics are only available when GitLab Prometheus metrics are enabled in the Admin area. The exporter exposes a /metrics endpoint on port 3807. When metrics are enabled, annotations are added to each pod allowing a Prometheus server to discover and scrape the exposed metrics.
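
To confirm the exporter is responding, the endpoint can be checked through a port-forward. The pod name below is a placeholder; look up the actual Sidekiq pod name with kubectl get pods first:

kubectl port-forward pod/<sidekiq-pod-name> 3807:3807
curl http://localhost:3807/metrics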

Chart-wide defaults

The following values will be used chart-wide, in the event that a value is not presented on a per-pod basis.

| Name | Type | Default | Description |
|------|------|---------|-------------|
| concurrency | Integer | 25 | The number of tasks to process simultaneously. |
| replicas | Integer | 1 | The number of replicas to use by default per pod definition. |
| timeout | Integer | 4 | The Sidekiq shutdown timeout. The number of seconds after Sidekiq gets the TERM signal before it forcefully shuts down its processes. |
| memoryKiller.maxRss | Integer | 2000000 | Maximum RSS before a delayed shutdown is triggered, expressed in kilobytes |
| memoryKiller.graceTime | Integer | 900 | Time to wait before a triggered shutdown, expressed in seconds |
| memoryKiller.shutdownWait | Integer | 30 | Amount of time after a triggered shutdown for existing jobs to finish, expressed in seconds |
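
These defaults are set at the top level of the chart's values. For instance (the numbers here are only illustrative, not recommendations):

concurrency: 20
replicas: 2
timeout: 10
memoryKiller:
  maxRss: 3000000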

Per-pod Settings

The pods declaration provides for the declaration of all attributes for a worker pod. These will be templated to Deployments, with individual ConfigMaps for their Sidekiq instances.

Note: The settings default to including a single pod that is set up to monitor all queues. Making changes to the pods section will overwrite the default pod with a different pod configuration. It will not add a new pod in addition to the default.

| Name | Type | Default | Description |
|------|------|---------|-------------|
| concurrency | Integer | | The number of tasks to process simultaneously. If not provided, it will be pulled from the chart-wide default. |
| name | String | | Used to name the Deployment and ConfigMap for this pod. It should be kept short, and should not be duplicated between any two entries. |
| queues | | | See below. |
| negateQueues | | | See below. |
| replicas | Integer | | The number of replicas to create for this Deployment. If not provided, it will be pulled from the chart-wide default. |
| timeout | Integer | | The Sidekiq shutdown timeout. The number of seconds after Sidekiq gets the TERM signal before it forcefully shuts down its processes. If not provided, it will be pulled from the chart-wide default. |
| resources | | | Each pod can present its own resource requirements, which will be added to the Deployment created for it, if present. These match the Kubernetes documentation. |
| nodeSelector | | | Each pod can be configured with a nodeSelector attribute, which will be added to the Deployment created for it, if present. These definitions match the Kubernetes documentation. |

queues

The queues value will be directly templated into the Sidekiq configuration file. As such, you may follow the documentation from Sidekiq for the value of :queues:. If this is not provided, the upstream defaults will be used, resulting in the handling of all queues.

In summary, provide a list of queue names to process. Each item in the list may be a queue name (merge) or an array of queue names with priorities ([merge, 5]).

Any queue to which jobs are added, but which is not represented as a part of at least one pod item, will not be processed. See config/sidekiq_queues.yml in the GitLab source for a complete list of all queues.

negateQueues

negateQueues is a list of queue names (strings) which will be filtered from the default list of Sidekiq queues. Unlike queues above which will replace the default list, negateQueues will consume the defaults, remove those named here, and populate the rest for consumption.

Note: negateQueues should not be provided alongside queues, as it will have no effect.
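
For example, a pod could be configured to handle every queue except a few expensive ones. The pod name and queue names below are only illustrative; see config/sidekiq_queues.yml for the real queue list:

pods:
  - name: default-except-imports
    negateQueues:
    - project_import_schedule
    - repository_import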

Example pod entry

pods:
  - name: immediate
    concurrency: 10
    replicas: 3
    queues:
    - [post_receive, 5]
    - [merge, 5]
    - [update_merge_requests, 3]
    - [process_commit, 3]
    - [new_note, 2]
    - [new_issue, 2]
    resources:
      limits:
        cpu: 800m
        memory: 2Gi

Production usage

By default, all Sidekiq queues run in a single all-in-one container, which is not suitable for production use cases. Check the example config for a more production-ready Sidekiq deployment. You can move queues between pods as part of your tuning, as sketched below.
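
As a rough sketch of that kind of split (the pod names, queue assignments, and sizing below are purely illustrative), one pod could be dedicated to latency-sensitive queues while a second handles everything else:

pods:
  - name: urgent
    concurrency: 10
    replicas: 2
    queues:
    - [post_receive, 5]
    - [merge, 5]
    - [new_note, 2]
  - name: catchall
    concurrency: 25
    replicas: 1
    negateQueues:
    - post_receive
    - merge
    - new_note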