Using the GitLab Pages chart

  • Tier: Free, Premium, Ultimate
  • Offering: GitLab Self-Managed

The gitlab-pages subchart provides a daemon for serving static websites from GitLab projects.

Requirements

This chart depends on access to the Workhorse services, either as part of the complete GitLab chart or provided as an external service reachable from the Kubernetes cluster this chart is deployed onto.

Configuration

The gitlab-pages chart is configured in two parts: Global settings and Chart settings.

Global settings

We share some common global settings among our charts. See the Globals Documentation for details.

Chart settings

The tables in the following two sections contain all the possible chart configurations that can be supplied to the helm install command using the --set flags.

General settings

| Parameter | Default | Description |
|-----------|---------|-------------|
| affinity | {} | Affinity rules for pod assignment |
| annotations | | Pod annotations |
| common.labels | {} | Supplemental labels that are applied to all objects created by this chart. |
| deployment.strategy | {} | Allows one to configure the update strategy used by the deployment. When not provided, the cluster default is used. |
| extraEnv | | List of extra environment variables to expose |
| extraEnvFrom | | List of extra environment variables from other data sources to expose |
| hpa.behavior | {scaleDown: {stabilizationWindowSeconds: 300 }} | Behavior contains the specifications for up- and downscaling behavior (requires autoscaling/v2beta2 or higher) |
| hpa.customMetrics | [] | Custom metrics contains the specifications for which to use to calculate the desired replica count (overrides the default use of Average CPU Utilization configured in targetAverageUtilization) |
| hpa.cpu.targetType | AverageValue | Set the autoscaling CPU target type, must be either Utilization or AverageValue |
| hpa.cpu.targetAverageValue | 100m | Set the autoscaling CPU target value |
| hpa.cpu.targetAverageUtilization | | Set the autoscaling CPU target utilization |
| hpa.memory.targetType | | Set the autoscaling memory target type, must be either Utilization or AverageValue |
| hpa.memory.targetAverageValue | | Set the autoscaling memory target value |
| hpa.memory.targetAverageUtilization | | Set the autoscaling memory target utilization |
| hpa.minReplicas | 1 | Minimum number of replicas |
| hpa.maxReplicas | 10 | Maximum number of replicas |
| hpa.targetAverageValue | | DEPRECATED Set the autoscaling CPU target value |
| image.pullPolicy | IfNotPresent | GitLab image pull policy |
| image.pullSecrets | | Secrets for the image repository |
| image.repository | registry.gitlab.com/gitlab-org/build/cng/gitlab-pages | GitLab Pages image repository |
| image.tag | | Image tag |
| init.image.repository | | initContainer image |
| init.image.tag | | initContainer image tag |
| init.containerSecurityContext | | initContainer specific securityContext |
| init.containerSecurityContext.allowPrivilegeEscalation | false | initContainer specific: Controls whether a process can gain more privileges than its parent process |
| init.containerSecurityContext.runAsNonRoot | true | initContainer specific: Controls whether the container runs with a non-root user |
| init.containerSecurityContext.capabilities.drop | [ "ALL" ] | initContainer specific: Removes Linux capabilities for the container |
| keda.enabled | false | Use KEDA ScaledObjects instead of HorizontalPodAutoscalers |
| keda.pollingInterval | 30 | The interval to check each trigger on |
| keda.cooldownPeriod | 300 | The period to wait after the last trigger reported active before scaling the resource back to 0 |
| keda.minReplicaCount | | Minimum number of replicas KEDA will scale the resource down to, defaults to hpa.minReplicas |
| keda.maxReplicaCount | | Maximum number of replicas KEDA will scale the resource up to, defaults to hpa.maxReplicas |
| keda.fallback | | KEDA fallback configuration, see the documentation |
| keda.hpaName | | The name of the HPA resource KEDA will create, defaults to keda-hpa-{scaled-object-name} |
| keda.restoreToOriginalReplicaCount | | Specifies whether the target resource should be scaled back to the original replica count after the ScaledObject is deleted |
| keda.behavior | | The specifications for up- and downscaling behavior, defaults to hpa.behavior |
| keda.triggers | | List of triggers to activate scaling of the target resource, defaults to triggers computed from hpa.cpu and hpa.memory |
| metrics.enabled | true | If a metrics endpoint should be made available for scraping |
| metrics.port | 9235 | Metrics endpoint port |
| metrics.path | /metrics | Metrics endpoint path |
| metrics.serviceMonitor.enabled | false | If a ServiceMonitor should be created to enable Prometheus Operator to manage the metrics scraping; note that enabling this removes the prometheus.io scrape annotations |
| metrics.serviceMonitor.additionalLabels | {} | Additional labels to add to the ServiceMonitor |
| metrics.serviceMonitor.endpointConfig | {} | Additional endpoint configuration for the ServiceMonitor |
| metrics.annotations | | DEPRECATED Set explicit metrics annotations. Replaced by template content. |
| metrics.tls.enabled | false | TLS enabled for the metrics endpoint |
| metrics.tls.secretName | {Release.Name}-pages-metrics-tls | Secret for the metrics endpoint TLS cert and key |
| priorityClassName | | Priority class assigned to pods. |
| podLabels | | Supplemental Pod labels. Will not be used for selectors. |
| resources.requests.cpu | 900m | GitLab Pages minimum CPU |
| resources.requests.memory | 2G | GitLab Pages minimum memory |
| securityContext.fsGroup | 1000 | Group ID under which the pod should be started |
| securityContext.runAsUser | 1000 | User ID under which the pod should be started |
| securityContext.fsGroupChangePolicy | | Policy for changing ownership and permission of the volume (requires Kubernetes 1.23) |
| securityContext.seccompProfile.type | RuntimeDefault | Seccomp profile to use |
| containerSecurityContext | | Override container securityContext under which the container is started |
| containerSecurityContext.runAsUser | 1000 | Allows overwriting of the specific security context user ID under which the container is started |
| containerSecurityContext.allowPrivilegeEscalation | false | Controls whether a process of the container can gain more privileges than its parent process |
| containerSecurityContext.runAsNonRoot | true | Controls whether the container runs with a non-root user |
| containerSecurityContext.capabilities.drop | [ "ALL" ] | Removes Linux capabilities for the GitLab Pages container |
| service.externalPort | 8090 | GitLab Pages exposed port |
| service.internalPort | 8090 | GitLab Pages internal port |
| service.name | gitlab-pages | GitLab Pages service name |
| service.annotations | | Annotations for all Pages services. |
| service.primary.annotations | | Annotations for the primary service only. |
| service.metrics.annotations | | Annotations for the metrics service only. |
| service.customDomains.annotations | | Annotations for the custom domains service only. |
| service.customDomains.type | LoadBalancer | Type of service created for handling custom domains |
| service.customDomains.internalHttpsPort | 8091 | Port where the Pages daemon listens for HTTPS requests |
| service.customDomains.nodePort.http | | Node port to be opened for HTTP connections. Valid only if service.customDomains.type is NodePort |
| service.customDomains.nodePort.https | | Node port to be opened for HTTPS connections. Valid only if service.customDomains.type is NodePort |
| service.sessionAffinity | None | Type of the session affinity. Must be either ClientIP or None (this only makes sense for traffic originating from within the cluster) |
| service.sessionAffinityConfig | | Session affinity config. If service.sessionAffinity == ClientIP the default session sticky time is 3 hours (10800) |
| serviceAccount.annotations | {} | ServiceAccount annotations |
| serviceAccount.automountServiceAccountToken | false | Indicates whether or not the default ServiceAccount access token should be mounted in pods |
| serviceAccount.create | false | Indicates whether or not a ServiceAccount should be created |
| serviceAccount.enabled | false | Indicates whether or not to use a ServiceAccount |
| serviceAccount.name | | Name of the ServiceAccount. If not set, the full chart name is used |
| serviceLabels | {} | Supplemental service labels |
| tolerations | [] | Toleration labels for pod assignment |
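
As a sketch of how several of these general settings combine, the following hypothetical values fragment tunes autoscaling, resources, and scheduling for the subchart. The replica counts, resource figures, and taint key are illustrative only, not recommendations:

```yaml
# Hypothetical values fragment for the gitlab-pages subchart;
# all numbers and the taint key below are illustrative.
gitlab:
  gitlab-pages:
    hpa:
      minReplicas: 2
      maxReplicas: 6
      cpu:
        targetType: Utilization
        targetAverageUtilization: 75
    resources:
      requests:
        cpu: 900m
        memory: 2G
    tolerations:
      - key: "dedicated/pages"   # illustrative taint key
        operator: "Exists"
        effect: "NoSchedule"
```

The same settings can also be supplied on the command line, for example --set gitlab.gitlab-pages.hpa.minReplicas=2.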

Pages specific settings

| Parameter | Default | Description |
|-----------|---------|-------------|
| artifactsServerTimeout | 10 | Timeout (in seconds) for a proxied request to the artifacts server |
| artifactsServerUrl | | API URL to proxy artifact requests to |
| extraVolumeMounts | | List of extra volume mounts to add |
| extraVolumes | | List of extra volumes to create |
| gitlabCache.cleanup | int | See: Pages Global Settings |
| gitlabCache.expiry | int | See: Pages Global Settings |
| gitlabCache.refresh | int | See: Pages Global Settings |
| gitlabClientHttpTimeout | | GitLab API HTTP client connection timeout in seconds |
| gitlabClientJwtExpiry | | JWT token expiry time in seconds |
| gitlabRetrieval.interval | int | See: Pages Global Settings |
| gitlabRetrieval.retries | int | See: Pages Global Settings |
| gitlabRetrieval.timeout | int | See: Pages Global Settings |
| gitlabServer | | GitLab server FQDN |
| headers | [] | Specify any additional HTTP headers that should be sent to the client with each response. Multiple headers can be given as an array, header and value as one string, for example ['my-header: myvalue', 'my-other-header: my-other-value'] |
| insecureCiphers | false | Use default list of cipher suites, may contain insecure ones like 3DES and RC4 |
| internalGitlabServer | | Internal GitLab server used for API requests |
| logFormat | json | Log output format |
| logVerbose | false | Verbose logging |
| maxConnections | | Limit on the number of concurrent connections to the HTTP, HTTPS, or proxy listeners |
| maxURILength | | Limit the length of the URI, 0 for unlimited |
| propagateCorrelationId | | Reuse existing Correlation-ID from the incoming request header X-Request-ID if present |
| redirectHttp | false | Redirect pages from HTTP to HTTPS |
| sentry.enabled | false | Enable Sentry reporting |
| sentry.dsn | | The address for sending Sentry crash reporting to |
| sentry.environment | | The environment for Sentry crash reporting |
| serverShutdownTimeout | 30s | GitLab Pages server shutdown timeout in seconds |
| statusUri | | The URL path for a status page |
| tls.minVersion | | Specifies the minimum SSL/TLS version |
| tls.maxVersion | | Specifies the maximum SSL/TLS version |
| useHTTPProxy | false | Use this option when GitLab Pages is behind a reverse proxy |
| useProxyV2 | false | Force HTTPS requests to use the PROXYv2 protocol |
| zipCache.cleanup | int | See: Zip Serving and Cache Configuration |
| zipCache.expiration | int | See: Zip Serving and Cache Configuration |
| zipCache.refresh | int | See: Zip Serving and Cache Configuration |
| zipOpenTimeout | int | See: Zip Serving and Cache Configuration |
| zipHTTPClientTimeout | int | See: Zip Serving and Cache Configuration |
| rateLimitSourceIP | | See: GitLab Pages rate limits |
| rateLimitSourceIPBurst | | See: GitLab Pages rate limits |
| rateLimitDomain | | See: GitLab Pages rate limits |
| rateLimitDomainBurst | | See: GitLab Pages rate limits |
| rateLimitTLSSourceIP | | See: GitLab Pages rate limits |
| rateLimitTLSSourceIPBurst | | See: GitLab Pages rate limits |
| rateLimitTLSDomain | | See: GitLab Pages rate limits |
| rateLimitTLSDomainBurst | | See: GitLab Pages rate limits |
| rateLimitSubnetsAllowList | | See: GitLab Pages rate limits |
| serverReadTimeout | 5s | See: GitLab Pages global settings |
| serverReadHeaderTimeout | 1s | See: GitLab Pages global settings |
| serverWriteTimeout | 5m | See: GitLab Pages global settings |
| serverKeepAlive | 15s | See: GitLab Pages global settings |
| authTimeout | 5s | See: GitLab Pages global settings |
| authCookieSessionTimeout | 10m | See: GitLab Pages global settings |
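
A few of the Pages-specific settings above can be combined in a single values fragment. This is a hypothetical sketch; the particular header values are illustrative, and the timeouts simply restate the defaults:

```yaml
# Hypothetical values fragment; header values are illustrative.
gitlab:
  gitlab-pages:
    logFormat: json
    redirectHttp: true
    headers:
      - "X-Frame-Options: DENY"            # illustrative extra header
      - "X-Content-Type-Options: nosniff"  # illustrative extra header
    artifactsServerTimeout: 10
    serverShutdownTimeout: 30s
```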

Configuring the ingress

This section controls the GitLab Pages Ingress.

| Name | Type | Default | Description |
|------|------|---------|-------------|
| apiVersion | String | | Value to use in the apiVersion field. |
| annotations | String | | This field is an exact match to the standard annotations for Kubernetes Ingress. |
| configureCertmanager | Boolean | false | Toggles the Ingress annotations cert-manager.io/issuer and acme.cert-manager.io/http01-edit-in-place. The acquisition of a TLS certificate for GitLab Pages via cert-manager is disabled because a wildcard certificate acquisition requires a cert-manager Issuer with a DNS01 solver, and the Issuer deployed by this chart only provides an HTTP01 solver. For more information see the TLS requirement for GitLab Pages. |
| enabled | Boolean | | Setting that controls whether to create Ingress objects for services that support them. When not set, the global.ingress.enabled setting is used. |
| tls.enabled | Boolean | | When set to false, you disable TLS for the Pages subchart. This is mainly useful for cases in which you cannot use TLS termination at ingress-level, like when you have a TLS-terminating proxy before the Ingress Controller. |
| tls.secretName | String | | The name of the Kubernetes TLS Secret that contains a valid certificate and key for the pages URL. When not set, global.ingress.tls.secretName is used instead. Defaults to not being set. |
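
For example, a hypothetical fragment that supplies a dedicated TLS secret and an extra Ingress annotation could look like the following. The annotation and secret names are placeholders:

```yaml
# Hypothetical values fragment; annotation and secret names are placeholders.
gitlab:
  gitlab-pages:
    ingress:
      enabled: true
      annotations:
        nginx.ingress.kubernetes.io/proxy-body-size: "0"  # placeholder annotation
      tls:
        enabled: true
        secretName: pages-wildcard-tls  # placeholder secret name
```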

Chart configuration examples

extraVolumes

extraVolumes allows you to configure extra volumes chart-wide.

Below is an example use of extraVolumes:

extraVolumes: |
  - name: example-volume
    persistentVolumeClaim:
      claimName: example-pvc

extraVolumeMounts

extraVolumeMounts allows you to configure extra volumeMounts on all containers chart-wide.

Below is an example use of extraVolumeMounts:

extraVolumeMounts: |
  - name: example-volume
    mountPath: /etc/example

Configuring the networkpolicy

This section controls the NetworkPolicy. This configuration is optional and is used to limit Egress and Ingress of the Pods to specific endpoints.

| Name | Type | Default | Description |
|------|------|---------|-------------|
| enabled | Boolean | false | This setting enables the NetworkPolicy |
| ingress.enabled | Boolean | false | When set to true, the Ingress network policy is activated. This blocks all Ingress connections unless rules are specified. |
| ingress.rules | Array | [] | Rules for the Ingress policy; for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below |
| egress.enabled | Boolean | false | When set to true, the Egress network policy is activated. This blocks all Egress connections unless rules are specified. |
| egress.rules | Array | [] | Rules for the Egress policy; for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below |

Example Network Policy

The gitlab-pages service requires Ingress connections on ports 80 and 443, and Egress connections to various endpoints, including the default Workhorse port 8181. This example adds the following network policy:

  • Allows Ingress requests:
    • From the nginx-ingress pod to port 8090
    • From the prometheus pod to port 9235
  • Allows Egress requests:
    • To kube-dns on port 53
    • To the webservice pod to port 8181
    • To endpoints like AWS VPC endpoint for S3 172.16.1.0/24 on port 443

Note that this example is illustrative only and may not be complete.

The example is based on the assumption that kube-dns was deployed to the namespace kube-system, prometheus was deployed to the namespace monitoring and nginx-ingress was deployed to the namespace nginx-ingress.

networkpolicy:
  enabled: true
  ingress:
    enabled: true
    rules:
      - from:
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: monitoring
            podSelector:
              matchLabels:
                app: prometheus
                component: server
                release: gitlab
        ports:
          - port: 9235
      - from:
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: nginx-ingress
            podSelector:
              matchLabels:
                app: nginx-ingress
                component: controller
        ports:
          - port: 8090
  egress:
    enabled: true
    rules:
      - to:
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: kube-system
            podSelector:
              matchLabels:
                k8s-app: kube-dns
        ports:
          - port: 53
            protocol: UDP
      - to:
          - ipBlock:
              cidr: 172.16.1.0/24
        ports:
          - port: 443
      - to:
          - podSelector:
              matchLabels:
                app: webservice
        ports:
          - port: 8181

TLS access to GitLab Pages

To have TLS access to the GitLab Pages feature you must:

  1. Create a dedicated wildcard certificate for your GitLab Pages domain in this format: *.pages.<yourdomain>.

  2. Create the secret in Kubernetes:

    kubectl create secret tls tls-star-pages-<mysecret> --cert=<path/to/fullchain.pem> --key=<path/to/privkey.pem>
  3. Configure GitLab Pages to use this secret:

    gitlab:
      gitlab-pages:
        ingress:
          tls:
            secretName: tls-star-pages-<mysecret>
  4. Create a DNS entry in your DNS provider with the name *.pages.<yourdomain> pointing to your LoadBalancer.

Pages domain without wildcard DNS


GitLab Pages supports only one URL scheme at a time: either with wildcard DNS or without. If you enable namespaceInPath, existing GitLab Pages websites are accessible only on domains without wildcard DNS.

  1. Enable namespaceInPath in the global Pages settings.

    global:
      pages:
        namespaceInPath: true
  2. Create a DNS entry in your DNS provider with the name pages.<yourdomain> pointing to your LoadBalancer.

TLS access to GitLab Pages domain without wildcard DNS

  1. Create a certificate for your GitLab Pages domain in this format: pages.<yourdomain>.

  2. Create the secret in Kubernetes:

    kubectl create secret tls tls-star-pages-<mysecret> --cert=<path/to/fullchain.pem> --key=<path/to/privkey.pem>
  3. Configure GitLab Pages to use this secret:

    gitlab:
      gitlab-pages:
        ingress:
          tls:
            secretName: tls-star-pages-<mysecret>

Configure access control

  1. Enable accessControl in the global pages settings.

    global:
      pages:
        accessControl: true
  2. Optional. If TLS access is configured, update the redirect URI in the GitLab Pages System OAuth application to use the HTTPS protocol.

GitLab Pages does not update the OAuth application automatically, but the default authRedirectUri is updated to https://pages.<yourdomain>/projects/auth. If you encounter the error 'The redirect URI included is not valid' while accessing a private Pages site, update the redirect URI in the GitLab Pages System OAuth application to https://pages.<yourdomain>/projects/auth.

Rate limits

You can enforce rate limits to help minimize the risk of a Denial of Service (DoS) attack. See the GitLab Pages rate limits documentation for details.

To allow certain IP ranges (subnets) to bypass all rate limits:

  • rateLimitSubnetsAllowList: Sets the allow list with the IP ranges (subnets) that should bypass all rate limits.

Configure rate limits subnets allow list

Set the allow list with the IP ranges (subnets) in charts/gitlab/charts/gitlab-pages/values.yaml:

gitlab:
  gitlab-pages:
    rateLimitSubnetsAllowList:
     - "1.2.3.4/24"
     - "2001:db8::1/32"

Configuring KEDA

The keda section enables the installation of KEDA ScaledObjects instead of regular HorizontalPodAutoscalers. This configuration is optional and can be used when autoscaling based on custom or external metrics is needed.

Most settings default to the values set in the hpa section where applicable.

If the following are true, CPU and memory triggers are added automatically based on the CPU and memory thresholds set in the hpa section:

  • triggers is not set.
  • The corresponding request.cpu.request or request.memory.request setting is also set to a non-zero value.

If no triggers are set, the ScaledObject is not created.

Refer to the KEDA documentation for more details about those settings.

| Name | Type | Default | Description |
|------|------|---------|-------------|
| enabled | Boolean | false | Use KEDA ScaledObjects instead of HorizontalPodAutoscalers |
| pollingInterval | Integer | 30 | The interval to check each trigger on |
| cooldownPeriod | Integer | 300 | The period to wait after the last trigger reported active before scaling the resource back to 0 |
| minReplicaCount | Integer | | Minimum number of replicas KEDA will scale the resource down to, defaults to hpa.minReplicas |
| maxReplicaCount | Integer | | Maximum number of replicas KEDA will scale the resource up to, defaults to hpa.maxReplicas |
| fallback | Map | | KEDA fallback configuration, see the documentation |
| hpaName | String | | The name of the HPA resource KEDA will create, defaults to keda-hpa-{scaled-object-name} |
| restoreToOriginalReplicaCount | Boolean | | Specifies whether the target resource should be scaled back to the original replica count after the ScaledObject is deleted |
| behavior | Map | | The specifications for up- and downscaling behavior, defaults to hpa.behavior |
| triggers | Array | | List of triggers to activate scaling of the target resource, defaults to triggers computed from hpa.cpu and hpa.memory |
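
As a sketch, a hypothetical configuration that replaces the HPA with a KEDA ScaledObject driven by a Prometheus trigger could look like the following. The Prometheus server address, query, threshold, and replica counts are all placeholders:

```yaml
# Hypothetical values fragment; the trigger address, query, and
# numbers are placeholders, not recommendations.
gitlab:
  gitlab-pages:
    keda:
      enabled: true
      minReplicaCount: 2
      maxReplicaCount: 8
      triggers:
        - type: prometheus
          metadata:
            serverAddress: http://prometheus.monitoring:9090  # placeholder address
            query: sum(rate(http_requests_total[2m]))         # placeholder query
            threshold: "100"                                  # placeholder threshold
```

Because triggers is set explicitly here, the CPU and memory triggers that would otherwise be derived from the hpa section are not added.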

serviceAccount

This section controls if a ServiceAccount should be created and if the default access token should be mounted in pods.

| Name | Type | Default | Description |
|------|------|---------|-------------|
| annotations | Map | {} | ServiceAccount annotations. |
| automountServiceAccountToken | Boolean | false | Controls if the default ServiceAccount access token should be mounted in pods. You should not enable this unless it is required by certain sidecars to work properly (for example, Istio). |
| create | Boolean | false | Indicates whether or not a ServiceAccount should be created. |
| enabled | Boolean | false | Indicates whether or not to use a ServiceAccount. |
| name | String | | Name of the ServiceAccount. If not set, the chart full name is used. |
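
For example, to create a dedicated ServiceAccount carrying a cloud IAM annotation, a hypothetical fragment could look like this. The annotation key and role ARN are placeholders:

```yaml
# Hypothetical values fragment; the annotation value is a placeholder.
gitlab:
  gitlab-pages:
    serviceAccount:
      enabled: true
      create: true
      annotations:
        eks.amazonaws.com/role-arn: "arn:aws:iam::123456789012:role/example"  # placeholder
```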

affinity

For more information, see affinity.
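
As a minimal sketch, assuming nodes carry a hypothetical label such as workload=pages, affinity rules could be supplied like this:

```yaml
# Hypothetical values fragment; the node label key and value are assumptions.
gitlab:
  gitlab-pages:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: workload       # hypothetical node label
                  operator: In
                  values:
                    - pages
```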