CI/CD YAML syntax reference

Tier: Free, Premium, Ultimate Offering: GitLab.com, Self-managed, GitLab Dedicated

This document lists the configuration options for the GitLab .gitlab-ci.yml file. This file is where you define the CI/CD jobs that make up your pipeline.

When you are editing your .gitlab-ci.yml file, you can validate it with the CI Lint tool.

If you are editing content on this page, follow the instructions for documenting keywords.

Keywords

A GitLab CI/CD pipeline configuration includes:

  • Global keywords that configure pipeline behavior:

    Keyword Description
    default Custom default values for job keywords.
    include Import configuration from other YAML files.
    stages The names and order of the pipeline stages.
    variables Define CI/CD variables for all jobs in the pipeline.
    workflow Control what types of pipelines run.
  • Header keywords

    Keyword Description
    spec Define specifications for external configuration files.
  • Jobs configured with job keywords:

    Keyword Description
    after_script Override a set of commands that are executed after the job.
    allow_failure Allow job to fail. A failed job does not cause the pipeline to fail.
    artifacts List of files and directories to attach to a job on success.
    before_script Override a set of commands that are executed before the job.
    cache List of files that should be cached between subsequent runs.
    coverage Code coverage settings for a given job.
    dast_configuration Use configuration from DAST profiles on a job level.
    dependencies Restrict which artifacts are passed to a specific job by providing a list of jobs to fetch artifacts from.
    environment Name of an environment to which the job deploys.
    extends Configuration entries that this job inherits from.
    identity Authenticate with third party services using identity federation.
    image Use Docker images.
    inherit Select which global defaults all jobs inherit.
    interruptible Defines if a job can be canceled when made redundant by a newer run.
    manual_confirmation Define a custom confirmation message for a manual job.
    needs Execute jobs earlier than the stage ordering.
    pages Upload the result of a job to use with GitLab Pages.
    parallel How many instances of a job should be run in parallel.
    release Instructs the runner to generate a release object.
    resource_group Limit job concurrency.
    retry When and how many times a job can be auto-retried in case of a failure.
    rules List of conditions to evaluate and determine selected attributes of a job, and whether or not it’s created.
    script Shell script that is executed by a runner.
    run Run configuration that is executed by a runner.
    secrets The CI/CD secrets the job needs.
    services Use Docker services images.
    stage Defines a job stage.
    tags List of tags that are used to select a runner.
    timeout Define a custom job-level timeout that takes precedence over the project-wide setting.
    trigger Defines a downstream pipeline trigger.
    variables Define job variables on a job level.
    when When to run the job.

Global keywords

Some keywords are not defined in a job. These keywords control pipeline behavior or import additional pipeline configuration.

default

History
  • Support for id_tokens introduced in GitLab 16.4.

You can set global defaults for some keywords. Each default keyword is copied to every job that doesn’t already have it defined. If the job already has a keyword defined, that default is not used.

Keyword type: Global keyword.

Possible inputs: These keywords can have custom defaults: after_script, artifacts, before_script, cache, hooks, id_tokens, image, interruptible, retry, services, tags, and timeout.

Example of default:

default:
  image: ruby:3.0
  retry: 2

rspec:
  script: bundle exec rspec

rspec 2.7:
  image: ruby:2.7
  script: bundle exec rspec

In this example:

  • image: ruby:3.0 and retry: 2 are the default keywords for all jobs in the pipeline.
  • The rspec job does not have image or retry defined, so it uses the defaults of image: ruby:3.0 and retry: 2.
  • The rspec 2.7 job does not have retry defined, but it does have image explicitly defined. It uses the default retry: 2, but ignores the default image and uses the image: ruby:2.7 defined in the job.

Additional details:

  • Control inheritance of default keywords in jobs with inherit:default (see the sketch after this list).
  • Global defaults are not passed to downstream pipelines, which run independently of the upstream pipeline that triggered the downstream pipeline.
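
A minimal sketch of the first point: a job that opts out of all default keywords with inherit:default:

job-without-defaults:
  inherit:
    default: false
  script: echo "This job ignores all default keywords."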

include

Use include to include external YAML files in your CI/CD configuration. You can split one long .gitlab-ci.yml file into multiple files to increase readability, or reduce duplication of the same configuration in multiple places.

You can also store template files in a central repository and include them in projects.

The include files are:

  • Merged with those in the .gitlab-ci.yml file.
  • Always evaluated first and then merged with the content of the .gitlab-ci.yml file, regardless of the position of the include keyword.

The time limit to resolve all files is 30 seconds.

Keyword type: Global keyword.

Possible inputs: The include subkeys: component, local, project, remote, or template.

And optionally: inputs and rules.

Additional details:

  • Only certain CI/CD variables can be used with include keywords.
  • Use merging to customize and override included CI/CD configurations with local configuration.
  • You can override included configuration by having the same job name or global keyword in the .gitlab-ci.yml file. The two configurations are merged together, and the configuration in the .gitlab-ci.yml file takes precedence over the included configuration.
  • If you rerun a:
    • Job, the include files are not fetched again. All jobs in a pipeline use the configuration fetched when the pipeline was created. Any changes to the source include files do not affect job reruns.
    • Pipeline, the include files are fetched again. If they changed after the last pipeline run, the new pipeline uses the changed configuration.
  • You can have up to 150 includes per pipeline by default, including nested includes.

include:component

Use include:component to add a CI/CD component to the pipeline configuration.

Keyword type: Global keyword.

Possible inputs: The full address of the CI/CD component, formatted as <fully-qualified-domain-name>/<project-path>/<component-name>@<specific-version>.

Example of include:component:

include:
  - component: $CI_SERVER_FQDN/my-org/security-components/secret-detection@1.0

include:local

Use include:local to include a file that is in the same repository and branch as the configuration file containing the include keyword. Use include:local instead of symbolic links.

Keyword type: Global keyword.

Possible inputs:

A full path relative to the root directory (/). The YAML file must have the extension .yml or .yaml.

Example of include:local:

include:
  - local: '/templates/.gitlab-ci-template.yml'

You can also use shorter syntax to define the path:

include: '.gitlab-ci-production.yml'

Additional details:

  • The .gitlab-ci.yml file and the local file must be on the same branch.
  • You can’t include local files through Git submodules paths.
  • include configuration is always evaluated based on the location of the file containing the include keyword, not the project running the pipeline. If a nested include is in a configuration file in a different project, include:local checks that other project for the file.

include:project

To include files from another private project on the same GitLab instance, use include:project and include:file.

Keyword type: Global keyword.

Possible inputs:

  • include:project: The full GitLab project path.
  • include:file: A full file path, or array of file paths, relative to the root directory (/). The YAML files must have the .yml or .yaml extension.
  • include:ref: Optional. The ref to retrieve the file from. Defaults to the HEAD of the project when not specified.
  • You can use certain CI/CD variables.

Example of include:project:

include:
  - project: 'my-group/my-project'
    file: '/templates/.gitlab-ci-template.yml'
  - project: 'my-group/my-subgroup/my-project-2'
    file:
      - '/templates/.builds.yml'
      - '/templates/.tests.yml'

You can also specify a ref:

include:
  - project: 'my-group/my-project'
    ref: main                                      # Git branch
    file: '/templates/.gitlab-ci-template.yml'
  - project: 'my-group/my-project'
    ref: v1.0.0                                    # Git Tag
    file: '/templates/.gitlab-ci-template.yml'
  - project: 'my-group/my-project'
    ref: 787123b47f14b552955ca2786bc9542ae66fee5b  # Git SHA
    file: '/templates/.gitlab-ci-template.yml'

Additional details:

  • include configuration is always evaluated based on the location of the file containing the include keyword, not the project running the pipeline. If a nested include is in a configuration file in a different project, include:local checks that other project for the file.
  • When the pipeline starts, the .gitlab-ci.yml file configuration included by all methods is evaluated. The configuration is a snapshot in time and persists in the database. GitLab does not reflect any changes to the referenced .gitlab-ci.yml file configuration until the next pipeline starts.
  • When you include a YAML file from another private project, the user running the pipeline must be a member of both projects and have the appropriate permissions to run pipelines. A not found or access denied error may be displayed if the user does not have access to any of the included files.
  • Be careful when including another project’s CI/CD configuration file. No pipelines or notifications trigger when CI/CD configuration files change. From a security perspective, this is similar to pulling a third-party dependency. For the ref, consider:
    • Using a specific SHA hash, which should be the most stable option. Use the full 40-character SHA hash to ensure the desired commit is referenced, because using a short SHA hash for the ref might be ambiguous.
    • Applying both protected branch and protected tag rules to the ref in the other project. Protected tags and branches are more likely to pass through change management before changing.

include:remote

Use include:remote with a full URL to include a file from a different location.

Keyword type: Global keyword.

Possible inputs:

A public URL accessible by an HTTP/HTTPS GET request:

  • Authentication with the remote URL is not supported.
  • The YAML file must have the extension .yml or .yaml.
  • You can use certain CI/CD variables.

Example of include:remote:

include:
  - remote: 'https://gitlab.com/example-project/-/raw/main/.gitlab-ci.yml'

Additional details:

  • All nested includes are executed without context as a public user, so you can only include public projects or templates. No variables are available in the include section of nested includes.
  • Be careful when including another project’s CI/CD configuration file. No pipelines or notifications trigger when the other project’s files change. From a security perspective, this is similar to pulling a third-party dependency. If you link to another GitLab project you own, consider the use of both protected branches and protected tags to enforce change management rules.

include:template

Use include:template to include .gitlab-ci.yml templates.

Keyword type: Global keyword.

Possible inputs:

A CI/CD template file name, like Auto-DevOps.gitlab-ci.yml.

Example of include:template:

# File sourced from the GitLab template collection
include:
  - template: Auto-DevOps.gitlab-ci.yml

Multiple include:template files:

include:
  - template: Android-Fastlane.gitlab-ci.yml
  - template: Auto-DevOps.gitlab-ci.yml

Additional details:

  • All nested includes are executed without context as a public user, so you can only include public projects or templates. No variables are available in the include section of nested includes.

include:inputs

Use include:inputs to set the values for input parameters when the included configuration uses spec:inputs and is added to the pipeline.

Keyword type: Global keyword.

Possible inputs: A string, numeric value, or boolean.

Example of include:inputs:

include:
  - local: 'custom_configuration.yml'
    inputs:
      website: "My website"

In this example:

  • The configuration contained in custom_configuration.yml is added to the pipeline, with a website input set to a value of My website for the included configuration.

Additional details:

  • If the included configuration file uses spec:inputs:type, the input value must match the defined type.
  • If the included configuration file uses spec:inputs:options, the input value must match one of the listed options.

include:rules

You can use rules with include to conditionally include other configuration files.

Keyword type: Global keyword.

Possible inputs: The rules subkeys, such as if and exists.

Some CI/CD variables are supported.

Example of include:rules:

include:
  - local: build_jobs.yml
    rules:
      - if: $INCLUDE_BUILDS == "true"

test-job:
  stage: test
  script: echo "This is a test job"

In this example, if the INCLUDE_BUILDS variable is:

  • true, the build_jobs.yml configuration is included in the pipeline.
  • Not true or does not exist, the build_jobs.yml configuration is not included in the pipeline.

stages

History
  • Support for nested array of strings introduced in GitLab 16.9.

Use stages to define stages that contain groups of jobs. Use stage in a job to configure the job to run in a specific stage.

If stages is not defined in the .gitlab-ci.yml file, the default pipeline stages are:

The order of the items in stages defines the execution order for jobs:

  • Jobs in the same stage run in parallel.
  • Jobs in the next stage run after the jobs from the previous stage complete successfully.

If a pipeline contains only jobs in the .pre or .post stages, it does not run. There must be at least one other job in a different stage. .pre and .post stages can be used in required pipeline configuration to define compliance jobs that must run before or after project pipeline jobs.
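
For example, a compliance job can use .pre without declaring it in stages. A minimal sketch, with placeholder job names and scripts:

compliance-check:
  stage: .pre
  script: echo "This job runs before all other stages."

build-job:
  stage: build
  script: echo "This job runs in the default build stage."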

Keyword type: Global keyword.

Example of stages:

stages:
  - build
  - test
  - deploy

In this example:

  1. All jobs in build execute in parallel.
  2. If all jobs in build succeed, the test jobs execute in parallel.
  3. If all jobs in test succeed, the deploy jobs execute in parallel.
  4. If all jobs in deploy succeed, the pipeline is marked as passed.

If any job fails, the pipeline is marked as failed and jobs in later stages do not start. Jobs in the current stage are not stopped and continue to run.

Additional details:

  • If a job does not specify a stage, the job is assigned the test stage.
  • If a stage is defined but no jobs use it, the stage is not visible in the pipeline, which can help compliance pipeline configurations:
    • Stages can be defined in the compliance configuration but remain hidden if not used.
    • The defined stages become visible when developers use them in job definitions.

Related topics:

  • To make a job start earlier and ignore the stage order, use the needs keyword.

workflow

Use workflow to control pipeline behavior.

You can use some predefined CI/CD variables in workflow configuration, but not variables that are only defined when jobs start.

workflow:auto_cancel:on_new_commit

Use workflow:auto_cancel:on_new_commit to configure the behavior of the auto-cancel redundant pipelines feature.

Possible inputs:

  • conservative: Cancel the pipeline, but only if no jobs with interruptible: false have started yet. Default when not defined.
  • interruptible: Cancel only jobs with interruptible: true.
  • none: Do not auto-cancel any jobs.

Example of workflow:auto_cancel:on_new_commit:

workflow:
  auto_cancel:
    on_new_commit: interruptible

job1:
  interruptible: true
  script: sleep 60

job2:
  interruptible: false  # Default when not defined.
  script: sleep 60

In this example:

  • When a new commit is pushed to a branch, GitLab creates a new pipeline and job1 and job2 start.
  • If a new commit is pushed to the branch before the jobs complete, only job1 is canceled.

workflow:auto_cancel:on_job_failure

Use workflow:auto_cancel:on_job_failure to configure which jobs should be canceled as soon as one job fails.

Possible inputs:

  • all: Cancel the pipeline and all running jobs as soon as one job fails.
  • none: Do not auto-cancel any jobs.

Example of workflow:auto_cancel:on_job_failure:

stages: [stage_a, stage_b]

workflow:
  auto_cancel:
    on_job_failure: all

job1:
  stage: stage_a
  script: sleep 60

job2:
  stage: stage_a
  script:
    - sleep 30
    - exit 1

job3:
  stage: stage_b
  script:
    - sleep 30

In this example, if job2 fails, job1 is canceled if it is still running and job3 does not start.

workflow:name

You can use name in workflow: to define a name for pipelines.

All pipelines are assigned the defined name. Any leading or trailing spaces in the name are removed.

Possible inputs: A string. CI/CD variables are supported.

Examples of workflow:name:

A simple pipeline name with a predefined variable:

workflow:
  name: 'Pipeline for branch: $CI_COMMIT_BRANCH'

A configuration with different pipeline names depending on the pipeline conditions:

variables:
  PROJECT1_PIPELINE_NAME: 'Default pipeline name'  # A default is not required

workflow:
  name: '$PROJECT1_PIPELINE_NAME'
  rules:
    - if: '$CI_MERGE_REQUEST_LABELS =~ /pipeline:run-in-ruby3/'
      variables:
        PROJECT1_PIPELINE_NAME: 'Ruby 3 pipeline'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      variables:
        PROJECT1_PIPELINE_NAME: 'MR pipeline: $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME'
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH  # For default branch pipelines, use the default name

Additional details:

  • If the name is an empty string, the pipeline is not assigned a name. A name consisting of only CI/CD variables could evaluate to an empty string if all the variables are also empty.
  • workflow:rules:variables become global variables available in all jobs, including trigger jobs which forward variables to downstream pipelines by default. If the downstream pipeline uses the same variable, the variable is overwritten by the upstream variable value. Be sure to either:
    • Use a unique variable name in every project’s pipeline configuration, like PROJECT1_PIPELINE_NAME.
    • Use inherit:variables in the trigger job and list the exact variables you want to forward to the downstream pipeline.
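
A minimal sketch of the second option, assuming a downstream project at the placeholder path my-group/my-downstream-project:

trigger-downstream:
  inherit:
    variables:
      - PROJECT1_PIPELINE_NAME
  trigger: my-group/my-downstream-project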

workflow:rules

The rules keyword in workflow is similar to rules defined in jobs, but controls whether or not a whole pipeline is created.

When no rules evaluate to true, the pipeline does not run.

Possible inputs: You can use some of the same keywords as job-level rules, such as if, changes, and exists.

Example of workflow:rules:

workflow:
  rules:
    - if: $CI_COMMIT_TITLE =~ /-draft$/
      when: never
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

In this example, pipelines run if the commit title (first line of the commit message) does not end with -draft and the pipeline is for either:

  • A merge request.
  • The default branch.

Additional details:

  • If your rules match both branch pipelines (other than the default branch) and merge request pipelines, duplicate pipelines can occur. See the sketch after this list for a common way to avoid this.
  • start_in, allow_failure, and needs are not supported in workflow:rules and have no effect, but they do not cause a syntax violation. Do not use them in workflow:rules, because validation could become stricter and cause syntax failures in the future. See issue 436473 for more details.
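
A minimal sketch of one common way to avoid duplicate pipelines: run merge request pipelines when a merge request is open, and branch pipelines otherwise:

workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS
      when: never
    - if: $CI_COMMIT_BRANCH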

workflow:rules:variables

You can use variables in workflow:rules to define variables for specific pipeline conditions.

When the condition matches, the variable is created and can be used by all jobs in the pipeline. If the variable is already defined at the global level, the workflow variable takes precedence and overrides the global variable.

Keyword type: Global keyword.

Possible inputs: Variable name and value pairs:

  • The name can use only numbers, letters, and underscores (_).
  • The value must be a string.

Example of workflow:rules:variables:

variables:
  DEPLOY_VARIABLE: "default-deploy"

workflow:
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      variables:
        DEPLOY_VARIABLE: "deploy-production"  # Override globally-defined DEPLOY_VARIABLE
    - if: $CI_COMMIT_BRANCH =~ /feature/
      variables:
        IS_A_FEATURE: "true"                  # Define a new variable.
    - if: $CI_COMMIT_BRANCH                   # Run the pipeline in other cases

job1:
  variables:
    DEPLOY_VARIABLE: "job1-default-deploy"
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      variables:                                   # Override DEPLOY_VARIABLE defined
        DEPLOY_VARIABLE: "job1-deploy-production"  # at the job level.
    - when: on_success                             # Run the job in other cases
  script:
    - echo "Run script with $DEPLOY_VARIABLE as an argument"
    - echo "Run another script if $IS_A_FEATURE exists"

job2:
  script:
    - echo "Run script with $DEPLOY_VARIABLE as an argument"
    - echo "Run another script if $IS_A_FEATURE exists"

When the branch is the default branch:

  • job1’s DEPLOY_VARIABLE is job1-deploy-production.
  • job2’s DEPLOY_VARIABLE is deploy-production.

When the branch is feature:

  • job1’s DEPLOY_VARIABLE is job1-default-deploy, and IS_A_FEATURE is true.
  • job2’s DEPLOY_VARIABLE is default-deploy, and IS_A_FEATURE is true.

When the branch is something else:

  • job1’s DEPLOY_VARIABLE is job1-default-deploy.
  • job2’s DEPLOY_VARIABLE is default-deploy.

Additional details:

  • workflow:rules:variables become global variables available in all jobs, including trigger jobs which forward variables to downstream pipelines by default. If the downstream pipeline uses the same variable, the variable is overwritten by the upstream variable value. Be sure to either:
    • Use unique variable names in every project’s pipeline configuration, like PROJECT1_VARIABLE_NAME.
    • Use inherit:variables in the trigger job and list the exact variables you want to forward to the downstream pipeline.

workflow:rules:auto_cancel

Use workflow:rules:auto_cancel to configure the behavior of the workflow:auto_cancel:on_new_commit or the workflow:auto_cancel:on_job_failure features.

Possible inputs: The on_new_commit and on_job_failure subkeys, with the same possible values as workflow:auto_cancel.

Example of workflow:rules:auto_cancel:

workflow:
  auto_cancel:
    on_new_commit: interruptible
    on_job_failure: all
  rules:
    - if: $CI_COMMIT_REF_PROTECTED == 'true'
      auto_cancel:
        on_new_commit: none
        on_job_failure: none
    - when: always                  # Run the pipeline in other cases

test-job1:
  script: sleep 10
  interruptible: false

test-job2:
  script: sleep 10
  interruptible: true

In this example, workflow:auto_cancel:on_new_commit is set to interruptible and workflow:auto_cancel:on_job_failure is set to all for all jobs by default. But if a pipeline runs for a protected branch, the rule overrides the default with on_new_commit: none and on_job_failure: none. For example, if a pipeline is running for:

  • A non-protected branch and a new commit is pushed, test-job1 continues to run and test-job2 is canceled.
  • A protected branch and a new commit is pushed, both test-job1 and test-job2 continue to run.

Header keywords

Some keywords must be defined in a header section of a YAML configuration file. The header must be at the top of the file, separated from the rest of the configuration with ---.

spec

Add a spec section to the header of a YAML file to configure the behavior of a pipeline when a configuration is added to the pipeline with the include keyword.

spec:inputs

You can use spec:inputs to define input parameters for the CI/CD configuration you intend to add to a pipeline with include. Use include:inputs to define the values to use when the pipeline runs.

Use the inputs to customize the behavior of the configuration when included in CI/CD configuration.

Use the interpolation format $[[ inputs.input-id ]] to reference the values outside of the header section. Inputs are evaluated and interpolated when the configuration is fetched during pipeline creation, but before the configuration is merged with the contents of the .gitlab-ci.yml file.

Keyword type: Header keyword. spec must be declared at the top of the configuration file, in a header section.

Possible inputs: A hash of strings representing the expected inputs.

Example of spec:inputs:

spec:
  inputs:
    environment:
    job-stage:
---

scan-website:
  stage: $[[ inputs.job-stage ]]
  script: ./scan-website $[[ inputs.environment ]]

Additional details:

  • Inputs are mandatory unless you use spec:inputs:default to set a default value.
  • Inputs expect strings unless you use spec:inputs:type to set a different input type.
  • A string containing an interpolation block must not exceed 1 MB.
  • The string inside an interpolation block must not exceed 1 KB.

spec:inputs:default

Inputs are mandatory when included, unless you set a default value with spec:inputs:default.

Use default: '' to have no default value.

Keyword type: Header keyword. spec must be declared at the top of the configuration file, in a header section.

Possible inputs: A string representing the default value, or ''.

Example of spec:inputs:default:

spec:
  inputs:
    website:
    user:
      default: 'test-user'
    flags:
      default: ''
---

# The pipeline configuration would follow...

In this example:

  • website is mandatory and must be defined.
  • user is optional. If not defined, the value is test-user.
  • flags is optional. If not defined, it has no value.

Additional details:

  • The pipeline fails with a validation error when the input:
    • Uses both default and options, but the default value is not one of the listed options.
    • Uses both default and regex, but the default value does not match the regular expression.
    • Value does not match the type.

spec:inputs:description

Use description to give a description to a specific input. The description does not affect the behavior of the input and is only used to help users of the file understand the input.

Keyword type: Header keyword. spec must be declared at the top of the configuration file, in a header section.

Possible inputs: A string representing the description.

Example of spec:inputs:description:

spec:
  inputs:
    flags:
      description: 'Sample description of the `flags` input details.'
---

# The pipeline configuration would follow...

spec:inputs:options

Inputs can use options to specify a list of allowed values for an input. The limit is 50 options per input.

Keyword type: Header keyword. spec must be declared at the top of the configuration file, in a header section.

Possible inputs: An array of input options.

Example of spec:inputs:options:

spec:
  inputs:
    environment:
      options:
        - development
        - staging
        - production
---

# The pipeline configuration would follow...

In this example:

  • environment is mandatory and must be defined with one of the values in the list.

Additional details:

  • The pipeline fails with a validation error when:
    • The input uses both options and default, but the default value is not one of the listed options.
    • Any of the input options do not match the type, which can be either string or number, but not boolean when using options.

spec:inputs:regex

Use spec:inputs:regex to specify a regular expression that the input must match.

Keyword type: Header keyword. spec must be declared at the top of the configuration file, in a header section.

Possible inputs: A regular expression.

Example of spec:inputs:regex:

spec:
  inputs:
    version:
      regex: ^v\d\.\d+(\.\d+)*$
---

# The pipeline configuration would follow...

In this example, inputs of v1.0 or v1.2.3 match the regular expression and pass validation. An input of v1.A.B does not match the regular expression and fails validation.

Additional details:

  • inputs:regex can only be used with a type of string, not number or boolean.
  • Do not enclose the regular expression with the / character. For example, use regex.*, not /regex.*/.
  • inputs:regex uses RE2 to parse regular expressions.

spec:inputs:type

By default, inputs expect strings. Use spec:inputs:type to set a different required type for inputs.

Keyword type: Header keyword. spec must be declared at the top of the configuration file, in a header section.

Possible inputs: Can be one of:

  • array, to accept an array of inputs.
  • string, to accept string inputs (default when not defined).
  • number, to only accept numeric inputs.
  • boolean, to only accept true or false inputs.

Example of spec:inputs:type:

spec:
  inputs:
    job_name:
    website:
      type: string
    port:
      type: number
    available:
      type: boolean
    array_input:
      type: array
---

# The pipeline configuration would follow...

Job keywords

The following topics explain how to use keywords to configure CI/CD pipelines.

after_script

History
  • Running after_script commands for canceled jobs introduced in GitLab 17.0.

Use after_script to define an array of commands to run last, after a job’s before_script and script sections complete. after_script commands also run when:

  • The job is canceled while the before_script or script sections are still running.
  • The job fails with a failure type of script_failure, but not with other failure types.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs: An array including:

  • Single-line commands.
  • Long commands split over multiple lines.
  • YAML anchors.

CI/CD variables are supported.

Example of after_script:

job:
  script:
    - echo "An example script section."
  after_script:
    - echo "Execute this command after the `script` section completes."

Additional details:

Scripts you specify in after_script execute in a new shell, separate from any before_script or script commands. As a result, they:

  • Have the current working directory set back to the default (according to the variables which define how the runner processes Git requests).
  • Don’t have access to changes done by commands defined in the before_script or script, including:
    • Command aliases and variables exported in script scripts.
    • Changes outside of the working tree (depending on the runner executor), like software installed by a before_script or script script.
  • Have a separate timeout. For GitLab Runner 16.4 and later, this defaults to 5 minutes, and can be configured with the RUNNER_AFTER_SCRIPT_TIMEOUT variable (see the sketch after this list). In GitLab Runner 16.3 and earlier, the timeout is hard-coded to 5 minutes.
  • Don’t affect the job’s exit code. If the script section succeeds and the after_script times out or fails, the job exits with code 0 (Job Succeeded).
  • There is a known issue with using CI/CD job tokens with after_script. You can use a job token for authentication in after_script commands, but the token immediately becomes invalid if the job is canceled. See the related issue for more details.
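
A minimal sketch of extending the after_script timeout with the RUNNER_AFTER_SCRIPT_TIMEOUT variable, assuming GitLab Runner 16.4 or later:

job:
  variables:
    RUNNER_AFTER_SCRIPT_TIMEOUT: 10m
  script:
    - echo "The main script runs first."
  after_script:
    - echo "This cleanup step can now take up to 10 minutes."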

If a job times out, the after_script commands do not execute. An issue exists to add support for executing after_script commands for timed-out jobs.

allow_failure

Use allow_failure to determine whether a pipeline should continue running when a job fails.

  • To let the pipeline continue running subsequent jobs, use allow_failure: true.
  • To stop the pipeline from running subsequent jobs, use allow_failure: false.

When jobs are allowed to fail (allow_failure: true) an orange warning icon indicates that a job failed. However, the pipeline is successful and the associated commit is marked as passed with no warnings.

This same warning is displayed when:

  • All other jobs in the stage are successful.
  • All other jobs in the pipeline are successful.

The default value for allow_failure is:

  • true for manual jobs.
  • false for jobs that use when: manual inside rules.
  • false in all other cases.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • true or false.

Example of allow_failure:

job1:
  stage: test
  script:
    - execute_script_1

job2:
  stage: test
  script:
    - execute_script_2
  allow_failure: true

job3:
  stage: deploy
  script:
    - deploy_to_staging
  environment: staging

In this example, job1 and job2 run in parallel:

  • If job1 fails, jobs in the deploy stage do not start.
  • If job2 fails, jobs in the deploy stage can still start.

Additional details:

  • You can use allow_failure as a subkey of rules.
  • If allow_failure: true is set, the job is always considered successful, and later jobs with when: on_failure don’t start if this job fails.
  • You can use allow_failure: false with a manual job to create a blocking manual job. A blocked pipeline does not run any jobs in later stages until the manual job is started and completes successfully.
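
A minimal sketch of a blocking manual job; the deployment script is a placeholder:

deploy-production:
  stage: deploy
  script: ./deploy-to-production.sh
  when: manual
  allow_failure: false
  environment: production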

allow_failure:exit_codes

Use allow_failure:exit_codes to control when a job should be allowed to fail. The job is allow_failure: true for any of the listed exit codes, and allow_failure: false for any other exit code.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • A single exit code.
  • An array of exit codes.

Example of allow_failure:exit_codes:

test_job_1:
  script:
    - echo "Run a script that results in exit code 1. This job fails."
    - exit 1
  allow_failure:
    exit_codes: 137

test_job_2:
  script:
    - echo "Run a script that results in exit code 137. This job is allowed to fail."
    - exit 137
  allow_failure:
    exit_codes:
      - 137
      - 255

artifacts

Use artifacts to specify which files to save as job artifacts. Job artifacts are a list of files and directories that are attached to the job on success, on failure, or always, depending on the artifacts:when configuration.

The artifacts are sent to GitLab after the job finishes. They are available for download in the GitLab UI if the size is smaller than the maximum artifact size.

By default, jobs in later stages automatically download all the artifacts created by jobs in earlier stages. You can control artifact download behavior in jobs with dependencies.

When using the needs keyword, jobs can only download artifacts from the jobs defined in the needs configuration.

Job artifacts are only collected for successful jobs by default, and artifacts are restored after caches.

Read more about artifacts.

artifacts:paths

Paths are relative to the project directory ($CI_PROJECT_DIR) and can’t directly link outside it.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • An array of file paths, relative to the project directory.
  • Wildcards that use glob or doublestar.PathMatch patterns.

CI/CD variables are supported.

Example of artifacts:paths:

job:
  artifacts:
    paths:
      - binaries/
      - .config

This example creates an artifact with .config and all the files in the binaries directory.

Additional details:

  • If not used with artifacts:name, the artifacts file is named artifacts, which becomes artifacts.zip when downloaded.

artifacts:exclude

Use artifacts:exclude to prevent files from being added to an artifacts archive.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • An array of file paths, relative to the project directory.
  • You can use wildcards that use glob or doublestar.PathMatch patterns.

Example of artifacts:exclude:

artifacts:
  paths:
    - binaries/
  exclude:
    - binaries/**/*.o

This example stores all files in binaries/, but not *.o files located in subdirectories of binaries/.

Additional details:

  • artifacts:exclude paths are not searched recursively.
  • Files matched by artifacts:untracked can be excluded using artifacts:exclude too.

artifacts:expire_in

Use expire_in to specify how long job artifacts are stored before they expire and are deleted. The expire_in setting does not affect pipeline artifacts, or the artifacts of the most recent successful pipeline, which are kept by default.

After their expiry, artifacts are deleted hourly by default (using a cron job), and are no longer accessible.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs: The expiry time. If no unit is provided, the time is in seconds. Valid values include:

  • '42'
  • 42 seconds
  • 3 mins 4 sec
  • 2 hrs 20 min
  • 2h20min
  • 6 mos 1 day
  • 47 yrs 6 mos and 4d
  • 3 weeks and 2 days
  • never

Example of artifacts:expire_in:

job:
  artifacts:
    expire_in: 1 week

Additional details:

  • The expiration time period begins when the artifact is uploaded and stored on GitLab. If the expiry time is not defined, it defaults to the instance-wide setting.
  • To override the expiration date and protect artifacts from being automatically deleted:
    • Select Keep on the job page.
    • Set the value of expire_in to never.
  • If the expiry time is too short, jobs in later stages of a long pipeline might try to fetch expired artifacts from earlier jobs. If the artifacts are expired, jobs that try to fetch them fail with a could not retrieve the needed artifacts error. Set the expiry time to be longer, or use dependencies in later jobs to ensure they don’t try to fetch expired artifacts.

artifacts:expose_as

Use the artifacts:expose_as keyword to expose job artifacts in the merge request UI.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • The name to display in the merge request UI for the artifacts download link. Must be combined with artifacts:paths.

Example of artifacts:expose_as:

test:
  script: ["echo 'test' > file.txt"]
  artifacts:
    expose_as: 'artifact 1'
    paths: ['file.txt']

Additional details:

  • Artifacts are saved, but do not display in the UI if the artifacts:paths values:
    • Use CI/CD variables.
    • Define a directory, but do not end with /. For example, directory/ works with artifacts:expose_as, but directory does not.
    • Start with ./. For example, file works with artifacts:expose_as, but ./file does not.
  • A maximum of 10 job artifacts per merge request can be exposed.
  • Glob patterns are unsupported.
  • If a directory is specified and there is more than one file in the directory, the link is to the job artifacts browser.
  • If GitLab Pages is enabled, GitLab automatically renders the artifacts when the artifact is a single file with one of these extensions:
    • .html or .htm
    • .txt
    • .json
    • .xml
    • .log

artifacts:name

Use the artifacts:name keyword to define the name of the created artifacts archive. You can specify a unique name for every archive.

If not defined, the default name is artifacts, which becomes artifacts.zip when downloaded.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs: The name of the artifacts archive. CI/CD variables are supported.

Example of artifacts:name:

To create an archive with a name of the current job:

job:
  artifacts:
    name: "job1-artifacts-file"
    paths:
      - binaries/


artifacts:public

History
  • Updated in GitLab 15.10. Artifacts created with artifacts:public before 15.10 are not guaranteed to remain private after this update.
  • Generally available in GitLab 16.7. Feature flag non_public_artifacts removed.

Note: artifacts:public is now superseded by artifacts:access, which has more options.

Use artifacts:public to determine whether the job artifacts should be publicly available.

When artifacts:public is true (default), the artifacts in public pipelines are available for download by anonymous, guest, and reporter users.

To deny read access to artifacts in public pipelines for anonymous, guest, and reporter users, set artifacts:public to false.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • true (default if not defined) or false.

Example of artifacts:public:

job:
  artifacts:
    public: false

artifacts:access

Use artifacts:access to determine who can access the job artifacts from the GitLab UI or API. This option does not prevent you from forwarding artifacts to downstream pipelines.

You cannot use artifacts:public and artifacts:access in the same job.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • all (default): Artifacts in a job in public pipelines are available for download by anyone, including anonymous, guest, and reporter users.
  • developer: Artifacts in the job are only available for download by users with the Developer role or higher.
  • none: Artifacts in the job are not available for download by anyone.

Example of artifacts:access:

job:
  artifacts:
    access: 'developer'

artifacts:reports

Use artifacts:reports to collect artifacts generated by included templates in jobs.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs: An available report type and the path to the generated report file, for example junit: rspec.xml.

Example of artifacts:reports:

rspec:
  stage: test
  script:
    - bundle install
    - rspec --format RspecJunitFormatter --out rspec.xml
  artifacts:
    reports:
      junit: rspec.xml

Additional details:

  • Combining reports in parent pipelines using artifacts from child pipelines is not supported. Track progress on adding support in this issue.
  • To be able to browse and download the report output files, include the artifacts:paths keyword. This uploads and stores the artifact twice.
  • Artifacts created for artifacts:reports are always uploaded, regardless of the job results (success or failure). You can use artifacts:expire_in to set an expiration date for the artifacts.

artifacts:untracked

Use artifacts:untracked to add all Git untracked files as artifacts (along with the paths defined in artifacts:paths). artifacts:untracked ignores configuration in the repository’s .gitignore, so matching artifacts in .gitignore are included.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • true or false (default if not defined).

Example of artifacts:untracked:

Save all Git untracked files:

job:
  artifacts:
    untracked: true

artifacts:when

Use artifacts:when to upload artifacts on job failure or despite the failure.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • on_success (default): Upload artifacts only when the job succeeds.
  • on_failure: Upload artifacts only when the job fails.
  • always: Always upload artifacts (except when jobs time out). For example, when uploading artifacts required to troubleshoot failing tests.

Example of artifacts:when:

job:
  artifacts:
    when: on_failure

Additional details:

  • The artifacts created for artifacts:reports are always uploaded, regardless of the job results (success or failure). artifacts:when does not change this behavior.

before_script

Use before_script to define an array of commands that should run before each job’s script commands, but after artifacts are restored.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs: An array including:

  • Single-line commands.
  • Long commands split over multiple lines.
  • YAML anchors.

CI/CD variables are supported.

Example of before_script:

job:
  before_script:
    - echo "Execute this command before any 'script:' commands."
  script:
    - echo "This command executes after the job's 'before_script' commands."

Additional details:

  • Scripts you specify in before_script are concatenated with any scripts you specify in the main script. The combined scripts execute together in a single shell.
  • Using before_script at the top level, but not in the default section, is deprecated.

cache

History
  • Introduced in GitLab 15.0, caches are not shared between protected and unprotected branches.

Use cache to specify a list of files and directories to cache between jobs. You can only use paths that are in the local working copy.

Caches are:

  • Shared between pipelines and jobs.
  • By default, not shared between protected and unprotected branches.
  • Restored before artifacts.
  • Limited to a maximum of four different caches.

You can disable caching for specific jobs, for example to override:

  • A default cache defined with default.
  • The configuration for a job added with include.
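
For example, a job can opt out of a default cache by overriding cache with an empty configuration. A minimal sketch:

job-without-cache:
  script: echo "This job does not use the default cache."
  cache: []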

For more information about caches, see Caching in GitLab CI/CD.

cache:paths

Use the cache:paths keyword to choose which files or directories to cache.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs: An array of paths, relative to the project directory ($CI_PROJECT_DIR). Wildcards that use glob patterns are supported.

Example of cache:paths:

Cache all files in binaries that end in .apk and the .config file:

rspec:
  script:
    - echo "This job uses a cache."
  cache:
    key: binaries-cache
    paths:
      - binaries/*.apk
      - .config

Additional details:

  • The cache:paths keyword includes files even if they are untracked or in your .gitignore file.

cache:key

Use the cache:key keyword to give each cache a unique identifying key. All jobs that use the same cache key use the same cache, including in different pipelines.

If not set, the default key is default. All jobs with the cache keyword but no cache:key share the default cache.

Must be used with cache:paths, or nothing is cached.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs: A string. CI/CD variables are supported.

Example of cache:key:

cache-job:
  script:
    - echo "This job uses a cache."
  cache:
    key: binaries-cache-$CI_COMMIT_REF_SLUG
    paths:
      - binaries/

Additional details:

  • If you use Windows Batch to run your shell scripts you must replace $ with %. For example: key: %CI_COMMIT_REF_SLUG%
  • The cache:key value can’t contain:

    • The / character, or the equivalent URI-encoded %2F.
    • Only the . character (any number), or the equivalent URI-encoded %2E.
  • The cache is shared between jobs, so if you’re using different paths for different jobs, you should also set a different cache:key. Otherwise cache content can be overwritten.
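
A minimal sketch of the last point: two jobs that cache different paths under distinct keys, so neither overwrites the other's cache:

ruby-deps-job:
  script: bundle install
  cache:
    key: ruby-cache
    paths:
      - vendor/ruby

node-deps-job:
  script: npm ci
  cache:
    key: node-cache
    paths:
      - node_modules/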

cache:key:files

Use the cache:key:files keyword to generate a new key when one or two specific files change. cache:key:files lets you reuse some caches, and rebuild them less often, which speeds up subsequent pipeline runs.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • An array of one or two file paths.

CI/CD variables are not supported.

Example of cache:key:files:

cache-job:
  script:
    - echo "This job uses a cache."
  cache:
    key:
      files:
        - Gemfile.lock
        - package.json
    paths:
      - vendor/ruby
      - node_modules

This example creates a cache for Ruby and Node.js dependencies. The cache is tied to the current versions of the Gemfile.lock and package.json files. When one of these files changes, a new cache key is computed and a new cache is created. Any future job runs that use the same Gemfile.lock and package.json with cache:key:files use the new cache, instead of rebuilding the dependencies.

Additional details:

  • The cache key is a SHA computed from the most recent commits that changed each listed file. If neither file is changed in any commits, the fallback key is default.

cache:key:prefix

Use cache:key:prefix to combine a prefix with the SHA computed for cache:key:files.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs: A string to add to the beginning of the cache key. CI/CD variables are supported.

Example of cache:key:prefix:

rspec:
  script:
    - echo "This rspec job uses a cache."
  cache:
    key:
      files:
        - Gemfile.lock
      prefix: $CI_JOB_NAME
    paths:
      - vendor/ruby

For example, adding a prefix of $CI_JOB_NAME causes the key to look like rspec-feef9576d21ee9b6a32e30c5c79d0a0ceb68d1e5. If a branch changes Gemfile.lock, that branch has a new SHA checksum for cache:key:files. A new cache key is generated, and a new cache is created for that key. If Gemfile.lock is not found, the prefix is added to default, so the key in the example would be rspec-default.

Additional details:

  • If no file in cache:key:files is changed in any commits, the prefix is added to the default key.

cache:untracked

Use untracked: true to cache all files that are untracked in your Git repository. Untracked files include files that are:

  • Ignored because of the .gitignore configuration.
  • Created, but not added to the checkout with git add.

Caching untracked files can create unexpectedly large caches if the job downloads:

  • Dependencies, like gems or node modules, which are usually untracked.
  • Artifacts from a different job. Files extracted from the artifacts are untracked by default.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • true or false (default).

Example of cache:untracked:

rspec:
  script: test
  cache:
    untracked: true

Additional details:

  • You can combine cache:untracked with cache:paths to cache all untracked files, as well as files in the configured paths. Use cache:paths to cache any specific files, including tracked files, or files that are outside of the working directory, and use cache:untracked to also cache all untracked files. For example:

    rspec:
      script: test
      cache:
        untracked: true
        paths:
          - binaries/
    

    In this example, the job caches all untracked files in the repository, as well as all the files in binaries/. If there are untracked files in binaries/, they are covered by both keywords.

cache:unprotect

Use cache:unprotect to set a cache to be shared between protected and unprotected branches.

Caution: When set to true, users without access to protected branches can read and write to cache keys used by protected branches.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • true or false (default).

Example of cache:unprotect:

rspec:
  script: test
  cache:
    unprotect: true

cache:when

Use cache:when to define when to save the cache, based on the status of the job.

Must be used with cache:paths, or nothing is cached.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • on_success (default): Save the cache only when the job succeeds.
  • on_failure: Save the cache only when the job fails.
  • always: Always save the cache.

Example of cache:when:

rspec:
  script: rspec
  cache:
    paths:
      - rspec/
    when: 'always'

This example stores the cache whether the job fails or succeeds.

cache:policy

To change the upload and download behavior of a cache, use the cache:policy keyword. By default, the job downloads the cache when the job starts, and uploads changes to the cache when the job ends. This caching style is the pull-push policy (default).

To set a job to only download the cache when the job starts, but never upload changes when the job finishes, use cache:policy:pull.

To set a job to only upload a cache when the job finishes, but never download the cache when the job starts, use cache:policy:push.

Use the pull policy when you have many jobs executing in parallel that use the same cache. This policy speeds up job execution and reduces load on the cache server. You can use a job with the push policy to build the cache.

Must be used with cache:paths, or nothing is cached.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs: pull, push, or pull-push (default).

Example of cache:policy:

prepare-dependencies-job:
  stage: build
  cache:
    key: gems
    paths:
      - vendor/bundle
    policy: push
  script:
    - echo "This job only downloads dependencies and builds the cache."
    - echo "Downloading dependencies..."

faster-test-job:
  stage: test
  cache:
    key: gems
    paths:
      - vendor/bundle
    policy: pull
  script:
    - echo "This job script uses the cache, but does not update it."
    - echo "Running tests..."

cache:fallback_keys

Use cache:fallback_keys to specify a list of keys to try to restore cache from if there is no cache found for the cache:key. Caches are retrieved in the order specified in the fallback_keys section.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • An array of cache keys

Example of cache:fallback_keys:

rspec:
  script: rspec
  cache:
    key: gems-$CI_COMMIT_REF_SLUG
    paths:
      - rspec/
    fallback_keys:
      - gems
    when: 'always'

coverage

Use coverage with a custom regular expression to configure how code coverage is extracted from the job output. The coverage is shown in the UI if at least one line in the job output matches the regular expression.

To extract the code coverage value from the match, GitLab uses this smaller regular expression: \d+(?:\.\d+)?.

Possible inputs:

  • An RE2 regular expression. Must start and end with /. Must match the coverage number. May match surrounding text as well, so you don’t need to use a regular expression character group to capture the exact number. Because it uses RE2 syntax, all groups must be non-capturing.

Example of coverage:

job1:
  script: rspec
  coverage: '/Code coverage: \d+(?:\.\d+)?/'

In this example:

  1. GitLab checks the job log for a match with the regular expression. A line like Code coverage: 67.89% of lines covered would match.
  2. GitLab then checks the matched fragment to find a match to \d+(?:\.\d+)?. The sample matching line above gives a code coverage of 67.89.

Additional details:

  • You can find regex examples in Code Coverage.
  • If there is more than one matched line in the job output, the last line is used (the first result of reverse search).
  • If there are multiple matches in a single line, the last match is searched for the coverage number.
  • If there are multiple coverage numbers found in the matched fragment, the first number is used.
  • Leading zeros are removed.
  • Coverage output from child pipelines is not recorded or displayed. Check the related issue for more details.

dast_configuration

Tier: Ultimate Offering: GitLab.com, Self-managed, GitLab Dedicated

Use the dast_configuration keyword to specify a site profile and scanner profile to be used in a CI/CD configuration. Both profiles must first have been created in the project. The job’s stage must be dast.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs: One each of site_profile and scanner_profile.

  • Use site_profile to specify the site profile to be used in the job.
  • Use scanner_profile to specify the scanner profile to be used in the job.

Example of dast_configuration:

stages:
  - build
  - dast

include:
  - template: DAST.gitlab-ci.yml

dast:
  dast_configuration:
    site_profile: "Example Co"
    scanner_profile: "Quick Passive Test"

In this example, the dast job extends the dast configuration added with the include keyword to select a specific site profile and scanner profile.

Additional details:

  • Settings contained in either a site profile or scanner profile take precedence over those contained in the DAST template.

dependencies

Use the dependencies keyword to define a list of specific jobs to fetch artifacts from. The specified jobs must all be in earlier stages. You can also set a job to download no artifacts at all.

When dependencies is not defined in a job, all jobs in earlier stages are considered dependent and the job fetches all artifacts from those jobs.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • The names of jobs to fetch artifacts from.
  • An empty array ([]), to configure the job to not download any artifacts.

Example of dependencies:

build osx:
  stage: build
  script: make build:osx
  artifacts:
    paths:
      - binaries/

build linux:
  stage: build
  script: make build:linux
  artifacts:
    paths:
      - binaries/

test osx:
  stage: test
  script: make test:osx
  dependencies:
    - build osx

test linux:
  stage: test
  script: make test:linux
  dependencies:
    - build linux

deploy:
  stage: deploy
  script: make deploy
  environment: production

In this example, two jobs have artifacts: build osx and build linux. When test osx is executed, the artifacts from build osx are downloaded and extracted in the context of the build. The same thing happens for test linux and artifacts from build linux.

The deploy job downloads artifacts from all previous jobs because of the stage precedence.
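
In contrast, a job that needs no artifacts at all can set dependencies to an empty array. A minimal sketch:

lint:
  stage: test
  script: make lint
  dependencies: []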

Additional details:

  • The job status does not matter. If a job fails or it’s a manual job that isn’t triggered, no error occurs.
  • If the artifacts of a dependent job are expired or deleted, then the job fails.
  • To fetch artifacts from a job in the same stage, you must use needs:artifacts. You should not combine dependencies with needs in the same job.

environment

Use environment to define the environment that a job deploys to.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs: The name of the environment the job deploys to, in one of these formats:

  • Plain text, including letters, digits, spaces, and these characters: -, _, /, $, {, }.
  • CI/CD variables, including predefined, project, group, instance, or variables defined in the .gitlab-ci.yml file. You can’t use variables defined in a script section.

Example of environment:

deploy to production:
  stage: deploy
  script: git push production HEAD:main
  environment: production

Additional details:

  • If you specify an environment and no environment with that name exists, an environment is created.

environment:name

Set a name for an environment.

Common environment names are qa, staging, and production, but you can use any name.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs: The name of the environment the job deploys to, in one of these formats:

  • Plain text, including letters, digits, spaces, and these characters: -, _, /, $, {, }.
  • CI/CD variables, including predefined, project, group, instance, or variables defined in the .gitlab-ci.yml file. You can’t use variables defined in a script section.

Example of environment:name:

deploy to production:
  stage: deploy
  script: git push production HEAD:main
  environment:
    name: production

environment:url

Set a URL for an environment.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs: A single URL, in one of these formats:

  • Plain text, like https://prod.example.com.
  • CI/CD variables, including predefined, project, group, instance, or variables defined in the .gitlab-ci.yml file. You can’t use variables defined in a script section.

Example of environment:url:

deploy to production:
  stage: deploy
  script: git push production HEAD:main
  environment:
    name: production
    url: https://prod.example.com

Additional details:

  • After the job completes, you can access the URL by selecting a button in the merge request, environment, or deployment pages.

environment:on_stop

Use the on_stop keyword, defined under environment, to close (stop) an environment. It declares a different job that runs to close the environment.

Keyword type: Job keyword. You can use it only as part of a job.
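
Example of environment:on_stop (a sketch assuming a review app deployment; the job names and commands are illustrative):

deploy_review:
  stage: deploy
  script: make deploy-app
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    on_stop: stop_review  # the job that closes this environment

stop_review:
  stage: deploy
  script: make delete-app  # illustrative cleanup command
  when: manual
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop

In this sketch, stop_review runs when the review/$CI_COMMIT_REF_SLUG environment is stopped, either manually or automatically.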

environment:action

Use the action keyword to specify how the job interacts with the environment.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs: One of the following keywords:

Value Description
start Default value. Indicates that the job starts the environment. The deployment is created after the job starts.
prepare Indicates that the job is only preparing the environment. It does not trigger deployments. Read more about preparing environments.
stop Indicates that the job stops an environment. Read more about stopping an environment.
verify Indicates that the job is only verifying the environment. It does not trigger deployments. Read more about verifying environments.
access Indicates that the job is only accessing the environment. It does not trigger deployments. Read more about accessing environments.

Example of environment:action:

stop_review_app:
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script: make delete-app
  when: manual
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop

environment:auto_stop_in

History
  • CI/CD variable support introduced in GitLab 15.4.
  • Updated to support prepare, access, and verify environment actions in GitLab 17.7.

The auto_stop_in keyword specifies the lifetime of the environment. When an environment expires, GitLab automatically stops it.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs: A period of time written in natural language. For example, these are all equivalent:

  • 168 hours
  • 7 days
  • one week
  • never

CI/CD variables are supported.

Example of environment:auto_stop_in:

review_app:
  script: deploy-review-app
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    auto_stop_in: 1 day

When the environment for review_app is created, the environment’s lifetime is set to 1 day. Every time the review app is deployed, that lifetime is also reset to 1 day.

The auto_stop_in keyword can be used for all environment actions except stop. Some actions can be used to reset the scheduled stop time for the environment. For more information, see Access an environment for preparation or verification purposes.
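
For example, a sketch that pins a long-lived environment so GitLab never stops it automatically (the job name and script are illustrative):

production_app:
  script: deploy-production-app  # illustrative deployment command
  environment:
    name: production
    auto_stop_in: never  # this environment never expires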

environment:kubernetes

History
  • agent keyword introduced in GitLab 17.6.
  • namespace and flux_resource_path keywords introduced in GitLab 17.7.

Use the kubernetes keyword to configure the dashboard for Kubernetes for an environment.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • agent: A string specifying the GitLab agent for Kubernetes. The format is path/to/agent/project:agent-name.
  • namespace: A string representing the Kubernetes namespace. It needs to be set together with the agent keyword.
  • flux_resource_path: A string representing the path to the Flux resource. This must be the full resource path. It needs to be set together with the agent and namespace keywords.

Example of environment:kubernetes:

deploy:
  stage: deploy
  script: make deploy-app
  environment:
    name: production
    kubernetes:
      agent: path/to/agent/project:agent-name
      namespace: my-namespace
      flux_resource_path: helm.toolkit.fluxcd.io/v2/namespaces/gitlab-agent/helmreleases/gitlab-agent

This configuration sets up the deploy job to deploy to the production environment, associates the agent named agent-name with the environment, and configures the dashboard for Kubernetes for an environment with the namespace my-namespace and the flux_resource_path set to helm.toolkit.fluxcd.io/v2/namespaces/gitlab-agent/helmreleases/gitlab-agent.

Additional details:

  • To use the dashboard, you must install the GitLab agent for Kubernetes and configure user_access for the environment’s project or its parent group.
  • The user running the job must be authorized to access the cluster agent. Otherwise, the agent, namespace, and flux_resource_path attributes are ignored.

environment:deployment_tier

Use the deployment_tier keyword to specify the tier of the deployment environment.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs: One of the following:

  • production
  • staging
  • testing
  • development
  • other

Example of environment:deployment_tier:

deploy:
  script: echo
  environment:
    name: customer-portal
    deployment_tier: production

Additional details:

  • Environments created from this job definition are assigned a tier based on this value.
  • Existing environments don’t have their tier updated if this value is added later. To update the tier of an existing environment, use the Environments API.

Dynamic environments

Use CI/CD variables to dynamically name environments.

For example:

deploy as review app:
  stage: deploy
  script: make deploy
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_ENVIRONMENT_SLUG.example.com/

The deploy as review app job is marked as a deployment to dynamically create the review/$CI_COMMIT_REF_SLUG environment. $CI_COMMIT_REF_SLUG is a CI/CD variable set by the runner. The $CI_ENVIRONMENT_SLUG variable is based on the environment name, but suitable for inclusion in URLs. If the deploy as review app job runs in a branch named pow, this environment would be accessible with a URL like https://review-pow.example.com/.

The common use case is to create dynamic environments for branches and use them as review apps. You can see an example that uses review apps at https://gitlab.com/gitlab-examples/review-apps-nginx/.

extends

Use extends to reuse configuration sections. It’s an alternative to YAML anchors and is a little more flexible and readable.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • The name of another job in the pipeline.
  • A list (array) of names of other jobs in the pipeline.

Example of extends:

.tests:
  stage: test
  image: ruby:3.0

rspec:
  extends: .tests
  script: rake rspec

rubocop:
  extends: .tests
  script: bundle exec rubocop

In this example, the rspec job uses the configuration from the .tests template job. When creating the pipeline, GitLab:

  • Performs a reverse deep merge based on the keys.
  • Merges the .tests content with the rspec job.
  • Doesn’t merge the values of the keys.

The combined configuration is equivalent to these jobs:

rspec:
  stage: test
  image: ruby:3.0
  script: rake rspec

rubocop:
  stage: test
  image: ruby:3.0
  script: bundle exec rubocop

Additional details:

  • You can use multiple parents for extends (see the sketch after this list).
  • The extends keyword supports up to eleven levels of inheritance, but you should avoid using more than three levels.
  • In the example above, .tests is a hidden job, but you can extend configuration from regular jobs as well.
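
A minimal sketch of multiple parents (the hidden jobs are illustrative):

.tests:
  stage: test

.ruby-image:
  image: ruby:3.0

rspec:
  extends:
    - .tests       # inherits stage: test
    - .ruby-image  # inherits image: ruby:3.0
  script: rake rspec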

hooks

Use hooks to specify lists of commands to execute on the runner at certain stages of job execution, like before retrieving the Git repository.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • A hash of hooks and their commands. Available hooks: pre_get_sources_script.

hooks:pre_get_sources_script

Use hooks:pre_get_sources_script to specify a list of commands to execute on the runner before cloning the Git repository and any submodules. You can use it, for example, to adjust the Git client configuration or export tracing variables before the repository is fetched.

Possible inputs: An array of commands.

CI/CD variables are supported.

Example of hooks:pre_get_sources_script:

job1:
  hooks:
    pre_get_sources_script:
      - echo 'hello job1 pre_get_sources_script'
  script: echo 'hello job1 script'

identity

Tier: Free, Premium, Ultimate Offering: GitLab.com Status: Beta

This feature is in beta.

Use identity to authenticate with third party services using identity federation.

Keyword type: Job keyword. You can use it only as part of a job or in the default: section.

Possible inputs: An identifier. The supported provider is google_cloud (Google Cloud).

Example of identity:

job_with_workload_identity:
  identity: google_cloud
  script:
    - gcloud compute instances list

id_tokens

Use id_tokens to create JSON web tokens (JWT) to authenticate with third party services. All JWTs created this way support OIDC authentication. The required aud sub-keyword is used to configure the aud claim for the JWT.

Possible inputs:

  • Token names with their aud claims. aud supports a single URL, an array of URLs, and CI/CD variables.

Example of id_tokens:

job_with_id_tokens:
  id_tokens:
    ID_TOKEN_1:
      aud: https://vault.example.com
    ID_TOKEN_2:
      aud:
        - https://gcp.com
        - https://aws.com
    SIGSTORE_ID_TOKEN:
      aud: sigstore
  script:
    - command_to_authenticate_with_vault $ID_TOKEN_1
    - command_to_authenticate_with_aws $ID_TOKEN_2
    - command_to_authenticate_with_gcp $ID_TOKEN_2

image

Use image to specify a Docker image that the job runs in.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs: The name of the image, including the registry path if needed, in one of these formats:

  • <image-name> (Same as using <image-name> with the latest tag)
  • <image-name>:<tag>
  • <image-name>@<digest>

CI/CD variables are supported.

Example of image:

default:
  image: ruby:3.0

rspec:
  script: bundle exec rspec

rspec 2.7:
  image: registry.example.com/my-group/my-project/ruby:2.7
  script: bundle exec rspec

In this example, the ruby:3.0 image is the default for all jobs in the pipeline. The rspec 2.7 job does not use the default, because it overrides the default with a job-specific image section.

image:name

The name of the Docker image that the job runs in. Similar to image used by itself.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs: The name of the image, including the registry path if needed, in one of these formats:

  • <image-name> (Same as using <image-name> with the latest tag)
  • <image-name>:<tag>
  • <image-name>@<digest>

CI/CD variables are supported.

Example of image:name:

test-job:
  image:
    name: "registry.example.com/my/image:latest"
  script: echo "Hello world"

image:entrypoint

Command or script to execute as the container’s entry point.

When the Docker container is created, the entrypoint is translated to the Docker --entrypoint option. The syntax is similar to the Dockerfile ENTRYPOINT directive, where each shell token is a separate string in the array.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • An array of strings, where each shell token is a separate element.

Example of image:entrypoint:

test-job:
  image:
    name: super/sql:experimental
    entrypoint: [""]
  script: echo "Hello world"

image:docker

History
  • Introduced in GitLab 16.7. Requires GitLab Runner 16.7 or later.
  • user input option introduced in GitLab 16.8.

Use image:docker to pass options to the Docker executor of a GitLab Runner.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

A hash of options for the Docker executor, which can include:

  • platform: Selects the architecture of the image to pull. When not specified, the default is the same platform as the host runner.
  • user: Specify the username or UID to use when running the container.

Example of image:docker:

arm-sql-job:
  script: echo "Run sql tests"
  image:
    name: super/sql:experimental
    docker:
      platform: arm64/v8
      user: dave

image:pull_policy

The pull policy that the runner uses to fetch the Docker image.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • A single pull policy, or multiple pull policies in an array. Can be always, if-not-present, or never.

Examples of image:pull_policy:

job1:
  script: echo "A single pull policy."
  image:
    name: ruby:3.0
    pull_policy: if-not-present

job2:
  script: echo "Multiple pull policies."
  image:
    name: ruby:3.0
    pull_policy: [always, if-not-present]

Additional details:

  • If the runner does not support the defined pull policy, the job fails with an error similar to: ERROR: Job failed (system failure): the configured PullPolicies ([always]) are not allowed by AllowedPullPolicies ([never]).

inherit

Use inherit to control inheritance of default keywords and variables.

inherit:default

Use inherit:default to control the inheritance of default keywords.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • true (default) or false to enable or disable the inheritance of all default keywords.
  • A list of specific default keywords to inherit.

Example of inherit:default:

default:
  retry: 2
  image: ruby:3.0
  interruptible: true

job1:
  script: echo "This job does not inherit any default keywords."
  inherit:
    default: false

job2:
  script: echo "This job inherits only the two listed default keywords. It does not inherit 'interruptible'."
  inherit:
    default:
      - retry
      - image

Additional details:

  • You can also list default keywords to inherit on one line: default: [keyword1, keyword2]

inherit:variables

Use inherit:variables to control the inheritance of global variables keywords.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • true (default) or false to enable or disable the inheritance of all global variables.
  • A list of specific variables to inherit.

Example of inherit:variables:

variables:
  VARIABLE1: "This is variable 1"
  VARIABLE2: "This is variable 2"
  VARIABLE3: "This is variable 3"

job1:
  script: echo "This job does not inherit any global variables."
  inherit:
    variables: false

job2:
  script: echo "This job inherits only the two listed global variables. It does not inherit 'VARIABLE3'."
  inherit:
    variables:
      - VARIABLE1
      - VARIABLE2

Additional details:

  • You can also list global variables to inherit on one line: variables: [VARIABLE1, VARIABLE2]

interruptible

History
  • Support for trigger jobs introduced in GitLab 16.8.

Use interruptible to configure the auto-cancel redundant pipelines feature to cancel a job before it completes if a new pipeline on the same ref starts for a newer commit. If the feature is disabled, the keyword has no effect. The new pipeline must be for a commit with new changes. For example, the Auto-cancel redundant pipelines feature has no effect if you select New pipeline in the UI to run a pipeline for the same commit.

The behavior of the Auto-cancel redundant pipelines feature can be controlled by the workflow:auto_cancel:on_new_commit setting.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • true or false (default).

Example of interruptible with the default behavior:

workflow:
  auto_cancel:
    on_new_commit: conservative # the default behavior

stages:
  - stage1
  - stage2
  - stage3

step-1:
  stage: stage1
  script:
    - echo "Can be canceled."
  interruptible: true

step-2:
  stage: stage2
  script:
    - echo "Can not be canceled."

step-3:
  stage: stage3
  script:
    - echo "Because step-2 can not be canceled, this step can never be canceled, even though it's set as interruptible."
  interruptible: true

In this example, a new pipeline causes a running pipeline to be:

  • Canceled, if only step-1 is running or pending.
  • Not canceled, after step-2 starts.

Example of interruptible with the auto_cancel:on_new_commit:interruptible setting:

workflow:
  auto_cancel:
    on_new_commit: interruptible

stages:
  - stage1
  - stage2
  - stage3

step-1:
  stage: stage1
  script:
    - echo "Can be canceled."
  interruptible: true

step-2:
  stage: stage2
  script:
    - echo "Can not be canceled."

step-3:
  stage: stage3
  script:
    - echo "Can be canceled."
  interruptible: true

In this example, a new pipeline causes a running pipeline to cancel step-1 and step-3 if they are running or pending.

Additional details:

  • Only set interruptible: true if the job can be safely canceled after it has started, like a build job. Deployment jobs usually shouldn’t be canceled, to prevent partial deployments.
  • When using the default behavior or workflow:auto_cancel:on_new_commit: conservative:
    • A job that has not started yet is always considered interruptible: true, regardless of the job’s configuration. The interruptible configuration is only considered after the job starts.
    • Running pipelines are only canceled if all running jobs are configured with interruptible: true or no jobs configured with interruptible: false have started at any time. After a job with interruptible: false starts, the entire pipeline is no longer considered interruptible.
    • If the pipeline triggered a downstream pipeline, but no job with interruptible: false in the downstream pipeline has started yet, the downstream pipeline is also canceled.
  • You can add an optional manual job with interruptible: false in the first stage of a pipeline to allow users to manually prevent a pipeline from being automatically canceled. After a user starts the job, the pipeline cannot be canceled by the Auto-cancel redundant pipelines feature (see the sketch after this list).
  • When using interruptible with a trigger job:
    • The triggered downstream pipeline is never affected by the trigger job’s interruptible configuration.
    • If workflow:auto_cancel is set to conservative, the trigger job’s interruptible configuration has no effect.
    • If workflow:auto_cancel is set to interruptible, a trigger job with interruptible: true can be automatically canceled.
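
A sketch of the optional manual blocker job described in the list above (the job names and commands are illustrative):

stages:
  - prepare
  - build

do-not-cancel:
  stage: prepare
  when: manual          # optional manual job; the pipeline does not wait for it
  interruptible: false  # once a user starts it, the pipeline can no longer be auto-canceled
  script: echo "This pipeline is now protected from auto-cancellation."

build-job:
  stage: build
  interruptible: true
  script: make build    # illustrative build command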

needs

Use needs to execute jobs out-of-order. Relationships between jobs that use needs can be visualized as a directed acyclic graph.

You can ignore stage ordering and run some jobs without waiting for others to complete. Jobs in multiple stages can run concurrently.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • An array of jobs (maximum of 50 jobs).
  • An empty array ([]), to set the job to start as soon as the pipeline is created.

Example of needs:

linux:build:
  stage: build
  script: echo "Building linux..."

mac:build:
  stage: build
  script: echo "Building mac..."

lint:
  stage: test
  needs: []
  script: echo "Linting..."

linux:rspec:
  stage: test
  needs: ["linux:build"]
  script: echo "Running rspec on linux..."

mac:rspec:
  stage: test
  needs: ["mac:build"]
  script: echo "Running rspec on mac..."

production:
  stage: deploy
  script: echo "Running production..."
  environment: production

This example creates four paths of execution:

  • Linter: The lint job runs immediately without waiting for the build stage to complete because it has no needs (needs: []).
  • Linux path: The linux:rspec job runs as soon as the linux:build job finishes, without waiting for mac:build to finish.
  • macOS path: The mac:rspec job runs as soon as the mac:build job finishes, without waiting for linux:build to finish.
  • The production job runs as soon as all previous jobs finish: lint, linux:build, linux:rspec, mac:build, mac:rspec.

Additional details:

  • The maximum number of jobs that a single job can have in the needs array is limited:
    • For GitLab.com, the limit is 50. For more information, see issue 350398.
    • For self-managed instances, the default limit is 50. This limit can be changed.
  • If needs refers to a job that uses the parallel keyword, it depends on all jobs created in parallel, not just one job. It also downloads artifacts from all the parallel jobs by default. If the artifacts have the same name, they overwrite each other and only the last one downloaded is saved.
    • To have needs refer to a subset of parallelized jobs (and not all of the parallelized jobs), use the needs:parallel:matrix keyword.
  • You can refer to jobs in the same stage as the job you are configuring.
  • If needs refers to a job that might not be added to a pipeline because of only, except, or rules, the pipeline might fail to create. Use the needs:optional keyword to resolve a failed pipeline creation.
  • If a pipeline has jobs with needs: [] and jobs in the .pre stage, they all start as soon as the pipeline is created.

needs:artifacts

When a job uses needs, it no longer downloads all artifacts from previous stages by default, because jobs with needs can start before earlier stages complete. With needs you can only download artifacts from the jobs listed in the needs configuration.

Use artifacts: true (default) or artifacts: false to control when artifacts are downloaded in jobs that use needs.

Keyword type: Job keyword. You can use it only as part of a job. Must be used with needs:job.

Possible inputs:

  • true (default) or false.

Example of needs:artifacts:

test-job1:
  stage: test
  needs:
    - job: build_job1
      artifacts: true

test-job2:
  stage: test
  needs:
    - job: build_job2
      artifacts: false

test-job3:
  needs:
    - job: build_job1
      artifacts: true
    - job: build_job2
    - build_job3

In this example:

  • The test-job1 job downloads the build_job1 artifacts.
  • The test-job2 job does not download the build_job2 artifacts.
  • The test-job3 job downloads the artifacts from all three build jobs, because artifacts is true, or defaults to true, for all three needed jobs.

Additional details:

  • You should not combine needs with dependencies in the same job.

needs:project

Tier: Premium, Ultimate Offering: GitLab.com, Self-managed, GitLab Dedicated

Use needs:project to download artifacts from up to five jobs in other pipelines. The artifacts are downloaded from the latest successful specified job for the specified ref. To specify multiple jobs, add each as separate array items under the needs keyword.

If there is a pipeline running for the ref, a job with needs:project does not wait for the pipeline to complete. Instead, the artifacts are downloaded from the latest successful run of the specified job.

needs:project must be used with job, ref, and artifacts.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • needs:project: A full project path, including namespace and group.
  • job: The job to download artifacts from.
  • ref: The ref to download artifacts from.
  • artifacts: Must be true to download artifacts.

Examples of needs:project:

build_job:
  stage: build
  script:
    - ls -lhR
  needs:
    - project: namespace/group/project-name
      job: build-1
      ref: main
      artifacts: true
    - project: namespace/group/project-name-2
      job: build-2
      ref: main
      artifacts: true

In this example, build_job downloads the artifacts from the latest successful build-1 and build-2 jobs on the main branches in the namespace/group/project-name and namespace/group/project-name-2 projects.

You can use CI/CD variables in needs:project, for example:

build_job:
  stage: build
  script:
    - ls -lhR
  needs:
    - project: $CI_PROJECT_PATH
      job: $DEPENDENCY_JOB_NAME
      ref: $ARTIFACTS_DOWNLOAD_REF
      artifacts: true

Additional details:

  • To download artifacts from a different pipeline in the current project, set project to be the same as the current project, but use a different ref than the current pipeline. Concurrent pipelines running on the same ref could override the artifacts.
  • The user running the pipeline must have at least the Reporter role for the group or project, or the group/project must have public visibility.
  • You can’t use needs:project in the same job as trigger.
  • When using needs:project to download artifacts from another pipeline, the job does not wait for the needed job to complete. Using needs to wait for jobs to complete is limited to jobs in the same pipeline. Make sure that the needed job in the other pipeline completes before the job that needs it tries to download the artifacts.
  • You can’t download artifacts from jobs that run in parallel.
  • CI/CD variables are supported in project, job, and ref.

needs:pipeline:job

A child pipeline can download artifacts from a job in its parent pipeline or another child pipeline in the same parent-child pipeline hierarchy.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • needs:pipeline: A pipeline ID. Must be a pipeline present in the same parent-child pipeline hierarchy.
  • job: The job to download artifacts from.

Example of needs:pipeline:job:

  • Parent pipeline (.gitlab-ci.yml):

    create-artifact:
      stage: build
      script: echo "sample artifact" > artifact.txt
      artifacts:
        paths: [artifact.txt]
    
    child-pipeline:
      stage: test
      trigger:
        include: child.yml
        strategy: depend
      variables:
        PARENT_PIPELINE_ID: $CI_PIPELINE_ID
    
  • Child pipeline (child.yml):

    use-artifact:
      script: cat artifact.txt
      needs:
        - pipeline: $PARENT_PIPELINE_ID
          job: create-artifact
    

In this example, the create-artifact job in the parent pipeline creates some artifacts. The child-pipeline job triggers a child pipeline, and passes the CI_PIPELINE_ID variable to the child pipeline as a new PARENT_PIPELINE_ID variable. The child pipeline can use that variable in needs:pipeline to download artifacts from the parent pipeline.

Additional details:

  • The pipeline attribute does not accept the current pipeline ID ($CI_PIPELINE_ID). To download artifacts from a job in the current pipeline, use needs:artifacts.
  • You cannot use needs:pipeline:job in a trigger job, or to fetch artifacts from a multi-project pipeline. To fetch artifacts from a multi-project pipeline use needs:project.

needs:optional

To need a job that sometimes does not exist in the pipeline, add optional: true to the needs configuration. If not defined, optional: false is the default.

Jobs that use rules, only, or except and that are added with include might not always be added to a pipeline. GitLab checks the needs relationships before starting a pipeline:

  • If the needs entry has optional: true and the needed job is present in the pipeline, the job waits for it to complete before starting.
  • If the needed job is not present, the job can start when all other needs requirements are met.
  • If the needs section contains only optional jobs, and none are added to the pipeline, the job starts immediately (the same as an empty needs entry: needs: []).
  • If a needed job has optional: false, but it was not added to the pipeline, the pipeline fails to start with an error similar to: 'job1' job needs 'job2' job, but it was not added to the pipeline.

Keyword type: Job keyword. You can use it only as part of a job.

Example of needs:optional:

build-job:
  stage: build

test-job1:
  stage: test

test-job2:
  stage: test
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

deploy-job:
  stage: deploy
  needs:
    - job: test-job2
      optional: true
    - job: test-job1
  environment: production

review-job:
  stage: deploy
  needs:
    - job: test-job2
      optional: true
  environment: review

In this example:

  • build-job, test-job1, and test-job2 start in stage order.
  • When the branch is the default branch, test-job2 is added to the pipeline, so:
    • deploy-job waits for both test-job1 and test-job2 to complete.
    • review-job waits for test-job2 to complete.
  • When the branch is not the default branch, test-job2 is not added to the pipeline, so:
    • deploy-job waits for only test-job1 to complete, and does not wait for the missing test-job2.
    • review-job has no other needed jobs and starts immediately (at the same time as build-job), like needs: [].

needs:pipeline

You can mirror the pipeline status from an upstream pipeline to a job by using the needs:pipeline keyword. The latest pipeline status from the default branch is replicated to the job.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • A full project path, including namespace and group. If the project is in the same group or namespace, you can omit them from the pipeline keyword. For example: pipeline: group/project-name or pipeline: project-name.

Example of needs:pipeline:

upstream_status:
  stage: test
  needs:
    pipeline: other/project

Additional details:

  • If you add the job keyword to needs:pipeline, the job no longer mirrors the pipeline status. The behavior changes to needs:pipeline:job.

needs:parallel:matrix

Jobs can use parallel:matrix to run a job multiple times in parallel in a single pipeline, but with different variable values for each instance of the job.

Use needs:parallel:matrix to execute jobs out-of-order depending on parallelized jobs.

Keyword type: Job keyword. You can use it only as part of a job. Must be used with needs:job.

Possible inputs: An array of hashes of variables:

  • The variables and values must be selected from the variables and values defined in the parallel:matrix job.

Example of needs:parallel:matrix:

linux:build:
  stage: build
  script: echo "Building linux..."
  parallel:
    matrix:
      - PROVIDER: aws
        STACK:
          - monitoring
          - app1
          - app2

linux:rspec:
  stage: test
  needs:
    - job: linux:build
      parallel:
        matrix:
          - PROVIDER: aws
            STACK: app1
  script: echo "Running rspec on linux..."

The above example generates the following jobs:

linux:build: [aws, monitoring]
linux:build: [aws, app1]
linux:build: [aws, app2]
linux:rspec

The linux:rspec job runs as soon as the linux:build: [aws, app1] job finishes.

Additional details:

  • The order of the matrix variables in needs:parallel:matrix must match the order of the matrix variables in the needed job. For example, reversing the order of the variables in the linux:rspec job from the example above would be invalid:

    linux:rspec:
      stage: test
      needs:
        - job: linux:build
          parallel:
            matrix:
              - STACK: app1        # The variable order does not match `linux:build` and is invalid.
                PROVIDER: aws
      script: echo "Running rspec on linux..."
    

pages

Use pages to define a GitLab Pages job that uploads static content to GitLab. The content is then published as a website.

You must:

  • Define artifacts with a path to the content directory, which is public by default.
  • Use publish if you want to use a different content directory.

Keyword type: Job name.

Example of pages:

pages:
  stage: deploy
  script:
    - mv my-html-content public
  artifacts:
    paths:
      - public
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  environment: production

This example renames the my-html-content/ directory to public/. This directory is exported as an artifact and published with GitLab Pages.

pages:publish

Use publish to configure the content directory of a pages job.

Keyword type: Job keyword. You can use it only as part of a pages job.

Possible inputs: A path to a directory containing the Pages content.

Example of publish:

pages:
  stage: deploy
  script:
    - npx @11ty/eleventy --input=path/to/eleventy/root --output=dist
  artifacts:
    paths:
      - dist
  publish: dist
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  environment: production

This example uses Eleventy to generate a static website and outputs the generated HTML files into the dist/ directory. This directory is exported as an artifact and published with GitLab Pages.

pages:pages.path_prefix

Tier: Premium, Ultimate Offering: GitLab.com, Self-managed, GitLab Dedicated Status: Beta
History
The availability of this feature is controlled by a feature flag. For more information, see the history. This feature is available for testing, but not ready for production use.

Use pages.path_prefix to configure a path prefix for parallel deployments of GitLab Pages.

Keyword type: Job keyword. You can use it only as part of a pages job.

Possible inputs: A string, a CI/CD variable, or a combination of both. The given value is converted to lowercase, shortened to 63 bytes, and everything except alphanumeric characters is replaced with a hyphen. Leading and trailing hyphens are not permitted.

Example of pages.path_prefix:

pages:
  stage: deploy
  script:
    - echo "Pages accessible through ${CI_PAGES_URL}/${CI_COMMIT_BRANCH}"
  pages:
    path_prefix: "$CI_COMMIT_BRANCH"
  artifacts:
    paths:
    - public

In this example, a different pages deployment is created for each branch.

pages:pages.expire_in

Tier: Premium, Ultimate Offering: GitLab.com, Self-managed, GitLab Dedicated

Use expire_in to specify how long a deployment should be available before it expires. After the deployment is expired, it’s deactivated by a cron job running every 10 minutes.

Extra deployments expire by default. To prevent them from expiring, set the value to never.

Keyword type: Job keyword. You can use it only as part of a pages job.

Possible inputs: The expiry time. If no unit is provided, the time is in seconds. Valid values include:

  • '42'
  • 42 seconds
  • 3 mins 4 sec
  • 2 hrs 20 min
  • 2h20min
  • 6 mos 1 day
  • 47 yrs 6 mos and 4d
  • 3 weeks and 2 days
  • never

Example of pages:pages.expire_in:

pages:
  stage: deploy
  script:
    - echo "Pages accessible through ${CI_PAGES_URL}"
  pages:
    expire_in: 1 week
  artifacts:
    paths:
      - public

parallel

History
  • Introduced in GitLab 15.9, the maximum value for parallel is increased from 50 to 200.

Use parallel to run a job multiple times in parallel in a single pipeline.

Multiple runners must exist, or a single runner must be configured to run multiple jobs concurrently.

Parallel jobs are named sequentially from job_name 1/N to job_name N/N.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • A numeric value from 1 to 200.

Example of parallel:

test:
  script: rspec
  parallel: 5

This example creates 5 jobs that run in parallel, named test 1/5 to test 5/5.

Additional details:

  • Every parallel job has CI_NODE_INDEX and CI_NODE_TOTAL predefined CI/CD variables set (see the sketch after this list).
  • A pipeline with jobs that use parallel might:
    • Create more jobs running in parallel than available runners. Excess jobs are queued and marked pending while waiting for an available runner.
    • Create too many jobs, and the pipeline fails with a job_activity_limit_exceeded error. The maximum number of jobs that can exist in active pipelines is limited at the instance level.
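
For example, a sketch that uses these variables to split a test suite across the parallel jobs (split-tests is a hypothetical helper script):

test:
  parallel: 3
  script:
    - echo "Running node $CI_NODE_INDEX of $CI_NODE_TOTAL"
    # split-tests is hypothetical; it would select this node's share of the suite
    - bundle exec rspec $(./split-tests "$CI_NODE_INDEX" "$CI_NODE_TOTAL")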

parallel:matrix

History
  • Introduced in GitLab 15.9, the maximum number of permutations is increased from 50 to 200.

Use parallel:matrix to run a job multiple times in parallel in a single pipeline, but with different variable values for each instance of the job.

Multiple runners must exist, or a single runner must be configured to run multiple jobs concurrently.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs: An array of hashes of variables:

  • The variable names can use only numbers, letters, and underscores (_).
  • The values must be either a string, or an array of strings.
  • The number of permutations cannot exceed 200.

Example of parallel:matrix:

deploystacks:
  stage: deploy
  script:
    - bin/deploy
  parallel:
    matrix:
      - PROVIDER: aws
        STACK:
          - monitoring
          - app1
          - app2
      - PROVIDER: ovh
        STACK: [monitoring, backup, app]
      - PROVIDER: [gcp, vultr]
        STACK: [data, processing]
  environment: $PROVIDER/$STACK

The example generates 10 parallel deploystacks jobs, each with different values for PROVIDER and STACK:

deploystacks: [aws, monitoring]
deploystacks: [aws, app1]
deploystacks: [aws, app2]
deploystacks: [ovh, monitoring]
deploystacks: [ovh, backup]
deploystacks: [ovh, app]
deploystacks: [gcp, data]
deploystacks: [gcp, processing]
deploystacks: [vultr, data]
deploystacks: [vultr, processing]

Additional details:

  • parallel:matrix jobs add the variable values to the job names to differentiate the jobs from each other, but large values can cause names to exceed limits:
    • Job names must be 255 characters or fewer.
    • When using needs, job names must be 128 characters or fewer.
  • You cannot create multiple matrix configurations with the same variable values but different variable names. Job names are generated from the variable values, not the variable names, so matrix entries with identical values generate identical job names that overwrite each other.

    For example, this test configuration would try to create two series of identical jobs, but the OS2 versions overwrite the OS versions:

    test:
      parallel:
        matrix:
          - OS: [ubuntu]
            PROVIDER: [aws, gcp]
          - OS2: [ubuntu]
            PROVIDER: [aws, gcp]
    

release

Use release to create a release.

The release job must have access to the release-cli, which must be in the $PATH.

If you use the Docker executor, you can use this image from the GitLab container registry: registry.gitlab.com/gitlab-org/release-cli:latest

If you use the Shell executor or similar, install release-cli on the server where the runner is registered.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs: The release subkeys: tag_name, tag_message, name, description, ref, milestones, released_at, and assets:links.

Example of release keyword:

release_job:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    - if: $CI_COMMIT_TAG                  # Run this job when a tag is created manually
  script:
    - echo "Running the release job."
  release:
    tag_name: $CI_COMMIT_TAG
    name: 'Release $CI_COMMIT_TAG'
    description: 'Release created using the release-cli.'

This example creates a release:

  • When you push a Git tag.
  • When you add a Git tag in the UI at Code > Tags.

Additional details:

  • All release jobs, except trigger jobs, must include the script keyword. A release job can use the output from script commands. If you don’t need the script, you can use a placeholder:

    script:
      - echo "release job"
    

    An issue exists to remove this requirement.

  • The release section executes after the script keyword and before the after_script.
  • A release is created only if the job’s main script succeeds.
  • If the release already exists, it is not updated and the job with the release keyword fails.

release:tag_name

Required. The Git tag for the release.

If the tag does not exist in the project yet, it is created at the same time as the release. New tags use the SHA associated with the pipeline.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • A tag name.

CI/CD variables are supported.

Example of release:tag_name:

To create a release when a new tag is added to the project:

  • Use the $CI_COMMIT_TAG CI/CD variable as the tag_name.
  • Use rules:if to configure the job to run only for new tags.

job:
  script: echo "Running the release job for the new tag."
  release:
    tag_name: $CI_COMMIT_TAG
    description: 'Release description'
  rules:
    - if: $CI_COMMIT_TAG

To create a release and a new tag at the same time, your rules should not configure the job to run only for new tags. A semantic versioning example:

job:
  script: echo "Running the release job and creating a new tag."
  release:
    tag_name: ${MAJOR}_${MINOR}_${REVISION}
    description: 'Release description'
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"

release:tag_message

History
  • Introduced in GitLab 15.3. Supported by release-cli v0.12.0 or later.

If the tag does not exist, the newly created tag is annotated with the message specified by tag_message. If omitted, a lightweight tag is created.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • A text string.

Example of release:tag_message:

release_job:
  stage: release
  release:
    tag_name: $CI_COMMIT_TAG
    description: 'Release description'
    tag_message: 'Annotated tag message'

release:name

The release name. If omitted, it is populated with the value of release:tag_name.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • A text string.

Example of release:name:

release_job:
  stage: release
  release:
    name: 'Release $CI_COMMIT_TAG'

release:description

The long description of the release.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • A string with the long description.
  • The path to a file that contains the description.
    • The file location must be relative to the project directory ($CI_PROJECT_DIR).
    • If the file is a symbolic link, it must be in the $CI_PROJECT_DIR.
    • The ./path/to/file and filename can’t contain spaces.

Example of release:description:

job:
  release:
    tag_name: ${MAJOR}_${MINOR}_${REVISION}
    description: './path/to/CHANGELOG.md'

Additional details:

  • The description is evaluated by the shell that runs release-cli. You can use CI/CD variables to define the description, but some shells use different syntax to reference variables. Similarly, some shells might require special characters to be escaped. For example, backticks (`) might need to be escaped with a backslash (\).

release:ref

The ref for the release, if the release:tag_name doesn’t exist yet.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • A commit SHA, another tag name, or a branch name.
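
Example of release:ref (a sketch; the tag name is illustrative):

job:
  script: echo "Creating a release, and the tag if it doesn't exist."
  release:
    tag_name: v1.0.0-beta
    description: 'Release description'
    ref: $CI_COMMIT_SHA  # where the tag is created if v1.0.0-beta doesn't exist yet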

release:milestones

The title of each milestone the release is associated with.
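
Example of release:milestones (a sketch; the milestone titles are illustrative):

job:
  script: echo "Creating a release associated with milestones."
  release:
    tag_name: $CI_COMMIT_TAG
    description: 'Release description'
    milestones:
      - m1
      - m2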

release:released_at

The date and time when the release is ready.

Possible inputs:

  • A date enclosed in quotes and expressed in ISO 8601 format.

Example of release:released_at:

released_at: '2021-03-15T08:00:00Z'

Additional details:

  • If it is not defined, the current date and time is used.

release:assets:links

Use release:assets:links to include asset links in the release.

Requires release-cli version v0.4.0 or later.

Example of release:assets:links:

assets:
  links:
    - name: 'asset1'
      url: 'https://example.com/assets/1'
    - name: 'asset2'
      url: 'https://example.com/assets/2'
      filepath: '/pretty/url/1' # optional
      link_type: 'other' # optional

resource_group

Use resource_group to create a resource group that ensures a job is mutually exclusive across different pipelines for the same project.

For example, if multiple jobs that belong to the same resource group are queued simultaneously, only one of the jobs starts. The other jobs wait until the resource_group is free.

Resource groups behave similarly to semaphores in other programming languages.

You can choose a process mode to strategically control the job concurrency for your deployment preferences. The default process mode is unordered. To change the process mode of a resource group, use the API to send a request to edit an existing resource group.

You can define multiple resource groups per environment. For example, when deploying to physical devices, you might have multiple physical devices. Each device can be deployed to, but only one deployment can occur per device at any given time (see the second example below).

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • Only letters, digits, -, _, /, $, {, }, ., and spaces. It can’t start or end with /. CI/CD variables are supported.

Example of resource_group:

deploy-to-production:
  script: deploy
  resource_group: production

In this example, two deploy-to-production jobs in two separate pipelines can never run at the same time. As a result, you can ensure that concurrent deployments never happen to the production environment.
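
A sketch of the multiple-devices scenario described earlier (the DEVICE values are illustrative):

deploy-to-device:
  script: deploy
  parallel:
    matrix:
      - DEVICE: [device1, device2, device3]
  resource_group: $DEVICE  # at most one concurrent deployment per device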

retry

Use retry to configure how many times a job is retried if it fails. If not defined, defaults to 0 and jobs do not retry.

When a job fails, the job is processed up to two more times, until it succeeds or reaches the maximum number of retries.

By default, all failure types cause the job to be retried. Use retry:when or retry:exit_codes to select which failures to retry on.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • 0 (default), 1, or 2.

Example of retry:

test:
  script: rspec
  retry: 2

test_advanced:
  script:
    - echo "Run a script that results in exit code 137."
    - exit 137
  retry:
    max: 2
    when: runner_system_failure
    exit_codes: 137

test_advanced is retried up to 2 times if the exit code is 137 or if the job had a runner system failure.

retry:when

Use retry:when with retry:max to retry jobs for only specific failure cases. retry:max is the maximum number of retries, like retry, and can be 0, 1, or 2.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • A single failure type, or an array of one or more failure types:
    • always: Retry on any failure (default).
    • unknown_failure: Retry when the failure reason is unknown.
    • script_failure: Retry when:
      • The script failed.
      • The runner failed to pull the Docker image. For docker, docker+machine, kubernetes executors.
    • api_failure: Retry on API failure.
    • stuck_or_timeout_failure: Retry when the job got stuck or timed out.
    • runner_system_failure: Retry if there is a runner system failure (for example, job setup failed).
    • runner_unsupported: Retry if the runner is unsupported.
    • stale_schedule: Retry if a delayed job could not be executed.
    • job_execution_timeout: Retry if the script exceeded the maximum execution time set for the job.
    • archived_failure: Retry if the job is archived and can’t be run.
    • unmet_prerequisites: Retry if the job failed to complete prerequisite tasks.
    • scheduler_failure: Retry if the scheduler failed to assign the job to a runner.
    • data_integrity_failure: Retry if there is an unknown job problem.

Example of retry:when (single failure type):

test:
  script: rspec
  retry:
    max: 2
    when: runner_system_failure

If there is a failure other than a runner system failure, the job is not retried.

Example of retry:when (array of failure types):

test:
  script: rspec
  retry:
    max: 2
    when:
      - runner_system_failure
      - stuck_or_timeout_failure

retry:exit_codes

Use retry:exit_codes with retry:max to retry jobs for only specific failure cases. retry:max is the maximum number of retries, like retry, and can be 0, 1, or 2.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • A single exit code.
  • An array of exit codes.

Example of retry:exit_codes:

test_job_1:
  script:
    - echo "Run a script that results in exit code 1. This job isn't retried."
    - exit 1
  retry:
    max: 2
    exit_codes: 137

test_job_2:
  script:
    - echo "Run a script that results in exit code 137. This job will be retried."
    - exit 137
  retry:
    max: 1
    exit_codes:
      - 255
      - 137

Related topics:

You can specify the number of retry attempts for certain stages of job execution using variables.
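
For example, a sketch using these variables (the values are illustrative):

test:
  script: rspec
  variables:
    GET_SOURCES_ATTEMPTS: 3        # retry fetching sources up to 3 times
    ARTIFACT_DOWNLOAD_ATTEMPTS: 2  # retry downloading artifacts up to 2 times
    RESTORE_CACHE_ATTEMPTS: 2      # retry restoring the cache up to 2 times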

rules

Use rules to include or exclude jobs in pipelines.

Rules are evaluated when the pipeline is created, and evaluated in order. When a match is found, no more rules are checked and the job is either included or excluded from the pipeline depending on the configuration. If no rules match, the job is not added to the pipeline.

rules accepts an array of rules. Each rule must have at least one of:

  • if
  • changes
  • exists
  • when

Rules can also optionally be combined with:

  • allow_failure
  • needs
  • variables
  • interruptible

You can combine multiple keywords together for complex rules.
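
For example, a sketch that combines if with variables and allow_failure (the DEPLOY_VARIABLE name is illustrative):

job:
  script: echo "Deploy target is $DEPLOY_VARIABLE"
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      variables:
        DEPLOY_VARIABLE: "deploy-production"  # override for the default branch
    - if: $CI_COMMIT_BRANCH =~ /feature/
      variables:
        DEPLOY_VARIABLE: "deploy-feature"
      allow_failure: true  # the pipeline continues even if this job fails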

The job is added to the pipeline:

  • If an if, changes, or exists rule matches, and is configured with when: on_success (default if not defined), when: delayed, or when: always.
  • If a rule is reached that is only when: on_success, when: delayed, or when: always.

The job is not added to the pipeline:

  • If no rules match.
  • If a rule matches and has when: never.

For additional examples, see Specify when jobs run with rules.

rules:if

Use rules:if clauses to specify when to add a job to a pipeline:

  • If an if statement is true, add the job to the pipeline.
  • If an if statement is true, but it’s combined with when: never, do not add the job to the pipeline.
  • If an if statement is false, check the next rules item (if any more exist).

if clauses are evaluated based on the values of predefined CI/CD variables or custom CI/CD variables.

Keyword type: Job-specific and pipeline-specific. You can use it as part of a job to configure the job behavior, or with workflow to configure the pipeline behavior.

Possible inputs:

  • A CI/CD variable expression.

Example of rules:if:

job:
  script: echo "Hello, Rules!"
  rules:
    - if: $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^feature/ && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME != $CI_DEFAULT_BRANCH
      when: never
    - if: $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^feature/
      when: manual
      allow_failure: true
    - if: $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME

rules:changes

Use rules:changes to specify when to add a job to a pipeline by checking for changes to specific files.

For new branch pipelines or when there is no Git push event, rules:changes always evaluates to true and the job always runs. Pipelines like tag pipelines, scheduled pipelines, and manual pipelines all do not have a Git push event associated with them. To cover these cases, use rules:changes:compare_to to specify the branch to compare against the pipeline ref.

If you do not use compare_to, you should use rules:changes only with branch pipelines or merge request pipelines, though rules:changes still evaluates to true when creating a new branch. With:

  • Merge request pipelines, rules:changes compares the changes with the target MR branch.
  • Branch pipelines, rules:changes compares the changes with the previous commit on the branch.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

An array including any number of:

  • Paths to files. The file paths can include CI/CD variables.
  • Wildcard paths for:
    • Single directories, for example path/to/directory/*.
    • A directory and all its subdirectories, for example path/to/directory/**/*.
  • Wildcard glob paths for all files with the same extension or multiple extensions, for example *.md or path/to/directory/*.{rb,py,sh}.
  • Wildcard paths to files in the root directory, or all directories, wrapped in double quotes. For example "*.json" or "**/*.json".

Example of rules:changes:

docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        - Dockerfile
      when: manual
      allow_failure: true

docker build alternative:
  variables:
    DOCKERFILES_DIR: 'path/to/dockerfiles'
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        - $DOCKERFILES_DIR/**/*

In this example:

  • If the pipeline is a merge request pipeline, check Dockerfile and the files in $DOCKERFILES_DIR/**/* for changes.
  • If Dockerfile has changed, add the job to the pipeline as a manual job, and the pipeline continues running even if the job is not triggered (allow_failure: true).
  • If a file in $DOCKERFILES_DIR/**/* has changed, add the job to the pipeline.
  • If no listed files have changed, do not add either job to any pipeline (same as when: never).

Additional details:

  • Glob patterns are interpreted with Ruby’s File.fnmatch with the flags File::FNM_PATHNAME | File::FNM_DOTMATCH | File::FNM_EXTGLOB.
  • A maximum of 50 patterns or file paths can be defined per rules:changes section.
  • changes resolves to true if any of the matching files are changed (an OR operation).
  • For additional examples, see Specify when jobs run with rules.
  • You can use the $ character for both variables and paths. For example, if the $VAR variable exists, its value is used. If it does not exist, the $ is interpreted as being part of a path.

rules:changes:paths

Use rules:changes to specify that a job is added to a pipeline only when specific files are changed, and use rules:changes:paths to specify the files.

rules:changes:paths is the same as using rules:changes without any subkeys. All additional details and related topics are the same.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • See the possible inputs for rules:changes above.

Example of rules:changes:paths:

docker-build-1:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        - Dockerfile

docker-build-2:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        paths:
          - Dockerfile

In this example, both jobs have the same behavior.

rules:changes:compare_to

Use rules:changes:compare_to to specify which ref to compare against for changes to the files listed under rules:changes:paths.

Keyword type: Job keyword. You can use it only as part of a job, and it must be combined with rules:changes:paths.

Possible inputs:

  • A branch name, like main, branch1, or refs/heads/branch1.
  • A tag name, like tag1 or refs/tags/tag1.
  • A commit SHA, like 2fg31ga14b.

CI/CD variables are supported.

Example of rules:changes:compare_to:

docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        paths:
          - Dockerfile
        compare_to: 'refs/heads/branch1'

In this example, the docker build job is only included when the Dockerfile has changed relative to refs/heads/branch1 and the pipeline source is a merge request event.

Additional details:

  • Using compare_to with merged results pipelines can cause unexpected results, because the comparison base is an internal commit that GitLab creates.

rules:exists

History
  • CI/CD variable support introduced in GitLab 15.6.

Use exists to run a job when certain files exist in the repository.

Keyword type: Job keyword. You can use it as part of a job or an include.

Possible inputs:

  • An array of file paths. Paths are relative to the project directory ($CI_PROJECT_DIR) and can’t directly link outside it. File paths can use glob patterns and CI/CD variables.

Example of rules:exists:

job:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - exists:
        - Dockerfile

job2:
  variables:
    DOCKERPATH: "**/Dockerfile"
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - exists:
        - $DOCKERPATH

In this example:

  • job runs if a Dockerfile exists in the root directory of the repository.
  • job2 runs if a Dockerfile exists anywhere in the repository.

Additional details:

  • Glob patterns are interpreted with Ruby’s File.fnmatch with the flags File::FNM_PATHNAME | File::FNM_DOTMATCH | File::FNM_EXTGLOB.
  • For performance reasons, GitLab performs a maximum of 10,000 checks against exists patterns or file paths. After the 10,000th check, rules with patterned globs always match. In other words, the exists rule always assumes a match in projects with more than 10,000 files, or if there are fewer than 10,000 files but the exists rules are checked more than 10,000 times.
    • If there are multiple patterned globs, the limit is 10,000 divided by the number of globs. For example, a rule with 4 patterned globs has a file limit of 2,500.
  • A maximum of 50 patterns or file paths can be defined per rules:exists section.
  • exists resolves to true if any of the listed files are found (an OR operation).
  • With job-level rules:exists, GitLab searches for the files in the project and ref that runs the pipeline. When using include with rules:exists, GitLab searches for the files in the project and ref of the file that contains the include section. The project containing the include section can be different from the project running the pipeline, for example when the configuration is included from another project with include:project.
  • rules:exists cannot search for the presence of artifacts, because rules evaluation happens before jobs run and artifacts are fetched.

rules:exists:paths
History
  • Introduced in GitLab 16.11 with a flag named ci_support_rules_exists_paths_and_project. Disabled by default.
  • Generally available in GitLab 17.0. Feature flag ci_support_rules_exists_paths_and_project removed.

rules:exists:paths is the same as using rules:exists without any subkeys. All additional details are the same.

Keyword type: Job keyword. You can use it as part of a job or an include.

Possible inputs:

  • An array of file paths.

Example of rules:exists:paths:

docker-build-1:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      exists:
        - Dockerfile

docker-build-2:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      exists:
        paths:
          - Dockerfile

In this example, both jobs have the same behavior.

Additional details:

  • In some cases you cannot use / or ./ in a CI/CD variable with exists. See issue 386595 for more details.

rules:exists:project
History
  • Introduced in GitLab 16.11 with a flag named ci_support_rules_exists_paths_and_project. Disabled by default.
  • Generally available in GitLab 17.0. Feature flag ci_support_rules_exists_paths_and_project removed.

Use rules:exists:project to specify the location in which to search for the files listed under rules:exists:paths. Must be used with rules:exists:paths.

Keyword type: Job keyword. You can use it as part of a job or an include, and it must be combined with rules:exists:paths.

Possible inputs:

  • exists:project: A full project path, including namespace and group.
  • exists:ref: Optional. The commit ref to use to search for the file. The ref can be a tag, branch name, or SHA. Defaults to the HEAD of the project when not specified.

Example of rules:exists:project:

docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - exists:
        paths:
          - Dockerfile
        project: my-group/my-project
        ref: v1.0.0

In this example, the docker build job is only included when the Dockerfile exists in the project my-group/my-project on the commit tagged with v1.0.0.

rules:when

Use rules:when alone or as part of another rule to control conditions for adding a job to a pipeline. rules:when is similar to when, but with slightly different input options.

If a rules:when rule is not combined with if, changes, or exists, it always matches if reached when evaluating a job’s rules.

Keyword type: Job-specific. You can use it only as part of a job.

Possible inputs:

  • on_success (default): Run the job only when no jobs in earlier stages fail.
  • on_failure: Run the job only when at least one job in an earlier stage fails.
  • never: Don’t run the job regardless of the status of jobs in earlier stages.
  • always: Run the job regardless of the status of jobs in earlier stages.
  • manual: Add the job to the pipeline as a manual job. The default value for allow_failure changes to false.
  • delayed: Add the job to the pipeline as a delayed job.

Example of rules:when:

job1:
  rules:
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
    - if: $CI_COMMIT_REF_NAME =~ /feature/
      when: delayed
    - when: manual
  script:
    - echo

In this example, job1 is added to pipelines:

  • For the default branch, with when: on_success, which is the default behavior when when is not defined.
  • For feature branches as a delayed job.
  • In all other cases as a manual job.

rules:allow_failure

Use allow_failure: true in rules to allow a job to fail without stopping the pipeline.

You can also use allow_failure: true with a manual job. The pipeline continues running without waiting for the result of the manual job. allow_failure: false combined with when: manual in rules causes the pipeline to wait for the manual job to run before continuing.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • true or false. Defaults to false if not defined.

Example of rules:allow_failure:

job:
  script: echo "Hello, Rules!"
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH
      when: manual
      allow_failure: true

If the rule matches, then the job is a manual job with allow_failure: true.
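
For contrast, a minimal sketch of the blocking variant: with when: manual and allow_failure: false, the pipeline waits for the manual job before continuing:

deploy:
  script: echo "Deploying..."
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH
      when: manual
      allow_failure: false  # The pipeline waits for this job to run.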

Additional details:

  • The rule-level rules:allow_failure overrides the job-level allow_failure, and only applies when the specific rule triggers the job.

rules:needs

Use needs in rules to update a job’s needs for specific conditions. When a condition matches a rule, the job’s needs configuration is completely replaced with the needs in the rule.

Keyword type: Job-specific. You can use it only as part of a job.

Possible inputs:

  • An array of job names as strings.
  • A hash with a job name, optionally with additional attributes.
  • An empty array ([]), to set the job's needs to none when the specific condition is met.

Example of rules:needs:

build-dev:
  stage: build
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
  script: echo "Feature branch, so building dev version..."

build-prod:
  stage: build
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  script: echo "Default branch, so building prod version..."

tests:
  stage: test
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
      needs: ['build-dev']
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      needs: ['build-prod']
  script: echo "Running dev specs by default, or prod specs when default branch..."

In this example:

  • If the pipeline runs on a branch that is not the default branch, so the first rule matches, the tests job needs the build-dev job.
  • If the pipeline runs on the default branch, so the second rule matches, the tests job needs the build-prod job.

Additional details:

  • needs in rules overrides any needs defined at the job level. When overridden, the behavior is the same as job-level needs.
  • needs in rules can accept artifacts and optional, as in the sketch below.
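
A minimal sketch of the hash form, reusing the jobs from the example above (the artifacts and optional values are illustrative):

tests:
  stage: test
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      needs:
        - job: build-prod
          artifacts: true  # Download artifacts from build-prod.
          optional: true   # Do not fail if build-prod is not in the pipeline.
  script: echo "Running prod specs..."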

rules:variables

Use variables in rules to define variables for specific conditions.

Keyword type: Job-specific. You can use it only as part of a job.

Possible inputs:

  • A hash of variables in the format VARIABLE_NAME: value.

Example of rules:variables:

job:
  variables:
    DEPLOY_VARIABLE: "default-deploy"
  rules:
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
      variables:                              # Override DEPLOY_VARIABLE defined
        DEPLOY_VARIABLE: "deploy-production"  # at the job level.
    - if: $CI_COMMIT_REF_NAME =~ /feature/
      variables:
        IS_A_FEATURE: "true"                  # Define a new variable.
  script:
    - echo "Run script with $DEPLOY_VARIABLE as an argument"
    - echo "Run another script if $IS_A_FEATURE exists"

rules:interruptible

Use interruptible in rules to update a job’s interruptible value for specific conditions.

Keyword type: Job-specific. You can use it only as part of a job.

Possible inputs:

  • true or false.

Example of rules:interruptible:

job:
  script: echo "Hello, Rules!"
  interruptible: true
  rules:
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
      interruptible: false  # Override interruptible defined at the job level.
    - when: on_success

Additional details:

  • The rule-level rules:interruptible overrides the job-level interruptible, and only applies when the specific rule triggers the job.

run

Status: Experiment
History
  • Introduced in GitLab 17.3 with a flag named pipeline_run_keyword. Disabled by default. Requires GitLab Runner 17.1.
  • Feature flag pipeline_run_keyword removed in GitLab 17.5.
Note: This feature is available for testing, but not ready for production use.

Use run to define a series of steps to be executed in a job. Each step can be either a script or a predefined step.

You can also provide optional environment variables and inputs.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • An array of hashes, where each hash represents a step with the following possible keys:
    • name: A string representing the name of the step.
    • script: A string or array of strings containing shell commands to execute.
    • step: A string identifying a predefined step to run.
    • env: Optional. A hash of environment variables specific to this step.
    • inputs: Optional. A hash of input parameters for predefined steps.

Each array entry must have a name, and either a script or a step key (but not both).

Example of run:

job:
  run:
    - name: 'hello_steps'
      script: 'echo "hello from step1"'
    - name: 'bye_steps'
      step: gitlab.com/gitlab-org/ci-cd/runner-tools/echo-step@main
      inputs:
        echo: 'bye steps!'
      env:
        var1: 'value 1'

In this example, the job has two steps:

  • hello_steps runs the echo shell command.
  • bye_steps uses a predefined step with an environment variable and an input parameter.

Additional details:

  • A step can have either a script or a step key, but not both.
  • A run configuration cannot be used together with the script keyword.
  • Multi-line scripts can be defined with YAML block scalar syntax, as in the sketch below.
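
For example, a minimal sketch of a multi-line script in run, using a literal block scalar:

job:
  run:
    - name: 'multi_line_step'
      script: |
        echo "first command"
        echo "second command"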

script

Use script to specify commands for the runner to execute.

All jobs except trigger jobs require a script keyword.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs: An array including:

  • Single-line commands.
  • Long commands split over multiple lines.
  • YAML anchors.

CI/CD variables are supported.

Example of script:

job1:
  script: "bundle exec rspec"

job2:
  script:
    - uname -a
    - bundle exec rspec

secrets

Tier: Premium, Ultimate Offering: GitLab.com, Self-managed, GitLab Dedicated

Use secrets to specify CI/CD secrets to:

  • Retrieve from an external secrets provider.
  • Make available in jobs as CI/CD variables (file type by default).

secrets:vault

History
  • generic engine option introduced in GitLab Runner 16.11.

Use secrets:vault to specify secrets provided by a HashiCorp Vault.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • engine:name: Name of the secrets engine. Can be one of kv-v2 (default), kv-v1, or generic.
  • engine:path: Path to the secrets engine.
  • path: Path to the secret.
  • field: Name of the field where the password is stored.

Example of secrets:vault:

To specify all details explicitly and use the KV-V2 secrets engine:

job:
  secrets:
    DATABASE_PASSWORD:  # Store the path to the secret in this CI/CD variable
      vault:  # Translates to secret: `ops/data/production/db`, field: `password`
        engine:
          name: kv-v2
          path: ops
        path: production/db
        field: password

You can shorten this syntax. With the short syntax, engine:name and engine:path both default to kv-v2:

job:
  secrets:
    DATABASE_PASSWORD:  # Store the path to the secret in this CI/CD variable
      vault: production/db/password  # Translates to secret: `kv-v2/data/production/db`, field: `password`

To specify a custom secrets engine path in the short syntax, add a suffix that starts with @:

job:
  secrets:
    DATABASE_PASSWORD:  # Store the path to the secret in this CI/CD variable
      vault: production/db/password@ops  # Translates to secret: `ops/data/production/db`, field: `password`

secrets:gcp_secret_manager

History
  • Introduced in GitLab 16.8 and GitLab Runner 16.8.

Use secrets:gcp_secret_manager to specify secrets provided by GCP Secret Manager.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • name: Name of the secret.
  • version: Version of the secret.

Example of secrets:gcp_secret_manager:

job:
  secrets:
    DATABASE_PASSWORD:
      gcp_secret_manager:
        name: 'test'
        version: 2

secrets:azure_key_vault

History
  • Introduced in GitLab 16.3 and GitLab Runner 16.3.

Use secrets:azure_key_vault to specify secrets provided by an Azure Key Vault.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • name: Name of the secret.
  • version: Version of the secret.

Example of secrets:azure_key_vault:

job:
  secrets:
    DATABASE_PASSWORD:
      azure_key_vault:
        name: 'test'
        version: 'test'

secrets:file

Use secrets:file to configure the secret to be stored as either a file or variable type CI/CD variable.

By default, the secret is passed to the job as a file type CI/CD variable. The value of the secret is stored in the file and the variable contains the path to the file.

If your software can’t use file type CI/CD variables, set file: false to store the secret value directly in the variable.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • true (default) or false.

Example of secrets:file:

job:
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password@ops
      file: false

Additional details:

  • The file keyword is a setting for the CI/CD variable and must be nested under the CI/CD variable name, not in the vault section.

secrets:token

Use secrets:token to explicitly select a token to use when authenticating with Vault by referencing the token’s CI/CD variable.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • The name of an ID token

Example of secrets:token:

job:
  id_tokens:
    AWS_TOKEN:
      aud: https://aws.example.com
    VAULT_TOKEN:
      aud: https://vault.example.com
  secrets:
    DB_PASSWORD:
      vault: gitlab/production/db
      token: $VAULT_TOKEN

Additional details:

  • When the token keyword is not set, the first ID token is used to authenticate.

services

Use services to specify any additional Docker images that your scripts require to run successfully. The services image is linked to the image specified in the image keyword.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs: The name of the services image, including the registry path if needed, in one of these formats:

  • <image-name> (Same as using <image-name> with the latest tag)
  • <image-name>:<tag>
  • <image-name>@<digest>

CI/CD variables are supported, but not for alias.

Example of services:

default:
  image:
    name: ruby:2.6
    entrypoint: ["/bin/bash"]

  services:
    - name: my-postgres:11.7
      alias: db-postgres
      entrypoint: ["/usr/local/bin/db-postgres"]
      command: ["start"]

  before_script:
    - bundle install

test:
  script:
    - bundle exec rake spec

In this example, GitLab launches two containers for the job:

  • A Ruby container that runs the script commands.
  • A PostgreSQL container. The script commands in the Ruby container can connect to the PostgreSQL database at the db-postgres hostname.
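
Because CI/CD variables are supported in the service name, the image tag can come from a variable. A minimal sketch, assuming a POSTGRES_VERSION variable:

test:
  variables:
    POSTGRES_VERSION: '11.7'
  services:
    - name: my-postgres:$POSTGRES_VERSION  # Variables work in the name, but not in alias.
  script:
    - bundle exec rake spec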

services:docker

History
  • Introduced in GitLab 16.7. Requires GitLab Runner 16.7 or later.
  • user input option introduced in GitLab 16.8.

Use services:docker to pass options to the Docker executor of a GitLab Runner.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

A hash of options for the Docker executor, which can include:

  • platform: Selects the architecture of the image to pull. When not specified, the default is the same platform as the host runner.
  • user: Specify the username or UID to use when running the container.

Example of services:docker:

arm-sql-job:
  script: echo "Run sql tests in service container"
  image: ruby:2.6
  services:
    - name: super/sql:experimental
      docker:
        platform: arm64/v8
        user: dave

services:pull_policy

The pull policy that the runner uses to fetch the Docker image.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • A single pull policy, or multiple pull policies in an array. Can be always, if-not-present, or never.

Examples of services:pull_policy:

job1:
  script: echo "A single pull policy."
  services:
    - name: postgres:11.6
      pull_policy: if-not-present

job2:
  script: echo "Multiple pull policies."
  services:
    - name: postgres:11.6
      pull_policy: [always, if-not-present]

Additional details:

  • If the runner does not support the defined pull policy, the job fails with an error similar to: ERROR: Job failed (system failure): the configured PullPolicies ([always]) are not allowed by AllowedPullPolicies ([never]).

stage

Use stage to define which stage a job runs in. Jobs in the same stage can execute in parallel (see Additional details).

If stage is not defined, the job uses the test stage by default.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs: A string, which can be a:

  • Default stage (.pre, build, test, deploy, or .post).
  • User-defined stage.

Example of stage:

stages:
  - build
  - test
  - deploy

job1:
  stage: build
  script:
    - echo "This job compiles code."

job2:
  stage: test
  script:
    - echo "This job tests the compiled code. It runs when the build stage completes."

job3:
  script:
    - echo "This job also runs in the test stage".

job4:
  stage: deploy
  script:
    - echo "This job deploys the code. It runs when the test stage completes."
  environment: production

Additional details:

  • The stage name must be 255 characters or fewer.
  • Jobs can run in parallel if they run on different runners.
  • If you have only one runner, jobs can run in parallel if the runner’s concurrent setting is greater than 1.

stage: .pre

Use the .pre stage to make a job run at the start of a pipeline. .pre is always the first stage in a pipeline. User-defined stages execute after .pre. You do not have to define .pre in stages.

If a pipeline contains only jobs in the .pre or .post stages, it does not run. There must be at least one other job in a different stage.

Keyword type: You can only use it with a job’s stage keyword.

Example of stage: .pre:

stages:
  - build
  - test

job1:
  stage: build
  script:
    - echo "This job runs in the build stage."

first-job:
  stage: .pre
  script:
    - echo "This job runs in the .pre stage, before all other stages."

job2:
  stage: test
  script:
    - echo "This job runs in the test stage."

stage: .post

Use the .post stage to make a job run at the end of a pipeline. .post is always the last stage in a pipeline. User-defined stages execute before .post. You do not have to define .post in stages.

If a pipeline contains only jobs in the .pre or .post stages, it does not run. There must be at least one other job in a different stage.

Keyword type: You can only use it with a job’s stage keyword.

Example of stage: .post:

stages:
  - build
  - test

job1:
  stage: build
  script:
    - echo "This job runs in the build stage."

last-job:
  stage: .post
  script:
    - echo "This job runs in the .post stage, after all other stages."

job2:
  stage: test
  script:
    - echo "This job runs in the test stage."

Additional details:

  • If a pipeline has jobs with needs: [] and jobs in the .pre stage, they all start as soon as the pipeline is created. Jobs with needs: [] start immediately, ignoring any stage configuration, as in the sketch below.
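
A minimal sketch of this behavior:

first-job:
  stage: .pre
  script: echo "Starts as soon as the pipeline is created."

independent-job:
  stage: test
  needs: []
  script: echo "Also starts immediately, ignoring stage order."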

tags

Use tags to select a specific runner from the list of all runners that are available for the project.

When you register a runner, you can specify the runner’s tags, for example ruby, postgres, or development. To pick up and run a job, a runner must be assigned every tag listed in the job.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs:

  • An array of tag names. CI/CD variables are supported.

Example of tags:

job:
  tags:
    - ruby
    - postgres

In this example, only runners with both the ruby and postgres tags can run the job.

Additional details:

  • The number of tags must be less than 50.

timeout

Use timeout to configure a timeout for a specific job. If the job runs for longer than the timeout, the job fails.

The job-level timeout can be longer than the project-level timeout, but can’t be longer than the runner’s timeout.

Keyword type: Job keyword. You can use it only as part of a job or in the default section.

Possible inputs: A period of time written in natural language. For example, these are all equivalent:

  • 3600 seconds
  • 60 minutes
  • one hour

Example of timeout:

build:
  script: build.sh
  timeout: 3 hours 30 minutes

test:
  script: rspec
  timeout: 3h 30m

trigger

History
  • Support for environment introduced in GitLab 16.4.

Use trigger to declare that a job is a “trigger job” which starts a downstream pipeline that is either:

  • A multi-project pipeline.
  • A child pipeline.

Trigger jobs can use only a limited set of GitLab CI/CD configuration keywords, for example trigger, stage, rules, needs, when, and variables.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • For multi-project pipelines, the path to the downstream project.
  • For child pipelines, a hash with the include keyword.

Example of trigger:

trigger-multi-project-pipeline:
  trigger: my-group/my-project

trigger:include

Use trigger:include to declare that a job is a “trigger job” which starts a child pipeline.

Use trigger:include:artifact to trigger a dynamic child pipeline.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • The path to the child pipeline’s configuration file.

Example of trigger:include:

trigger-child-pipeline:
  trigger:
    include: path/to/child-pipeline.gitlab-ci.yml

trigger:project

Use trigger:project to declare that a job is a “trigger job” which starts a multi-project pipeline.

By default, the multi-project pipeline triggers for the default branch. Use trigger:branch to specify a different branch.

Keyword type: Job keyword. You can use it only as part of a job.

Possible inputs:

  • The path to the downstream project. CI/CD variables are supported.

Example of trigger:project:

trigger-multi-project-pipeline:
  trigger:
    project: my-group/my-project

Example of trigger:project for a different branch:

trigger-multi-project-pipeline:
  trigger:
    project: my-group/my-project
    branch: development

trigger:strategy

Use trigger:strategy to force the trigger job to wait for the downstream pipeline to complete before it is marked as success.

This behavior is different than the default, which is for the trigger job to be marked as success as soon as the downstream pipeline is created.

This setting makes your pipeline execution linear rather than parallel.

Example of trigger:strategy:

trigger_job:
  trigger:
    include: path/to/child-pipeline.yml
    strategy: depend

In this example, jobs from subsequent stages wait for the triggered pipeline to successfully complete before starting.

Additional details:

  • Optional manual jobs in the downstream pipeline do not affect the status of the downstream pipeline or the upstream trigger job. The downstream pipeline can complete successfully without running any optional manual jobs.
  • Blocking manual jobs in the downstream pipeline must run before the trigger job is marked as successful or failed. The trigger job shows pending if the downstream pipeline status is waiting for manual action due to manual jobs. By default, jobs in later stages do not start until the trigger job completes.
  • If the downstream pipeline has a failed job, but the job uses allow_failure: true, the downstream pipeline is considered successful and the trigger job shows success.

trigger:forward

Use trigger:forward to specify what to forward to the downstream pipeline. You can control what is forwarded to both parent-child pipelines and multi-project pipelines.

By default, forwarded variables are not forwarded again in nested downstream pipelines, unless the nested downstream trigger job also uses trigger:forward (see the sketch at the end of this section).

Possible inputs:

  • yaml_variables: true (default), or false. When true, variables defined in the trigger job are passed to downstream pipelines.
  • pipeline_variables: true or false (default). When true, pipeline variables are passed to the downstream pipeline.

Example of trigger:forward:

Run this pipeline manually, with the CI/CD variable MYVAR = my value:

variables: # default variables for each job
  VAR: value

# Default behavior:
# - VAR is passed to the child
# - MYVAR is not passed to the child
child1:
  trigger:
    include: .child-pipeline.yml

# Forward pipeline variables:
# - VAR is passed to the child
# - MYVAR is passed to the child
child2:
  trigger:
    include: .child-pipeline.yml
    forward:
      pipeline_variables: true

# Do not forward YAML variables:
# - VAR is not passed to the child
# - MYVAR is not passed to the child
child3:
  trigger:
    include: .child-pipeline.yml
    forward:
      yaml_variables: false

Additional details:

  • CI/CD variables forwarded to downstream pipelines with trigger:forward are pipeline variables, which have high precedence. If a variable with the same name is defined in the downstream pipeline, that variable is usually overwritten by the forwarded variable.
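
For example, to keep forwarding pipeline variables one level deeper, the child pipeline's own trigger job must repeat trigger:forward. A minimal sketch, assuming the child pipeline file is .child-pipeline.yml (the .grandchild-pipeline.yml file is illustrative):

# In .child-pipeline.yml: re-forward pipeline variables to the grandchild pipeline.
grandchild:
  trigger:
    include: .grandchild-pipeline.yml
    forward:
      pipeline_variables: true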

variables

Use variables to define CI/CD variables for jobs.

Keyword type: Global and job keyword. You can use it at the global level, and also at the job level.

You can use variables defined in a job in the job’s script, before_script, or after_script sections, and also with some job keywords, but not global keywords. Check the Possible inputs section of each job keyword to see if it supports variables.

Variables defined in a global (top-level) variables section act as default variables for all jobs. Each global variable is made available to every job in the pipeline, except when the job already has a variable defined with the same name. The variable defined in the job takes precedence, so the value of the global variable with the same name cannot be used in the job.

As with job variables, you cannot use global variables as values for other global keywords, such as include.

Possible inputs: Variable name and value pairs:

  • The name can use only numbers, letters, and underscores (_). In some shells, the first character must be a letter.
  • The value must be a string.

CI/CD variables are supported.

Examples of variables:

variables:
  DEPLOY_SITE: "https://example.com/"

deploy_job:
  stage: deploy
  script:
    - deploy-script --url $DEPLOY_SITE --path "/"
  environment: production

deploy_review_job:
  stage: deploy
  variables:
    DEPLOY_SITE: "https://dev.example.com/"
    REVIEW_PATH: "/review"
  script:
    - deploy-review-script --url $DEPLOY_SITE --path $REVIEW_PATH
  environment: production

In this example:

  • deploy_job has no variables defined. The global DEPLOY_SITE variable is copied to the job and can be used in the script section.
  • deploy_review_job already has a DEPLOY_SITE variable defined, so the global DEPLOY_SITE is not copied to the job. The job also has a REVIEW_PATH job-level variable defined. Both job-level variables can be used in the script section.

variables:description

Use the description keyword to define a description for a pipeline-level (global) variable. The description displays with the prefilled variable name when running a pipeline manually.

Keyword type: Global keyword. You cannot use it for job-level variables.

Possible inputs:

  • A string.

Example of variables:description:

variables:
  DEPLOY_NOTE:
    description: "The deployment note. Explain the reason for this deployment."

Additional details:

  • When used without value, the variable exists in pipelines that were not triggered manually, and the default value is an empty string ('').

variables:value

Use the value keyword to define a pipeline-level (global) variable’s value. When used with variables: description, the variable value is prefilled when running a pipeline manually.

Keyword type: Global keyword. You cannot use it for job-level variables.

Possible inputs:

  • A string.

Example of variables:value:

variables:
  DEPLOY_ENVIRONMENT:
    value: "staging"