# End-to-end Testing

## What is end-to-end testing?
End-to-end (e2e) testing is a strategy used to check whether your application works as expected across the entire software stack and architecture, including the integration of all microservices and components that are supposed to work together.
## How do we test GitLab?
We use Omnibus GitLab to build GitLab packages, and then we test these packages using the GitLab QA orchestrator tool to run the end-to-end tests located in the `qa` directory.
Additionally, we use the GitLab Development Kit (GDK) as a test environment that can be deployed quickly for faster test feedback.
### Testing nightly builds
We run scheduled pipelines each night to test nightly builds created by Omnibus. You can find these pipelines at https://gitlab.com/gitlab-org/gitlab/-/pipeline_schedules (requires the Developer role). Results are reported in the `#e2e-run-master` Slack channel.
### Testing staging
We run scheduled pipelines each night to test staging. You can find these pipelines at https://gitlab.com/gitlab-org/quality/staging/pipelines (requires the Developer role). Results are reported in the `#e2e-run-staging` Slack channel.
### Testing code in merge requests

#### Using the `test-on-omnibus` job
It is possible to run end-to-end tests for a merge request by triggering the `e2e:test-on-omnibus` manual action in the `qa` stage (not available for forks). This runs end-to-end tests against a custom EE (with an Ultimate license) Docker image built from your merge request's changes.

A manual action that starts end-to-end tests is also available in `gitlab-org/omnibus-gitlab` merge requests.
##### How does it work?
Currently, we use a multi-project pipeline-like approach to run end-to-end pipelines against Omnibus GitLab.
1. In the `gitlab-org/gitlab` pipeline:
   1. A developer triggers the `e2e:test-on-omnibus` manual action (available once the `build-qa-image` and `build-assets-image` jobs are done), which can be found in GitLab merge requests. This starts an e2e test child pipeline.
   1. The e2e test child pipeline triggers a downstream pipeline in `gitlab-org/build/omnibus-gitlab-mirror` and polls for the resulting status. We call this a status attribution.
1. In the `gitlab-org/build/omnibus-gitlab-mirror` pipeline:
   1. The Docker image is built and pushed to its container registry.
   1. Once the Docker images are built and pushed, the jobs in the `test` stage are started.
1. In the `test` stage:
   1. A container for the Docker image stored in the `gitlab-org/build/omnibus-gitlab-mirror` registry is spun up.
   1. End-to-end tests are run with the `gitlab-qa` executable, which spins up a container for the end-to-end image from the `gitlab-org/gitlab` registry.

We use `gitlab-org/build/omnibus-gitlab-mirror` instead of `gitlab-org/omnibus-gitlab` because of technical limitations in the GitLab permission model: the ability to run a pipeline against a protected branch is controlled by the ability to push/merge to that branch. This means that for developers to be able to trigger a pipeline for the default branch in `gitlab-org/omnibus-gitlab`, they would need the Maintainer role for that project. For security reasons, we couldn't open up the default branch to all Developers, so we created this mirror, where Developers and Maintainers are allowed to push/merge to the default branch.
This problem was discovered in https://gitlab.com/gitlab-org/gitlab-qa/-/issues/63#note_107175160 and the "mirror" work-around was suggested in https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4717. A feature proposal to segregate access control for running pipelines from the ability to push/merge was also created at https://gitlab.com/gitlab-org/gitlab/-/issues/24585.

For more technical details on the CI/CD setup, and documentation on adding new test jobs to the `e2e:test-on-omnibus` pipeline, see the `e2e:test-on-omnibus` setup documentation.
#### Using the `test-on-gdk` job
The `e2e:test-on-gdk` job runs automatically in most merge requests. It triggers a child pipeline that builds and installs a GDK instance from your merge request's changes, and then executes end-to-end tests against that GDK instance.
##### How does it work?
In the `gitlab-org/gitlab` pipeline:

1. The `build-gdk-image` job uses the code from the merge request to build a Docker image for a GDK instance.
1. The `e2e:test-on-gdk` trigger job creates a child pipeline that executes the end-to-end tests against GDK instances launched from the image built in the previous job.

For more details, see the documentation for the `e2e:test-on-gdk` pipeline.
#### With merged results pipelines
In a merged results pipeline, the pipeline runs on a new ref that contains the merge result of the source and target branches. The end-to-end tests in a merged results pipeline use this new ref instead of the head of the merge request source branch.
#### Running custom tests
The existing scenarios that run in the downstream `gitlab-qa-mirror` pipeline include many tests, but sometimes you might want to run a test, or a group of tests, that differs from the groups in any of the existing scenarios. For example, when we dequarantine a flaky test, we first want to make sure that it's no longer flaky. We can do that by running the `_ee:quarantine` manual job. When you select the name (not the play icon) of a manual job, you are prompted to enter variables. You can use any of the variables that can be used with `gitlab-qa`, as well as these:
| Variable | Description |
|----------|-------------|
| `QA_SCENARIO` | The scenario to run (default `Test::Instance::Image`). |
| `QA_TESTS` | The tests to run (no default, which means run all the tests in the scenario). Use file paths as you would when running tests with RSpec. For example, `qa/specs/features/ee/browser_ui` would include all the EE UI tests. |
| `QA_RSPEC_TAGS` | The RSpec tags to add (default `--tag quarantine`). |
For now, manual jobs with custom variables don't use the same variables when retried, so if you want to run the same tests multiple times, specify the same variables in each `custom-parallel` job (up to as many of the 10 available jobs as you want to run).
#### Selective test execution
To limit the number of tests executed in a merge request, the tests to execute are selected dynamically, based on the changed files and the merge request labels. The following criteria determine which tests run:
- Changes in `qa` framework code execute the full suite.
- Changes in a particular `_spec.rb` file in the `qa` folder execute only that particular test. In this case, Knapsack is not used to run jobs in parallel.
- A merge request with backend changes and the `devops::manage` label executes all e2e tests related to the `manage` stage. In this case, jobs run in parallel using Knapsack.
##### Overriding selective test execution
To override selective test execution and trigger the full suite, add the `pipeline:run-all-e2e` label to the merge request.
#### Skipping end-to-end tests
In some cases, it may not be necessary to run the end-to-end test suite.
Examples could include:
- ~"Stuff that should Just Work"
- Small refactors
- A small requested change during review, that doesn’t warrant running the entire suite a second time
Skip running end-to-end tests by applying the `pipeline:skip-e2e` label to the merge request.
## Test pipeline tools and configuration

### Test parallelization
Our CI setup uses the `knapsack` gem to enable test parallelization. Knapsack reports are automatically generated and stored in the `knapsack-reports` GCS bucket within the `gitlab-qa-resources` project. The `KnapsackReport` helper manages the report generation and upload process.
### Test metrics
To enhance test health visibility, a custom setup exports the pipeline’s test execution results to an InfluxDB instance, with results visualized on Grafana dashboards.
### Test reports

#### Allure report
For additional test results visibility, tests that run on pipelines generate and host Allure test reports.
The `QA` framework uses the Allure RSpec gem to generate source files for the Allure test report. An additional job in the pipeline:

- Fetches these source files from all test jobs.
- Generates and uploads the report to the `gitlab-qa-allure-report` S3 bucket, located in the AWS group project `eng-quality-ops-ci-cd-shared-infra`.

A common CI template for report uploading is stored in `allure-report.yml`.
##### Merge requests
When these tests are executed in the scope of merge requests, the Allure report is uploaded to the GCS bucket and a bot comment is added linking to the respective reports.
##### Scheduled pipelines
Scheduled pipelines for these tests contain a `generate-allure-report` job under the `Report` stage. They also output a link to the current test report. Each type of scheduled pipeline generates a static link for the latest test report according to its stage. You can find a list of these links in the GitLab handbook.
### Provisioning

Provisioning of all components is performed by the `engineering-productivity-infrastructure` project.
### Exporting metrics in CI
Use these environment variables to configure metrics export:
| Variable | Required | Information |
|----------|----------|-------------|
| `QA_INFLUXDB_URL` | `true` | Should be set to `https://influxdb.quality.gitlab.net`. No default value. |
| `QA_INFLUXDB_TOKEN` | `true` | InfluxDB write token that can be found under the Influxdb auth tokens document in the Gitlab-QA 1Password vault. No default value. |
| `QA_RUN_TYPE` | `false` | Arbitrary name for the test execution, such as `e2e:test-on-omnibus`. Automatically inferred from the project name for live environment test executions. No default value. |
| `QA_EXPORT_TEST_METRICS` | `false` | Flag to enable or disable metrics export to InfluxDB. Defaults to `false`. |
| `QA_SAVE_TEST_METRICS` | `false` | Flag to enable or disable saving metrics as a JSON file. Defaults to `false`. |
## How do you run the tests?
If you are not testing code in a merge request, there are two main options for running the tests. If you want to run the existing tests against a live GitLab instance or against a pre-built Docker image, use the GitLab QA orchestrator. See also examples of the test scenarios you can run by using the orchestrator.
On the other hand, if you would like to run against a local development GitLab environment, you can use the GitLab Development Kit (GDK). Refer to the instructions in the QA README and the section below.
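As a sketch, running a subset of tests from the `qa` directory against a local GDK instance might look like the following. The exact invocation is documented in the QA README; the URL, spec path, and `bin/qa` arguments here are assumptions:

```shell
# Hypothetical invocation, composed here for illustration; run the printed
# command from the gitlab/qa directory (see the QA README for the
# authoritative form):
GDK_URL="http://localhost:3000"             # assumed default GDK address
SPEC="qa/specs/features/ee/browser_ui"      # example path used earlier on this page
CMD="bundle exec bin/qa Test::Instance::All ${GDK_URL} -- ${SPEC}"
echo "$CMD"
```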
### Running tests that require special setup
Learn how to perform tests that require special setup or consideration to run on your local environment.
## How do you write tests?
Before you write new tests, review the GitLab QA architecture.
After you’ve decided where to put test environment orchestration scenarios and instance-level scenarios, take a look at the GitLab QA README, the GitLab QA orchestrator README, and the already existing instance-level scenarios.
### Consider not writing an end-to-end test
We should follow these best practices for end-to-end tests:
- Do not write an end-to-end test if a lower-level feature test exists. End-to-end tests require more work and resources.
- Troubleshooting for end-to-end tests can be more complex as connections to the application under test are not known.
## Continued reading

### Getting started with E2E testing
- Beginner's Guide: An introductory guide to help new contributors get started with E2E testing
- Flows: Overview of `Flows` used to capture reusable sequences of actions in tests
- Page objects: Explanation of page objects and their role in test design
- Resources: Overview of the `Resources` class used for creating test data
### Best practices
- Best practices when writing end-to-end tests: Guidelines for efficient and reliable E2E testing
- Dynamic element validation: Techniques for handling dynamic elements in tests
- Execution context selection: Tips for choosing the right execution context for tests to run on
- Testing with feature flags: Managing feature flags during tests
- RSpec metadata for end-to-end tests: Using metadata to organize and categorize tests
- Test users: Guidelines for creating and managing test users
- Waits: Best practices for using waits to handle asynchronous elements
- Style guide for writing end-to-end tests: Standards and conventions to ensure consistency in E2E tests
### Testing infrastructure
- Test pipelines: Overview of the pipeline setup for E2E tests, including parallelization and CI configuration
- Test infrastructure for cloud integrations: Describes cloud-specific setups
### Running and troubleshooting tests
- Running tests: Instructions for executing tests
- Running tests that require special setup: Specific setup requirements for certain tests
- Troubleshooting: Common issues encountered during E2E testing and solutions
### Miscellaneous
- Test Platform Sub-Department handbook: Topics related to our vision, monitoring practices, failure triage processes, etc
- `gitlab-qa`: For information regarding the use of the GitLab QA orchestrator
- `customers-gitlab-com` (internal only): For guides that are specific to the CustomersDot platform
## Where can you ask for help?
You can ask questions in the `#test-platform` channel on Slack (GitLab internal), or you can find an issue you would like to work on in the `gitlab` issue tracker.