# End-to-end testing Best Practices

This is a tailored extension of the Best Practices found in the testing guide.

- Class and module naming
- Link a test to its test case
- Prefer API over UI
- Avoid superfluous expectations
- Prefer `aggregate_failures` when there are back-to-back expectations
- Prefer `aggregate_failures` when there are multiple expectations
- Avoid multiple actions in `expect do ... raise_error` blocks
- Prefer to split tests across multiple files
- `let` variables vs instance variables
- Limit the use of the UI in `before(:context)` and `after` hooks
- Ensure tests do not leave the browser logged in
- Tag tests that require administrator access
- Prefer `Commit` resource over `ProjectPush`
- Preferred method to blur elements
- Ensure `expect` statements wait efficiently
- Use logger over puts
## Class and module naming

The QA framework uses Zeitwerk for class and module autoloading. The default Zeitwerk inflector simply converts `snake_cased` file names to `PascalCased` module or class names. It is advised to stick to this pattern to avoid manual maintenance of inflections.

In case custom inflection logic is needed, custom inflectors are added in the `qa.rb` file in the `loader.inflector.inflect` method invocation.
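For illustration, a minimal sketch of registering a custom inflection through Zeitwerk's standard API (the `"oauth" => "OAuth"` pair is a hypothetical example, not an inflection the framework necessarily defines):

```ruby
require 'zeitwerk'

loader = Zeitwerk::Loader.new

# Without this override, Zeitwerk would expect oauth.rb to define Oauth;
# the custom inflection maps the file name to OAuth instead.
loader.inflector.inflect(
  "oauth" => "OAuth"
)
```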
## Link a test to its test case
Every test should have a corresponding test case in the GitLab project Test Cases as well as a results issue in the Quality Test Cases project.

If a test case issue does not yet exist, you can create one yourself. To do so, create a new issue in the Test Cases GitLab project with a placeholder title. After the test case URL is linked to a test in the code, when the test is run in a pipeline that has reporting enabled, the `report-results` script automatically updates the test case and the results issue.

If a results issue does not yet exist, the `report-results` script automatically creates one and links it to its corresponding test case.

To link a test case to a test in the code, you must manually add a `testcase` RSpec metadata tag. In most cases, a single test is associated with a single test case.
For example:
```ruby
RSpec.describe 'Stage' do
  describe 'General description of the feature under test' do
    it 'test name', testcase: 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/:test_case_id' do
      ...
    end

    it 'another test', testcase: 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/:another_test_case_id' do
      ...
    end
  end
end
```
### For shared tests
Most tests are defined by a single line of a spec file, which is why those tests can be linked to a single test case via the `testcase` tag.

However, some tests don't have a one-to-one relationship between a line of a spec file and a test case. This is because some tests are defined in a way that means a single line is associated with multiple tests, including:
- Parallelized tests.
- Templated tests.
- Tests in shared examples that include more than one example.
In those and similar cases we need to include the test case link by other means.

To illustrate, there are two tests in the shared examples in `qa/specs/features/ee/browser_ui/3_create/repository/restrict_push_protected_branch_spec.rb`:
```ruby
shared_examples 'unselected maintainer' do |testcase|
  it 'user fails to push', testcase: testcase do
    ...
  end
end

shared_examples 'selected developer' do |testcase|
  it 'user pushes and merges', testcase: testcase do
    ...
  end
end
```
Consider the following test that includes the shared examples:
```ruby
RSpec.describe 'Create' do
  describe 'Restricted protected branch push and merge' do
    context 'when only one user is allowed to merge and push to a protected branch' do
      ...
      it_behaves_like 'unselected maintainer', 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/347775'
      it_behaves_like 'selected developer', 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/347774'
    end

    context 'when only one group is allowed to merge and push to a protected branch' do
      ...
      it_behaves_like 'unselected maintainer', 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/347772'
      it_behaves_like 'selected developer', 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/347773'
    end
  end
end
```
We recommend creating four associated test cases, two for each shared example.
## Prefer API over UI

The end-to-end testing framework can fabricate its resources on a case-by-case basis. Resources should be fabricated via the API wherever possible: fabricating through the API instead of the UI saves both time and money.

Learn more about resources.
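As a minimal sketch of API-based fabrication following the framework's resource pattern (the `Resource::Project` class and its `name` attribute are assumed here for illustration):

```ruby
# Fabricating the project through the REST API avoids clicking through
# the UI, so the test only drives the UI it actually needs to verify.
project = Resource::Project.fabricate_via_api! do |project|
  project.name = 'my-test-project'
end
```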
## Avoid superfluous expectations

To keep tests lean, it is important that we only test what we need to test. Ensure that you do not add any `expect()` statements that are unrelated to what needs to be tested.
For example:
```ruby
#=> Good
Flow::Login.sign_in

Page::Main::Menu.perform do |menu|
  expect(menu).to be_signed_in
end

#=> Bad
Flow::Login.sign_in(as: user)

Page::Main::Menu.perform do |menu|
  expect(menu).to be_signed_in
  expect(page).to have_content(user.name) #=> we already validated being signed in. redundant.
  expect(menu).to have_element(:nav_bar) #=> likely unnecessary. already validated at a lower level. the test doesn't call for validating this.
end

#=> Good
issue = Resource::Issue.fabricate_via_api! do |issue|
  issue.name = 'issue-name'
end

Project::Issues::Index.perform do |index|
  expect(index).to have_issue(issue)
end

#=> Bad
issue = Resource::Issue.fabricate_via_api! do |issue|
  issue.name = 'issue-name'
end

Project::Issues::Index.perform do |index|
  expect(index).to have_issue(issue)
  expect(page).to have_content(issue.name) #=> page content check is redundant as the issue was already validated in the line above.
end
```
## Prefer `aggregate_failures` when there are back-to-back expectations

See [Prefer `aggregate_failures` when there are multiple expectations](#prefer-aggregate_failures-when-there-are-multiple-expectations).
## Prefer `aggregate_failures` when there are multiple expectations

In cases where there must be multiple expectations within a test case, it is preferable to use `aggregate_failures`. This allows you to group a set of expectations and see all the failures together, rather than having the test abort on the first failure.
For example:
```ruby
#=> Good
Page::Search::Results.perform do |search|
  search.switch_to_code

  aggregate_failures 'testing search results' do
    expect(search).to have_file_in_project(template[:file_name], project.name)
    expect(search).to have_file_with_content(template[:file_name], content[0..33])
  end
end

#=> Bad
Page::Search::Results.perform do |search|
  search.switch_to_code

  expect(search).to have_file_in_project(template[:file_name], project.name)
  expect(search).to have_file_with_content(template[:file_name], content[0..33])
end
```
Attach the `:aggregate_failures` metadata to the example if multiple expectations are separated by statements.
```ruby
#=> Good
it 'searches', :aggregate_failures do
  Page::Search::Results.perform do |search|
    expect(search).to have_file_in_project(template[:file_name], project.name)

    search.switch_to_code

    expect(search).to have_file_with_content(template[:file_name], content[0..33])
  end
end

#=> Bad
it 'searches' do
  Page::Search::Results.perform do |search|
    expect(search).to have_file_in_project(template[:file_name], project.name)

    search.switch_to_code

    expect(search).to have_file_with_content(template[:file_name], content[0..33])
  end
end
```
## Avoid multiple actions in `expect do ... raise_error` blocks

When you wrap multiple actions in a single `expect do ... end.not_to raise_error` or `expect do ... end.to raise_error` block, it can be hard to debug the actual cause of the failure, because of how the logs are printed. Important information can be truncated or missing altogether.

For example, if you encapsulate some actions and expectations in a private method in the test, like `expect_owner_permissions_allow_delete_issue`:
it "has Owner role with Owner permissions" do
Page::Dashboard::Projects.perform do |projects|
projects.filter_by_name(project.name)
expect(projects).to have_project_with_access_role(project.name, 'Owner')
end
expect_owner_permissions_allow_delete_issue
end
Then, in the method itself, multiple actions and expectations end up wrapped in a single `expect do ... end.not_to raise_error` block.
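A minimal sketch of what such a method could look like; only the method name comes from the example above, the page objects and steps inside are illustrative assumptions:

```ruby
def expect_owner_permissions_allow_delete_issue
  # Several distinct actions run inside one expect block, so when any of
  # them raises, the failure output points at the whole block and the
  # offending step is hard to identify from the logs.
  expect do
    issue.visit!

    Page::Project::Issue::Show.perform(&:delete_issue)

    Page::Project::Issues::Index.perform do |index|
      expect(index).to have_no_issue(issue)
    end
  end.not_to raise_error
end
```

Splitting those actions into separate steps, each with its own focused expectation, makes the failing step show up directly in the test output.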