> Introduced in GitLab 13.12.
The browser-based crawler works by loading the target application into a specially-instrumented Chromium browser. A snapshot of the page is taken before a search to find any actions that a user might perform, such as clicking on a link or filling in a form. For each action found, the crawler executes it, takes a new snapshot, and determines what in the page changed from the previous snapshot. Crawling continues by taking more snapshots and finding subsequent actions.
The benefit of crawling by following user actions in a browser is that the crawler can interact with the target application much like a real user would, identifying complex flows that traditional web crawlers don’t understand. This results in better coverage of the website.
Compared with the current DAST AJAX crawler, the browser-based crawler should provide greater coverage for most web applications. It replaces the AJAX crawler and is specifically designed to maximize crawl coverage in modern web applications. While both crawlers are currently used in conjunction with the existing DAST scanner, the combination of the browser-based crawler with the DAST scanner is much more effective at finding and testing every page in an application.
## Enable browser-based crawler

The browser-based crawler is an extension to the GitLab DAST product. To use it, include DAST in the CI/CD configuration and enable the browser-based crawler using CI/CD variables:
- Ensure the DAST prerequisites are met.
- Include the DAST CI template.
- Set the target website using the `DAST_WEBSITE` CI/CD variable.
- Set the CI/CD variable `DAST_BROWSER_SCAN` to `"true"`.
An example configuration might look like the following:
```yaml
include:
  - template: DAST.gitlab-ci.yml

dast:
  variables:
    DAST_WEBSITE: "https://example.com"
    DAST_BROWSER_SCAN: "true"
```
The browser-based crawler can be configured using CI/CD variables.
| CI/CD variable | Type | Description |
|:---------------|:-----|:------------|
| `DAST_WEBSITE` | URL | The URL of the website to scan. |
| `DAST_BROWSER_SCAN` | boolean | Configures DAST to use the browser-based crawler engine. |
| `DAST_BROWSER_ALLOWED_HOSTS` | List of strings | Hostnames included in this variable are considered in scope when crawled. By default the `DAST_WEBSITE` hostname is included in the allowed hosts list. |
| `DAST_BROWSER_EXCLUDED_HOSTS` | List of strings | Hostnames included in this variable are considered excluded and connections are forcibly dropped. |
| `DAST_BROWSER_IGNORED_HOSTS` | List of strings | Hostnames included in this variable are accessed but not reported against. |
| `DAST_BROWSER_MAX_ACTIONS` | number | The maximum number of actions that the crawler performs. For example, clicking a link or filling a form. |
| `DAST_BROWSER_MAX_DEPTH` | number | The maximum number of chained actions that the crawler takes. For example, `Click -> Form Fill -> Click` is a depth of three. |
| `DAST_BROWSER_NUMBER_OF_BROWSERS` | number | The maximum number of concurrent browser instances to use. For shared runners on GitLab.com, we recommend a maximum of three. Private runners with more resources may benefit from a higher number, but are likely to produce little benefit after five to seven instances. |
| `DAST_BROWSER_COOKIES` | dictionary | A cookie name and value to be added to every request. |
| `DAST_BROWSER_LOG` | List of strings | A list of modules and their intended log level. |
| `DAST_BROWSER_NAVIGATION_TIMEOUT` | Duration string | The maximum amount of time to wait for a browser to navigate from one page to another. |
| `DAST_BROWSER_ACTION_TIMEOUT` | Duration string | The maximum amount of time to wait for a browser to complete an action. |
| `DAST_BROWSER_STABILITY_TIMEOUT` | Duration string | The maximum amount of time to wait for a browser to consider a page loaded and ready for analysis. |
| `DAST_BROWSER_NAVIGATION_STABILITY_TIMEOUT` | Duration string | The maximum amount of time to wait for a browser to consider a page loaded and ready for analysis after a navigation completes. |
| `DAST_BROWSER_ACTION_STABILITY_TIMEOUT` | Duration string | The maximum amount of time to wait for a browser to consider a page loaded and ready for analysis after completing an action. |
| `DAST_BROWSER_SEARCH_ELEMENT_TIMEOUT` | Duration string | The maximum amount of time to allow the browser to search for new elements or navigations. |
| `DAST_BROWSER_EXTRACT_ELEMENT_TIMEOUT` | Duration string | The maximum amount of time to allow the browser to extract newly found elements or navigations. |
| `DAST_BROWSER_ELEMENT_TIMEOUT` | Duration string | The maximum amount of time to wait for an element before determining it is ready for analysis. |
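For example, the scope-control variables accept hostnames. The following is a minimal sketch that assumes the list variables take comma-separated values; the hostnames shown are illustrative:

```yaml
include:
  - template: DAST.gitlab-ci.yml

dast:
  variables:
    DAST_WEBSITE: "https://example.com"
    DAST_BROWSER_SCAN: "true"
    # Assumption: hostname lists are provided as comma-separated strings.
    DAST_BROWSER_ALLOWED_HOSTS: "example.com,api.example.com"
    # Connections to this host are forcibly dropped during the crawl.
    DAST_BROWSER_EXCLUDED_HOSTS: "analytics.example.com"
```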
Other DAST variables, such as `DAST_ZAP_LOG_CONFIGURATION`, are also compatible with browser-based crawler scans.
## Vulnerability detection

While the browser-based crawler crawls modern web applications efficiently, vulnerability detection is still managed by the standard DAST/Zed Attack Proxy (ZAP) solution.
The crawler runs the target website in a browser with DAST/ZAP configured as the proxy server. This ensures that all requests and responses made by the browser are passively scanned by DAST/ZAP. When running a full scan, active vulnerability checks executed by DAST/ZAP do not use a browser. This difference in how vulnerabilities are checked can cause issues that require certain features of the target website to be disabled to ensure the scan works as intended.
For example, for a target website that contains forms with Anti-CSRF tokens, a passive scan works as intended because the browser displays pages and forms as if a user is viewing the page. However, active vulnerability checks that run in a full scan cannot submit forms containing Anti-CSRF tokens. In such cases, we recommend you disable Anti-CSRF tokens when running a full scan.
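As a minimal sketch, a full scan combines the browser-based crawler with the existing `DAST_FULL_SCAN_ENABLED` DAST variable; the target URL is illustrative:

```yaml
include:
  - template: DAST.gitlab-ci.yml

dast:
  variables:
    DAST_WEBSITE: "https://example.com"
    DAST_BROWSER_SCAN: "true"
    # Enables active vulnerability checks, which run without a browser.
    DAST_FULL_SCAN_ENABLED: "true"
```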
## Managing scan time

Running the browser-based crawler is expected to give better coverage for many web applications than the normal GitLab DAST solution. This can come at a cost of increased scan time.
You can manage the trade-off between coverage and scan time with the following measures, which are combined in the example after this list:

- Limit the number of actions executed by the browser with the variable `DAST_BROWSER_MAX_ACTIONS`. The default is `10,000`.
- Limit the page depth that the browser-based crawler checks coverage on with the variable `DAST_BROWSER_MAX_DEPTH`. The crawler uses a breadth-first search strategy, so pages with smaller depth are crawled first. The default is `10`.
- Vertically scale the runner and use a higher number of browsers with the variable `DAST_BROWSER_NUMBER_OF_BROWSERS`. The default is `3`.
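For example, a configuration that reduces scan time at the cost of some coverage might look like the following; the limit values are illustrative, not recommendations:

```yaml
include:
  - template: DAST.gitlab-ci.yml

dast:
  variables:
    DAST_WEBSITE: "https://example.com"
    DAST_BROWSER_SCAN: "true"
    # Illustrative limits: lower values shorten the scan but reduce coverage.
    DAST_BROWSER_MAX_ACTIONS: "3000"
    DAST_BROWSER_MAX_DEPTH: "7"
    # More concurrent browsers require a larger runner.
    DAST_BROWSER_NUMBER_OF_BROWSERS: "5"
```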
Due to poor network conditions or heavy application load, the default timeouts may not be suitable for your application.

Browser-based scans offer the ability to adjust various timeouts to ensure the scan continues smoothly as it transitions from one page to the next. These values are configured using a Duration string, which allows you to specify durations with a unit suffix: `m` for minutes, `s` for seconds, and `ms` for milliseconds. For example, one minute and thirty seconds can be written as `1m30s` or `90s`.
Navigations, or the act of loading a new page, usually require the most amount of time because they load multiple new resources such as JavaScript or CSS files. Depending on the size of these resources, the default `DAST_BROWSER_NAVIGATION_TIMEOUT` may not be sufficient.
Stability timeouts, such as those configurable with `DAST_BROWSER_STABILITY_TIMEOUT`, `DAST_BROWSER_NAVIGATION_STABILITY_TIMEOUT`, and `DAST_BROWSER_ACTION_STABILITY_TIMEOUT`, can also be configured. Stability timeouts determine when browser-based scans consider a page fully loaded. Browser-based scans consider a page loaded when:

- The `DOMContentLoaded` event has fired.
- Depending on whether the browser executed a navigation, was forcibly transitioned, or completed an action, there are no new Document Object Model (DOM) modification events after the applicable `DAST_BROWSER_NAVIGATION_STABILITY_TIMEOUT`, `DAST_BROWSER_STABILITY_TIMEOUT`, or `DAST_BROWSER_ACTION_STABILITY_TIMEOUT` duration.

After these events have occurred, browser-based scans consider the page loaded and ready, and attempt the next action.
If your application experiences latency or returns many navigation failures, consider adjusting the timeout values such as in this example:
```yaml
include:
  - template: DAST.gitlab-ci.yml

dast:
  variables:
    DAST_WEBSITE: "https://my.site.com"
    DAST_BROWSER_NAVIGATION_TIMEOUT: "25s"
    DAST_BROWSER_ACTION_TIMEOUT: "10s"
    DAST_BROWSER_STABILITY_TIMEOUT: "15s"
    DAST_BROWSER_NAVIGATION_STABILITY_TIMEOUT: "15s"
    DAST_BROWSER_ACTION_STABILITY_TIMEOUT: "3s"
```
## Debugging scans using logging

Logging can be used to help you troubleshoot a scan.

The CI/CD variable `DAST_BROWSER_LOG` configures the logging level for particular modules of the crawler. Each module represents a component of the browser-based crawler and is separated so that debug logs can be configured just for the area of the crawler that requires further inspection. For more details, see the crawler modules table below.
For example, the following job definition enables the browsing module and the authentication module to be logged in debug mode:
```yaml
include:
  - template: DAST.gitlab-ci.yml

dast:
  variables:
    DAST_WEBSITE: "https://my.site.com"
    DAST_BROWSER_SCAN: "true"
    DAST_BROWSER_LOG: "brows:debug,auth:debug"
```
Log messages have the format `[time] [log level] [log module] [message] [additional properties]`.

For example, the following log entry has level `INF` (INFO), is part of the `CRAWL` log module, has the message `Crawled path`, and has the additional properties `nav_id` and `path`:

```plaintext
2021-04-21T00:34:04.000 INF CRAWL Crawled path nav_id=0cc7fd path="LoadURL [https://my.site.com:8090]"
```
The modules that can be configured for logging are as follows:
| Log module | Component overview |
|:-----------|:-------------------|
| `AUTH`  | Used for creating an authenticated scan. |
| `BROWS` | Used for querying the state or page of the browser. |
| `BPOOL` | The set of browsers that are leased out for crawling. |
| `CRAWL` | Used for the core crawler algorithm. |
| `DATAB` | Used for persisting data to the internal database. |
| `LEASE` | Used to create browsers to add them to the browser pool. |
| `MAIN`  | Used for the flow of the main event loop of the crawler. |
| `NAVDB` | Used for persistence mechanisms to store navigation entries. |
| `REPT`  | Used for generating reports. |
| `STAT`  | Used for general statistics while running the scan. |