Introduced in GitLab 13.12.
The browser-based crawler works by loading the target application into a specially-instrumented Chromium browser. A snapshot of the page is taken before it is searched for actions that a user might perform, such as clicking a link or filling in a form. For each action found, the crawler executes it, takes a new snapshot, and determines what on the page changed from the previous snapshot. Crawling continues by taking more snapshots and finding subsequent actions.
The benefit of crawling by following user actions in a browser is that the crawler can interact with the target application much like a real user would, identifying complex flows that traditional web crawlers don’t understand. This results in better coverage of the website.
Scanning a web application with both the browser-based crawler and GitLab DAST should provide greater coverage, compared with only GitLab DAST. The new crawler does not support API scanning or the DAST AJAX crawler.
The browser-based crawler is an extension to the GitLab DAST product. DAST should be included in the CI/CD configuration and the browser-based crawler enabled using CI/CD variables:
- Install the DAST prerequisites.
- Include the DAST CI template.
- Set the target website using the `DAST_WEBSITE` CI/CD variable.
- Set the CI/CD variable `DAST_BROWSER_SCAN` to `"true"`.
An example configuration might look like the following:
```yaml
include:
  - template: DAST.gitlab-ci.yml

variables:
  DAST_WEBSITE: "https://example.com"
  DAST_BROWSER_SCAN: "true"
```
The browser-based crawler can be configured using CI/CD variables.
| CI/CD variable | Type | Description |
|----------------|------|-------------|
| `DAST_WEBSITE` | URL | The URL of the website to scan. |
| `DAST_BROWSER_SCAN` | boolean | Configures DAST to use the browser-based crawler engine. |
| `DAST_BROWSER_ALLOWED_HOSTS` | List of strings | Hostnames included in this variable are considered in scope when crawled. By default the `DAST_WEBSITE` hostname is included in the allowed hosts. |
| `DAST_BROWSER_EXCLUDED_HOSTS` | List of strings | Hostnames included in this variable are considered excluded and connections are forcibly dropped. |
| `DAST_BROWSER_IGNORED_HOSTS` | List of strings | Hostnames included in this variable are accessed but not reported against. |
| `DAST_BROWSER_MAX_ACTIONS` | number | The maximum number of actions that the crawler performs, for example clicking a link or filling a form. |
| `DAST_BROWSER_MAX_DEPTH` | number | The maximum number of chained actions that the crawler takes. For example, `Click -> Form Fill -> Click` is a depth of three. |
| `DAST_BROWSER_NUMBER_OF_BROWSERS` | number | The maximum number of concurrent browser instances to use. For shared runners on GitLab.com, we recommend a maximum of three. Private runners with more resources may benefit from a higher number, but are likely to see little benefit beyond five to seven instances. |
| `DAST_BROWSER_COOKIES` | dictionary | A cookie name and value to be added to every request. |
| `DAST_BROWSER_LOG` | List of strings | A list of modules and their intended log level. |
| `DAST_AUTH_URL` | string | The URL of the page that hosts the sign-in HTML form. |
| `DAST_USERNAME` | string | The username to enter into the username field on the sign-in HTML form. |
| `DAST_PASSWORD` | string | The password to enter into the password field on the sign-in HTML form. |
| `DAST_USERNAME_FIELD` | selector | A selector describing the username field on the sign-in HTML form. |
| `DAST_PASSWORD_FIELD` | selector | A selector describing the password field on the sign-in HTML form. |
| `DAST_SUBMIT_FIELD` | selector | A selector describing the element that, when clicked, submits the login form or the password form of a multi-page login process. |
| `DAST_FIRST_SUBMIT_FIELD` | selector | A selector describing the element that, when clicked, submits the username form of a multi-page login process. |
Several other DAST CI/CD variables, including `DAST_ZAP_LOG_CONFIGURATION`, are also compatible with browser-based crawler scans.
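For illustration, the host-scoping variables might be combined as in the following sketch. The hostnames are placeholders; substitute the hosts that your target application actually uses.

```yaml
include:
  - template: DAST.gitlab-ci.yml

variables:
  DAST_WEBSITE: "https://example.com"
  DAST_BROWSER_SCAN: "true"
  # In scope for crawling, in addition to the DAST_WEBSITE hostname
  DAST_BROWSER_ALLOWED_HOSTS: "api.example.com"
  # Connections to these hosts are forcibly dropped
  DAST_BROWSER_EXCLUDED_HOSTS: "logout.example.com"
  # Accessed during the crawl but not reported against
  DAST_BROWSER_IGNORED_HOSTS: "analytics.example.com"
```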
Selectors are used by CI/CD variables to specify the location of an element displayed on a page in a browser.
Selectors have the format `type`:`search string`. The crawler searches for the selector using the search string, based on the type.
| Selector type | Description |
|---------------|-------------|
| `css` | Searches for an HTML element having the supplied CSS selector. Selectors should be as specific as possible for performance reasons. |
| `id` | Searches for an HTML element with the provided element ID. |
| `name` | Searches for an HTML element with the provided element name. |
| `xpath` | Searches for an HTML element with the provided XPath. XPath searches are expected to be less performant than other searches. |
| None provided | Defaults to searching using a CSS selector. |
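As an example, selectors are most commonly used with the authentication variables. The sketch below assumes a hypothetical sign-in form; the element ID, CSS class, and XPath shown are illustrative only.

```yaml
include:
  - template: DAST.gitlab-ci.yml

variables:
  DAST_WEBSITE: "https://example.com"
  DAST_BROWSER_SCAN: "true"
  DAST_AUTH_URL: "https://example.com/sign-in"
  DAST_USERNAME: "user@example.com"
  DAST_PASSWORD: "password123"
  DAST_USERNAME_FIELD: "id:username"                    # search by element ID
  DAST_PASSWORD_FIELD: "css:.password-field"            # search by CSS selector
  DAST_SUBMIT_FIELD: "xpath://button[@type='submit']"   # search by XPath
```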
While the browser-based crawler crawls modern web applications efficiently, vulnerability detection is still managed by the standard DAST/Zed Attack Proxy (ZAP) solution.
The crawler runs the target website in a browser with DAST/ZAP configured as the proxy server. This ensures that all requests and responses made by the browser are passively scanned by DAST/ZAP. When running a full scan, active vulnerability checks executed by DAST/ZAP do not use a browser. This difference in how vulnerabilities are checked can cause issues that require certain features of the target website to be disabled to ensure the scan works as intended.
For example, for a target website that contains forms with Anti-CSRF tokens, a passive scan will scan as intended because the browser displays pages/forms as if a user is viewing the page. However, active vulnerability checks run in a full scan will not be able to submit forms containing Anti-CSRF tokens. In such cases we recommend you disable Anti-CSRF tokens when running a full scan.
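A full scan is enabled with the standard DAST CI/CD variable `DAST_FULL_SCAN_ENABLED`. As a minimal sketch, a browser-based full scan might be configured like this:

```yaml
include:
  - template: DAST.gitlab-ci.yml

variables:
  DAST_WEBSITE: "https://example.com"
  DAST_BROWSER_SCAN: "true"
  # Run active vulnerability checks in addition to passive scanning
  DAST_FULL_SCAN_ENABLED: "true"
```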
It is expected that running the browser-based crawler will result in better coverage for many web applications, when compared to the normal GitLab DAST solution. This can come at a cost of increased scan time.
You can manage the trade-off between coverage and scan time with the following measures, combined in the example after this list:
- Limit the number of actions executed by the browser with the variable `DAST_BROWSER_MAX_ACTIONS`. The default is `10000`.
- Limit the page depth that the browser-based crawler checks coverage on with the variable `DAST_BROWSER_MAX_DEPTH`. The crawler uses a breadth-first search strategy, so pages with smaller depth are crawled first. The default is `10`.
- Vertically scale the runner and use a higher number of browsers with the variable `DAST_BROWSER_NUMBER_OF_BROWSERS`. The default is `3`.
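For illustration, a configuration that tunes these limits might look like the following. The values are arbitrary examples, not recommendations; pick numbers that suit your application and runner resources.

```yaml
include:
  - template: DAST.gitlab-ci.yml

variables:
  DAST_WEBSITE: "https://example.com"
  DAST_BROWSER_SCAN: "true"
  DAST_BROWSER_MAX_ACTIONS: "5000"        # fewer actions than the default of 10000 shortens the scan
  DAST_BROWSER_MAX_DEPTH: "7"             # shallower than the default of 10; breadth-first, so shallow pages crawl first
  DAST_BROWSER_NUMBER_OF_BROWSERS: "5"    # more than the default of 3; requires a larger runner
```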
Logging can be used to help you troubleshoot a scan.
The CI/CD variable `DAST_BROWSER_LOG` configures the logging level for particular modules of the crawler. Each module represents a component of the browser-based crawler and is separated so that debug logs can be configured just for the area of the crawler that requires further inspection. For more details, see the crawler modules table below.
For example, the following job definition enables the browsing module and the authentication module to be logged in debug mode:
```yaml
include:
  - template: DAST.gitlab-ci.yml

variables:
  DAST_WEBSITE: "https://my.site.com"
  DAST_BROWSER_SCAN: "true"
  DAST_BROWSER_LOG: "brows:debug,auth:debug"
```
Log messages have the format `[time] [log level] [log module] [message] [additional properties]`. For example, the following log entry has level `INFO`, is part of the `CRAWL` log module, has the message `Crawled path`, and has the additional properties `nav_id` and `path`:

```plaintext
2021-04-21T00:34:04.000 INF CRAWL Crawled path nav_id=0cc7fd path="LoadURL [https://my.site.com:8090]"
```
The modules that can be configured for logging are as follows:
| Log module | Component overview |
|------------|--------------------|
| `AUTH` | Used for creating an authenticated scan. |
| `BROWS` | Used for querying the state/page of the browser. |
| `BPOOL` | The set of browsers that are leased out for crawling. |
| `CRAWL` | Used for the core crawler algorithm. |
| `DATAB` | Used for persisting data to the internal database. |
| `LEASE` | Used to create browsers to add them to the browser pool. |
| `MAIN` | Used for the flow of the main event loop of the crawler. |
| `NAVDB` | Used for persistence mechanisms to store navigation entries. |
| `REPT` | Used for generating reports. |
| `STAT` | Used for general statistics while running the scan. |