- Shared modules
- How to use the analyzers
- Analyzers development
- How to test the analyzers
- Analyzer scripts
- Versioning and release process
- Development of new analyzers
- Security and Build fixes of Go
Sec section analyzer development
Analyzers are shipped as Docker images that execute within a CI pipeline context. This guide describes development and testing practices across analyzers.
Shared modules
Several shared Go modules provide common behavior and interfaces across analyzers:

- The `command` Go package implements a CLI interface.
- The `common` project provides miscellaneous shared modules for logging, certificate handling, and directory search capabilities.
- The `report` Go package's `Report` and `Finding` structs marshal JSON reports.
- The `template` project scaffolds new analyzers.
How to use the analyzers
Analyzers are shipped as Docker images. For example, to run the Semgrep Docker image to scan the working directory:
- `cd` into the directory of the source code you want to scan.
- Run `docker login registry.gitlab.com` and provide a username and a personal or project access token with at least the `read_registry` scope.
- Run the Docker image:

  docker run \
    --interactive --tty --rm \
    --volume "$PWD":/tmp/app \
    --env CI_PROJECT_DIR=/tmp/app \
    -w /tmp/app \
    registry.gitlab.com/gitlab-org/security-products/analyzers/semgrep:latest /analyzer run

- The Docker container generates a report in the mounted project directory, with a report filename corresponding to the analyzer category. For example, SAST generates a file named `gl-sast-report.json`.
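If you have `jq` installed, a quick way to sanity-check the generated report is to inspect a few top-level fields. The field names below follow the SAST report schema; adjust the filename for other report types:

# Print the report schema version and the number of findings
jq '.version' gl-sast-report.json
jq '.vulnerabilities | length' gl-sast-report.json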
Analyzers development
To update the analyzer:
- Modify the Go source code.
- Build a new Docker image.
- Run the analyzer against its test project.
- Compare the generated report with what’s expected.
Here's how to create a Docker image named `analyzer`:

docker build -t analyzer .
For example, to test Secret Detection, run the following:

wget https://gitlab.com/gitlab-org/security-products/ci-templates/-/raw/master/scripts/compare_reports.sh
sh ./compare_reports.sh sd test/fixtures/gl-secret-detection-report.json test/expect/gl-secret-detection-report.json \
| patch -Np1 test/expect/gl-secret-detection-report.json && git commit -m 'Update expectation' test/expect/gl-secret-detection-report.json
rm compare_reports.sh
You can also compile the binary for your own environment and run it locally, but the `analyze` and `run` commands probably won't work, since the runtime dependencies of the analyzer are missing.
Here’s an example based on SpotBugs:
go build -o analyzer
./analyzer search test/fixtures
./analyzer convert test/fixtures/app/spotbugsXml.xml > ./gl-sast-report.json
Execution criteria
Enabling SAST requires including a predefined template in your GitLab CI/CD configuration.
The following independent criteria determine which analyzer needs to be run on a project:
- The SAST template uses `rules:exists` to determine which analyzer will be run, based on the presence of certain files. For example, the Brakeman analyzer runs when there are `.rb` files and a `Gemfile`.
- Each analyzer runs a customizable match interface before it performs the actual analysis. For example, Flawfinder checks for C/C++ files.
- For some analyzers that run on generic file extensions, there is a check based on a CI/CD variable. For example, Kubernetes manifests are written in YAML, so Kubesec runs only when `SCAN_KUBERNETES_MANIFESTS` is set to `true`.
Step 1 helps avoid wasting compute quota on analyzers that are not suitable for the project. However, due to technical limitations, it cannot be used for large projects. Therefore, step 2 acts as a final check to ensure a mismatched analyzer can exit early.
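For example, the CI/CD variable check described above can be reproduced locally by passing the variable to the container. A sketch, assuming the Kubesec analyzer is published under the usual analyzers registry path:

# Kubesec performs an analysis only when SCAN_KUBERNETES_MANIFESTS is "true"
docker run \
  --interactive --tty --rm \
  --volume "$PWD":/tmp/app \
  --env CI_PROJECT_DIR=/tmp/app \
  --env SCAN_KUBERNETES_MANIFESTS=true \
  -w /tmp/app \
  registry.gitlab.com/gitlab-org/security-products/analyzers/kubesec:latest /analyzer run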
How to test the analyzers
A video walkthrough is available that shows how Dependency Scanning analyzers use the downstream pipeline feature to test analyzers against test projects.
Testing local changes
To test local changes in the shared modules (such as `command` or `report`) for an analyzer, you can use the `replace` directive to load `command` with your local changes instead of using the version of `command` that has been tagged remotely. For example:
go mod edit -replace gitlab.com/gitlab-org/security-products/analyzers/command/v3=/local/path/to/command
Alternatively, you can achieve the same result by manually updating the `go.mod` file:
module gitlab.com/gitlab-org/security-products/analyzers/awesome-analyzer/v2
replace gitlab.com/gitlab-org/security-products/analyzers/command/v3 => /path/to/command
require (
...
gitlab.com/gitlab-org/security-products/analyzers/command/v3 v2.19.0
)
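To confirm that the `replace` directive is actually picked up, you can list how the module resolves in the current build; the local path should be shown as the replacement:

# Prints something like: .../command/v3 vX.Y.Z => /local/path/to/command
go list -m gitlab.com/gitlab-org/security-products/analyzers/command/v3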
Testing local changes in Docker
To use Docker with `replace` in the `go.mod` file:

- Copy the contents of `command` into the directory of the analyzer: `cp -r /path/to/command path/to/analyzer/command`.
- Add a copy statement in the analyzer's `Dockerfile`: `COPY command /command`.
- Update the `replace` statement to make sure it matches the destination of the `COPY` statement in the step above: `replace gitlab.com/gitlab-org/security-products/analyzers/command/v3 => /command`
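Putting these steps together, a typical local iteration looks roughly like this (paths are illustrative):

# 1. Vendor the local copy of command into the analyzer directory
cp -r /path/to/command path/to/analyzer/command

# 2. Point the replace directive at the destination of the Dockerfile COPY statement
cd path/to/analyzer
go mod edit -replace gitlab.com/gitlab-org/security-products/analyzers/command/v3=/command

# 3. Rebuild the analyzer image with the local module included
docker build -t analyzer .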
Analyzer scripts
The analyzer-scripts repository contains scripts that can be used to interact with most analyzers. They enable you to build, run, and debug analyzers in a GitLab CI-like environment, and are particularly useful for locally validating changes to an analyzer.
For more information, refer to the project README.
Versioning and release process
GitLab Security Products use a versioning system independent of GitLab `MAJOR.MINOR`. All products use a variation of Semantic Versioning and are available as Docker images.

`Major` is bumped with every new major release of GitLab, when breaking changes are allowed. `Minor` is bumped for new functionality, and `Patch` is reserved for bugfixes.
The analyzers are released as Docker images following this scheme:
- each push to the default branch will override the `edge` image tag
- each push to any `awesome-feature` branch will generate a matching `awesome-feature` image tag
- each Git tag will generate the corresponding `Major.Minor.Patch` image tag. A manual job allows overriding the corresponding `Major` and `latest` image tags to point to this `Major.Minor.Patch`.
In most circumstances, it is preferred to rely on the `MAJOR` image, which is automatically kept up to date with the latest advisories or patches to our tools. Our included CI templates pin to the major version, but users can override the version directly if preferred.
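For example, pulling by the `MAJOR` tag picks up rebuilds and patches automatically, while a fully qualified tag pins one exact build (the image name and version numbers below are only illustrative):

# Track the latest patched build of a major version
docker pull registry.gitlab.com/gitlab-org/security-products/analyzers/semgrep:4

# Or pin an exact version when reproducibility matters more than freshness
docker pull registry.gitlab.com/gitlab-org/security-products/analyzers/semgrep:4.1.7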
To release a new analyzer Docker image, there are two different options: a manual release process and an automatic release process.
The following diagram describes the Docker tags that are created when a new analyzer version is released:
Per our Continuous Deployment flow, new components that do not have a counterpart in the GitLab Rails application can be released at any time. Until the components are integrated with the existing application, iteration should not be blocked by our standard release cycle and process.
Manual release process
- Ensure that the `CHANGELOG.md` entry for the new analyzer is correct.
- Ensure that the release source (typically the `master` or `main` branch) has a passing pipeline.
- Create a new release for the analyzer project by selecting the Deployments menu on the left-hand side of the project window, then selecting the Releases sub-menu.
  - Select New release to open the New Release page.
  - In the Tag name drop down, enter the same version used in the `CHANGELOG.md`, for example `v2.4.2`, and select the option to create the tag (Create tag `v2.4.2` here).
  - In the Release title text box, enter the same version used above, for example `v2.4.2`.
  - In the Release notes text box, copy and paste the notes from the corresponding version in the `CHANGELOG.md`.
  - Leave all other settings as the default values.
  - Select Create release.
After following the above process and creating a new release, a new Git tag is created with the Tag name provided above. This triggers a new pipeline with the given tag version, and a new analyzer Docker image is built.

If the analyzer uses the `analyzer.yml` template, then the pipeline triggered as part of the New release process above automatically tags and deploys a new version of the analyzer Docker image.

If the analyzer does not use the `analyzer.yml` template, you'll need to manually tag and deploy a new version of the analyzer Docker image:
- Select the CI/CD menu on the left-hand side of the project window, then select the Pipelines sub-menu.
- A new pipeline should currently be running with the same tag used previously, for example `v2.4.2`.
- After the pipeline has completed, it will be in a `blocked` state.
- Select the Manual job play button on the right-hand side of the window and select `tag version` to tag and deploy a new version of the analyzer Docker image.
Use your best judgment to decide when to create a Git tag, which will then trigger the release job. If you can't decide, then ask for others' input.
Automatic release process
The following must be performed before the automatic release process can be used:
- Configure `CREATE_GIT_TAG: true` as a CI/CD environment variable.
- Check the Variables in the CI/CD project settings. Unless the project already inherits the `GITLAB_TOKEN` environment variable from the project group, create a project access token with complete read/write access to the API and configure `GITLAB_TOKEN` as a CI/CD environment variable that refers to this token.
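If you prefer to script this setup rather than use the UI, the project-level CI/CD variables API can create both variables. A sketch, with the project ID and token values as placeholders:

# Configure CREATE_GIT_TAG as a project CI/CD variable
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
  --form "key=CREATE_GIT_TAG" --form "value=true" \
  "https://gitlab.com/api/v4/projects/<PROJECT_ID>/variables"

# Store the project access token as a masked GITLAB_TOKEN variable
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
  --form "key=GITLAB_TOKEN" --form "value=<project_access_token>" --form "masked=true" \
  "https://gitlab.com/api/v4/projects/<PROJECT_ID>/variables"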
After the above steps have been completed, the automatic release process executes as follows:
- A project maintainer merges an MR into the default branch.
- The default pipeline is triggered, and the `upsert git tag` job is executed.
  - If the most recent version in the `CHANGELOG.md` matches one of the Git tags, the job is a no-op.
  - Otherwise, this job automatically creates a new release and Git tag using the Releases API (see the sketch after this list). The version and message are obtained from the most recent entry in the `CHANGELOG.md` file for the project.
- A pipeline is automatically triggered for the new Git tag. This pipeline releases the `latest`, `major`, `minor`, and `patch` Docker images of the analyzer.
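The release created by the `upsert git tag` job corresponds to a single call to the Releases API. A hedged sketch of an equivalent manual call, with the version and notes taken from the latest `CHANGELOG.md` entry (the project ID and values are placeholders):

# Create a release and its Git tag from the most recent changelog entry
curl --request POST --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  --data "name=v2.4.2" \
  --data "tag_name=v2.4.2" \
  --data "ref=main" \
  --data "description=Notes for v2.4.2 copied from CHANGELOG.md" \
  "https://gitlab.com/api/v4/projects/<PROJECT_ID>/releases"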
Steps to perform after releasing an analyzer
- After a new version of the analyzer Docker image has been tagged and deployed, test it with the corresponding test project.
- Announce the release on the relevant group Slack channel. Example message:

  FYI I've just released `ANALYZER_NAME` `ANALYZER_VERSION`. `LINK_TO_RELEASE`
Never delete a Git tag that has been pushed, as there is a good chance that the tag will be used and/or cached by the Go package registry.
Backporting a critical fix or patch
To backport a critical fix or patch to an earlier version, follow the steps below.
- Create a new branch from the tag you are backporting the fix to, if it doesn't exist (see the example after this list).
  - For example, if the latest stable tag is `v4` and you are backporting a fix to `v3`, create a new branch called `v3`.
- Submit a merge request targeting the branch you just created.
- After it's approved, merge the merge request into the branch.
- Create a new tag for the branch.
  - If the analyzer has the automatic release process enabled, a new version will be released.
  - If not, you have to follow the manual release process to release a new version.
- NOTE: the release pipeline will override the latest `edge` tag, so the most recent release pipeline's `tag edge` job may need to be re-run to avoid a regression for that tag.
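Continuing the `v4`/`v3` example above, the branch and tag commands look roughly like the following (the exact tag names are placeholders):

# Create the backport branch from the last v3 tag and push it
git checkout -b v3 v3.2.1
git push origin v3

# After the backport MR has been merged into the v3 branch, tag the patched release
git checkout v3 && git pull
git tag -a v3.2.2 -m "Backport critical fix"
git push origin v3.2.2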
Development of new analyzers
We occasionally need to build out new analyzer projects to support new frameworks and tools. In doing so we should follow our engineering Open Source guidelines, including licensing and code standards.
In addition, to write a custom analyzer that will integrate into the GitLab application, a minimal feature set is required:
Checklist
Verify whether the underlying tool has:
- A permissive software license.
- Headless execution (CLI tool).
- Bundle-able dependencies to be packaged as a Docker image, to be executed using GitLab Runner’s Linux or Windows Docker executor.
- Compatible projects that can be detected based on filenames or extensions.
- Offline execution (no internet access) or can be configured to use custom proxies and/or CA certificates.
Dockerfile
The `Dockerfile` should use an unprivileged user with the name `GitLab`. This is necessary to provide compatibility with Red Hat OpenShift instances, which don't allow containers to run as an admin (root) user. There are certain limitations to keep in mind when running a container as an unprivileged user, such as the fact that any files that need to be written on the Docker filesystem require the appropriate permissions for the `GitLab` user. See the following merge request for more details: Use GitLab user instead of root in Docker image.
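To verify that an analyzer image does not run as root, you can check the user configured in the image metadata or start a throwaway container. A minimal check, assuming a locally built image named `analyzer`:

# Show the user the image is configured to run as (should not be empty or "root")
docker inspect --format '{{.Config.User}}' analyzer

# Double-check from inside the container (most analyzer images ship the `id` command)
docker run --rm --entrypoint id analyzer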
Minimal vulnerability data
Please see our security-report-schemas for a full list of required fields. The security-report-schemas repository contains JSON schemas that list the required fields for each report type.
Location of Container Images
In order to restrict the number of people who have write access to the container registry, all images are to be published to the location below. The container registry in the development project must be made private.
- Group: `https://gitlab.com/security-products/`
- Project path: `https://gitlab.com/security-products/<NAME>` (example)
- Registry address: `registry.gitlab.com/security-products/<NAME>[/<IMAGE_NAME>]:[TAG]`
- Permissions
  - Top-level group
    - Maintainer: `@gitlab-org/secure/managers`, `@gitlab-org/govern/managers`
  - Project level
    - A deploy token with `read_registry` and `write_registry` access is used to push images.
    - The token will be entered by its creator as a protected and masked variable on the originating project (that is, the project under the `security-products` namespace).
- Project Settings
  - Visibility, project features, permissions.
    - Project visibility: Public. Uncheck "Users can request access".
    - Issues: disable.
    - Repository: set to "Only Project Members". Disable: Merge requests, Forks, Git LFS, Packages, CI/CD.
    - Disable remaining items: Analytics, Requirements, Wiki, Snippets, Pages, Operations.
  - Service Desk: disable
Each group in the Sec Section is responsible for:
- Managing the deprecation and removal schedule for their artifacts, and creating issues for this purpose.
- Creating and configuring projects under the new location.
- Configuring builds to push release artifacts to the new location.
- Removing or keeping images in old locations according to their own support agreements.
Daily rebuild of Container Images
The analyzer images are rebuilt on a daily basis to ensure that we frequently and automatically pull patches provided by vendors of the base images we rely on.
This process only applies to the images used in versions of GitLab matching the current `MAJOR` release. The intent is not to release a newer version each day, but rather to rebuild each active variant of an image and overwrite the corresponding tags:

- the `MAJOR.MINOR.PATCH` image tag (for example: `4.1.7`)
- the `MAJOR.MINOR` image tag (for example: `4.1`)
- the `MAJOR` image tag (for example: `4`)
- the `latest` image tag
The implementation of the rebuild process may vary depending on the project, though a shared CI configuration is available in our development ci-templates project to help achieve this.
Security and Build fixes of Go
The `Dockerfile` of the Secure analyzers implemented in Go must reference a `MAJOR` release of Go, and not a `MINOR` revision.
This ensures that the version of Go used to compile the analyzer includes all the security fixes available at a given time.
For example, the multi-stage Dockerfile of an analyzer must use the `golang:1.15-alpine` image to build the analyzer CLI, but not `golang:1.15.4-alpine`.
When a `MINOR` revision of Go is released, and when it includes security fixes, project maintainers must check whether the Secure analyzers need to be rebuilt. The version of Go used for the build should appear in the log of the build job corresponding to the release, and it can also be extracted from the Go binary using the `strings` command.
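For example, either of the following commands extracts the Go toolchain version from a compiled analyzer binary (the binary path is illustrative):

# Go embeds its own version string in every binary it builds
strings ./analyzer | grep -o 'go1\.[0-9.]*' | head -n 1

# Since Go 1.13, `go version` can also report the toolchain used to build a binary
go version ./analyzer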
If the latest image of the analyzer was built with the affected version of Go, then it needs to be rebuilt. To rebuild the image, maintainers can either:
- trigger a new pipeline for the Git tag that corresponds to the stable release
- create a new Git tag where the `BUILD` number is incremented
- trigger a pipeline for the default branch, where the `PUBLISH_IMAGES` variable is set to a non-empty value
Either way, a new Docker image is built and published with the same image tags: `MAJOR.MINOR.PATCH` and `MAJOR`.
This workflow assumes full compatibility between `MINOR` revisions of the same `MAJOR` release of Go. If there's a compatibility issue, the project pipeline will fail when running the tests. In that case, it might be necessary to reference a `MINOR` revision of Go in the `Dockerfile` and document that exception until the compatibility issue has been resolved.

Since it is NOT referenced in the `Dockerfile`, the `MINOR` revision of Go is NOT mentioned in the project changelog.
There may be times when it makes sense to use a build tag, as the changes made are build-related and don't require a changelog entry. For example, pushing Docker images to a new registry location.
Git tag to rebuild
When creating a new Git tag to rebuild the analyzer, the new tag has the same `MAJOR.MINOR.PATCH` version as before, but the `BUILD` number (as defined in semver) is incremented.

For instance, if the latest release of the analyzer is `v1.2.3`, and if the corresponding Docker image was built using an affected version of Go, then maintainers create the Git tag `v1.2.3+1` to rebuild the image. If the latest release is `v1.2.3+1`, then they create `v1.2.3+2`.

The build number is automatically removed from the image tag. To illustrate, creating a Git tag `v1.2.3+1` in the `gemnasium` project makes the pipeline rebuild the image, and push it as `gemnasium:1.2.3`.

The Git tag created to rebuild has a simple message that explains why the new build is needed. Example: `Rebuild with Go 1.15.6`. The tag has no release notes, and no release is created.
To create a new Git tag to rebuild the analyzer, follow these steps:
- Create a new Git tag and provide a message:

  git tag -a v1.2.3+1 -m "Rebuild with Go 1.15.6"

- Push the tags to the repo:

  git push origin --tags

- A new pipeline for the Git tag will be triggered, and a new image will be built and tagged.
- Run a new pipeline for the `master` branch in order to run the full suite of tests and generate a new vulnerability report for the newly tagged image. This is necessary because the release pipeline triggered in step 3 above runs only a subset of tests; for example, it does not perform a Container Scanning analysis.
Monthly release process
This should be done on the 18th of each month. However, this is a soft deadline and there is no harm in doing it within a few days after.
First, create a new issue for a release with a script from this repo: `./scripts/release_issue.rb MAJOR.MINOR`.
This issue will guide you through the whole release process. In general, you have to perform the following tasks:
- Check the list of supported technologies in GitLab documentation.
- Check that CI job definitions are still accurate in vendored CI/CD templates and all of the ENV vars are propagated to the Docker containers upon `docker run` per tool.
  - SAST vendored CI/CD template
  - Dependency Scanning vendored CI/CD template
  - Container Scanning CI/CD template

  If needed, go to the pipeline corresponding to the last Git tag, and trigger the manual job that controls the build of this image.
- Current bot accounts used in the pipeline
  - Account name: `@group_2452873_bot`
  - Use: Used for creating releases/tags
  - Member of: Group `gitlab-org/security-products`
  - Max role: `Developer`
  - Scope of the associated `GITLAB_TOKEN`:
  - Expiry Date of the associated `GITLAB_TOKEN`:
Dependency updates
All dependencies and upstream scanners (if any) used in the analyzer source are updated on a monthly cadence which primarily includes security fixes and non-breaking changes.
- Static Analysis team uses a custom internal tool (SastBot) to automate dependency management of all the SAST analyzers. SastBot generates MRs on the 8th of each month and distributes their assignment among Static Analysis team members to take them forward for review. For details on the process, see Dependency Update Automation.