# SaaS runners on Linux
When you run jobs on SaaS runners on Linux, the runners are on auto-scaled ephemeral virtual machine (VM) instances.
Each VM uses the Google Container-Optimized OS (COS) and the latest version of Docker Engine.
The default region for the VMs is
## Machine types available for private projects (x86-64)
For the SaaS runners on Linux, GitLab offers a range of machine types for use in private projects. For Free, Premium, and Ultimate plan customers, jobs on these instances consume the CI/CD minutes allocated to your namespace.
|                   | small                    | medium                    | large                    |
|-------------------|--------------------------|---------------------------|--------------------------|
| Specs             | 2 vCPUs, 8 GB RAM        | 4 vCPUs, 16 GB RAM        | 8 vCPUs, 32 GB RAM       |
| GitLab CI/CD tags | `saas-linux-small-amd64` | `saas-linux-medium-amd64` | `saas-linux-large-amd64` |
| Subscription      | Free, Premium, Ultimate  | Free, Premium, Ultimate   | Premium, Ultimate        |
The `small` machine type is the default. Your job runs on this machine type if you don't specify a `tags:` keyword in your `.gitlab-ci.yml` file.

CI/CD jobs that run on the `large` machine types consume CI/CD minutes at a different rate than jobs on the `small` machine type. Refer to the CI/CD minutes cost factor for the cost factor applied to each machine type based on its size.
## GPU-enabled SaaS runners on Linux
We offer GPU-enabled SaaS runners for compute-intensive workloads, such as ModelOps or HPC. Available to Premium and Ultimate plan customers, jobs on these instances consume the CI/CD minutes allocated to your namespace.
|                   | medium GPU                                             |
|-------------------|--------------------------------------------------------|
| Specs             | 4 vCPUs, 16 GB RAM, 1 Nvidia Tesla T4 GPU (or similar) |
| GitLab CI/CD tags | `saas-linux-medium-amd64-gpu-standard`                 |
As with all our Linux runners, your job runs in an isolated virtual machine (VM) with a bring-your-own-image policy. GitLab mounts the GPU from the host VM into your isolated environment. To use the GPU, you must use a Docker image with the GPU driver installed. For Nvidia GPUs, you can use their CUDA Toolkit.
### Example of GPU job
The following example of a `.gitlab-ci.yml` file uses the Nvidia CUDA base Ubuntu image. In the `script:` section, we install Python.
```yaml
gpu-job:
  stage: build
  tags:
    - saas-linux-medium-amd64-gpu-standard
  image: nvcr.io/nvidia/cuda:12.1.1-base-ubuntu22.04
  script:
    - apt-get update
    - apt-get install -y python3.10
    - python3.10 --version
```
If you don't want to install larger libraries such as TensorFlow or XGBoost each time you run a job, you can create your own image with all the required components pre-installed.
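For instance, a job could reference such a pre-built image like this (a minimal sketch; the job name, registry path, and image contents are hypothetical):

```yaml
gpu-job-prebuilt:
  stage: build
  tags:
    - saas-linux-medium-amd64-gpu-standard
  # Hypothetical image with the CUDA toolkit, Python, and TensorFlow pre-installed.
  image: registry.gitlab.com/my-group/my-project/cuda-ml:latest
  script:
    # Verify the GPU is visible to the framework.
    - python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```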
### Example of how to tag a job
To use a machine type other than `small`, add a `tags:` keyword to your job.
```yaml
stages:
  - Prebuild
  - Build
  - Unit Test

job_001:
  stage: Prebuild
  script:
    - echo "this job runs on the default (small) instance"

job_002:
  tags: [ saas-linux-medium-amd64 ]
  stage: Build
  script:
    - echo "this job runs on the medium instance"

job_003:
  tags: [ saas-linux-large-amd64 ]
  stage: Unit Test
  script:
    - echo "this job runs on the large instance"
```
## SaaS runners for GitLab projects
The `gitlab-shared-runners-manager-X.gitlab.com` fleet of runners is dedicated to GitLab projects and related community forks. These runners are backed by the Google Compute Engine `n1-standard-2` machine type and do not run untagged jobs. Unlike the machine types used for private projects, each virtual machine is reused up to 40 times.
## SaaS runners on Linux settings
Below are the settings for SaaS runners on Linux.
| Setting              | GitLab.com | Default |
|----------------------|------------|---------|
| Default Docker image | `ruby:3.1` | -       |
- **Cache:** These runners share a distributed cache that's stored in a Google Cloud Storage (GCS) bucket. Cache contents not updated in the last 14 days are automatically removed, based on the object lifecycle management policy. The maximum size of an uploaded cache artifact is 5 GB after the cache becomes a compressed archive.
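As a sketch, a job can opt into that distributed cache with the standard `cache:` keyword (the job name, key, and paths below are illustrative, not prescribed by these runners):

```yaml
build-job:
  script:
    - npm ci
  cache:
    # Reuse the cache whenever the lockfile is unchanged.
    key:
      files:
        - package-lock.json
    paths:
      - node_modules/
```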
- **Timeout settings:** Jobs handled by the SaaS runners on Linux time out after 3 hours, regardless of the timeout configured in a project. For details, see issues #4010 and #4070.
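A job-level `timeout:` can still be set below that cap; for example (the job name and script are illustrative):

```yaml
slow-job:
  # Values above 3 hours are capped by the SaaS runner limit.
  timeout: 2h 30m
  script:
    - ./run-long-tests.sh
```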
## Pre-clone script (removed)
This feature was deprecated in GitLab 15.9 and removed in GitLab 16.0.
The full contents of our Google Cloud Platform `config.toml` are:
```toml
concurrent = X
check_interval = 1
metrics_server = "X"
sentry_dsn = "X"

[[runners]]
  name = "docker-auto-scale"
  request_concurrency = X
  url = "https://gitlab.com/"
  token = "SHARED_RUNNER_TOKEN"
  pre_clone_script = "eval \"$CI_PRE_CLONE_SCRIPT\""
  executor = "docker+machine"
  environment = [
    "DOCKER_DRIVER=overlay2",
    "DOCKER_TLS_CERTDIR="
  ]
  limit = X
  [runners.docker]
    image = "ruby:3.1"
    privileged = true
    volumes = [
      "/certs/client",
      "/dummy-sys-class-dmi-id:/sys/class/dmi/id:ro" # Make kaniko builds work on GCP.
    ]
  [runners.machine]
    IdleCount = 50
    IdleTime = 3600
    MaxBuilds = 1 # For security reasons we delete the VM after job has finished so it's not reused.
    MachineName = "srm-%s"
    MachineDriver = "google"
    MachineOptions = [
      "google-project=PROJECT",
      "google-disk-size=25",
      "google-machine-type=n2d-standard-2",
      "google-username=core",
      "google-tags=gitlab-com,srm",
      "google-use-internal-ip",
      "google-zone=us-east1-d",
      "engine-opt=mtu=1460", # Set MTU for container interface, for more information check https://gitlab.com/gitlab-org/gitlab-runner/-/issues/3214#note_82892928
      "google-machine-image=PROJECT/global/images/IMAGE",
      "engine-opt=ipv6", # This will create IPv6 interfaces in the containers.
      "engine-opt=fixed-cidr-v6=fc00::/7",
      "google-operation-backoff-initial-interval=2" # Custom flag from forked docker-machine, for more information check https://github.com/docker/machine/pull/4600
    ]
    [[runners.machine.autoscaling]]
      Periods = ["* * * * * sat,sun *"]
      Timezone = "UTC"
      IdleCount = 70
      IdleTime = 3600
    [[runners.machine.autoscaling]]
      Periods = ["* 30-59 3 * * * *", "* 0-30 4 * * * *"]
      Timezone = "UTC"
      IdleCount = 700
      IdleTime = 3600
  [runners.cache]
    Type = "gcs"
    Shared = true
    [runners.cache.gcs]
      CredentialsFile = "/path/to/file"
      BucketName = "bucket-name"
```