# Plan and operate a fleet of shared runners
This guide contains best practices for scaling a fleet of runners in a shared service model.
When you host a fleet of shared runners, you need a well-planned infrastructure that takes into consideration your:
- Computing capacity.
- Storage capacity.
- Network bandwidth and throughput.
- Type of jobs (including programming language, OS platform, and dependent libraries).
Use this guide to develop a GitLab Runner deployment strategy based on your organization’s requirements.
The guide does not make specific recommendations about the type of infrastructure you should use. However, it provides insights from the experience of operating the runner fleet on GitLab.com, which processes millions of CI/CD jobs each month.
## Consider your workload and environment
Before you deploy runners, consider your workload and environment requirements.
- Create a list of the teams that you plan to onboard to GitLab.
- Estimate the number of CI/CD jobs each team may execute per hour, per day.
- Validate if any team has build environment requirements that cannot be addressed by using containers.
- Validate if any team has build environment requirements that are best served by having runners dedicated to that team.
- Estimate the compute capacity that you may need to support the expected demand.
You might choose different infrastructure stacks to host different runner fleets. For example, you might need to deploy some runners in the public cloud and some on-premise.
The performance of the CI/CD jobs on the runner fleet is directly related to the fleet’s environment. If you are executing a large number of resource-intensive CI/CD jobs, hosting the fleet on a shared computing platform is not recommended.
## Workers, executors, and autoscaling capabilities
The `gitlab-runner` executable runs your CI/CD jobs. Each runner is an isolated process that picks up requests for job executions and handles them according to pre-defined configurations. As an isolated process, each runner can create "sub-processes" (also called "workers") to run jobs.
### Concurrency and limit
- Concurrency: Sets the number of jobs that can run concurrently when you’re using all of the configured runners on a host system.
- Limit: Sets the number of sub-processes that a runner can create to execute jobs simultaneously.
The limit is different for autoscaling runners (like Docker Machine and Kubernetes) than it is for runners that don’t autoscale.
- On runners that do not autoscale, `limit` defines the capacity of the runner on a host system.
- On autoscaling runners, `limit` is the number of runners you want to run in total.
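For example, the global `concurrent` value caps jobs across all `[[runners]]` sections in a `config.toml` file, while each runner's `limit` caps that runner alone. A minimal sketch (the runner names and empty `url`/`token` values are illustrative placeholders):

```toml
# Hypothetical config.toml: at most 4 jobs run at once across both runners,
# but the shell runner never executes more than 1 job at a time.
concurrent = 4

[[runners]]
  name = "shell-runner"
  url = ""
  token = ""
  executor = "shell"
  limit = 1

[[runners]]
  name = "docker-runner"
  url = ""
  token = ""
  executor = "docker"
  limit = 3
  [runners.docker]
    image = "alpine:3.19"
```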
### Basic configuration: one runner, one worker
For the most basic configuration, you install the GitLab Runner software on a supported compute architecture and operating system. For example, you might have an x86-64 virtual machine (VM) running Ubuntu Linux.
After the installation is complete, you execute the runner registration command just once and you select the `shell` executor. Then you edit the runner `config.toml` file to set concurrency to `1`:

```toml
concurrent = 1

[[runners]]
  name = "instance-level-runner-001"
  url = ""
  token = ""
  executor = "shell"
```
The GitLab CI/CD jobs that this runner can process are executed directly on the host system where you installed the runner. It's as if you were running the CI/CD job commands yourself in a terminal. In this case, because you only executed the registration command one time, the `config.toml` file contains only one `[[runners]]` section. Assuming you set the concurrency value to `1`, only one runner "worker" can execute CI/CD jobs for the runner process on this system.
### Intermediate configuration: one runner, multiple workers
You can also register multiple runner workers on the same machine.
When you do this, the runner's `config.toml` file has multiple `[[runners]]` sections in it. If all of the additional runner workers are registered to use the shell executor, and you update the value of the global `concurrent` configuration option to `3`, the upper limit of jobs that can run concurrently on this host is equal to three.

```toml
concurrent = 3

[[runners]]
  name = "instance_level_shell_001"
  url = ""
  token = ""
  executor = "shell"

[[runners]]
  name = "instance_level_shell_002"
  url = ""
  token = ""
  executor = "shell"

[[runners]]
  name = "instance_level_shell_003"
  url = ""
  token = ""
  executor = "shell"
```
You can register many runner workers on the same machine, and each one is an isolated process. The performance of the CI/CD jobs for each worker is dependent on the compute capacity of the host system.
### Autoscaling configuration: one or more runner managers, multiple workers
When GitLab Runner is set up for autoscaling, you can configure a runner to act as a manager of other runners. You can do this with the Docker Machine (`docker+machine`) or `kubernetes` executor. In this type of manager-only configuration, the runner agent is itself not executing any CI/CD jobs.
#### Docker Machine executor

- The runner manager provisions on-demand virtual machine instances with Docker.
- On these VMs, GitLab Runner executes the CI/CD jobs using a container image that you specify in your `.gitlab-ci.yml` file.
- You should test the performance of your CI/CD jobs on various machine types.
- You should consider optimizing your compute hosts based on speed or cost.
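A trimmed sketch of what a Docker Machine runner manager's `config.toml` might look like. The concurrency values, machine driver, idle counts, and name pattern here are illustrative assumptions, not recommendations:

```toml
concurrent = 50

[[runners]]
  name = "docker-machine-manager-001"
  url = ""
  token = ""
  executor = "docker+machine"
  limit = 50
  [runners.docker]
    image = "alpine:3.19"   # default job image if none is set in .gitlab-ci.yml
  [runners.machine]
    IdleCount = 5           # VMs kept warm, waiting for jobs
    IdleTime = 1800         # seconds before an idle VM is removed
    MaxBuilds = 10          # recycle a VM after this many jobs
    MachineDriver = "google"
    MachineName = "ci-worker-%s"
```

Tuning `IdleCount` trades cost (idle VMs are billed) against job start latency (a warm VM picks up a job immediately).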
#### Kubernetes executor

- The runner manager provisions pods on the target Kubernetes cluster.
- The CI/CD jobs are executed on each pod, which comprises multiple containers.
- The pods used for job execution typically require more compute and memory resources than the pod that hosts the runner manager.
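A minimal sketch of the corresponding `[runners.kubernetes]` section. The namespace, default image, and resource requests are illustrative assumptions to adapt to your cluster:

```toml
[[runners]]
  name = "kubernetes-manager-001"
  url = ""
  token = ""
  executor = "kubernetes"
  [runners.kubernetes]
    namespace = "gitlab-ci"   # namespace where job pods are created
    image = "alpine:3.19"     # default job image if none is set in .gitlab-ci.yml
    cpu_request = "500m"      # resources requested for the build container
    memory_request = "1Gi"
```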
### Reusing a GitLab Runner configuration

Runners registered with the same authentication token and different `system_id` values become grouped under a single runner. Grouped runners can be reused to run different jobs by multiple runner managers. `system_id` is generated each time the GitLab Runner application starts and when the configuration is saved, and identifies the machine where the runner is being used. `system_id` is saved to the `.runner_system_id` file, in the same folder as the `config.toml` file.
## Configure instance-level shared runners
Using instance-level shared runners in an autoscaling configuration (where a runner acts as a “runner manager”) is an efficient and effective way to start.
The compute capacity of the infrastructure stack where you host your VMs or pods depends on:
- The requirements you captured when you were considering your workload and environment.
- The technology stack you use to host your runner fleet.
You will probably need to adjust your computing capacity after you start running CI/CD workloads and analyzing the performance over time.
For configurations that use instance-level shared runners with an autoscaling executor, we recommend that you start with, at minimum, two runner managers.
The total number of runner managers you may need over time depends on:
- The compute resources of the stack that hosts the runner managers.
- The concurrency that you choose to configure for each runner manager.
- The load that is generated by the CI/CD jobs that each manager is executing hourly, daily, and monthly.
For example, on GitLab.com, we currently run seven runner managers with the Docker Machine executor. Each CI/CD job is executed in a Google Cloud Platform (GCP) `n1-standard-1` VM. With this configuration, we process millions of jobs per month. You can view a snippet of the `config.toml` configuration file for GitLab.com.
## Monitoring runners

An essential step in operating a runner fleet at scale is to set up and use the runner monitoring capabilities included with GitLab.

The following table includes a summary of GitLab Runner metrics. The list does not include the Go-specific process metrics. To view those metrics on a runner, enable the embedded Prometheus metrics server by setting `listen_address` in `config.toml`, and then query its `/metrics` endpoint.
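The metrics server is enabled with a single global setting. The port below is an illustrative choice (`9252` is the port conventionally used for GitLab Runner metrics):

```toml
# Global section of config.toml: expose Prometheus metrics on port 9252.
# Scrape them with, for example: curl http://<runner-host>:9252/metrics
listen_address = "[::]:9252"
```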
| Metric | Description |
|---|---|
| `gitlab_runner_api_request_statuses_total` | The total number of API requests, partitioned by runner, endpoint, and status. |
| `gitlab_runner_autoscaling_machine_creation_duration_seconds` | Histogram of machine creation time. |
| `gitlab_runner_autoscaling_machine_states` | The number of machines per state in this provider. |
| `gitlab_runner_concurrent` | The value of the `concurrent` setting. |
| `gitlab_runner_errors_total` | The number of caught errors. This metric is a counter that tracks log lines. The metric includes the label `level`. |
| `gitlab_runner_jobs` | How many jobs are currently being executed (with different scopes in the labels). |
| `gitlab_runner_job_duration_seconds` | Histogram of job durations. |
| `gitlab_runner_jobs_total` | The total number of jobs executed. |
| `gitlab_runner_limit` | The current value of the `limit` setting. |
| `gitlab_runner_request_concurrency` | The current number of concurrent requests for a new job. |
| `gitlab_runner_request_concurrency_exceeded_total` | Count of excess requests above the configured `request_concurrency` limit. |
| `gitlab_runner_version_info` | A metric with a constant value of `1`, labeled with the runner's version information. |
| `process_cpu_seconds_total` | Total user and system CPU time spent in seconds. |
| `process_max_fds` | Maximum number of open file descriptors. |
| `process_open_fds` | Number of open file descriptors. |
| `process_resident_memory_bytes` | Resident memory size in bytes. |
| `process_start_time_seconds` | Start time of the process since the Unix epoch, in seconds. |
| `process_virtual_memory_bytes` | Virtual memory size in bytes. |
| `process_virtual_memory_max_bytes` | Maximum amount of virtual memory available in bytes. |
### Grafana dashboard configuration tips
The source code for the Grafana dashboards that we use to operate the runner fleet on GitLab.com is available in a public repository.
We track a lot of metrics for GitLab.com. As a large provider of cloud-based CI/CD, we need many different views into the system so we can debug issues. In most cases, self-managed runner fleets don’t need to track the volume of metrics that we track with GitLab.com.
Here are a few essential dashboards that we recommend you use to monitor your runner fleet.
Jobs started on runners:
- View an overview of the total jobs executed on your runner fleet for a selected time interval.
- View trends in usage. You should analyze this dashboard weekly at a minimum.
You can correlate this data with other metrics, like job duration, to determine if you need configuration changes or capacity upgrades to continue to meet your internal SLOs for CI/CD job performance.

Runner capacity:

- Analyze the performance and scaling of your runner fleet.
- View the number of jobs being executed divided by the value of `limit` or `concurrent`.
- Determine if there is still capacity to execute additional jobs.
### Considerations for monitoring runners on Kubernetes
When you use a Kubernetes platform to host your runner fleet, for example, OpenShift, EKS, or GKE, you need a different approach for setting up the Grafana dashboards.
On Kubernetes, runner CI/CD job execution pods can be created and deleted frequently. In these cases, you should plan to monitor the runner manager pod and potentially implement the following:
- Gauges: Display the aggregate of the same metric from different sources.
- Counters: Reset the counter when applying `rate` or `increase` functions.