GitLab Documentation

Prometheus integration

Introduced in GitLab 9.0.

GitLab offers powerful integration with Prometheus for monitoring your apps. Metrics are retrieved from the configured Prometheus server, and then displayed within the GitLab interface.

Each project can be configured with its own specific Prometheus server; see the configuration section for more details. If you have a single Prometheus server which monitors all of your infrastructure, you can pre-fill the settings page with a default template. To configure the template, see the Services templates document.


Integration with Prometheus requires the following:

  1. GitLab 9.0 or higher
  2. The Kubernetes integration must be enabled on your project
  3. Your app must be deployed on Kubernetes
  4. Prometheus must be configured to collect Kubernetes metrics
  5. Each metric must have a label to indicate the environment
  6. GitLab must have network connectivity to the Prometheus server

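Requirement 5, for instance, means every time series GitLab queries must carry an environment label. In Prometheus exposition format, such a metric might look like the following (the label value and sample value are illustrative, not taken from a real deployment):

```
container_memory_usage_bytes{environment="review-feature-abc123"} 268435456
```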
There are a few steps necessary to set up integration between Prometheus and GitLab.

Configuring Prometheus to collect Kubernetes metrics

In order for Prometheus to collect Kubernetes metrics, you must first have a Prometheus server up and running. You have two options here:

Configuring Omnibus GitLab Prometheus to monitor Kubernetes

With Omnibus GitLab running inside of Kubernetes, you can leverage the bundled version of Prometheus to collect the required metrics.

  1. Read how to configure the bundled Prometheus server in the Administration guide.
  2. Now that Prometheus is configured, proceed to configuring the Prometheus project service in GitLab.

Configuring your own Prometheus server within Kubernetes

Setting up and configuring Prometheus within Kubernetes is quick and painless. The Prometheus project provides an official Docker image which we can use as a starting point.

To get started quickly, we have provided a sample YAML file that can be used as a template. This file will create a prometheus Namespace, Service, Deployment, and ConfigMap in Kubernetes. You can upload this file to the Kubernetes dashboard using + Create at the top right.

Deploy Prometheus

Or use kubectl:

kubectl apply -f path/to/prometheus.yml
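The sample file itself lives in the GitLab documentation; as a rough sketch of the kind of objects it defines (the object names and scrape configuration below are assumptions for illustration, not the real file's contents):

```yaml
# Hedged sketch only: a namespace plus a minimal Prometheus scrape
# configuration that discovers Kubernetes nodes. The actual template
# provided by GitLab is more complete.
apiVersion: v1
kind: Namespace
metadata:
  name: prometheus
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: prometheus
data:
  prometheus.yml: |
    scrape_configs:
      - job_name: kubernetes-nodes
        kubernetes_sd_configs:
          - role: node
```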

Once deployed, you should see the Prometheus service, deployment, and pod start within the prometheus namespace. The server will begin to collect metrics from each Kubernetes Node in the cluster, based on the configuration provided in the template.

Since GitLab is not running within Kubernetes, the template provides external network access via a NodePort running on 30090. This method allows access to be controlled using provider firewall rules, like within Google Compute Engine.
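A NodePort service of the kind the template describes might look like this (the service name and selector are assumptions; only the type and port 30090 come from the text above):

```yaml
# Illustrative NodePort service exposing Prometheus outside the cluster
# on port 30090, as described above.
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: prometheus
spec:
  type: NodePort
  selector:
    app: prometheus
  ports:
    - port: 9090
      targetPort: 9090
      nodePort: 30090
```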

Since a NodePort does not automatically have firewall rules created for it, one will need to be created manually to allow access. In GCP/GKE, you will want to confirm the Node that the Prometheus pod is running on. This can be done either by looking at the Pod in the Kubernetes dashboard, or by running:

kubectl describe pods -n prometheus

Next, on GKE, we need to get the tag of the Node or VM Instance so we can create an accurate firewall rule. The easiest way to do this is to go into the Google Cloud Platform Compute console and select the VM instance that matches the name of the Node gathered from the step above. In this case, the node tag needed is gke-prometheus-demo-5d5ada10-node. Also make a note of the External IP, which will be the IP address the Prometheus server is reachable on.

GCP Node Detail

Armed with the proper Node tag, the firewall rule can now be created specifically for this node. To create the firewall rule, open the Google Cloud Platform Networking console, and select Firewall Rules.

Create a new rule:

GCP Firewall Rule
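If you prefer the command line, a rule equivalent to the one created in the console can be sketched with the gcloud CLI (the rule name here is an assumption; the target tag is the one gathered from the Node above):

```shell
# Hedged sketch: allow inbound TCP 30090 to the node running Prometheus.
gcloud compute firewall-rules create prometheus-nodeport \
  --allow tcp:30090 \
  --target-tags gke-prometheus-demo-5d5ada10-node
```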

Now that Prometheus is configured, proceed to configure the Prometheus project service in GitLab.

Configuration in GitLab

The actual configuration of Prometheus integration within GitLab is very simple. All you will need is the DNS or IP address of the Prometheus server you'd like to integrate with.

  1. Navigate to the Integrations page
  2. Click the Prometheus service
  3. Provide the base URL of your server. The Test Settings button can be used to confirm connectivity from GitLab to the Prometheus server.

Configure Prometheus Service
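If the Test Settings check fails, you can verify reachability from the GitLab host directly against Prometheus's HTTP API (the hostname and port below are placeholders; substitute your own server's address):

```shell
# Manual connectivity check: a reachable Prometheus server answers
# this query with a JSON document whose "status" field is "success".
curl -s "http://prometheus.example.com:30090/api/v1/query?query=up"
```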

Metrics and Labels

GitLab retrieves performance data from two metrics, container_cpu_usage_seconds_total and container_memory_usage_bytes. These metrics are collected from the Kubernetes pods via Prometheus, and report CPU and Memory utilization of each container or Pod running in the cluster.

In order to isolate and only display relevant metrics for a given environment however, GitLab needs a method to detect which pods are associated. To do that, GitLab will specifically request metrics that have an environment tag that matches the $CI_ENVIRONMENT_SLUG.
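As an illustration of that filtering, the label selector GitLab builds from the CI variable can be sketched like this (the slug value here is made up):

```shell
# Build the environment label selector from the CI variable.
CI_ENVIRONMENT_SLUG="review-feature-abc123"
SELECTOR="{container_name!=\"POD\",environment=\"${CI_ENVIRONMENT_SLUG}\"}"
# Prints the metric name with the selector appended.
echo "container_memory_usage_bytes${SELECTOR}"
```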

If you are using GitLab Auto-Deploy and one of the methods of configuring Prometheus above, the environment will be automatically added.

GitLab Prometheus queries

The queries utilized by GitLab are shown in the following table.

Metric: Average Memory (MB)
Query: (sum(container_memory_usage_bytes{container_name!="POD",environment="$CI_ENVIRONMENT_SLUG"}) / count(container_memory_usage_bytes{container_name!="POD",environment="$CI_ENVIRONMENT_SLUG"})) / 1024 / 1024

Metric: Average CPU Utilization (%)
Query: sum(rate(container_cpu_usage_seconds_total{container_name!="POD",environment="$CI_ENVIRONMENT_SLUG"}[2m])) / count(container_cpu_usage_seconds_total{container_name!="POD",environment="$CI_ENVIRONMENT_SLUG"}) * 100

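As a worked example of the memory query's arithmetic: if the summed usage across the matching containers were 536870912 bytes over 2 containers (made-up sample values), the average in MB would be computed as:

```shell
# Worked example of the Average Memory (MB) arithmetic:
# sum of bytes, divided by container count, converted to MB.
BYTES_TOTAL=536870912   # stands in for sum(container_memory_usage_bytes{...})
CONTAINERS=2            # stands in for count(container_memory_usage_bytes{...})
echo $(( BYTES_TOTAL / CONTAINERS / 1024 / 1024 ))  # prints 256
```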
Monitoring CI/CD Environments

Once configured, GitLab will attempt to retrieve performance metrics for any environment which has had a successful deployment.

Learn more about monitoring environments.

Determining the performance impact of a merge

Introduced in GitLab 9.2. GitLab 9.3 added the numeric comparison of the 30 minute averages.

Developers can view the performance impact of their changes within the merge request workflow. When a source branch has been deployed to an environment, a sparkline and numeric comparison of the average memory consumption will appear. On the sparkline, a dot indicates when the current changes were deployed, with up to 30 minutes of performance data displayed before and after. The comparison shows the difference between the 30 minute average before and after the deployment. This information is updated after each commit has been deployed.

Once merged and the target branch has been redeployed, the metrics will switch to show the new environment this revision has been deployed to.

Performance data will be available for the duration it is persisted on the Prometheus server.

Merge Request with Performance Impact


If the "Attempting to load performance data" screen continues to appear, it could be due to: