- Enabled by default
- Quick start
- Comparison to application platforms and PaaS
- Kubernetes requirements
- Auto DevOps base domain
- Enabling/Disabling Auto DevOps
- Using multiple Kubernetes clusters
- Unable to select a buildpack
- Pipeline that extends Auto DevOps with only / except fails
- Failure to create a Kubernetes namespace
- Detected an existing PostgreSQL database
- Error: unable to recognize “”: no matches for kind “Deployment” in version “extensions/v1beta1”
- Error: error initializing: Looks like “https://kubernetes-charts.storage.googleapis.com” is not a valid chart repository or cannot be reached
- Error: release …. failed: timed out waiting for the condition
- Development guides
- Introduced in GitLab 10.0.
- Generally available in GitLab 11.0.
Auto DevOps is a set of default CI/CD templates that auto-discover your source code. They enable GitLab to automatically detect, build, test, deploy, and monitor your applications. By leveraging CI/CD best practices and tools, Auto DevOps aims to simplify the setup and execution of a mature and modern software development lifecycle.
You can spend a lot of effort to set up the workflow and processes required to build, deploy, and monitor your project. It gets worse when your company has hundreds, if not thousands, of projects to maintain. With new projects constantly starting up, the entire software development process becomes impossibly complex to manage.
Auto DevOps provides you a seamless software development process by automatically detecting all dependencies and language technologies required to test, build, package, deploy, and monitor every project with minimal configuration. Automation enables consistency across your projects, seamless management of processes, and faster creation of new projects: push your code, and GitLab does the rest, improving your productivity and efficiency.
For an introduction to Auto DevOps, watch AutoDevOps in GitLab 11.0.
For a list of requirements, read Requirements for Auto DevOps.
For a developer’s guide, read Auto DevOps development guide.
As Auto DevOps continues to gain popularity, and lowers the barrier to entry for getting started with DevOps and CI/CD, see what our wider community is saying:
We welcome everyone to share their experience by tagging GitLab on Twitter.
Introduced in GitLab 11.3.
On self-managed instances, Auto DevOps is enabled by default for all projects. It attempts to run on all pipelines in each project. An instance administrator can enable or disable this default in the Auto DevOps settings. Auto DevOps automatically disables in individual projects on their first pipeline failure.
If a CI/CD configuration file is present in the project, it continues to be used, whether or not Auto DevOps is enabled.
If you’re using GitLab.com, see the quick start guide for setting up Auto DevOps with GitLab.com and a Kubernetes cluster on Google Kubernetes Engine (GKE).
If you use a self-managed instance of GitLab, you must configure the Google OAuth2 OmniAuth Provider before configuring a cluster on GKE. After configuring the provider, you can follow the steps in the quick start guide to get started.
Auto DevOps provides features often included in an application platform or a Platform as a Service (PaaS). It takes inspiration from the innovative work done by Heroku and goes beyond it in multiple ways:
- Auto DevOps works with any Kubernetes cluster; you’re not limited to running on infrastructure managed by GitLab. (Note that many features also work without Kubernetes).
- There is no additional cost (no markup on the infrastructure costs), and you can use a Kubernetes cluster you host or Containers as a Service on any public cloud (for example, Google Kubernetes Engine).
- Auto DevOps has more features including security testing, performance testing, and code quality testing.
- Auto DevOps offers an incremental graduation path. If you need advanced customizations, you can start modifying the templates without starting over on a completely different platform. Review the customizing documentation for more information.
Comprised of a set of stages, Auto DevOps brings these best practices to your project in a simple and automatic way:
- Auto Build
- Auto Test
- Auto Code Quality
- Auto SAST (Static Application Security Testing)
- Auto Secret Detection
- Auto Dependency Scanning
- Auto License Compliance
- Auto Container Scanning
- Auto Review Apps
- Auto DAST (Dynamic Application Security Testing)
- Auto Deploy
- Auto Browser Performance Testing
- Auto Monitoring
- Auto Code Intelligence
As Auto DevOps relies on many different components, you should have a basic knowledge of each of them before getting started.
For an overview on the creation of Auto DevOps, read more in this blog post.
You can specify the Auto DevOps base domain in any of the following places:
- either under the cluster’s settings, whether for an instance, projects or groups
- or at the project level as a variable:
- or at the group level as a variable:
- or as an instance-wide fallback in Admin Area > Settings under the Continuous Integration and Delivery section
The base domain variable `KUBE_INGRESS_BASE_DOMAIN` follows the same order of precedence as other environment variables. If the CI/CD variable is not set and the cluster setting is left blank, the instance-wide Auto DevOps domain setting is used if set.
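For example, here's a minimal sketch of setting the base domain at the project level in `.gitlab-ci.yml`; `example.com` is a placeholder domain:

```yaml
# A minimal sketch: set the Auto DevOps base domain as a project-level
# CI/CD variable. example.com is a placeholder; use your own domain.
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  KUBE_INGRESS_BASE_DOMAIN: example.com
```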
Auto DevOps requires a wildcard DNS A record matching the base domain(s). For a base domain of `example.com`, you'd need a DNS entry like:

```
*.example.com 3600 A 18.104.22.168
```

In this case, the deployed applications are served from `example.com`, and `18.104.22.168` is the IP address of your load balancer, generally NGINX (see requirements). Setting up the DNS record is beyond the scope of this document; check with your DNS provider for information.
After completing setup, all requests hit the load balancer, which routes requests to the Kubernetes pods running your application.
GitLab.com users can enable or disable Auto DevOps only at the project level. Self-managed users can enable or disable Auto DevOps at the project, group, or instance level.
If enabling, check that your project does not have a `.gitlab-ci.yml`, or if one exists, remove it.
To enable or disable Auto DevOps at the project level:
- Go to your project’s Settings > CI/CD > Auto DevOps.
- Select the Default to Auto DevOps pipeline checkbox to enable it.
- (Optional, but recommended) When enabling, you can add in the base domain Auto DevOps uses to deploy your application, and choose the deployment strategy.
- Click Save changes for the changes to take effect.
After enabling the feature, an Auto DevOps pipeline is triggered on the `master` branch.
Introduced in GitLab 11.10.
Only administrators and group owners can enable or disable Auto DevOps at the group level.
When you enable or disable Auto DevOps at the group level, the group configuration is implicitly used for the subgroups and projects inside that group, unless Auto DevOps is specifically enabled or disabled on the subgroup or project.
To enable or disable Auto DevOps at the group level:
- Go to your group’s Settings > CI/CD > Auto DevOps page.
- Select the Default to Auto DevOps pipeline checkbox to enable it.
- Click Save changes for the changes to take effect.
Even when disabled at the instance level, group owners and project maintainers can still enable Auto DevOps at the group and project level, respectively.
To enable or disable Auto DevOps at the instance level (administrators only):
- Go to Admin Area > Settings > Continuous Integration and Deployment.
- Select Default to Auto DevOps pipeline for all projects to enable it.
- (Optional) You can set up the Auto DevOps base domain, for Auto Deploy and Auto Review Apps to use.
- Click Save changes for the changes to take effect.
Introduced in GitLab 11.0.
You can change the deployment strategy used by Auto DevOps by visiting your project’s Settings > CI/CD > Auto DevOps. The following options are available:
- Continuous deployment to production: Enables Auto Deploy with the `master` branch directly deployed to production.
- Continuous deployment to production using timed incremental rollout: Sets the `INCREMENTAL_ROLLOUT_MODE` variable to `timed`. Production deployments execute with a 5 minute delay between each increment in rollout.
- Automatic deployment to staging, manual deployment to production: Sets the `STAGING_ENABLED` and `INCREMENTAL_ROLLOUT_MODE` variables to `1` and `manual` (see the sketch after this list). This means:
  - The `master` branch is directly deployed to staging.
  - Manual actions are provided for incremental rollout to production.
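These options map to CI/CD variables, so as a sketch, you could also express the third strategy directly in `.gitlab-ci.yml` rather than through the settings page:

```yaml
# Sketch: equivalent of "Automatic deployment to staging, manual deployment
# to production", expressed as CI/CD variables.
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  STAGING_ENABLED: "1"
  INCREMENTAL_ROLLOUT_MODE: manual
```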
When using Auto DevOps, you can deploy different environments to different Kubernetes clusters, because a 1:1 connection exists between them.
The Deploy Job template used by Auto DevOps currently defines three environment names:

- `review/` (every environment starting with `review/`)
- `staging`
- `production`

Those environments are tied to jobs using Auto Deploy, so except for the environment scope, they must have a different deployment domain. You must define a separate `KUBE_INGRESS_BASE_DOMAIN` variable for each of the above, based on the environment.
The following table is an example of how to configure the three different clusters:
| Cluster name | Cluster environment scope | `KUBE_INGRESS_BASE_DOMAIN` variable value | Variable environment scope | Notes |
|--------------|---------------------------|-------------------------------------------|----------------------------|-------|
| review | `review/*` | `review.example.com` | `review/*` | The review cluster which runs all Review Apps. |
| staging | `staging` | `staging.example.com` | `staging` | (Optional) The staging cluster which runs the deployments of the staging environments. You must enable it first. |
| production | `production` | `example.com` | `production` | The production cluster which runs the production environment deployments. You can use incremental rollouts. |
To add a different cluster for each environment:
- Navigate to your project’s Operations > Kubernetes.
- Create the Kubernetes clusters with their respective environment scope, as described in the table above.
- After creating the clusters, navigate to each cluster and install Ingress. Wait for the Ingress IP address to be assigned.
- Make sure you’ve configured your DNS with the specified Auto DevOps domains.
- Navigate to each cluster’s page, through Operations > Kubernetes, and add the domain based on its Ingress IP address.
After completing configuration, you can test your setup by creating a merge request and verifying your application is deployed as a Review App in the Kubernetes cluster with the `review/*` environment scope. Similarly, you can check the other environments.
Cluster environment scope isn’t respected when checking for active Kubernetes clusters. For a multi-cluster setup to work with Auto DevOps, create a fallback cluster with Cluster environment scope set to `*`. A new cluster isn’t required. You can use any of the clusters already added.
The following restrictions apply.
No documented way of using a private container registry with Auto DevOps exists. We strongly advise using the GitLab Container Registry with Auto DevOps to simplify configuration and prevent any unforeseen issues.
The GitLab integration with Helm does not support installing applications when behind a proxy. Users who want to do so must inject their proxy settings into the installation pods at runtime, such as by using a `PodPreset`:

```yaml
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: gitlab-managed-apps-default-proxy
  namespace: gitlab-managed-apps
spec:
  env:
    - name: http_proxy
      value: "PUT_YOUR_HTTP_PROXY_HERE"
    - name: https_proxy
      value: "PUT_YOUR_HTTPS_PROXY_HERE"
```
Auto Build and Auto Test may fail to detect your language or framework with the following error:
```
Step 5/11 : RUN /bin/herokuish buildpack build
 ---> Running in eb468cd46085
-----> Unable to select a buildpack
The command '/bin/sh -c /bin/herokuish buildpack build' returned a non-zero code: 1
```
The following are possible reasons:
- Your application may be missing the key files the buildpack is looking for. For example, Ruby applications require a `Gemfile` to be properly detected, even though it’s possible to write a Ruby app without a `Gemfile`.
- No buildpack may exist for your application. Try specifying a custom buildpack, as in the sketch below.
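As a sketch, you can point Auto Build at a custom buildpack with the `BUILDPACK_URL` variable; the URL shown here is only an example:

```yaml
# Sketch: force a specific buildpack instead of relying on auto-detection.
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  # Example buildpack URL; replace with the buildpack your app needs.
  BUILDPACK_URL: https://github.com/heroku/heroku-buildpack-ruby
```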
If your pipeline fails with the following message:
```
Found errors in your .gitlab-ci.yml:
jobs:test config key may not be used with `rules`: only
```
This error appears when the included job’s `rules` configuration has been overridden with the deprecated `only`/`except` syntax.
To fix this issue, you must either:
- Transition your `only`/`except` syntax to `rules` (see the sketch after this list).
- (Temporarily) Pin your templates to the GitLab 12.10 based templates.
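For example, a minimal sketch of the first option, replacing an `only` override on the included `test` job with an equivalent `rules` clause:

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

test:
  # Instead of overriding the included `test` job with `only: [master]`,
  # express the same condition with `rules`:
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
```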
Auto Deploy fails if GitLab can’t create a Kubernetes namespace and service account for your project. For help debugging this issue, see Troubleshooting failed deployment jobs.
After upgrading to GitLab 13.0, you may encounter this message when deploying with Auto DevOps:
```
Detected an existing PostgreSQL database installed on the deprecated channel 1, but the current channel is set to 2. The default channel changed to 2 in of GitLab 13.0. [...]
```
Auto DevOps, by default, installs an in-cluster PostgreSQL database alongside your application. The default installation method changed in GitLab 13.0, and upgrading existing databases requires user involvement. The two installation methods are:
- channel 1 (deprecated): Pulls in the database as a dependency of the associated Helm chart. Only supports Kubernetes versions up to version 1.15.
- channel 2 (current): Installs the database as an independent Helm chart. Required for using the in-cluster database feature with Kubernetes versions 1.16 and greater.
If you receive this error, you can do one of the following actions:

- You can safely ignore the warning and continue using the channel 1 PostgreSQL database by setting `AUTO_DEVOPS_POSTGRES_CHANNEL` to `1` and redeploying (see the sketch after this list).
- You can delete the channel 1 PostgreSQL database and install a fresh channel 2 database by setting `AUTO_DEVOPS_POSTGRES_DELETE_V1` to a non-empty value and redeploying. Deleting the channel 1 PostgreSQL database permanently deletes the existing channel 1 database and all its data. See Upgrading PostgreSQL for more information on backing up and upgrading your database.
- If you are not using the in-cluster database, you can set `POSTGRES_ENABLED` to `false` and redeploy. This option is especially relevant to users of custom charts without the in-chart PostgreSQL dependency. Database auto-detection is based on the `postgresql.enabled` Helm value for your release. This value is set based on the `POSTGRES_ENABLED` CI/CD variable and persisted by Helm, regardless of whether or not your chart uses the variable. Setting `POSTGRES_ENABLED` to `false` permanently deletes any existing channel 1 database for your environment.
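For example, a minimal sketch of the first option, pinning the in-cluster database to channel 1 while you plan an upgrade:

```yaml
# Sketch: keep using the deprecated channel 1 in-cluster PostgreSQL.
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  AUTO_DEVOPS_POSTGRES_CHANNEL: "1"
```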
After upgrading your Kubernetes cluster to v1.16+, you may encounter this message when deploying with Auto DevOps:
```
UPGRADE FAILED
Error: failed decoding reader into objects: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"
```
This can occur if your current deployments on the environment namespace were deployed with a deprecated or removed API that doesn’t exist in Kubernetes v1.16+. For example, if your in-cluster PostgreSQL was installed in a legacy way, the resource was created via the `extensions/v1beta1` API. However, the deployment resource was moved to the `apps/v1` API in v1.16.
To recover such outdated resources, you must convert the current deployments by mapping legacy APIs to newer APIs. There is a helper tool called `mapkubeapis` that works for this problem. Follow these steps to use the tool in Auto DevOps:

- Modify your `.gitlab-ci.yml` with:
```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml
  - remote: https://gitlab.com/shinya.maeda/ci-templates/-/raw/master/map-deprecated-api.gitlab-ci.yml

variables:
  HELM_VERSION_FOR_MAPKUBEAPIS: "v2" # If you're using auto-deploy-image v2 or above, please specify "v3".
```
- Run the job `<environment-name>:map-deprecated-api`. Ensure that this job succeeds before moving to the next step. You should see something like the following output:
```
2020/10/06 07:20:49 Found deprecated or removed Kubernetes API:
"apiVersion: extensions/v1beta1
kind: Deployment"
Supported API equivalent:
"apiVersion: apps/v1
kind: Deployment"
```
- Revert your `.gitlab-ci.yml` to the previous version. You no longer need to include the supplemental template `map-deprecated-api`.
- Continue the deployments as usual.
Error: error initializing: Looks like “https://kubernetes-charts.storage.googleapis.com” is not a valid chart repository or cannot be reached
As announced in the official CNCF blog post, the stable Helm chart repository was deprecated and removed on November 13th, 2020. You may encounter this error after that date.
Some GitLab features had dependencies on the stable chart. To mitigate the impact, we changed them to use new official repositories or the Helm Stable Archive repository maintained by GitLab. Auto Deploy contains an example fix.
In Auto Deploy, `auto-deploy-image` no longer adds the deprecated stable repository to the `helm` command. If you use a custom chart and it relies on the deprecated stable repository, specify an older `auto-deploy-image` like this example:
```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

.auto-deploy:
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v1.0.5"
```
Keep in mind that this approach stops working when the stable repository is removed, so you must eventually fix your custom chart.
To fix your custom chart:
- In your chart directory, update the `repository` value in your `requirements.yaml` file from `https://kubernetes-charts.storage.googleapis.com/` to `https://charts.helm.sh/stable` (see the sketch after this list).
- In your chart directory, run `helm dep update .` using the same Helm major version as Auto DevOps.
- Commit the changes for the `requirements.yaml` file.
- If you previously had a `requirements.lock` file, commit the changes to the file. If you did not previously have a `requirements.lock` file in your chart, you do not need to commit the new one. This file is optional, but when present, it’s used to verify the integrity of the downloaded dependencies.
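For illustration, here's a sketch of an updated `requirements.yaml`; the `postgresql` dependency and its version are assumptions, and only the `repository` value is the fix described above:

```yaml
# requirements.yaml — sketch only; the dependency name and version are
# assumptions. The repository URL is the replacement for the removed
# https://kubernetes-charts.storage.googleapis.com endpoint.
dependencies:
  - name: postgresql
    version: 8.2.1
    repository: https://charts.helm.sh/stable
    condition: postgresql.enabled
```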
You can find more information in issue #263778, “Migrate PostgreSQL from stable Helm repository”.
When getting started with Auto DevOps, you may encounter this error when first deploying your application:
```
INSTALL FAILED
PURGING CHART
Error: release staging failed: timed out waiting for the condition
```
This is most likely caused by a failed liveness (or readiness) probe attempted during the deployment process. By default, these probes are run against the root page of the deployed application on port 5000. If your application isn’t configured to serve anything at the root page, or is configured to run on a specific port other than 5000, this check fails.
If it fails, you should see these failures in the events for the relevant Kubernetes namespace. These events look like the following example:
```
LAST SEEN   TYPE      REASON     OBJECT                         MESSAGE
3m20s       Warning   Unhealthy  pod/staging-85db88dcb6-rxd6g   Readiness probe failed: Get http://10.192.0.6:5000/: dial tcp 10.192.0.6:5000: connect: connection refused
3m32s       Warning   Unhealthy  pod/staging-85db88dcb6-rxd6g   Liveness probe failed: Get http://10.192.0.6:5000/: dial tcp 10.192.0.6:5000: connect: connection refused
```
To change the port used for the liveness checks, pass custom values to the Helm chart used by Auto DevOps:
- Create a directory and file at the root of your repository named `.gitlab/auto-deploy-values.yaml`.
- Populate the file with the following content, replacing the port values with the actual port number your application is configured to use:
```yaml
service:
  internalPort: <port_value>
  externalPort: <port_value>
```
- Commit your changes.
After committing your changes, subsequent probes should use the newly-defined ports.
The page that’s probed can also be changed by overriding the `livenessProbe.path` and `readinessProbe.path` values (shown in the default `values.yaml` file) in the same fashion.
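For example, a sketch of `.gitlab/auto-deploy-values.yaml` overriding both the ports and the probed path; the port `8080` and the `/healthz` endpoint are assumptions, so substitute your application's real values:

```yaml
# Sketch: .gitlab/auto-deploy-values.yaml
# 8080 and /healthz are placeholders; use your app's real port and endpoint.
service:
  internalPort: 8080
  externalPort: 8080
livenessProbe:
  path: /healthz
readinessProbe:
  path: /healthz
```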