Installing a GitLab POC on Amazon Web Services (AWS)

This page offers a walkthrough of a common configuration for GitLab on AWS using the official GitLab Linux package. You should customize it to accommodate your needs.

For organizations with 1,000 users or fewer, the recommended AWS installation method is to launch a single EC2 instance running an Omnibus installation and implement a snapshot strategy for backing up the data. See the 1,000 user reference architecture for more information.

Getting started for production-grade GitLab

This document is an installation guide for a proof of concept instance. It is not a reference architecture and it does not result in a highly available configuration.

Following this guide exactly results in a proof of concept instance that roughly equates to a scaled-down version of a two availability zone implementation of the non-HA Omnibus 2000 User Reference Architecture. The 2K reference architecture is not HA because it is primarily intended to provide some scaling while keeping costs and complexity low. The 3000 User Reference Architecture is the smallest Reference Architecture that is HA. It has additional service roles to achieve HA, most notably it uses Gitaly Cluster to achieve HA for Git repository storage and specifies triple redundancy.

GitLab maintains and tests two main types of Reference Architectures. The Omnibus architectures are implemented on instance compute while Cloud Native Hybrid architectures maximize the use of a Kubernetes cluster. Cloud Native Hybrid reference architecture specifications are addendum sections to the Reference Architecture size pages that start by describing the Omnibus architecture. For example, the 3000 User Cloud Native Reference Architecture is in the subsection titled Cloud Native Hybrid reference architecture with Helm Charts (alternative) in the 3000 User Reference Architecture page.

Getting started for production-grade Omnibus GitLab

The Infrastructure as Code tooling GitLab Environment Toolkit (GET) is the best place to start for building Omnibus GitLab on AWS, especially if you are targeting an HA setup. While it does not automate everything, it does complete complex setups like Gitaly Cluster for you. GET is open source, so anyone can build on top of it and contribute improvements to it.

Getting started for production-grade Cloud Native Hybrid GitLab

For the Cloud Native Hybrid architectures there are two Infrastructure as Code options which are compared in GitLab Cloud Native Hybrid on AWS EKS implementation pattern in the section Available Infrastructure as Code for GitLab Cloud Native Hybrid. It compares the GitLab Environment Toolkit to the AWS Quick Start for GitLab Cloud Native Hybrid on EKS which was co-developed by GitLab and AWS. GET and the AWS Quick Start are both open source so anyone can build on top of them and contribute improvements to them.


For the most part, we’ll make use of Omnibus GitLab in our setup, but we’ll also leverage native AWS services. Instead of using the Omnibus bundled PostgreSQL and Redis, we will use Amazon RDS and ElastiCache.

In this guide, we’ll go through a multi-node setup: we’ll start by configuring our Virtual Private Cloud and subnets, later integrate services such as RDS for our database server and ElastiCache as a Redis cluster, and finally manage the GitLab instances within an auto scaling group with custom scaling policies.
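As a sketch of what that integration looks like on the GitLab side, the Linux package is pointed at external PostgreSQL and Redis through /etc/gitlab/gitlab.rb. The endpoint hostnames and password below are placeholders, not real values:

```ruby
# /etc/gitlab/gitlab.rb -- sketch of the external-service settings.
# Substitute your own RDS and ElastiCache endpoints for the placeholders.

# Disable the bundled PostgreSQL and point GitLab at Amazon RDS instead.
postgresql['enable'] = false
gitlab_rails['db_adapter'] = "postgresql"
gitlab_rails['db_encoding'] = "unicode"
gitlab_rails['db_host'] = "<rds-endpoint>.rds.amazonaws.com"
gitlab_rails['db_password'] = "<database-password>"

# Disable the bundled Redis and use ElastiCache.
redis['enable'] = false
gitlab_rails['redis_host'] = "<elasticache-endpoint>.cache.amazonaws.com"
gitlab_rails['redis_port'] = 6379
```

After editing, run `gitlab-ctl reconfigure` for the settings to take effect.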


In addition to having a basic familiarity with AWS and Amazon EC2, you will need:

It can take a few hours to validate a certificate provisioned through ACM. To avoid delays later, request your certificate as soon as possible.


Below is a diagram of the recommended architecture.

AWS architecture diagram

AWS costs

GitLab uses the following AWS services, with links to pricing information:

  • EC2: GitLab is deployed on shared hardware, for which on-demand pricing applies. If you want to run GitLab on a dedicated or reserved instance, see the EC2 pricing page for information about its cost.
  • S3: GitLab uses S3 (pricing page) to store backups, artifacts, and LFS objects.
  • ELB: A Classic Load Balancer (pricing page), used to route requests to the GitLab instances.
  • RDS: An Amazon Relational Database Service using PostgreSQL (pricing page).
  • ElastiCache: An in-memory cache environment (pricing page), used to provide a Redis configuration.

Create an IAM EC2 instance role and profile

As we’ll be using Amazon S3 object storage, our EC2 instances need to have read, write, and list permissions for our S3 buckets. To avoid embedding AWS keys in our GitLab configuration, we’ll make use of an IAM role to grant our GitLab instance this access. We’ll need to create an IAM policy to attach to our IAM role:

Create an IAM Policy

  1. Navigate to the IAM dashboard and select Policies in the left menu.
  2. Select Create policy, select the JSON tab, and add a policy. We want to follow security best practices and grant least privilege, giving our role only the permissions needed to perform the required actions.
    1. Assuming you prefix the S3 bucket names with gl- as shown in the diagram, add the following policy:
    {   "Version": "2012-10-17",
        "Statement": [
                "Effect": "Allow",
                "Action": [
                "Resource": "arn:aws:s3:::gl-*/*"
                "Effect": "Allow",
                "Action": [
                "Resource": "arn:aws:s3:::gl-*"
  3. Select Review policy, give your policy a name (we’ll use gl-s3-policy), and select Create policy.
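If you prefer to script this step, the same policy can be created with the AWS CLI. This is a sketch that assumes the JSON document above has been saved locally as gl-s3-policy.json and that your CLI credentials are already configured:

```shell
# Sketch: create the gl-s3-policy IAM policy from a local JSON file.
aws iam create-policy \
  --policy-name gl-s3-policy \
  --policy-document file://gl-s3-policy.json
```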

Create an IAM Role

  1. Still on the IAM dashboard, select Roles in the left menu, and select Create role.
  2. Create a new role by selecting AWS service > EC2, then select Next: Permissions.
  3. In the policy filter, search for the gl-s3-policy we created above, select it, and select Tags.
  4. Add tags if needed and select Review.
  5. Give the role a name (we’ll use GitLabS3Access) and select Create Role.

We’ll use this role when we create a launch configuration later on.
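The console steps above can also be scripted. The sketch below assumes configured CLI credentials; `<account-id>` is a placeholder for your AWS account ID. Unlike the console, the CLI does not create the instance profile for you, so it is created explicitly:

```shell
# Sketch: create the GitLabS3Access role and its instance profile.
# First write the standard EC2 trust relationship to a local file.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name GitLabS3Access \
  --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy \
  --role-name GitLabS3Access \
  --policy-arn arn:aws:iam::<account-id>:policy/gl-s3-policy

# An instance profile is what actually gets attached to EC2 instances.
aws iam create-instance-profile --instance-profile-name GitLabS3Access
aws iam add-role-to-instance-profile \
  --instance-profile-name GitLabS3Access \
  --role-name GitLabS3Access
```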

Configuring the network

We’ll start by creating a VPC for our GitLab cloud infrastructure, then we can create subnets to have public and private instances in at least two Availability Zones (AZs). Public subnets will require a route table and an associated internet gateway.

Creating the Virtual Private Cloud (VPC)

We’ll now create a VPC, a virtual networking environment that you’ll control:

  1. Sign in to Amazon Web Services.
  2. Select Your VPCs from the left menu and then select Create VPC. At the “Name tag” enter gitlab-vpc, and at the “IPv4 CIDR block” enter a CIDR block large enough to contain all of the subnets you will create later. If you don’t require dedicated hardware, you can leave “Tenancy” as default. Select Yes, Create when ready.

    Create VPC

  3. Select the VPC, select Actions, select Edit DNS resolution, and enable DNS resolution. Select Save when done.
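The same VPC can be created and its DNS attributes enabled from the AWS CLI. The 10.0.0.0/16 CIDR block is only an example, and `<vpc-id>` is a placeholder for the ID returned by create-vpc:

```shell
# Sketch: create the gitlab-vpc VPC with an example /16 CIDR block.
aws ec2 create-vpc \
  --cidr-block 10.0.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=gitlab-vpc}]'

# Enable DNS resolution and DNS hostnames on the new VPC.
aws ec2 modify-vpc-attribute --vpc-id <vpc-id> --enable-dns-support
aws ec2 modify-vpc-attribute --vpc-id <vpc-id> --enable-dns-hostnames
```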


Now, let’s create some subnets in different Availability Zones. Make sure that each subnet is associated with the VPC we just created and that the CIDR blocks don’t overlap. This will also allow us to enable multi-AZ for redundancy.

We will create both private and public subnets; the load balancers and RDS instances will use them as well:

  1. Select Subnets from the left menu.
  2. Select Create subnet. Give it a descriptive name tag based on the IP, for example gitlab-public-, select the VPC we created previously, select an availability zone (we’ll use us-west-2a), and at the IPv4 CIDR block give it a /24 CIDR block.

    Create subnet

  3. Follow the same steps to create all subnets:

    Name tag         Type     Availability Zone  CIDR block
    gitlab-public-   public   us-west-2a
    gitlab-private-  private  us-west-2a
    gitlab-public-   public   us-west-2b
    gitlab-private-  private  us-west-2b

    Give each subnet its own non-overlapping /24 CIDR block, and complete each name tag with the subnet’s IP range.
  4. Once all the subnets are created, enable Auto-assign IPv4 for the two public subnets:
    1. Select each public subnet in turn, select Actions, and select Modify auto-assign IP settings. Enable the option and save.

Internet Gateway

Now, still on the same dashboard, go to Internet Gateways and create a new one:

  1. Select Internet Gateways from the left menu.
  2. Select Create internet gateway, give it the name gitlab-gateway and select Create.
  3. Select it from the table, and then under the Actions dropdown choose “Attach to VPC”.
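These console steps can be sketched with the AWS CLI as follows, where `<igw-id>` and `<vpc-id>` are placeholders for the IDs involved:

```shell
# Sketch: create the gitlab-gateway internet gateway and attach it.
aws ec2 create-internet-gateway \
  --tag-specifications 'ResourceType=internet-gateway,Tags=[{Key=Name,Value=gitlab-gateway}]'
aws ec2 attach-internet-gateway \
  --internet-gateway-id <igw-id> \
  --vpc-id <vpc-id>
```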