Connect EKS clusters through cluster certificates (deprecated)

Tier: Free, Premium, Ultimate
Offering: GitLab.com, Self-managed
caution
This feature was deprecated in GitLab 14.5. Use Infrastructure as Code to create new clusters.

Through GitLab, you can create new clusters and add existing clusters hosted on Amazon Elastic Kubernetes Service (EKS).

Connect an existing EKS cluster

If you already have an EKS cluster and want to connect it to GitLab, use the GitLab agent.

Create a new EKS cluster

To create a new cluster from GitLab, use Infrastructure as Code.

How to create a new cluster on EKS through cluster certificates (deprecated)

Prerequisites:

For instance-level clusters, see additional requirements for self-managed instances.

To create new Kubernetes clusters for your project, group, or instance through the certificate-based method:

  1. Define the access control (RBAC or ABAC) for your cluster.
  2. Create a cluster in GitLab.
  3. Prepare the cluster in Amazon.
  4. Configure your cluster’s data in GitLab.

Further steps:

  1. Create a default Storage Class.
  2. Deploy the app to EKS.

Create a new EKS cluster in GitLab

To create a new EKS cluster for your project, group, or instance through cluster certificates:

  1. Go to your:
    • Project’s Operate > Kubernetes clusters page, for a project-level cluster.
    • Group’s Kubernetes page, for a group-level cluster.
    • Admin area’s Kubernetes page, for an instance-level cluster.
  2. Select Integrate with a cluster certificate.
  3. Under the Create new cluster tab, select Amazon EKS to display an Account ID and External ID needed for later steps.
  4. In the IAM Management Console, create an IAM policy:
    1. From the left panel, select Policies.
    2. Select Create Policy, which opens a new window.
    3. Select the JSON tab, and paste the following snippet in place of the existing content. These permissions give GitLab the ability to create resources, but not delete them:

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "autoscaling:CreateAutoScalingGroup",
                      "autoscaling:DescribeAutoScalingGroups",
                      "autoscaling:DescribeScalingActivities",
                      "autoscaling:UpdateAutoScalingGroup",
                      "autoscaling:CreateLaunchConfiguration",
                      "autoscaling:DescribeLaunchConfigurations",
                      "cloudformation:CreateStack",
                      "cloudformation:DescribeStacks",
                      "ec2:AuthorizeSecurityGroupEgress",
                      "ec2:AuthorizeSecurityGroupIngress",
                      "ec2:RevokeSecurityGroupEgress",
                      "ec2:RevokeSecurityGroupIngress",
                      "ec2:CreateSecurityGroup",
                      "ec2:createTags",
                      "ec2:DescribeImages",
                      "ec2:DescribeKeyPairs",
                      "ec2:DescribeRegions",
                      "ec2:DescribeSecurityGroups",
                      "ec2:DescribeSubnets",
                      "ec2:DescribeVpcs",
                      "eks:CreateCluster",
                      "eks:DescribeCluster",
                      "iam:AddRoleToInstanceProfile",
                      "iam:AttachRolePolicy",
                      "iam:CreateRole",
                      "iam:CreateInstanceProfile",
                      "iam:CreateServiceLinkedRole",
                      "iam:GetRole",
                      "iam:listAttachedRolePolicies",
                      "iam:ListRoles",
                      "iam:PassRole",
                      "ssm:GetParameters"
                  ],
                  "Resource": "*"
              }
          ]
      }
      

      If you get an error during this process, GitLab does not roll back the changes. You must remove resources manually. You can do this by deleting the relevant CloudFormation stack.

    4. Select Review policy.
    5. Enter a suitable name for this policy, and select Create Policy. You can now close this window.

Prepare the cluster in Amazon

  1. Create an EKS IAM role for your cluster (role A).
  2. Create another EKS IAM role for GitLab authentication with Amazon (role B).

Create an EKS IAM role for your cluster

In the IAM Management Console, create an EKS IAM role (role A) following the Amazon EKS cluster IAM role instructions. This role is necessary so that Kubernetes clusters managed by Amazon EKS can make calls to other AWS services on your behalf to manage the resources that you use with the service.

For GitLab to manage the EKS cluster correctly, you must include AmazonEKSClusterPolicy in addition to the policies the guide suggests.
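
If you create this role outside the console wizard, its trust policy must allow the EKS service to assume the role. This is standard AWS configuration rather than anything GitLab-specific; a minimal sketch of that trust policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}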

Create another EKS IAM role for GitLab authentication with Amazon

In the IAM Management Console, create another IAM role (role B) for GitLab authentication with AWS:

  1. On the AWS IAM console, select Roles from the left panel.
  2. Select Create role.
  3. Under Select type of trusted entity, select Another AWS account.
  4. Enter the Account ID from GitLab into the Account ID field.
  5. Check Require external ID.
  6. Enter the External ID from GitLab into the External ID field.
  7. Select Next: Permissions, and select the policy you just created.
  8. Select Next: Tags, and optionally enter any tags you wish to associate with this role.
  9. Select Next: Review.
  10. Enter a role name and optional description into the fields provided.
  11. Select Create role. The new role name displays at the top. Select its name and copy the Role ARN from the newly created role.
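
The resulting trust relationship on role B references the Account ID and External ID you copied from GitLab. A minimal sketch of what that trust policy looks like, with placeholder values:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<GitLab Account ID>:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "<External ID from GitLab>"
        }
      }
    }
  ]
}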

Configure your cluster’s data in GitLab

  1. Back in GitLab, enter the copied role ARN into the Role ARN field.
  2. In the Cluster Region field, enter the region you plan to use for your new cluster. GitLab confirms you have access to this region when authenticating your role.
  3. Select Authenticate with AWS.
  4. Adjust your cluster’s settings.
  5. Select the Create Kubernetes cluster button.

After about 10 minutes, your cluster is ready to go.

note
If you have installed and configured kubectl and you would like to manage your cluster with it, you must add your AWS external ID in the AWS configuration. For more information on how to configure AWS CLI, see using an IAM role in the AWS CLI.

Cluster settings

When you create a new cluster, you have the following settings:

  • Kubernetes cluster name: Your cluster’s name.
  • Environment scope: The associated environment.
  • Service role: The EKS IAM role (role A).
  • Kubernetes version: The Kubernetes version for your cluster.
  • Key pair name: The key pair that you can use to connect to your worker nodes.
  • VPC: The VPC to use for your EKS cluster resources.
  • Subnets: The subnets in your VPC where your worker nodes run. Two are required.
  • Security group: The security group to apply to the EKS-managed Elastic Network Interfaces that are created in your worker node subnets.
  • Instance type: The instance type of your worker nodes.
  • Node count: The number of worker nodes.
  • GitLab-managed cluster: Select this checkbox if you want GitLab to manage namespaces and service accounts for this cluster.

Create a default Storage Class

Amazon EKS doesn’t have a default Storage Class out of the box, which means requests for persistent volumes are not automatically fulfilled. As part of Auto DevOps, the deployed PostgreSQL instance requests persistent storage, and without a default storage class it cannot start.

If you want a default Storage Class and one doesn’t already exist, follow Amazon’s guide on storage classes to create one.
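
For reference, a minimal sketch of what such a default Storage Class could look like. It assumes the in-tree kubernetes.io/aws-ebs provisioner with gp2 volumes; Kubernetes accepts JSON manifests as well as YAML, so you can save this to a file and apply it with kubectl:

{
  "apiVersion": "storage.k8s.io/v1",
  "kind": "StorageClass",
  "metadata": {
    "name": "gp2",
    "annotations": {
      "storageclass.kubernetes.io/is-default-class": "true"
    }
  },
  "provisioner": "kubernetes.io/aws-ebs",
  "parameters": {
    "type": "gp2"
  }
}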

Alternatively, disable PostgreSQL by setting the project variable POSTGRES_ENABLED to false.

Deploy the app to EKS

With RBAC disabled and services deployed, you can now use Auto DevOps to build, test, and deploy the app.

Enable Auto DevOps if it isn’t already enabled. If a wildcard DNS entry was created to resolve to the Load Balancer, enter it in the domain field under the Auto DevOps settings. Otherwise, the deployed app isn’t available outside the cluster.

GitLab creates a new pipeline, which begins to build, test, and deploy the app.

After the pipeline has finished, your app runs in EKS, and is available to users. Select Operate > Environments.

GitLab displays a list of the environments and their deploy status, as well as options to browse to the app, view monitoring metrics, and even access a shell on the running pod.

Additional requirements for self-managed instances

Tier: Free, Premium, Ultimate
Offering: Self-managed, GitLab Dedicated

If you are using a self-managed GitLab instance, you need to configure Amazon credentials. GitLab uses these credentials to assume an Amazon IAM role to create your cluster.

Create an IAM user and ensure it has permissions to assume the roles that your users need to create EKS clusters.

For example, the following policy document allows assuming a role whose name starts with gitlab-eks- in account 123456789012:

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::123456789012:role/gitlab-eks-*"
  }
}

Configure Amazon authentication

To configure Amazon authentication in GitLab, generate an access key for the IAM user in the Amazon AWS console, and follow these steps:

  1. On the left sidebar, at the bottom, select Admin.
  2. Select Settings > General.
  3. Expand Amazon EKS.
  4. Check Enable Amazon EKS integration.
  5. Enter your Account ID.
  6. Enter your Access key ID and Secret access key.
  7. Select Save changes.

You can use instance profiles to dynamically retrieve temporary credentials from AWS when needed. In this case, leave the Access key ID and Secret access key fields blank and pass an IAM role to an EC2 instance.

Otherwise, enter your access key credentials into Access key ID and Secret access key.
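
If you take the instance profile route, the IAM role you attach to the EC2 instance needs both the sts:AssumeRole permissions policy shown earlier and a trust policy that lets EC2 assume the role. A minimal sketch of that trust policy (standard AWS configuration, not GitLab-specific):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}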

Troubleshooting

The following errors are commonly encountered when creating a new cluster.

Validation failed: Role ARN must be a valid Amazon Resource Name

Check that the Provision Role ARN is correct. An example of a valid ARN:

arn:aws:iam::123456789012:role/gitlab-eks-provision

Access denied: User is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::y

This error occurs when the credentials defined in Configure Amazon authentication cannot assume the role defined by the Provision Role ARN:

User `arn:aws:iam::x` is not authorized to perform: `sts:AssumeRole` on resource: `arn:aws:iam::y`

Check that:

  1. The initial set of AWS credentials has the AssumeRole policy.
  2. The Provision Role has access to create clusters in the given region.
  3. The account ID and external ID match the values defined on the Trust relationships tab in AWS.

Could not load Security Groups for this VPC

GitLab returns this error when populating options in the configuration form. GitLab has successfully assumed your provided role, but the role has insufficient permissions to retrieve the resources needed for the form. Make sure you’ve assigned the role the correct permissions.

Key Pairs are not loaded

GitLab loads the key pairs from the Cluster Region you specified. Ensure that the key pair exists in that region.

ROLLBACK_FAILED during cluster creation

The creation process halted because GitLab encountered an error when creating one or more resources. You can inspect the associated CloudFormation stack to find the specific resources that failed to create.

If the Cluster resource failed with the error `The provided role doesn't have the Amazon EKS Managed Policies associated with it.`, the role specified in Role name is not configured correctly.

note
This role should be the role you created by following the EKS cluster IAM role guide. In addition to the policies that guide suggests, you must also include the AmazonEKSClusterPolicy policy for this role in order for GitLab to manage the EKS cluster correctly.