# Authenticate with registry in Docker-in-Docker

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
When you use Docker-in-Docker, the standard authentication methods do not work, because the service starts a fresh Docker daemon that has no stored credentials.
## Option 1: Run `docker login`

In `before_script`, run `docker login`:
```yaml
default:
  image: docker:24.0.5-cli
  services:
    - docker:24.0.5-dind

variables:
  DOCKER_TLS_CERTDIR: "/certs"

build:
  stage: build
  before_script:
    - echo "$DOCKER_REGISTRY_PASS" | docker login $DOCKER_REGISTRY --username $DOCKER_REGISTRY_USER --password-stdin
  script:
    - docker build -t my-docker-image .
    - docker run my-docker-image /script/to/run/tests
```

To sign in to Docker Hub, leave `$DOCKER_REGISTRY` empty or remove it.
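If the job signs in to the project's GitLab container registry instead, GitLab's predefined CI/CD variables can stand in for the custom ones. A minimal sketch of the same `before_script`, with the rest of the job unchanged:

```yaml
build:
  stage: build
  before_script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login $CI_REGISTRY --username $CI_REGISTRY_USER --password-stdin
```

`CI_REGISTRY`, `CI_REGISTRY_USER`, and `CI_REGISTRY_PASSWORD` are predefined variables, so no custom variables need to be configured for this case.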
## Option 2: Mount `~/.docker/config.json` on each job
If you are an administrator for GitLab Runner, you can mount a file
with the authentication configuration to `~/.docker/config.json`.
Then every job that the runner picks up is already authenticated. If you
are using the official `docker:24.0.5` image, the home directory is
under `/root`.
If you mount the configuration file, any `docker` command
that modifies `~/.docker/config.json` fails. For example, `docker login`
fails, because the file is mounted as read-only. Do not change it from
read-only, because this causes problems.
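The `auth` value in such a configuration file is the Base64 encoding of `username:password`. A quick way to generate it, using the placeholder credentials `my_username` and `my_password`:

```shell
# Encode placeholder credentials as username:password in Base64.
# printf (not echo) avoids including a trailing newline in the encoding.
printf "my_username:my_password" | base64
# => bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ=
```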
Here is an example of `/opt/.docker/config.json` that follows the
`DOCKER_AUTH_CONFIG` documentation:

```json
{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ="
        }
    }
}
```

### Docker
Update the volume mounts to include the file.
```toml
[[runners]]
  ...
  executor = "docker"
  [runners.docker]
    ...
    privileged = true
    volumes = ["/opt/.docker/config.json:/root/.docker/config.json:ro"]
```

### Kubernetes
Create a ConfigMap with the content of this file. You can do this with a command like:

```shell
kubectl create configmap docker-client-config --namespace gitlab-runner --from-file /opt/.docker/config.json
```

Update the volume mounts to include the file.
```toml
[[runners]]
  ...
  executor = "kubernetes"
  [runners.kubernetes]
    image = "alpine:3.12"
    privileged = true
    [[runners.kubernetes.volumes.config_map]]
      name = "docker-client-config"
      mount_path = "/root/.docker/config.json"
      sub_path = "config.json"
```

## Option 3: Use `DOCKER_AUTH_CONFIG`
If you already have `DOCKER_AUTH_CONFIG` defined, you can use the
variable and save it in `~/.docker/config.json`.
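A value for `DOCKER_AUTH_CONFIG` can be composed locally before you store it as a masked CI/CD variable. A sketch with placeholder credentials (`my_username` and `my_password` are stand-ins for real ones):

```shell
# Build a DOCKER_AUTH_CONFIG value from placeholder credentials; store the
# printed JSON as a masked CI/CD variable named DOCKER_AUTH_CONFIG.
auth="$(printf "my_username:my_password" | base64)"
printf '{"auths": {"https://index.docker.io/v1/": {"auth": "%s"}}}\n' "$auth"
```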
You can define this authentication in several ways:
- In `pre_build_script` in the runner configuration file.
- In `before_script`.
- In `script`.
The following example shows `before_script`.
The same commands apply for any solution you implement.
```yaml
default:
  image: docker:24.0.5-cli
  services:
    - docker:24.0.5-dind

variables:
  DOCKER_TLS_CERTDIR: "/certs"

build:
  stage: build
  before_script:
    - mkdir -p $HOME/.docker
    - echo $DOCKER_AUTH_CONFIG > $HOME/.docker/config.json
  script:
    - docker build -t my-docker-image .
    - docker run my-docker-image /script/to/run/tests
```
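Outside CI, the two `before_script` steps can be rehearsed with a placeholder value in a throwaway home directory. The `HOME` override and the JSON value below are assumptions for local testing, not part of the pipeline:

```shell
# Rehearse the before_script steps: write a placeholder DOCKER_AUTH_CONFIG
# value to $HOME/.docker/config.json and read it back.
export HOME="$(mktemp -d)"   # throwaway home, so no real config is touched
DOCKER_AUTH_CONFIG='{"auths": {"https://index.docker.io/v1/": {"auth": "bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ="}}}'
mkdir -p "$HOME/.docker"
echo "$DOCKER_AUTH_CONFIG" > "$HOME/.docker/config.json"
cat "$HOME/.docker/config.json"
```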