
How to Setup a Jenkins to DockerHub Pipeline with Multi-Arch Images

☕ 13 min read

Introduction

We have previously seen how to set up a GitHub to Jenkins pipeline with webhooks. This time we'll continue from that lesson and learn how to configure Jenkins to build Docker images and push them to DockerHub.

We'll also see how to create images for both arm64 and amd64 machines. Keep on reading.

While you read this post, take a moment to connect with me on LinkedIn.

Index

  1. Setup dummy project
  2. Build, test, and delete the Docker image
  3. Setup registry credentials
  4. Push images to DockerHub
  5. Build and push multi-architecture images

Install Docker on the Jenkins host

By now, you should have the Jenkins server up and running, along with a repo containing a working Jenkinsfile. The repo should be linked to your Jenkins server so that each push to GitHub triggers a build. If any of that is missing, please look at How to Setup a GitHub to Jenkins Pipeline with WebHooks.

Coming back to the present: our Jenkins pipeline will use commands like docker build and docker push. Jenkins does not come with Docker capabilities on its own, so we'll have to install Docker ourselves on the machine Jenkins runs on (if your builds run on a machine other than the Jenkins master node, adjust accordingly).

Someone has already written about Setting Up Docker on Ubuntu 20.04 (Arm64), so I won't replicate it here. Please refer to that post for installing Docker on Ubuntu 20.04.

I have also written a docker role for installing Docker Engine. Please note that my Jenkins host runs Ubuntu 20.04 on an arm64 machine. If you are using another OS or architecture, let's work out a multi-platform, multi-architecture Docker install together. It's a good first issue for someone who's learning Ansible and wants some hands-on practice.

Anyway, if you are installing Docker manually, please make sure the jenkins user is added to the docker group. Otherwise, the Jenkins build might fail.
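On a systemd-based host with a default Jenkins package install, that setup might look like this (the jenkins user and service names are assumptions; adjust for your distribution):

```shell
# add the jenkins user to the docker group so it can talk to the Docker daemon
sudo usermod -aG docker jenkins

# group membership is only read at login, so restart the Jenkins service
sudo systemctl restart jenkins

# sanity check: run a docker command as the jenkins user
sudo -u jenkins docker ps
```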

Setup dummy files to work with

Let's write a tiny hello-world Python application. It will be a command-line program: when the script is invoked, it prints 'hello world' to the terminal. Then we'll wrap the script in a Docker image.

Here are the minimal files I came up with.

Dockerfile

FROM python:3.8-alpine

WORKDIR /app

RUN pip3 install pytest

COPY . .

CMD [ "python3", "cotu.py"]

cotu.py

def hello_world():
    print("hello world")


if __name__ == "__main__":
    hello_world()

test_cotu.py

import cotu

def test_hello_world(capsys):
    cotu.hello_world()
    out, err = capsys.readouterr()

    assert out == 'hello world\n'
    assert err == ''

With these 3 files in place (along with our Jenkinsfile), you may want to build an image locally.

Build, run, test, and remove locally

The command to build is docker build -t sntshk/cotu:latest .. Let's break it down. build is the Docker subcommand used to create images from a Dockerfile; it reads the Dockerfile we wrote above as a recipe for the image.

With -t sntshk/cotu:latest, you tell the build command to tag your image. The tag usually takes the form <your dockerhub user id>/<image name>:<image version>. And at last, there is a . (dot), which tells the build command that the Dockerfile is in the current directory itself.
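The tag is just a string assembled from those parts; a quick shell illustration using the values from this post:

```shell
# build up the image reference <user>/<name>:<version> from its parts
DOCKER_USER=sntshk
IMAGE_NAME=cotu
VERSION=latest

TAG="${DOCKER_USER}/${IMAGE_NAME}:${VERSION}"
echo "$TAG"
# → sntshk/cotu:latest
```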

$ docker build -t sntshk/cotu:latest .
Sending build context to Docker daemon  169.5kB
Step 1/7 : FROM python:3.8-alpine
3.8-alpine: Pulling from library/python

[...output trimmed...]

 ---> Running in 6c59633bf78c
Removing intermediate container 6c59633bf78c
 ---> b3814bc48ae9
Successfully built b3814bc48ae9
Successfully tagged sntshk/cotu:latest

You can check that the image was created, and then run a container from it.

$ docker images
REPOSITORY      TAG               IMAGE ID       CREATED         SIZE
sntshk/cotu     latest            b3814bc48ae9   9 minutes ago   58.9MB

$ docker run --rm -it sntshk/cotu
hello world

Don’t panic about the image size just yet. It can be reduced. But my focus is not there right now.

If you run the tests, they will pass as well. You can confirm this by checking that the last command exited with status 0.

$ docker run --rm -it sntshk/cotu pytest
==================================== test session starts ====================================
platform linux -- Python 3.8.12, pytest-7.0.1, pluggy-1.0.0
rootdir: /app
collected 1 item                                                                            

test_cotu.py .                                                                        [100%]

===================================== 1 passed in 0.01s =====================================

$ echo $?
0
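The $? check works for any command, not just docker run; a minimal illustration:

```shell
# $? expands to the exit status of the most recently executed command
true
echo "status after true: $?"
# → status after true: 0

# capture a failing command's status (the || guards against `set -e` scripts)
status=0
false || status=$?
echo "status after false: $status"
# → status after false: 1
```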

Finally, let’s remove the images which we have built.

$ docker images
REPOSITORY      TAG               IMAGE ID       CREATED         SIZE
sntshk/cotu     latest            b3814bc48ae9   9 minutes ago   58.9MB

$ docker rmi $(docker images -qa)
Untagged: sntshk/cotu:latest

[...output trimmed...]

Deleted: sha256:8d3ac3489996423f53d6087c81180006263b79f206d3fdec9e66f0e27ceb8759

$ docker images
REPOSITORY      TAG               IMAGE ID       CREATED         SIZE

We have done the major part here. We know how to create images, run them, run test commands, and remove images. We have to do the same thing on the Jenkins server (minus the running part). We’ll do that in the next section.
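The whole local loop can be captured in one short script (a sketch, using the image name from this post; it requires a working Docker install):

```shell
#!/bin/sh
set -e
IMAGE=sntshk/cotu:latest

docker build -t "$IMAGE" .        # build from the Dockerfile
docker run --rm "$IMAGE"          # run: prints 'hello world'
docker run --rm "$IMAGE" pytest   # run the tests inside the container
docker rmi "$IMAGE"               # clean up the local image
```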

Modify Jenkinsfile to build, test and delete Docker image

Let’s look at how we left our Jenkinsfile in the last article.

pipeline {
    agent any

    stages {
        stage('Init') {
            steps {
                echo 'Initializing..'
                echo "Running ${env.BUILD_ID} on ${env.JENKINS_URL}"
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
                echo 'Running pytest..'
            }
        }
        stage('Build') {
            steps {
                echo 'Building..'
                echo 'Running docker build -t sntshk/cotu .'
            }
        }
        stage('Publish') {
            steps {
                echo 'Publishing..'
                echo 'Running docker push..'
            }
        }
        stage('Cleanup') {
            steps {
                echo 'Cleaning..'
                echo 'Running docker rmi..'
            }
        }
    }
}

These are merely placeholder stages. We're going to put life into them.

Just as the Jenkinsfile has an echo step to print whatever text follows it, there is a sh step to run shell commands.

Below is the diff of the Jenkinsfile after modification.

-        stage('Test') {
+        stage('Build') {
             steps {
-                echo 'Testing..'
-                echo 'Running pytest..'
+                echo 'Running docker build -t sntshk/cotu:latest .'
+                sh 'docker build -t sntshk/cotu:latest .'
             }
         }
-        stage('Build') {
+        stage('Test') {
             steps {
-                echo 'Building..'
-                echo 'Running docker build -t sntshk/cotu .'
+                echo 'Testing..'
+                sh 'docker run --rm -e CI=true sntshk/cotu pytest'
             }
         }
         stage('Cleanup') {
             steps {
-                echo 'Cleaning..'
-                echo 'Running docker rmi..'
+                echo 'Removing unused docker images..'
+                // keep intermediate images as cache, only delete the final image
+                sh 'docker images -q | xargs --no-run-if-empty docker rmi'
             }
         }
     }

A few things to notice here:

  1. I have swapped the positions of the Build and Test stages.

I have decided to build the Docker image first, before testing. This may seem counter-intuitive: if the tests fail, the build was wasted. I know it costs the time of building an image when a test is going to fail, but it spared me the pain of setting up a separate test environment, because the tests run inside the image itself.

  2. Some of the echo statements are now sh commands.

This includes all of the build, test, and cleanup stages.
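One detail worth calling out in the Cleanup stage: --no-run-if-empty is a GNU xargs flag that skips the command entirely when stdin is empty; without it, docker rmi would be invoked with no arguments and fail the build. A harmless stand-in command demonstrates the difference:

```shell
# empty input: --no-run-if-empty means the command is never invoked at all
empty=$(printf '' | xargs --no-run-if-empty echo would-remove)
echo "empty input gave: '$empty'"
# → empty input gave: ''

# non-empty input: the IDs arrive as arguments, just like `docker rmi a b`
full=$(printf 'abc123\ndef456\n' | xargs echo would-remove)
echo "ids gave: '$full'"
# → ids gave: 'would-remove abc123 def456'
```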

At this point, when you run your build (by simply pushing this update to GitHub), the build should pass. Please let me know if it doesn't.

Next, we’ll see what we need to do to push this built image to a registry. See you in the next section.

Setup Docker Registry (DockerHub) credentials with Jenkins

Hello again. The first thing you must do to push to a Docker registry is log in to it. I'm using DockerHub as an example, but it could be any registry: AWS ECR, Google Cloud Container Registry, Azure Container Registry, or a self-hosted registry.

To set up DockerHub as a registry in Jenkins, I need my DockerHub user ID and password. If you don't know how to set up credentials in Jenkins, please follow these steps:

  1. Head over to <JENKINS_URL>/credentials/.

Add Jenkins credential screen

  2. In the table on the right, hover over the (global) text; a caret will appear. Click it and select Add Credentials.

  3. On the page that appears, set Kind to Secret text, Scope to Global, Secret to your DockerHub password, and ID to whatever name you want to refer to this password by. Finally, click the OK button.

Steps to setup a cred

  4. You'll finally see something like this.

Cred listing

Similarly, I have also created a secret for DOCKER_ID. These credentials will be made available to each build as environment variables. We'll set that up in the upcoming sections.

Login to Registry and Publish

With credentials registered with Jenkins, we need to use them in our recipe file to push the built image to the registry. But first, we need to…

Modify Jenkinsfile to initialize credentials

Registering the credentials is not enough by itself; we also have to use them in our pipeline. Some modifications to the Jenkinsfile are needed.

 pipeline {
     agent any
 
+    environment {
+        DOCKER_ID = credentials('DOCKER_ID')
+        DOCKER_PASSWORD = credentials('DOCKER_PASSWORD')
+    }
+
     stages {

The new environment block in the diff above stores the credentials in environment variables that are available for the entire pipeline.

Login to Docker Registry

         stage('Init') {
             steps {
                 echo 'Initializing..'
                 echo "Running ${env.BUILD_ID} on ${env.JENKINS_URL}"
+                echo "Current branch: ${env.BRANCH_NAME}"
+                sh 'echo $DOCKER_PASSWORD | docker login -u $DOCKER_ID --password-stdin'
             }
         }

The docker login line uses the traditional way to log in to DockerHub in a CI environment: the password is piped to --password-stdin so it never appears in the process list or shell history.

Publish to Docker Registry

Finally, our Jenkinsfile needs a final bit of modification to push the built image to DockerHub.

         stage('Publish') {
             steps {
-                echo 'Publishing..'
-                echo 'Running docker push..'
+                echo 'Publishing image to DockerHub..'
+                sh 'docker push $DOCKER_ID/cotu:latest'
             }
         }

After this stage, the image is deleted in the cleanup stage to save storage on our Jenkins host.

Final Jenkinsfile

pipeline {
    agent any

    environment {
        DOCKER_ID = credentials('DOCKER_ID')
        DOCKER_PASSWORD = credentials('DOCKER_PASSWORD')
    }

    stages {
        stage('Init') {
            steps {
                echo 'Initializing..'
                echo "Running ${env.BUILD_ID} on ${env.JENKINS_URL}"
                echo "Current branch: ${env.BRANCH_NAME}"
                sh 'echo $DOCKER_PASSWORD | docker login -u $DOCKER_ID --password-stdin'
            }
        }
        stage('Build') {
            steps {
                echo 'Building image..'
                sh 'docker build -t $DOCKER_ID/cotu:latest .'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
                sh 'docker run --rm -e CI=true $DOCKER_ID/cotu pytest'
            }
        }
        stage('Publish') {
            steps {
                echo 'Publishing image to DockerHub..'
                sh 'docker push $DOCKER_ID/cotu:latest'
            }
        }
        stage('Cleanup') {
            steps {
                echo 'Removing unused docker containers and images..'
                sh 'docker ps -aq | xargs --no-run-if-empty docker rm'
                // keep intermediate images as cache, only delete the final image
                sh 'docker images -q | xargs --no-run-if-empty docker rmi'
            }
        }
    }
}

The pipeline is pushing images.

Passing builds and publishing of images
Pushed image to DockerHub

Mission accomplished. If you’d like to know about the multi-arch builds, please refer to the bonus section.

Problems you might run into

  1. Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock

This is a typical error and is not Jenkins-specific. When you see it, there are two likely causes:

a) The jenkins user is not in the docker group.
b) The user was added to the group but the session hasn't picked up the new membership yet.

Please see post installation guide for Linux on Docker Docs.

In Jenkins' context, you might need to restart the Jenkins service.

Bonus: Multi-Arch images with docker manifest

As you can see in the last image, the pushed image is for arm64 machines only. You can't run it on an amd64 machine and expect it to work, because it was built on an arm64 host. By default, docker pull and docker push are architecture-aware: Docker knows which architecture you are on, and pulls/pushes accordingly.

But I usually work on an amd64 machine locally, and this motivated me to build images for multiple architectures simultaneously.

For this, there is a Docker CLI plugin called Buildx. Luckily, buildx ships by default with recent Docker Engine installations on Linux. The next step is to create a new builder instance.

It is important to run the command below as the jenkins user on the Jenkins server.

$ docker buildx create --use --name multiarch

If you miss this step, your builds will fail. I have not been able to incorporate it into my Ansible config yet, so it stays manual for now. Let's also inspect the builder instance to see which architectures it supports.

$ docker buildx inspect --bootstrap
Name:   multiarch
Driver: docker-container

Nodes:
Name:      multiarch0
Endpoint:  unix:///var/run/docker.sock
Status:    running
Platforms: linux/arm64, linux/arm/v7, linux/arm/v6

If, after running the above command, you don't see your target architecture in the Platforms section, you need to install the appropriate emulators. I was using an Ubuntu machine, so I installed qemu-user-static from apt, restarted the Docker service, and ran docker buildx inspect --bootstrap again.
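On Ubuntu, those steps amount to the following (package and service names are for Debian-family distributions):

```shell
# install QEMU user-mode emulators so foreign-arch build steps can run
sudo apt-get update
sudo apt-get install -y qemu-user-static

# restart docker so the new binfmt handlers are picked up
sudo systemctl restart docker

# the Platforms line should now include your target architectures
docker buildx inspect --bootstrap
```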

$ docker buildx inspect --bootstrap
Name:   multiarch
Driver: docker-container

Nodes:
Name:      multiarch0
Endpoint:  unix:///var/run/docker.sock
Status:    running
Platforms: linux/arm64, linux/amd64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6

Now my build machine is capable of building amd64 images.

If you still see a problem at this stage, feel free to ping me at @sntshk. Also, have a read of Building Multi-Architecture Docker Images With Buildx by Artur Klauser.

Modify Dockerfile and Jenkinsfile for multi-arch build

We have added --platform=$TARGETPLATFORM in the FROM directive.

Dockerfile

-FROM python:3.8-alpine
+FROM --platform=$TARGETPLATFORM python:3.8-alpine
 
WORKDIR /app

This is important because we'll be passing --platform to the build command from now on, and the image will be built for every platform we pass. TARGETPLATFORM is a build argument that buildx sets automatically for each platform in the list.

Jenkinsfile

@@ -18,7 +18,7 @@ pipeline {
         stage('Build') {
             steps {
                 echo 'Building image..'
-                sh 'docker build -t $DOCKER_ID/cotu:latest .'
+                sh 'docker buildx build --load -t $DOCKER_ID/cotu:latest .'
             }
         }
         stage('Test') {
@@ -29,16 +29,15 @@ pipeline {
         }
         stage('Publish') {
             steps {
-                echo 'Publishing image to DockerHub..'
-                sh 'docker push $DOCKER_ID/cotu:latest'
+                echo 'Building and publishing multi-arch image to DockerHub..'
+                sh 'docker buildx build --push --platform linux/amd64,linux/arm64 -t $DOCKER_ID/cotu:latest .'
             }
         }

Some notable changes are:

  1. docker build is replaced with docker buildx build. Note that with the docker-container driver, buildx does not load the built image into the local Docker daemon by default, so pass --load if a later stage needs to docker run it.
  2. buildx also has a --push flag, which tells Docker to push the image as soon as the build succeeds.
  3. buildx takes a --platform flag: a comma-separated list of architectures.

When the build succeeds, you should see something like this on the DockerHub side.

DockerHub listing of multi-arch image
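Besides the DockerHub UI, you can verify the published platforms from the command line (tag name as used in this post):

```shell
# list the per-architecture manifests behind a multi-arch tag
docker buildx imagetools inspect sntshk/cotu:latest
```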

For people reading this article on a Mac: the irony of macOS is that you don't need a darwin/amd64 image to run on a Mac; linux/amd64 works fine. This is because Docker on Mac runs containers inside a Linux VM rather than natively on the host.

Epilogue

That's how we create an image for a CLI application, and how we build it for multiple architectures. The Docker client will request the image matching its host architecture.

With that said, if you plan to use the methods described in this post in production, I'd highly recommend looking into the security aspects. I didn't cover much security on the Jenkins side, as it was not in the scope of this article. One thing that can be improved: when you go through the console output of the build, you'll see something like this:

WARNING! Your password will be stored unencrypted in /var/lib/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

This is one place you can improve. There is also a Docker Pipeline plugin, which provides its own syntax for writing Jenkinsfiles. I haven't used it personally, so I can't say much about it.

Please let me know what you think about this post. Subscribe to the newsletter below.

WRITTEN BY
Santosh Kumar
Santosh is a Software Developer currently working with NuNet as a Full Stack Developer.