The Tekton project enables pipeline resources to be declared as Kubernetes CRDs and therefore managed in a Kubernetes-native way. A common use case for an on-premise Kubernetes cluster is building and pushing Docker images to a private registry. Targeting the OpenShift 3.11 Docker registry, this paper explores different ways of building and pushing images in Tekton pipelines.

Tekton Pipeline Resources

The goal of the Tekton pipeline is to pull the source code of a simple hello-world Golang HTTP handler program from a Git repository, build the executable and the Docker image, and push it to the OpenShift image registry.

The Dockerfile is shown below:

FROM golang as builder
WORKDIR /build
COPY src/*.go /build
RUN CGO_ENABLED=0 go build -o demoApp *.go

FROM alpine
WORKDIR /app
COPY --from=builder /build/demoApp /app
CMD ["./demoApp"]

To generalize the process, the following pipeline resources are created for the git repo and the docker registry repo.

---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: cicd-demo-git-source
spec:
  type: git
  params:
    - name: revision
      value: master
    - name: url
      value: http://gitea.apps.ocp.fyre.io.cpak/wenzm/cicd-demo.git
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: cicd-demo-registry-image
spec:
  type: image
  params:
    - name: url
      value: docker-registry.default.svc.cluster.local:5000/cicd-demo/cicd-demo

The git resource points to a Git repo hosted on Gitea, which runs on the on-premise OpenShift Kubernetes cluster. The image resource points to the OpenShift registry, defined with the namespace and the repo name.

Access to OpenShift Image Registry

As the Tekton pipeline runs inside the Kubernetes cluster, we can access the OpenShift image registry through its service name, which is

docker-registry.default.svc.cluster.local:5000

We use a service account and its token for authentication. In the namespace, create a service account named pipeline. Assign the role system:image-builder to this service account, so that RBAC allows it to create the image stream objects and push images into this namespace.

oc create sa pipeline

oc adm policy add-role-to-user system:image-builder -z pipeline

The password is the service account’s token, which can be retrieved by,

oc get secret $(oc get secret | grep pipeline-token | head -1 | awk '{print $1}') -o jsonpath="{.data.token}" | base64 -d

Validate it by connecting to the registry exposed by the OpenShift router, which in my case is docker-registry-default.apps.ocp.fyre.io.cpak,

docker login docker-registry-default.apps.ocp.fyre.io.cpak

Username: sa

Password: <The token shown with above command>

Login Succeeded

Method 1. Build with Docker

The most straightforward way to build the image in a Tekton pipeline is to build with the docker container itself, i.e. Docker in Docker. However, to make it work in the OpenShift cluster, a few points need to be noted.

In this approach, mounting the host's Docker socket is a big security concern. Never do this in a production environment.

The following is the Tekton task object created for this task.

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: cicd-demo-build-with-docker
  namespace: cicd-demo
spec:
  inputs:
    resources:
      - name: git-source
        type: git
    params:
      - name: pathToDockerFile
        default: /workspace/git-source/compile.Dockerfile
      - name: pathToContext
        default: /workspace/git-source
      - name: registryHost
        default: "docker-registry.default.svc.cluster.local:5000"
  outputs:
    resources:
      - name: docker-image
        type: image
  steps:
    - name: build
      image: docker
      securityContext:
        runAsUser: 0
        privileged: true
      command:
        - sh
        - -c
        - "cd ${inputs.params.pathToContext} && \
          docker build -f int-build.dockerfile -t int-build . && \
          docker stop int-container || echo ignore not running && \
          docker rm int-container || echo ignore non-exists && \
          docker run --name int-container -d int-build sh -c 'while true; do sleep 60; done' && \
          docker cp int-container:/go/demoApp . && \
          docker stop int-container && \
          docker build -f final-build.dockerfile -t ${outputs.resources.docker-image.url} . && \
          docker login ${inputs.params.registryHost} -u sa -p $(cat /var/run/secrets/kubernetes.io/serviceaccount/token) && \
          docker push ${outputs.resources.docker-image.url}
          "
      volumeMounts:
        - name: docker-socket
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-socket
      hostPath:
        path: /var/run/docker.sock
        type: Socket

In the build step, we use the docker image. However, in order to build with Docker in Docker (DinD), we mount the Docker socket from the node into the pod, and the container has to run in privileged mode. Therefore, the respective Security Context Constraint (SCC) must be assigned to the service account that runs the pod.

oc adm policy add-scc-to-user privileged -z pipeline

Accordingly, the security context in the container definition sets the following:

securityContext:
  runAsUser: 0
  privileged: true

As the version of Docker shipped with OpenShift 3.11 doesn't support multi-stage builds, we have to fall back to some older Docker techniques to build the image. First, we compile and build the binary with the Dockerfile named int-build.Dockerfile:

FROM golang
COPY src/* src
RUN CGO_ENABLED=0 go build -o demoApp src/*.go

Then we run the container as a daemon with a looping shell command, and use "docker cp" to copy the binary out into the build context. Lastly, we build the release image with the following Dockerfile:

FROM alpine
WORKDIR /app
COPY demoApp /app
CMD ["./demoApp"]

Before we can push the image to the registry, we need to log in using the docker login command. The user name can be anything; the password is the service account's token, which Kubernetes mounts at /var/run/secrets/kubernetes.io/serviceaccount/token. Use the content of this file to log in, and we can push the image.

We can run the task by creating a TaskRun object that binds the task to the pipeline resources.
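For reference, a minimal v1alpha1 TaskRun sketch could look like the following; the object name is an assumption, and depending on the Tekton version the service account field may be named serviceAccount or serviceAccountName:

```
apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  name: cicd-demo-build-with-docker-run
  namespace: cicd-demo
spec:
  serviceAccount: pipeline
  taskRef:
    name: cicd-demo-build-with-docker
  inputs:
    resources:
      - name: git-source
        resourceRef:
          name: cicd-demo-git-source
  outputs:
    resources:
      - name: docker-image
        resourceRef:
          name: cicd-demo-registry-image
```

Apply it with kubectl apply and follow the logs of the pod it creates to watch the build.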

Overall, the build speed is fast. However, mounting the node's Docker socket and requiring the privileged security context are the biggest security concerns of this approach.

Method 2. Build with Buildah

Buildah is a command-line tool that facilitates the building of Open Container Initiative (OCI) container images.

While running the command-line tool on the host outside of the cluster is pretty smooth, running it inside the OpenShift Kubernetes cluster is quite challenging. We use the pre-built Docker image quay.io/buildah/stable for our Tekton task. The final working version is shown below:

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: cicd-demo-build-with-buildah
  namespace: cicd-demo
spec:
  inputs:
    resources:
      - name: git-source
        type: git
    params:
      - name: pathToDockerFile
        default: /workspace/git-source/Dockerfile
      - name: pathToContext
        default: /workspace/git-source
      - name: registryHost
        default: "docker-registry.default.svc.cluster.local:5000"
      - name: imageTag
        default: cicd-demo
  outputs:
    resources:
      - name: docker-image
        type: image
  steps:
    - name: build
      image: quay.io/buildah/stable
      securityContext:
        runAsUser: 0
        privileged: true
      command:
        - sh
        - -c
        - "cd ${inputs.params.pathToContext} && \
          buildah --storage-driver=vfs build-using-dockerfile -f ${inputs.params.pathToDockerFile} -t ${inputs.params.imageTag} . && \
          buildah --storage-driver=vfs push --tls-verify=false --creds=anyone:$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) ${inputs.params.imageTag} docker://${outputs.resources.docker-image.url}
          "

Ideally, we would not expect the privileged security context to be required. However, without it, I encountered the following error:

Error during unshare(CLONE_NEWUSER): Invalid argument
User namespaces are not enabled in /proc/sys/user/max_user_namespaces.

We should be able to run the normal build command as below,

buildah build-using-dockerfile -f ${inputs.params.pathToDockerFile} -t ${inputs.params.imageTag} .

However, I encountered the following error (copied from the container log):

process exited with error: fork/exec /bin/sh: no such file or directory
subprocess exited with status 1

In the end, we have to use the buildah --storage-driver=vfs option to build the image. It's also noted that in order for WORKDIR /build to work, we have to create that directory explicitly in the Dockerfile, as below:

FROM golang as builder
RUN mkdir /build
WORKDIR /build
COPY src/*.go /build
RUN CGO_ENABLED=0 go build -o demoApp *.go

FROM alpine
RUN mkdir /app
WORKDIR /app
COPY --from=builder /build/demoApp /app
CMD ["./demoApp"]

For the authentication part, we use the content of the token file as the password to log in to the registry.

Overall, Buildah doesn't require anything to be mounted from the host, but we still need the privileged security context, and the build speed is relatively slow because we have to use the vfs storage driver to make it work inside the cluster.

Method 3. Build with Kaniko

Kaniko builds container images from a Dockerfile inside a container, without requiring a Docker daemon. Let's create the following Tekton task:

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: cicd-demo-build-with-kaniko
  namespace: cicd-demo
spec:
  inputs:
    resources:
      - name: git-source
        type: git
    params:
      - name: pathToDockerFile
        default: /workspace/git-source/Dockerfile
      - name: pathToContext
        default: /workspace/git-source
  outputs:
    resources:
      - name: docker-image
        type: image
  steps:
    - name: build
      image: gcr.io/kaniko-project/executor:debug
      securityContext:
        runAsUser: 0
      command:
        - sh
        - "-c"
        - "
          /kaniko/executor \
          --dockerfile=${inputs.params.pathToDockerFile} \
          --destination=${outputs.resources.docker-image.url} \
          --context=${inputs.params.pathToContext} \
          --skip-tls-verify
          "
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker
  volumes:
    - name: docker-config
      configMap:
        name: docker-config

I am using the gcr.io/kaniko-project/executor:debug image so that I can insert sleep commands in between to troubleshoot.

Notice that we don't need the privileged security context.

One special thing is that for Kaniko to push the image to the private registry, we need to provide the credentials in Docker's config.json format.

Create the following JSON file,

{
  "auths": {
    "docker-registry.default.svc.cluster.local:5000": {
      "auth": "c2E6ZXlKaGJHY2lPaUp<...skipped many lines ...>"
    }
  }
}

The value for auth is obtained with the commands below:

secret=$(oc get secret | grep pipeline-token | head -1 | awk '{print $1}')
token=$(oc get secret $secret -o jsonpath="{.data.token}" | base64 -d)
echo -n "sa:$token" | base64 -w0

The echo -n avoids encoding a trailing newline, and base64 -w0 disables line wrapping, so there is no need to manually join the base64 output lines. A better approach might be to automate it with some golang magic such as base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("sa:%s", token))).
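That golang magic can be sketched as a small self-contained program; the token value below is a placeholder for illustration, and in practice it would be read from the service account's token secret:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// authValue builds the value for the "auth" field of Docker's config.json:
// base64("sa:<token>"), with no trailing newline and no line wrapping.
func authValue(token string) string {
	return base64.StdEncoding.EncodeToString([]byte("sa:" + token))
}

func main() {
	// Placeholder token for illustration only.
	fmt.Println(authValue("mytoken")) // c2E6bXl0b2tlbg==
}
```

Unlike the shell pipeline, the standard-library encoder never appends a newline or wraps long lines, so the output can be pasted into config.json directly.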

Create the config map with the key named config.json:

kubectl -n cicd-demo create configmap docker-config --from-file=config.json=docker.config.json

Use this config map to mount it into the pod at the default directory /kaniko/.docker.

Conclusion

Based on this comparison, Kaniko provides a more secure, yet fast enough, approach to building and pushing images in Tekton pipelines.