Over the weekend, I explored a CI/CD implementation with Gitea and Drone.io running on top of K3s. The goal is a lightweight CI/CD setup that can be hosted on a laptop to simulate an on-premise environment. This paper documents the setup for future reference.

It is assumed that K3s is up and running on your laptop. Refer to my paper “Running K3s with Multipass on Macbook” for how to run K3s on your Macbook.

Gitea setup

Gitea will be the source code repository. Though there is no official chart in the Kubernetes Helm repository, thanks to jfelten there is a chart at https://github.com/jfelten/gitea-helm-chart.

Git clone the repository and create the Helm package by running

helm package gitea

Upload the created package, gitea-1.6.1.tgz, to the Multipass VM that hosts K3s. Drop the file into the directory /var/lib/rancher/k3s/server/static/charts.

Now create the HelmChart deployment CRD,

apiVersion: k3s.cattle.io/v1
kind: HelmChart
metadata:
  name: gitea
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/gitea-1.6.1.tgz
  targetNamespace: gitea
  valuesContent: |-
    service:
      http:
        serviceType: NodePort
        externalPort:
        externalHost:
    persistence:
      enabled: true
      accessMode: ReadWriteOnce
    config:
      secretKey: password
      disableInstaller: true

Set the serviceType to NodePort. Leave externalPort and externalHost empty to use the cluster IP. Enable persistence with the ReadWriteOnce access mode, since ReadWriteMany is not supported on hostPath volumes. Set the secretKey so that the initial installer popup is disabled.

Once the pods are running, find the NodePort of the service, go to the URL, and register a user.
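The NodePort can be read from the service listing. A sketch, assuming the Gitea HTTP service is named gitea-gitea-http; the kubectl line is shown for reference only, and the parsing is demonstrated on a sample output line with illustrative values:

```shell
# On the cluster: kubectl get svc -n gitea gitea-gitea-http
# A sample output line (values are illustrative):
sample='gitea-gitea-http   NodePort   10.43.12.34   <none>   3000:31456/TCP   5m'
# Extract the NodePort mapped to Gitea's HTTP port 3000:
nodeport=$(printf '%s\n' "$sample" | sed -n 's/.*3000:\([0-9][0-9]*\)\/TCP.*/\1/p')
echo "$nodeport"
```

The URL to open is then http://<node-ip>:<nodeport>.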

Drone setup

There is a stable Drone Helm chart. Download the chart locally to avoid out-of-sync chart versions, then deploy it with the following,



apiVersion: k3s.cattle.io/v1
kind: HelmChart
metadata:
  name: drone
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/drone-2.0.0-rc.12.tgz
  targetNamespace: drone
  valuesContent: |-
    service:
      type: NodePort
    sourceControl:
      provider: "gitea"
      gitea:
        server: "http://gitea-gitea-http.gitea:3000"
    server:
      adminUser: "wenzm"

Set the provider to “gitea” and point the server to the in-cluster service name of Gitea. Set the adminUser to the user registered in Gitea.

Find the NodePort of the Drone service and go to the web console. Log in using the username and password from Gitea. No repositories will be shown in Drone at this moment yet.

Gitea webhook setup

Create a repository in Gitea. Now go back to the Drone console and click the “SYNC” button to sync the repositories. The Gitea repository appears in Drone.

Click ACTIVATE. Then, check the settings/webhook of the repository in Gitea,

The webhook is created. However, because Drone is not in the same namespace as Gitea, we need to update the URL to include the namespace. Update the webhook as below.

http://drone-drone.drone/hook?secret=xxxxxxxxxxxxxxxxxxxxxxxx

Now we have the Gitea and Drone integrated. If there is a push event in Gitea, Drone will kick off the pipeline execution.

CI/CD Pipeline

As with other CI/CD tools, the pipeline is defined within the source code. By default it is named .drone.yml and defines a pipeline consisting of multiple steps. Each step runs a container that performs its designated tasks/commands.

The pipeline for this exploration is listed below.

kind: pipeline
name: mypipeline

steps:
- name: build
  image: golang
  environment:
    CGO_ENABLED: 0
  commands:
  - go build -o hello src/*.go

- name: build-push-dockerhub
  image: plugins/docker
  settings:
    username:
      from_secret: dockerUser
    password:
      from_secret: dockerPass
    repo: zhiminwen/hello-by-drone
    tags:
    - "${DRONE_COMMIT_SHA:0:8}"

- name: deploy-to-k3s
  image: zhiminwen/kubectl:v1.14
  environment:
    K3SSERVER:
      from_secret: k3sServer
    K3SCERT:
      from_secret: k3sCert
    K3SPASS:
      from_secret: k3sPass
    IMAGETAG: "${DRONE_COMMIT_SHA:0:8}"
  commands:
  - envsubst < k3s.env.yaml > k3s.yaml
  - envsubst < deploy/kustomization.env.yaml > deploy/kustomization.yaml
  - export KUBECONFIG=$${DRONE_WORKSPACE_BASE}/k3s.yaml
  - cd deploy; kubectl apply -k .

1. Step: build

The application is just a simple golang hello world HTTP handler. We use the golang image to build it. Set the environment variable CGO_ENABLED to 0, as the resulting binary will run in an Alpine-based Docker image.

2. Step: build-push-dockerhub

We use the Drone Docker plugin (plugins/docker) to build the Docker image and push it to Docker Hub.

A Dockerfile is created in the source repository as below,

FROM alpine
RUN mkdir -p /app
ADD hello /app
RUN chmod a+rx /app/hello
CMD ["/app/hello"]

In this step, the repo setting names the image; it defaults to a Docker Hub repository. If a different registry is required, set the full name, such as mycluster.icp:8500/zhiminwen/hello-by-drone.

We set the image tag using the first 8 characters of the commit SHA value for this build.
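The ${DRONE_COMMIT_SHA:0:8} expression is bash-style substring expansion, which Drone applies when substituting the variable. A quick illustration with a hypothetical commit SHA:

```shell
# Hypothetical 40-character commit SHA:
DRONE_COMMIT_SHA=9f86d081884c7d659a2feaa0c55ad015a3bf4f1b
# Take the first 8 characters, as used for the image tag:
echo "${DRONE_COMMIT_SHA:0:8}"
# prints: 9f86d081
```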

To push to Docker Hub, login credentials are required. Create the Drone repository secrets using the drone command line tool,



export DRONE_SERVER=http://192.168.64.5:30093
export DRONE_TOKEN=zlz8c26jRy0Vc14wXkSlv8ripVHkmdHT
drone secret add --name dockerUser --data xxxxxx --repository wenzm/cicd

drone secret add --name dockerPass --data yyyyyy --repository wenzm/cicd

3. Step: deploy-to-k3s

We will deploy the app into K3s using the image built in the previous step.

Build the kubectl image

Create a Docker image with the following Dockerfile to run the kubectl command. Notice I am using the 1.14 version so that the Kustomize feature can be used.

FROM alpine
RUN apk update && \
    apk add curl gettext && \
    curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
    mv kubectl /usr/local/bin && \
    chmod a+rx /usr/local/bin/kubectl

Meanwhile, the gettext package is added because I need the envsubst command to substitute environment variables.

Prepare the KUBECONFIG file

With the tool ready, I need to talk to the K3s server. Add the following k3s.env.yaml file to the repository,

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: $K3SCERT
    server: $K3SSERVER
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: $K3SPASS
    username: admin

Create Drone secrets for the CA certificate, the K3s server address, and the K3s password. In the pipeline step, populate the environment variables from those secrets.

In the pipeline commands, we first create the k3s.yaml file by substituting the environment variables, then point KUBECONFIG at the newly created file. We are then ready to run kubectl against the K3s server.

- envsubst < k3s.env.yaml > k3s.yaml

- export KUBECONFIG=$${DRONE_WORKSPACE_BASE}/k3s.yaml

I use $$ to escape variable expansion when Drone evaluates the command, so the variable is instead expanded by the step’s shell at runtime.
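For contrast, a sketch of the two forms side by side in a .drone.yml command list; the comments mark which side expands each variable:

```yaml
commands:
  # ${DRONE_COMMIT_SHA:0:8} is substituted by Drone before the step runs.
  - echo "built from commit ${DRONE_COMMIT_SHA:0:8}"
  # $${DRONE_WORKSPACE_BASE} is passed through as ${DRONE_WORKSPACE_BASE}
  # and expanded by the step's shell at runtime.
  - export KUBECONFIG=$${DRONE_WORKSPACE_BASE}/k3s.yaml
```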

Deploy with Kustomize

To avoid excessive templating, I used the latest Kustomize feature embedded in the kubectl 1.14 release.

In the source repository, create a deploy folder and save the following deploy.yaml and service.yaml files, which are not supposed to change between builds.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello
  name: hello
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: zhiminwen/hello-by-drone

The service yaml file,

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello
  name: hello
  namespace: demo
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: hello
  type: NodePort

Create the following kustomization.env.yaml file, which will customize the image settings of the deployment.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: zhiminwen/hello-by-drone
  newName: zhiminwen/hello-by-drone
  newTag: $IMAGETAG
resources:
- resources/deploy.yaml
- resources/service.yaml

In the pipeline commands, I first create the kustomization.yaml file with the envsubst command, replacing the $IMAGETAG environment variable, which is set to “${DRONE_COMMIT_SHA:0:8}”. This makes the deployment use the image tag of the current build.

- envsubst < deploy/kustomization.env.yaml > deploy/kustomization.yaml
- cd deploy; kubectl apply -k .

Then, apply the Kustomization with the -k option.
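With IMAGETAG set to, say, 9f86d081 (a hypothetical commit SHA prefix), the rendered deploy/kustomization.yaml would look like:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: zhiminwen/hello-by-drone
  newName: zhiminwen/hello-by-drone
  newTag: 9f86d081
resources:
- resources/deploy.yaml
- resources/service.yaml
```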

Testing

Update the code, commit the change, push to the repo. The pipeline is kicked off and runs successfully.

The application can be accessed without problem.