In my last story, I managed to deploy K3s on my MacBook Pro using a Multipass VM. My current project work involves developing some Jenkins pipelines, so it's a perfect chance to try K3s on a real task.

1. Multipass VM preparation and K3s installation

Let's be a bit generous and create a VM with 2GB of memory and a 50GB disk,

multipass launch --name k3s --mem 2G --disk 50G

Install K3s with the same approach. (Of course, you should always be cautious and examine the script you are about to run.)

multipass exec k3s -- sh -c "curl -sfL https://get.k3s.io | sh -"

Copy the kubeconfig file to the host,

multipass copy-files k3s:/etc/rancher/k3s/k3s.yaml .

List the info of the K3s VM with multipass info k3s to get the IP address, replace the server address in k3s.yaml from https://localhost:6443 to https://192.168.64.5:6443, export the KUBECONFIG, and check that the node is working fine.

export KUBECONFIG=k3s.yaml

kubectl get nodes
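The server-address edit above can be scripted. Below is a minimal sketch: 192.168.64.5 stands in for whatever IP `multipass info k3s` reports for your VM, and the stand-in file just mimics the relevant line of the copied kubeconfig.

```shell
# Stand-in for the kubeconfig copied out of the VM with `multipass copy-files`;
# the real file contains much more, but the relevant line looks like this.
printf 'server: https://localhost:6443\n' > k3s.yaml

# Point the kubeconfig at the VM's IP instead of localhost.
VM_IP=192.168.64.5   # taken from `multipass info k3s`
sed -i.bak "s|https://localhost:6443|https://${VM_IP}:6443|" k3s.yaml

cat k3s.yaml   # server: https://192.168.64.5:6443
```

The `-i.bak` form of in-place editing works with both the BSD sed that ships with macOS and GNU sed, and keeps a backup of the original file.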

Now we have our development K3s environment ready. We don't need to go inside the VM; the kubectl command-line tool on the host will be enough.

2. Dynamic Storage Class

We need dynamic storage provisioning in order to get some real work done. Let's use the local-path provisioner to achieve it. Download the yaml file and examine it before you apply it.

Apply it and patch this storage class as the default

kubectl apply -f local-path-storage.yaml

kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Now we have the dynamic storage class ready.
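A quick way to sanity-check the provisioner is a throwaway claim. This is a minimal sketch (the name test-pvc is purely illustrative); note that the local-path provisioner creates the volume only when a pod first consumes the claim, so the PVC may show Pending until then.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Apply it with kubectl apply, watch kubectl get pvc, and delete it once you've confirmed provisioning works.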

3. Deploy the Jenkins Helm chart

K3s has Helm chart support built in, provided through a CRD. We don't need the Tiller component deployed; we don't even need the helm command.

Let's create a HelmChart CRD as below,

apiVersion: k3s.cattle.io/v1
kind: HelmChart
metadata:
  name: jenkins
  namespace: kube-system
spec:
  chart: stable/jenkins
  targetNamespace: jenkins
  valuesContent: |-
    Master:
      AdminUser: {{ .adminUser }}
      AdminPassword: {{ .adminPassword }}
    rbac:
      install: true

Notice that the namespace in the metadata here is for the HelmChart object itself. K3s monitors this CRD in kube-system and launches a Helm install job whenever a new HelmChart object is created. (I confused myself with my deployment initially by putting my target namespace jenkins there. Sure enough, nothing happened.)

In the spec, chart defines which repo and Helm chart to deploy. The targetNamespace is where my Jenkins is supposed to reside. Instead of using the "set" keyword as in the README sample, I use valuesContent, where I can apply the same format as the chart's values.yaml file.
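For comparison, the "set"-style form from the README would look roughly like this — a sketch only; the exact shape of spec.set follows the K3s HelmChart CRD, and the values here are illustrative:

```yaml
apiVersion: k3s.cattle.io/v1
kind: HelmChart
metadata:
  name: jenkins
  namespace: kube-system
spec:
  chart: stable/jenkins
  targetNamespace: jenkins
  set:
    Master.AdminUser: "admin"
    Master.AdminPassword: "changeme"
```

valuesContent scales better once you need more than a couple of nested values.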

Nothing much needs to change for Jenkins. Save the file as jenkins.yaml. Create the target namespace and apply the file just like a normal Kubernetes object yaml file.

kubectl create ns jenkins

kubectl apply -f jenkins.yaml

Monitor that the Helm installation job is kicked off,

kubectl -n kube-system get pods

NAME                            READY   STATUS      RESTARTS   AGE
coredns-7748f7f6df-g6rgw        1/1     Running     0          138m
helm-install-jenkins-txxjn      0/1     Completed   0          111m
helm-install-traefik-bnc5x      0/1     Completed   0          138m
svclb-traefik-b65f58f65-rxllp   2/2     Running     0          138m
traefik-5cc8776646-nfclx        1/1     Running     0          138m

Validate that the PVC is bound,

kubectl -n jenkins get pvc

NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
jenkins   Bound    pvc-18988281-4d45-11e9-b75c-5ef9efd9374c   8Gi        RWO            local-path     113m

and the Pods are running,

kubectl -n jenkins get pods

NAME                             READY   STATUS    RESTARTS   AGE
jenkins-6b6f58bc8d-hbf4r         1/1     Running   0          113m
svclb-jenkins-74fdf6b9f4-zxnwz   1/1     Running   0          113m

4. Access Jenkins

Find out the service ports,

kubectl -n jenkins get svc

NAME            TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)          AGE
jenkins         LoadBalancer   10.43.75.62    192.168.64.5   8080:30254/TCP   115m
jenkins-agent   ClusterIP      10.43.239.13   <none>         50000/TCP        115m

We can then access Jenkins at http://192.168.64.5:8080. The familiar Jenkins UI is shown.
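If you did not pin the admin password in valuesContent, the stable/jenkins chart stores a generated one in a Secret; the fetch-and-decode step is sketched below. The secret and key names follow the chart's NOTES.txt and are an assumption here; the in-cluster command is shown as a comment, and the decode step is demonstrated with a stand-in value.

```shell
# In-cluster you would run (secret/key names per the stable/jenkins chart):
#   kubectl -n jenkins get secret jenkins \
#     -o jsonpath='{.data.jenkins-admin-password}' | base64 --decode

# Secret data is base64-encoded; the decode step itself looks like this:
encoded=$(printf 'admin' | base64)        # stand-in for the jsonpath output
printf '%s' "$encoded" | base64 --decode  # prints: admin
```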
