This is a follow-up to my local Docker development environment described here: https://github.com/Voronenko/traefik2-compose-template. In addition to classic dockerized projects, I also have a number of Kubernetes projects. Kubernetes is both a resource- and money-consuming platform. As I don't always need an external cluster, the solution I use for local Kubernetes development is https://k3s.io/.

This platform positions itself as lightweight Kubernetes, but the truth is that it is one of the smallest certified Kubernetes distributions, built for IoT and edge computing, and also capable of being deployed at production scale on VMs.

I use k3s in two ways: I have k3s installed locally on my work notebook, and when I occasionally need to deploy heavier test workloads locally, I have two small beasts for that purpose: two external Intel NUCs running ESXi.

By default, k3s is installed with Traefik 1 as the ingress controller, and if you are satisfied with that setup, you can generally stop reading this article.

In my scenario I am involved in multiple projects, in particular classic Docker and Docker Swarm ones, so I often have a situation where Traefik is already deployed in standalone mode.

So the rest of this article dives into configuring an external Traefik 2 as the ingress for a k3s cluster.

Installing a k3s-flavored Kubernetes cluster

You can start with the classic curl -sfL https://get.k3s.io | sh -, or you can use k3sup, a lightweight utility written by Alex Ellis (https://github.com/alexellis/k3sup).

What is different in our setup is that we specifically install k3s without the Traefik component, using the switch --no-deploy traefik.
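A sketch of both install paths; the IP address and user below are hypothetical, and the flag syntax follows the k3s and k3sup documentation of that time:

```shell
# plain install script, passing the extra server flag
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--no-deploy traefik" sh -

# or via k3sup against a remote host (substitute your own host and user)
k3sup install --ip 192.168.3.100 --user ubuntu --k3s-extra-args '--no-deploy traefik'
```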

As a result of the execution you will get the connection details necessary to use kubectl. Once k3s is installed, you can quickly check whether you can see the nodes.
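For a local install, the generated kubeconfig lives at the default k3s location, so the check can look like:

```shell
# point kubectl at the kubeconfig generated by k3s and list cluster nodes
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes -o wide
```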

Side note: there is no specific magic in the k3s flavor of Kubernetes. You can even start it on your own with docker-compose.
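A minimal sketch of such a setup, loosely based on the docker-compose.yml shipped in the k3s repository; the image tag and shared secret are illustrative choices, not fixed values:

```yaml
version: '3'
services:
  server:
    image: rancher/k3s:v1.17.4-k3s1    # pick a tag that suits you
    command: server --no-deploy traefik
    privileged: true
    environment:
      - K3S_TOKEN=somesecret           # hypothetical cluster join secret
      - K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
      - K3S_KUBECONFIG_MODE=666
    volumes:
      - k3s-server:/var/lib/rancher/k3s
      - .:/output                      # kubeconfig lands next to the compose file
    ports:
      - "6443:6443"
volumes:
  k3s-server: {}
```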

Configuring Traefik 2 to work with Kubernetes

As you recall, by this point I usually already have Traefik 2 present on my system, serving some needs as per https://github.com/Voronenko/traefik2-compose-template. Now it is time to configure the Traefik 2 Kubernetes backend.

Traefik 2 does so using the CRD (custom resource definition) concept (https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/). The latest examples of the definitions can always be found at https://docs.traefik.io/reference/dynamic-configuration/kubernetes-crd/, but those are for the scenario where Traefik 2 also runs as part of the Kubernetes workload.

For the external Traefik 2 scenario we need only the subset of definitions described below.

We introduce a set of custom resource definitions that allow us to describe how our Kubernetes service will be exposed to the outside world, traefik-crd.yaml:
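An abbreviated sketch of such a file, using the v1beta1 CRD API current at the time; the full, authoritative list lives in the Traefik kubernetes-crd reference linked above:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: middlewares.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: Middleware
    plural: middlewares
    singular: middleware
  scope: Namespaced
# ...plus the analogous definitions for ingressroutetcps, tlsoptions, etc.
```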

We also need a cluster role traefik-ingress-controller, giving mostly read-only access to services, endpoints and secrets, plus the custom traefik.containo.us group, traefik-clusterrole.yaml:
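A sketch of the role just described, following the RBAC rules suggested in the Traefik documentation:

```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["ingressroutes", "ingressroutetcps", "middlewares", "tlsoptions", "traefikservices"]
    verbs: ["get", "list", "watch"]
```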

And finally we need a system service account traefik-ingress-controller associated with the previously created traefik-ingress-controller cluster role:
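A sketch of the service account together with its binding to the cluster role:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: kube-system
```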

Once we apply the resources above,
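for example with kubectl, using the file names mentioned above (the service account file name is my own choice):

```shell
kubectl apply -f traefik-crd.yaml
kubectl apply -f traefik-clusterrole.yaml
kubectl apply -f traefik-service-account.yaml
```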

we are ready to start tuning Traefik 2.

Pointing Traefik 2 to the k3s cluster

As the Traefik docs suggest, when deployed into Kubernetes, Traefik will read the environment variables KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT or KUBECONFIG to construct the endpoint.

The access token will be looked up in /var/run/secrets/kubernetes.io/serviceaccount/token and the SSL CA certificate in /var/run/secrets/kubernetes.io/serviceaccount/ca.crt. Both are mounted automatically when deployed inside Kubernetes.

When the environment variables are not found, Traefik will try to connect to the Kubernetes API server with an external-cluster client. In this case, the endpoint is required. Specifically, it may be set to the URL used by kubectl proxy to connect to a Kubernetes cluster using the granted authentication and authorization of the associated kubeconfig.

Traefik 2 can be statically configured using any of the supported configuration types: TOML, YAML, or command-line switches.

On the first run, with Traefik outside the cluster, most likely you will not yet have an access token for traefik-ingress-controller to specify as mytoken. To discover one:
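One way to do the discovery with kubectl; the secret name suffix below is a placeholder for whatever the first command actually prints:

```shell
# find the secret backing the traefik-ingress-controller service account
kubectl -n kube-system get serviceaccount traefik-ingress-controller \
  -o jsonpath='{.secrets[0].name}'

# suppose it prints traefik-ingress-controller-token-xxxxx; then inspect it
kubectl -n kube-system describe secret traefik-ingress-controller-token-xxxxx
```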

If all is OK, you should receive a successful response, something like

and some facts, like the token,

and the external address of the API server, https://192.168.3.100:6443, as per the last response.

Again, there is nothing magic about the provided token: it is a JWT token, and you can use https://jwt.io/#debugger-io to inspect its contents.
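You can also decode the payload locally without pasting the token into a website; a small sketch, where the fallback TOKEN value is a dummy token used purely for illustration:

```shell
# TOKEN would normally hold the service account token discovered above;
# the fallback is a dummy token whose payload is {"sub":"test"}
TOKEN="${TOKEN:-eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJ0ZXN0In0.c2ln}"

decode_jwt_payload() {
  # take the second dot-separated segment, convert base64url to base64,
  # and restore the stripped '=' padding before decoding
  p=$(printf '%s' "$1" | cut -d '.' -f 2 | tr '_-' '/+')
  case $(( ${#p} % 4 )) in
    2) p="${p}==" ;;
    3) p="${p}=" ;;
  esac
  printf '%s' "$p" | base64 -d
}

decode_jwt_payload "$TOKEN"
```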

As proper configuration is quite important, ensure that both calls to the API server return a reasonable response.
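A sketch of such a check, using the API server address from the earlier response and the discovered token:

```shell
APISERVER=https://192.168.3.100:6443    # as discovered above

# pull the token out of the service account's secret
TOKEN=$(kubectl -n kube-system get secret \
  "$(kubectl -n kube-system get sa traefik-ingress-controller -o jsonpath='{.secrets[0].name}')" \
  -o jsonpath='{.data.token}' | base64 -d)

# both calls should return sensible JSON, not an authorization error
curl -sk "$APISERVER/version"
curl -sk -H "Authorization: Bearer $TOKEN" "$APISERVER/api/v1/namespaces/default/endpoints"
```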

Creating an additional access token

A controller loop ensures that a secret with an API token exists for each service account; it can be discovered as we did previously. You can also create additional API tokens for a service account: create a secret of type kubernetes.io/service-account-token with an annotation referencing the service account, and the controller will update it with a generated token:

To create:
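A sketch of the create step; the secret name traefik-manual-token is my own choice:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: traefik-manual-token
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: traefik-ingress-controller
type: kubernetes.io/service-account-token
EOF
```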

To delete/invalidate:
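Deleting the secret invalidates the token; the secret name here assumes the manually created one above:

```shell
kubectl -n kube-system delete secret traefik-manual-token
```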

Changes to the external Traefik 2 compose definitions

What changes do we need to make to the Traefik 2 configuration we have from https://github.com/Voronenko/traefik2-compose-template?

a) A new folder kubernetes_data, where we store the ca.crt file used to validate calls to the Kubernetes authority. This is the certificate that can be found under clusters->cluster->certificate-authority-data in your kubeconfig file.
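One way to materialize that file from your kubeconfig; the jsonpath index assumes a single cluster entry:

```shell
mkdir -p kubernetes_data
kubectl config view --raw \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' \
  | base64 -d > kubernetes_data/ca.crt
```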

This volume will be mapped under /var/run/secrets/kubernetes.io/serviceaccount in the official Traefik 2 image.
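A fragment of the compose file illustrating the mapping; the image tag is an example:

```yaml
services:
  traefik:
    image: traefik:v2.2
    volumes:
      # ca.crt ends up where Traefik expects the in-cluster CA certificate
      - ./kubernetes_data:/var/run/secrets/kubernetes.io/serviceaccount:ro
```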

b) Adjust the Traefik 2 kubernetescrd backend, providing three parameters: the endpoint, the path to the certificate, and the token. Please note that since your external Traefik runs as a Docker container, you need to specify a properly accessible endpoint address, and ensure you do so in a more or less secure way.
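A sketch using command-line switches in the compose file; the endpoint address matches the earlier example, and MYTOKEN stands for the token discovered above:

```yaml
services:
  traefik:
    command:
      # ...existing switches from the compose template...
      - "--providers.kubernetescrd=true"
      - "--providers.kubernetescrd.endpoint=https://192.168.3.100:6443"
      - "--providers.kubernetescrd.certauthfilepath=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
      - "--providers.kubernetescrd.token=MYTOKEN"
```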

If you did everything right, you should now see something promising in your Traefik UI, namely the KubernetesCRD backend.

If you do not see one, or have issues running Traefik, check the troubleshooting section.

Now it is time to expose some Kubernetes service via Traefik 2 to ensure that it is actually working as an ingress. Let's take our classic whoami service, whoami-service.yaml:
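A sketch of the classic whoami deployment and service:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  labels:
    app: whoami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: containous/whoami
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  ports:
    - port: 80
      name: web
  selector:
    app: whoami
```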

and expose it over HTTP or HTTPS, whoami-ingress-route.yaml, under the whoami.k.voronenko.net FQDN,
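A sketch of the routes; the entrypoint names web/websecure and the certResolver name are assumptions matching a typical Traefik 2 compose setup, so adjust them to your static configuration:

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami-http
spec:
  entryPoints:
    - web            # assumed HTTP entrypoint name
  routes:
    - match: Host(`whoami.k.voronenko.net`)
      kind: Rule
      services:
        - name: whoami
          port: 80
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami-https
spec:
  entryPoints:
    - websecure      # assumed HTTPS entrypoint name
  routes:
    - match: Host(`whoami.k.voronenko.net`)
      kind: Rule
      services:
        - name: whoami
          port: 80
  tls:
    certResolver: letsencrypt   # hypothetical resolver name from the compose setup
```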

and apply it:
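using the file names mentioned above:

```shell
kubectl apply -f whoami-service.yaml
kubectl apply -f whoami-ingress-route.yaml
```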

Once applied, you should see something promising on the Traefik dashboard.

As you can see, Traefik 2 has detected our new workload running on the k3s cluster, and moreover it nicely coexists with the classic Docker workloads we have on the same box, like Portainer.

Let's check whether Traefik 2 routes traffic to our Kubernetes workload: as you can see, you can successfully reach the whoami workload on both the HTTP and HTTPS endpoints, and the browser accepts your certificate as a trusted green-seal one.

Yoohoo, we reached our goal. We have Traefik 2 configured either on your local notebook or perhaps on some dedicated machine in your homelab. Traefik 2 exposes your Docker or Kubernetes workloads on HTTP or HTTPS endpoints, and Traefik 2, with optional Let's Encrypt, is responsible for HTTPS.

Troubleshooting

As you understand, there could be multiple issues. Consider some of the analysis tools listed at https://github.com/Voronenko/dotfiles/blob/master/Makefile#L185.

In particular I recommend:

a) VMware Octant: a powerful web-based Kubernetes dashboard that starts from your kubeconfig.

b) Rakkess (https://github.com/corneliusweig/rakkess): a standalone tool and also a kubectl plugin that shows an access matrix for Kubernetes server resources.

Inspect the credentials for the system account.

c) Just kubectl.

Reverse task: check which roles are associated with the service account.
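A sketch of both directions with plain kubectl; the reverse lookup assumes jq is installed:

```shell
# inspect the service account and its token secret
kubectl -n kube-system describe serviceaccount traefik-ingress-controller
kubectl -n kube-system describe secret \
  "$(kubectl -n kube-system get sa traefik-ingress-controller -o jsonpath='{.secrets[0].name}')"

# reverse task: list cluster role bindings that mention the service account
kubectl get clusterrolebindings -o json \
  | jq -r '.items[] | select(.subjects[]?.name == "traefik-ingress-controller") | .metadata.name'
```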

d) The Traefik docs: for example, the kubernetescrd backend has many more configuration switches.

e) Ensure Traefik has enough rights to access the API server endpoints.

If you are keen to know which information Traefik queries, you can see the accessed endpoints and the order of querying by putting a wrong API server address in the configuration. With this knowledge and your Traefik Kubernetes token, you can check that those endpoints are accessible using Traefik's credentials.

f) The k3s logs themselves.

The installation script auto-detects whether your OS uses systemd or openrc and starts the service accordingly. When running with openrc, logs are written to /var/log/k3s.log. When running with systemd, logs go to /var/log/syslog and can be viewed using journalctl -u k3s.

There you might get some hints, like

which would give you a clue about Traefik startup issues with Kubernetes.

Good luck in your journey!

Related code can be found at https://github.com/Voronenko/k3s-mini.