Cert-Manager and Helm

I found Cert-Manager, an ACME agent implementation for Kubernetes environments; if you search for both “Kubernetes” and “Let’s Encrypt” on Google, it is listed within the top 10 results. The tool integrates with the Nginx ingress controller to complete the HTTP-01 challenge automatically.

Install Helm and Tiller

Cert-Manager is available as a Helm chart package, so I had to install Helm first. Helm is a packaging system for Kubernetes resources.

Helm comes with a backend service, Tiller, which deploys the Kubernetes resources defined in a Helm chart package. On a Kubernetes cluster with Role-Based Access Control (RBAC) enabled (a cluster created by Rancher has RBAC enabled by default), Tiller needs to run with a service account granted the cluster-admin role. I captured the script to install Helm as below:

# Install Helm with snap
sudo snap install helm --classic

# Create a service account for Tiller with the following manifest
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
EOF

# Install Tiller - the backend service for Helm
helm init --service-account tiller

# Verify the Helm client and Tiller server installation
helm version

Install Cert-Manager

Cert-Manager’s documentation recommends installing it into a separate namespace; I captured only the necessary steps to install Cert-Manager.

# Install the CustomResourceDefinition resources separately
kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml

# Create the namespace for cert-manager
kubectl create namespace cert-manager

# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io

# Update your local Helm chart repository cache
helm repo update

# Install the cert-manager Helm chart
helm install \
  --name cert-manager \
  --namespace cert-manager \
  --version v0.11.0 \
  jetstack/cert-manager

# Verify the cert-manager installation
kubectl get pods --namespace cert-manager

Create Issuer for Let’s Encrypt production service

Now I came to the ACME agent part. Issuer and Cluster Issuer are types of Kubernetes resource that come with Cert-Manager; an Issuer can only work with resources in its own namespace, while a Cluster Issuer has no such restriction.
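For example, a namespaced Issuer uses the same spec shape as a Cluster Issuer but only serves Certificates in its own namespace. The sketch below is illustrative (the letsencrypt-staging name and the default namespace are my assumptions, not from the setup above); it points at Let's Encrypt's staging service, which is handy for testing:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: letsencrypt-staging
  # An Issuer only serves Certificate resources in this namespace
  namespace: default
spec:
  acme:
    # Let's Encrypt staging service, useful for testing without rate limits
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: kwonghung.yip@gmail.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
```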

An Issuer is responsible for dealing with different types of CA and issuing TLS certificates for Ingress rules. The following manifest defines a Cluster Issuer that works as the agent for the Let’s Encrypt production service; the spec.acme.solvers property configures the HTTP-01 challenge for verification and integrates with the Nginx ingress controller. Other than the production service, Let’s Encrypt also provides a staging service; to switch to it, you just need to change the spec.acme.server property to the corresponding URL.

# Create the cluster issuer with the following manifest
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The URL for Let's Encrypt production service
    server: https://acme-v02.api.letsencrypt.org/directory
    # My Email address used for ACME registration
    email: kwonghung.yip@gmail.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
EOF

# Verify the resource
kubectl describe clusterissuer letsencrypt-prod

Request a TLS certificate and save it into a Secret

The next step is to request a TLS certificate. The Certificate resource introduced by Cert-Manager is actually for making a certificate request (a little bit confusing, ha!); the received TLS certificate is eventually stored as a Kubernetes Secret object.

That is what you can find in the Kubernetes official reference: the spec.tls.secretName property of an Ingress rule defines which Secret contains the TLS key pair. This means you can apply a TLS certificate without using Cert-Manager, but Cert-Manager does give a convenient way of handling the certificate. The following manifest defines a Certificate resource that refers to the Cluster Issuer created before; the TLS certificate is stored into a Secret named tls-public-domain.

# Create certificate resource to request a certificate from the Cluster Issuer
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: tls-public-domain
  namespace: default
spec:
  dnsNames:
  - hung-from-hongkong.asuscomm.com
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: letsencrypt-prod
  secretName: tls-public-domain
EOF

Rather than creating the Certificate resource manually, Cert-Manager also provides ingress-shim: by putting an annotation into your Ingress rule, Cert-Manager can create the Certificate for you.
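As a sketch of how that looks (the Ingress name and backend here are illustrative, but the cert-manager.io/cluster-issuer annotation is the one ingress-shim watches in Cert-Manager 0.11):

```shell
cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat-ingress
  annotations:
    # ingress-shim sees this annotation and creates a Certificate
    # for the hosts listed under spec.tls automatically
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - hung-from-hongkong.asuscomm.com
    # The issued certificate is stored in this Secret
    secretName: tls-public-domain
  rules:
  - host: hung-from-hongkong.asuscomm.com
    http:
      paths:
      - backend:
          serviceName: tomcat-tomcat-prod
          servicePort: 8080
EOF
```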

Deploy the Tomcat service for testing

After the TLS certificate Secret had been created, I deployed a Tomcat service for verification. A sample service was necessary because I needed an Ingress rule that used the TLS certificate Secret. I used Tomcat because I am a Java developer and it provides a default welcome page for verification.

I packed the Tomcat service as a Helm chart package and host it on GitHub Pages; you can refer to my other post for details. The following script shows how to deploy Tomcat with Helm; the Ingress rule comes with the package.



# Add my Helm repository running on GitHub Pages
helm repo add hung-repo https://kwonghung-yip.github.io/helm-charts-repo/

# Update the local Helm charts repository cache
helm repo update

# Install the tomcat service
helm install hung-repo/tomcat-prod --name tomcat

# Verify the ingress rule manifest after installing tomcat, sample output as below:
helm get manifest tomcat

...
---
# Source: tomcat-prod/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat-tomcat-prod
  labels:
    app.kubernetes.io/name: tomcat-prod
    helm.sh/chart: tomcat-prod-0.1.0
    app.kubernetes.io/instance: tomcat
    app.kubernetes.io/version: "9.0.27"
    app.kubernetes.io/managed-by: Tiller
spec:
  tls:
  - hosts:
    - hung-from-hongkong.asuscomm.com
    secretName: tomcat-acme-prod
  rules:
  - host: hung-from-hongkong.asuscomm.com
    http:
      paths:
      - backend:
          serviceName: tomcat-tomcat-prod
          servicePort: 8080

After going through all the steps, the welcome page was exposed and secured.

Conclusion and further work

In this post, I shared my findings and the steps I took to open my home Kubernetes cluster to the Internet and secure it with a Let’s Encrypt TLS certificate.

Other than acting as an ACME agent, the Cert-Manager Issuer also supports a self-signed certificate as the Certificate Authority. This allows you to issue a certificate for a wildcard domain within your private LAN; with a wildcard domain, different services can have their own customized domains, all under a single self-signed root certificate.
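A minimal sketch of that setup, assuming the resource names below (they are illustrative, not from my cluster): a self-signed ClusterIssuer bootstraps a root CA certificate, which then backs a CA issuer that can sign wildcard certificates.

```shell
cat <<EOF | kubectl apply -f -
# Bootstrap issuer that signs certificates with their own private keys
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
# Root CA certificate issued by the self-signed issuer; placed in the
# cert-manager namespace so a ClusterIssuer can reference its Secret
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: lan-root-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: lan-root-ca
  secretName: lan-root-ca
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
---
# CA issuer that signs certificates (including wildcard domains)
# with the root CA created above
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: lan-ca-issuer
spec:
  ca:
    secretName: lan-root-ca
EOF
```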

Other further work could be:

To bridge GitHub or another public repo with your home Kubernetes cluster using webhooks, automating the deployment process.

Instead of forwarding requests to only one of my worker nodes, the requests should be forwarded to an HA proxy that load balances across all worker nodes.

In the next post, I will look into service meshes, such as Istio, and their implementations.

The sections below supplement the technical details for your reference. Please feel free to leave a comment or message me; my contact info can be found at the end of this post.