tl;dr - I switched from Jetstack’s kube-lego to cert-manager (its natural successor), and am pretty happy with the operator pattern they’ve decided to adopt. The switch-over was easy, but I tripped myself up for a bit because I don’t like using Helm. Complete resource definitions (that worked for me, YMMV) are in the TLDR section at the bottom.

I’m taking a break from my regularly scheduled programming (I’m in the middle of a series on trying out monitoring/observability tools and frameworks in Kubernetes) to write about my switch from jetstack/kube-lego to jetstack/cert-manager.

Recently I completely scrapped my micro-cluster (a single node on 8-core/32GB RAM dedicated hardware) in favor of installing Arch Linux on the box and re-installing Kubernetes the hard way, from scratch. While I’m not going to go into why just yet (there will be a future blog post on that), one of the things I needed to do afterwards was start re-creating everything that was running on the old cluster. For most things, this was as simple as kubectl apply -f ing some cluster configurations, and that was perfect. Unfortunately, while my resource files for kube-lego worked for some things, they didn’t work quite properly for others, very likely due to bad configuration or stale files on my part. The devs over at Jetstack have actually started work on a new project to replace kube-lego – cert-manager ! This seemed like a good time to migrate over to cert-manager .

While I’m sure there are many differences between kube-lego and cert-manager (I’m too lazy to find and list them all here for you, sorry), I think the biggest difference (at least deployment-wise) is that cert-manager takes advantage of the CRD + custom controller pattern, AKA the Operator pattern pioneered by CoreOS. This means you can kubectl create -f resources with kind: Certificate or kind: Issuer to represent the certificates you need in a system, and a controller sitting on your cluster goes about making sure they exist. I really like this pattern, and love that it enables so much flexibility – you can run just about anything with it (whether you should is another question). It also makes things really easy and simple from a deployment standpoint: when people who want to deploy apps need a cert, they just create the resource like they do anything else, and the appropriate steps are taken under the covers.
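To make that concrete, here’s a rough sketch of what a Certificate resource looks like in cert-manager’s v1alpha1 API (the names and domain below are placeholders of my own invention, not from my cluster – consult the cert-manager docs for the authoritative schema):

```yaml
# Hypothetical example of a cert-manager (v0.2.x-era) Certificate resource.
# All names and domains here are placeholders.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: example-com-tls
  namespace: default
spec:
  secretName: example-com-tls # the Secret the signed cert gets stored in
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - example.com
  acme:
    config:
      - http01:
          ingressClass: nginx
        domains:
          - example.com
```

The controller watches for resources like this and handles the ACME dance (and renewals) behind the scenes.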

Note that another option for getting this done is to use Traefik’s Let’s Encrypt support. I’ve decided against this because it ties me a bit too strongly to Traefik, which I’m not using quite yet (I’m still on the NGINX ingress controller). Changing both components at the same time seemed a bit much, though I do plan to give Traefik a try in a future blog post.

With all that said, let’s jump into setting up cert-manager to serve the certs.

Step 0: RTFM

As always, the first step is to RTFM, in this case the docs in the cert-manager repo. As the top-level repo README says, cert-manager isn’t quite ready for production yet, but I’m feeling pretty brave :)

If you’re unfamiliar, you might also want to check out how exactly you get free certs, thanks to Let’s Encrypt, the EFF, and a bunch of other sponsors. There’s a fantastic documentation page on the LE site which explains just how it all works.

Rather than reading my guide, you should also seriously consider just following cert-manager ’s documentation for migrating from kube-lego. The documentation is fantastically written and probably a better resource to use than this blog post.

Step 0.1: Assemble the resources

Since I don’t like using Helm and presently prefer writing my resource configurations by hand, the first step for me was to distill the chart contents in the cert-manager repo into fully fleshed-out resource configurations specific to my cluster:

cert-manager.ns.yaml :

```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager
```

cert-manager.rbac.yaml :

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cluster-cert-manager
  namespace: cert-manager
  labels:
    app: cert-manager
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: cert-manager
  labels:
    app: cert-manager
rules:
  - apiGroups: ["certmanager.k8s.io"]
    resources: ["certificates", "issuers", "clusterissuers"]
    verbs: ["*"]
  - apiGroups: [""]
    resources: ["endpoints", "configmaps", "secrets", "events", "services", "pods"]
    verbs: ["*"]
  - apiGroups: ["extensions"]
    resources: ["ingresses"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: cert-manager
  labels:
    app: cert-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cert-manager
subjects:
  - name: cluster-cert-manager
    namespace: cert-manager
    kind: ServiceAccount
```

cert-manager.crd.yaml :

```yaml
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: certificates.certmanager.k8s.io
  labels:
    app: cert-manager
spec:
  group: certmanager.k8s.io
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Certificate
    plural: certificates
    shortNames:
      - certs
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterissuers.certmanager.k8s.io
  labels:
    app: cert-manager
spec:
  group: certmanager.k8s.io
  version: v1alpha1
  scope: Cluster
  names:
    kind: ClusterIssuer
    plural: clusterissuers
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: issuers.certmanager.k8s.io
  labels:
    app: cert-manager
spec:
  group: certmanager.k8s.io
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Issuer
    plural: issuers
```

cert-manager.deployment.yaml :

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cert-manager
  namespace: cert-manager
  labels:
    app: cert-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cert-manager
  template:
    metadata:
      labels:
        app: cert-manager
    spec:
      serviceAccountName: cluster-cert-manager
      containers:
        - name: mgr
          image: quay.io/jetstack/cert-manager-controller:v0.2.3
          imagePullPolicy: IfNotPresent
          args:
            - --cluster-resource-namespace=$(POD_NAMESPACE)
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          # resources: # TODO: Specify resource limits
        - name: shim
          image: quay.io/jetstack/cert-manager-ingress-shim:v0.2.3
          imagePullPolicy: IfNotPresent
```

Obviously, the weakness of writing out configs like this is that it’s not very portable or easy to modify – you have to manually keep in mind the links between the resources. For example, if you change the name of one of the CustomResourceDefinition s, you need to go back and ensure that the ClusterRole s and other things reflect the change, whereas with Helm this would all be relatively linked together in the values.yaml file (though I don’t think they have a way to actually change the name of the CRD currently; I think it’s hard-coded). The upside is simplicity though – I feel the complexity added by Tiller and the rest of the Helm ecosystem is unnecessary. Also, I think Helm can be used to hide the complexity of other projects, only to bite you when something goes wrong, and you realize that the component you oh-so-simply helm install ed is spread over 10-15 resources, and you only thought you understood how they interacted.

kubectl apply -f of these resources was pretty straightforward, and I fixed any bugs I encountered in the configurations as I went:

```
$ k apply -f cert-manager.ns.yaml
$ k apply -f cert-manager.rbac.yaml
$ k apply -f cert-manager.crd.yaml
$ k apply -f cert-manager.deployment.yaml
```

After deploying all that, it’s always a good idea to check the logs of the custom controller (the “operator”) that was spun up in the deployment and make sure there aren’t any obvious errors:

```
$ k get pods -n cert-manager
NAME                            READY     STATUS    RESTARTS   AGE
cert-manager-66f5bb696c-vlwdk   2/2       Running   0          22s

$ k logs -f cert-manager-66f5bb696c-vlwdk mgr -n cert-manager
I0312 03:30:47.954949       1 server.go:68] Listening on http://0.0.0.0:9402
I0312 03:30:47.956509       1 leaderelection.go:174] attempting to acquire leader lease...
I0312 03:30:47.971925       1 leaderelection.go:184] successfully acquired lease kube-system/cert-manager-controller
```

For the shim the log is a bit longer since I have a bunch of stuff with annotations set up already:

```
$ k logs -f cert-manager-66f5bb696c-vlwdk shim -n cert-manager
I0312 03:30:50.398951       1 leaderelection.go:174] attempting to acquire leader lease...
I0312 03:30:50.416262       1 leaderelection.go:184] successfully acquired lease kube-system/ingress-shim-controller
I0312 03:30:50.516493       1 controller.go:147] ingress-shim controller: syncing item 'totejo/totejo-ing'
I0312 03:30:50.516523       1 sync.go:41] Not syncing ingress totejo/totejo-ing as it does not contain necessary annotations
I0312 03:30:50.516534       1 controller.go:147] ingress-shim controller: syncing item 'kube-lego/kube-lego-nginx'
I0312 03:30:50.516566       1 sync.go:41] Not syncing ingress kube-lego/kube-lego-nginx as it does not contain necessary annotations
I0312 03:30:50.516595       1 controller.go:161] ingress-shim controller: Finished processing work item "kube-lego/kube-lego-nginx"
I0312 03:30:50.516614       1 controller.go:147] ingress-shim controller: syncing item 'mailu/mailu-admin-ing'
I0312 03:30:50.516539       1 controller.go:161] ingress-shim controller: Finished processing work item "totejo/totejo-ing"
E0312 03:30:50.516685       1 controller.go:156] ingress-shim controller: Re-queuing item "mailu/mailu-admin-ing" due to error processing: issuer.certmanager.k8s.io "" not found
I0312 03:30:50.521992       1 controller.go:147] ingress-shim controller: syncing item 'mailu/mailu-admin-ing'
E0312 03:30:50.522100       1 controller.go:156] ingress-shim controller: Re-queuing item "mailu/mailu-admin-ing" due to error processing: issuer.certmanager.k8s.io "" not found
... the same syncing/re-queuing pair repeats for mailu/mailu-admin-ing with increasing backoff ...
I0312 03:33:34.355772       1 controller.go:147] ingress-shim controller: syncing item 'mailu/mailu-admin-ing'
E0312 03:33:34.355857       1 controller.go:156] ingress-shim controller: Re-queuing item "mailu/mailu-admin-ing" due to error processing: issuer.certmanager.k8s.io "" not found
```

The errors showing up here make sense (and are actually good, since they show that the ingress shim is working, at least) – a bunch of pre-existing projects’ annotations were picked up, but certs couldn’t be issued for them because the issuer was empty.

Looks like everything’s started up properly, time to try and create the proper ClusterIssuer for Let’s Encrypt so that the existing resources can get certs created.

Step 1: Create the Let’s Encrypt Issuer

Before I get into creating the issuer, let’s take a quick look at one of the annotations that the ingress-shim container in the cert-manager pod was complaining about:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: totejo-ing
  namespace: totejo
  annotations:
    ingress.kubernetes.io/class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/limit-rps: "20"
    ingress.kubernetes.io/proxy-body-size: "10m"
    kubernetes.io/tls-acme: "true" # <---- cert-manager (and previously kube-lego) picks up on this annotation
    kubernetes.io/ingress.class: "nginx"
spec:
  # .... rest of the spec ... #
```

As you can see, while the majority of the annotations are directives for the NGINX ingress controller, cert-manager picks up where kube-lego left off by paying attention to kubernetes.io/tls-acme .

For these annotations to work with the new setup, however, I also need to indicate an Issuer for cert-manager to use when obtaining certificates. cert-manager already has excellent documentation on how to do this: you need to configure the ingress-shim container with the default issuer to use (and what kind it is), which means the Deployment needs to change a bit:

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cert-manager
  namespace: cert-manager
  labels:
    app: cert-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cert-manager
  template:
    metadata:
      labels:
        app: cert-manager
    spec:
      serviceAccountName: cluster-cert-manager
      containers:
        - name: mgr
          image: quay.io/jetstack/cert-manager-controller:v0.2.3
          imagePullPolicy: IfNotPresent
          args:
            - --cluster-resource-namespace=$(POD_NAMESPACE)
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources:
            requests:
              cpu: 10m
              memory: 32Mi
        - name: shim
          image: quay.io/jetstack/cert-manager-ingress-shim:v0.2.3
          imagePullPolicy: IfNotPresent
          args: # <----- this bit is new
            - --default-issuer-name=letsencrypt-prod
            - --default-issuer-kind=ClusterIssuer
```

Now that that’s changed, the logs look different, as you might expect – still the same errors, but the issuer we indicated is now being searched for by default. Here’s an excerpt:

```
... other log lines ...
I0312 03:49:55.705757       1 controller.go:147] ingress-shim controller: syncing item 'mailu/mailu-admin-ing'
E0312 03:49:55.705812       1 controller.go:156] ingress-shim controller: Re-queuing item "mailu/mailu-admin-ing" due to error processing: clusterissuer.certmanager.k8s.io "letsencrypt-prod" not found
```

Perfect, now let’s make the ClusterIssuer that it’s asking for, referencing the cert-manager cluster-issuer guide along the way:

```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging.api.letsencrypt.org/directory
    email: user@example.com
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}
```

I started off by creating letsencrypt-staging and making sure that worked, so here’s the config for letsencrypt-prod :

```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v01.api.letsencrypt.org/directory
    email: user@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}
```

I then created the resources with:

```
$ k apply -f letsencrypt-staging.clusterissuer.yaml
$ k apply -f letsencrypt-prod.clusterissuer.yaml
```

After creating the resources I found that the errors were seemingly still present in the output of the shim container – I needed to create a Certificate for mailu to use. The guide in the cert-manager docs lays it out pretty well; I won’t post my configuration here since it’s got some sensitive details in it (of course).
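For reference, a sanitized skeleton of such a Certificate might look something like this (every name and domain here is a placeholder I made up, not my real config – check the cert-manager docs for the exact v1alpha1 schema):

```yaml
# Hypothetical, sanitized Certificate resource; all names/domains are placeholders.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: mail-tls
  namespace: mail
spec:
  secretName: mail-tls # the Secret the ingress's tls section points at
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - mail.example.com
  acme:
    config:
      - http01:
          ingress: mail-ing # re-use an existing ingress for the HTTP-01 challenge
        domains:
          - mail.example.com
```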

I can, however, note that despite the fact that the ClusterIssuer I installed called letsencrypt-prod does show up when I do a kubectl get clusterissuer --all-namespaces , it is not properly configured (I guess), since the shim (and now the Certificate I tried to create) can’t find it:

```
Warning  ErrorIssuerNotFound  4s  cert-manager-controller  Issuer clusterissuer.certmanager.k8s.io "letsencrypt-prod" not found does not exist
```

I found that warning after doing a kubectl describe certificate <cert name> -n <namespace> on a certificate I expected to be properly created. Time to go back and figure out why the cluster issuer is not being picked up properly by the cert-manager-controller .

DEBUG: cert-manager-controller not picking up letsencrypt-prod ClusterIssuer

It’s a little perplexing that the cert-manager-controller is returning that error because when I check the manager output I see:

```
$ k logs -f cert-manager-b88c8555c-lgww2 mgr -n cert-manager
I0312 04:59:58.953482       1 server.go:68] Listening on http://0.0.0.0:9402
I0312 04:59:58.955080       1 leaderelection.go:174] attempting to acquire leader lease...
I0312 05:00:15.169178       1 leaderelection.go:184] successfully acquired lease kube-system/cert-manager-controller
I0312 05:00:15.269819       1 controller.go:138] clusterissuers controller: syncing item 'default/letsencrypt-prod'
E0312 05:00:15.269858       1 controller.go:168] issuer "default/letsencrypt-prod" in work queue no longer exists
I0312 05:00:15.269874       1 controller.go:152] clusterissuers controller: Finished processing work item "default/letsencrypt-prod"
I0312 05:00:15.269819       1 controller.go:138] clusterissuers controller: syncing item 'default/letsencrypt-staging'
E0312 05:00:15.269906       1 controller.go:168] issuer "default/letsencrypt-staging" in work queue no longer exists
I0312 05:00:15.270956       1 controller.go:152] clusterissuers controller: Finished processing work item "default/letsencrypt-staging"
```

Reading this, it looks like it has indeed found and registered the created ClusterIssuer s – but when I check the shim container I see:

```
$ k logs -f cert-manager-b88c8555c-lgww2 shim -n cert-manager
I0312 04:59:59.110402       1 leaderelection.go:174] attempting to acquire leader lease...
I0312 05:00:15.495188       1 leaderelection.go:184] successfully acquired lease kube-system/ingress-shim-controller
I0312 05:00:15.595455       1 controller.go:147] ingress-shim controller: syncing item 'mailu/mailu-admin-ing'
I0312 05:00:15.595477       1 controller.go:147] ingress-shim controller: syncing item 'totejo/totejo-ing'
I0312 05:00:15.595502       1 sync.go:41] Not syncing ingress totejo/totejo-ing as it does not contain necessary annotations
I0312 05:00:15.595517       1 controller.go:161] ingress-shim controller: Finished processing work item "totejo/totejo-ing"
E0312 05:01:37.513909       1 controller.go:156] ingress-shim controller: Re-queuing item "mailu/mailu-admin-ing" due to error processing: clusterissuer.certmanager.k8s.io "letsencrypt-prod" not found
I0312 05:02:59.434147       1 controller.go:147] ingress-shim controller: syncing item 'mailu/mailu-admin-ing'
... lots more of the same error, as it retries ...
```

So somehow, the shim can’t find the right ClusterIssuer . After a bit of head scratching, I noticed that when you create the ClusterIssuer , it does have a namespace (NOTE FROM THE FUTURE: this is because I messed up the configuration – scope: Cluster was missing on the CRD); you just get the default (literally default ) one. After a bunch more head scratching and re-creating resources, I realized that the fully-qualified name ( default/letsencrypt-prod ) is probably what the shim container needed, so a few changes had to be made.

Altering the deployment to use POD_NAMESPACE for the shim container as well:

```yaml
# ... rest of the resource definition ...
        - name: shim
          image: quay.io/jetstack/cert-manager-ingress-shim:v0.2.3
          imagePullPolicy: IfNotPresent
          args:
            - --default-issuer-name=$(POD_NAMESPACE)/letsencrypt-prod # <---- this is the important line
            - --default-issuer-kind=ClusterIssuer
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
```

Also, since I didn’t want the cluster issuers to actually be in default , I needed to delete them and re-create them in the cert-manager namespace:

```
$ k delete -f letsencrypt-staging.clusterissuer.yaml
$ k delete -f letsencrypt-prod.clusterissuer.yaml
$ k apply -f letsencrypt-staging.clusterissuer.yaml -n cert-manager
$ k apply -f letsencrypt-prod.clusterissuer.yaml -n cert-manager
```

After doing that the logs look much better, and the right stuff gets created:

```
$ k logs -f cert-manager-594db5c68b-ftbp8 shim -n cert-manager
I0312 05:10:16.753753       1 leaderelection.go:174] attempting to acquire leader lease...
I0312 05:10:35.026389       1 leaderelection.go:184] successfully acquired lease kube-system/ingress-shim-controller
I0312 05:10:35.126735       1 controller.go:147] ingress-shim controller: syncing item 'mailu/mailu-admin-ing'
I0312 05:10:35.126736       1 controller.go:147] ingress-shim controller: syncing item 'totejo/totejo-ing'
I0312 05:10:35.126860       1 sync.go:41] Not syncing ingress totejo/totejo-ing as it does not contain necessary annotations
I0312 05:10:35.126899       1 controller.go:161] ingress-shim controller: Finished processing work item "totejo/totejo-ing"
I0312 05:10:35.131338       1 controller.go:161] ingress-shim controller: Finished processing work item "mailu/mailu-admin-ing"
I0312 05:10:35.131788       1 controller.go:147] ingress-shim controller: syncing item 'mailu/mailu-admin-ing'
I0312 05:10:35.131807       1 sync.go:85] Certificate "mailu-tls" for ingress "mailu-admin-ing" already exists, not re-creating
I0312 05:10:35.131819       1 controller.go:161] ingress-shim controller: Finished processing work item "mailu/mailu-admin-ing"
```

Nice – it even realizes that the TLS cert for my mailu instance is already present!

NOTE FROM THE FUTURE, AGAIN – while this works, it’s much better to create the ClusterIssuer CRD properly and specify scope: Cluster so a proper non-namespaced resource is created. The paragraph right below this is where I realized it.

I double-checked the resource configuration from the Helm chart directory and found that I had overlooked an option: scope: Cluster needed to be set on the ClusterIssuer CRD, since that’s how non-namespaced CRDs work! I’ve fixed this in the CRDs above, so you likely won’t run into it. Looks like I’ve learned a thing, so it’s a good day.
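For anyone skimming, the fix boils down to one line on the CRD (CRDs default to scope: Namespaced if you don’t specify one):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterissuers.certmanager.k8s.io
spec:
  group: certmanager.k8s.io
  version: v1alpha1
  scope: Cluster # <---- without this, ClusterIssuers end up namespaced
  names:
    kind: ClusterIssuer
    plural: clusterissuers
```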

TLDR

Here are the working, final-for-now configurations for everything that worked for me (YMMV):

cert-manager.ns.yaml :

```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager
```

cert-manager.rbac.yaml :

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cluster-cert-manager
  namespace: cert-manager
  labels:
    app: cert-manager
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: cert-manager
  labels:
    app: cert-manager
rules:
  - apiGroups: ["certmanager.k8s.io"]
    resources: ["certificates", "issuers", "clusterissuers"]
    verbs: ["*"]
  - apiGroups: [""]
    resources: ["endpoints", "configmaps", "secrets", "events", "services", "pods"]
    verbs: ["*"]
  - apiGroups: ["extensions"]
    resources: ["ingresses"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: cert-manager
  labels:
    app: cert-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cert-manager
subjects:
  - name: cluster-cert-manager
    namespace: cert-manager
    kind: ServiceAccount
```

cert-manager.crd.yaml :

```yaml
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: certificates.certmanager.k8s.io
  labels:
    app: cert-manager
spec:
  group: certmanager.k8s.io
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Certificate
    plural: certificates
    shortNames:
      - certs
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterissuers.certmanager.k8s.io
  labels:
    app: cert-manager
spec:
  group: certmanager.k8s.io
  version: v1alpha1
  scope: Cluster
  names:
    kind: ClusterIssuer
    plural: clusterissuers
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: issuers.certmanager.k8s.io
  labels:
    app: cert-manager
spec:
  group: certmanager.k8s.io
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Issuer
    plural: issuers
```

letsencrypt-prod.clusterissuer.yaml :

```yaml
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v01.api.letsencrypt.org/directory
    email: your-email@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}
```

cert-manager.deployment.yaml :

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cert-manager
  namespace: cert-manager
  labels:
    app: cert-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cert-manager
  template:
    metadata:
      labels:
        app: cert-manager
    spec:
      serviceAccountName: cluster-cert-manager
      containers:
        - name: mgr
          image: quay.io/jetstack/cert-manager-controller:v0.2.3
          imagePullPolicy: IfNotPresent
          args:
            - --cluster-resource-namespace=$(POD_NAMESPACE)
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources:
            requests:
              cpu: 10m
              memory: 32Mi
        - name: shim
          image: quay.io/jetstack/cert-manager-ingress-shim:v0.2.3
          imagePullPolicy: IfNotPresent
          args:
            - --default-issuer-name=letsencrypt-prod
            - --default-issuer-kind=ClusterIssuer
          resources:
            requests:
              cpu: 10m
              memory: 32Mi
```

Wrapping Up

Of course, it’s important to check that all these changes actually paid off – you should be able to visit your projects that were protected with the kubernetes.io/tls-acme annotation and see that they’re using Let’s Encrypt-provided TLS.
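One quick way to sanity-check the served certificate from outside the cluster is openssl (the domain below is a placeholder for one of your own):

```
$ echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
    | openssl x509 -noout -issuer -dates
```

The issuer line should mention Let’s Encrypt, and the notAfter date should be roughly 90 days out.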

All in all, it was pretty easy (and didn’t take a huge amount of bravery) to set up cert-manager . Huge thanks to the folks over at Jetstack for taking the initiative with kube-lego , and now expanding on it with cert-manager . While wildcard certificates were delayed this year, it’s awesome that they’re coming at all – I’m excited to take advantage of them, and cert-manager makes things easy until then.

Now that this is done, the next few posts will get back to exploring more observability tooling – next up, monitoring request timing with Jaeger!