Sometimes we don’t want to use the default error page of the Nginx ingress controller. For example, a basic security requirement may forbid exposing the fingerprint of the Nginx server in error responses.

Testing Application

To illustrate, let's create a sample application with the following HTTP handler. It accepts a URL parameter and returns the HTTP status based on it.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"strconv"
)

func errHandler(w http.ResponseWriter, r *http.Request) {
	code := r.FormValue("code")

	status, err := strconv.Atoi(code)
	if err != nil {
		log.Printf("Failed to convert status: %v", err)
		w.WriteHeader(500)
		fmt.Fprintf(w, "Unknown code")
		return
	}

	w.WriteHeader(status)
	fmt.Fprintf(w, "Code=%s", code)
}
```

Build the Docker image, push it to Docker Hub, and deploy it into my test K3s instance with the following Ingress, Service, and Deployment. (I have removed the default Traefik ingress controller from K3s and deployed the nginx-ingress Helm chart with default settings.)

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/rewrite-target: /
  labels:
    app: err-status-test
  name: err-status-test
spec:
  backend:
    serviceName: err-status-test
    servicePort: 80
  rules:
    - host: err-test.192.168.64.5.nip.io
      http:
        paths:
          - path: /
            backend:
              serviceName: err-status-test
              servicePort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: err-status-test
  labels:
    app: err-status-test
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: err-status-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: err-status-test
  labels:
    app: err-status-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: err-status-test
  template:
    metadata:
      labels:
        app: err-status-test
    spec:
      containers:
        - name: err-status-test
          image: zhiminwen/error-test-app
          imagePullPolicy: Always
```

Now test it with error code 413; it works as expected:



```shell
curl -i "http://err-test.192.168.64.5.nip.io/err?code=413"

HTTP/1.1 413 Request Entity Too Large
Server: nginx/1.15.10
Date: Sun, 05 May 2019 10:59:18 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 8
Connection: keep-alive

Code=413
```

Problem with the default Nginx default-backend

To update the default error page, edit the ConfigMap of the nginx-ingress-controller. Insert a new key custom-http-errors listing the HTTP status codes whose error pages we want to change, such as:

```yaml
apiVersion: v1
kind: ConfigMap
data:
  custom-http-errors: 404,413,503
  enable-vts-status: "false"
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.6.0
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: kube-system
```

Now, fire the test command again:



```shell
curl -i "http://err-test.192.168.64.5.nip.io/err?code=413"

HTTP/1.1 404 Not Found
Server: nginx/1.15.10
Date: Sun, 05 May 2019 11:36:04 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 21
Connection: close

default backend - 404
```

This is NOT what we want. The Nginx ingress controller correctly intercepts the HTTP status codes we want to customize. However, the stock Nginx default backend (image: k8s.gcr.io/defaultbackend:1.4) simply returns a 404 status regardless of the actual status code the application intends to return. This causes a problem if the status code is consumed by clients for other purposes.

Custom error backend

Reading the Nginx ingress controller documentation, a custom error backend is required to resolve this issue. The custom error backend should honour the HTTP headers passed from the ingress controller, such as X-Code, X-Format, and so on, and return the status code directly back to the requester.

The custom backend is expected to return the correct HTTP status code instead of 200; NGINX does not change the response from the custom default backend.

The Nginx ingress controller GitHub repo provides a sample implementation of a custom error backend under the path images/custom-error-pages.

Since I don’t really need cross-compilation for other platforms, I simply compiled the binary for Linux amd64 and modified the Dockerfile as below:

```dockerfile
FROM alpine

COPY /rootfs /
ADD custom-error-pages /

CMD ["/custom-error-pages"]
```

Build the image and push it to Docker Hub. Redeploy the nginx-ingress Helm chart in K3s with the following HelmChart CRD:

```yaml
apiVersion: k3s.cattle.io/v1
kind: HelmChart
metadata:
  name: nginx-ingress
  namespace: kube-system
spec:
  chart: stable/nginx-ingress
  targetNamespace: kube-system
  valuesContent: |-
    defaultBackend:
      enabled: true
      name: default-backend
      image:
        repository: zhiminwen/custom-error-page
        tag: latest
```

Test again

Fire the same curl command again:



```shell
➜ curl -i "http://err-test.192.168.64.5.nip.io/err?code=413"

HTTP/1.1 413 Request Entity Too Large
Server: nginx/1.15.10
Date: Sun, 05 May 2019 12:09:21 GMT
Content-Type: */*
Transfer-Encoding: chunked
Connection: close

4xx html
```

We now have the custom error page with the correct HTTP status code.

Since we also trap the 503 status, let's scale the deployment down to zero replicas:

```shell
k scale deploy err-status-test --replicas=0
```

Now if we access the application again, we see the custom error page with status code 503, as expected.