How to build a proof-of-concept in about 15 minutes

This guide is an update to a previous story of mine.

Why another guide? Because it addresses the same issue in a simpler way.

Generally speaking, simplifying is the art of distilling information. It’s all about organizing ideas and concepts to extract only the meaningful parts.

This guide will get you to a working example: an API gateway set up from scratch that uses JWT with an ACL to authorize a user to reach an endpoint. For everything else, you can refer to the excellent Kong documentation.

“Simplicity is the ultimate sophistication.” Leonardo da Vinci (1452–1519)

Context: Investigating different API gateways

At SumUp, we want to investigate different API gateways. We are building API services and need to allow or restrict certain calls based on roles, using auth tokens such as JWTs, rate limiting, and so on. In short, we want a gateway for microservice requests that handles load balancing, logging, authentication, rate limiting, transformations, and more through plugins. Of course, it should also be able to scale.

@limoges and I decided to build a quick proof-of-concept to showcase the capabilities of a tool like Kong. This article was built and reviewed with his help.

The “Old Way” versus Kong

For more info about plugins and integrations, you can check out the Kong Hub.

Building a showcase for Kong

Before going further with the API Gateway discussions, we wanted to create a proof-of-concept to educate ourselves on the problem space and tools available.

This guide is a short example of how to set up Kong with a sample application. At SumUp, we run applications in Kubernetes, so we wanted the proof-of-concept to run on a Kubernetes cluster as well.

This proof-of-concept will help us explore all the non-functional requirements of our API Gateway.

On a serious note, this is not a production-ready environment, just a quick and dirty way to create a developer sandbox.

1. Creating our “cluster” using Minikube

We will use Kubernetes version 1.15.6 due to some API deprecations in 1.16.

> minikube start --kubernetes-version v1.15.6

😄 minikube v1.5.2 on Ubuntu 19.10

✨ Automatically selected the 'virtualbox' driver (alternates: [none])

🔥 Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...

🐳 Preparing Kubernetes v1.15.6 on Docker '18.09.9' ...

💾 Downloading kubeadm v1.15.6

💾 Downloading kubelet v1.15.6

🚜 Pulling images ...

🚀 Launching Kubernetes ...

⌛ Waiting for: apiserver

🏄 Done! kubectl is now configured to use "minikube"

2. Setting up Helm v2 on our cluster

We want to use the Kong Helm chart to simplify the process, so we need to install Tiller (Helm’s server-side component) on our cluster.

> curl -L https://git.io/get_helm.sh | bash  # Install helm
> helm init                                  # Set up Tiller

Helm v2.16.1

Run 'helm init' to configure helm.

$HELM_HOME has been configured at /home/pablo/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

3. Setting up Kong

The first step is simply to install Kong using the Helm chart. Because we are good citizens, we’ll put everything related to Kong into a single namespace named kong.

> kubectl create ns kong
> helm install --version 0.26.1 \

--name kong stable/kong \

--namespace kong \

--set ingressController.enabled=true \

--set image.tag=1.4 \

--set admin.useTLS=false

Kong’s Helm chart installation will provide you with a command for obtaining the Proxy address and the Admin address. As you might already know, the proxy address will be used as the entry-point to your services while the Admin gives you access to the Kong HTTP API.

> kubectl get services --namespace kong

NAME TYPE ...

kong-kong-admin NodePort ...

kong-kong-proxy NodePort ...

kong-postgresql ClusterIP ...

kong-postgresql-headless ClusterIP ...

Since we’ll refer to the Admin and Proxy addresses a few times, let’s export them.

> export PROXY_ADDR=$(minikube service -n kong kong-kong-proxy --url | head -1)
> export ADMIN_ADDR=$(minikube service -n kong kong-kong-admin --url | head -1)
> echo $PROXY_ADDR # This will differ for you.

http://192.168.64.7:30076

> curl -i $PROXY_ADDR

HTTP/1.1 404 Not Found

Date: Mon, 25 Nov 2019 16:50:59 GMT

Content-Type: application/json; charset=utf-8

Connection: keep-alive

Content-Length: 48

X-Kong-Response-Latency: 2

Server: kong/1.4.0

{"message":"no Route matched with those values"}

Since we have no HTTP service running on the cluster, we get a 404.

4. Setting up KONGA (optional)

Konga is an excellent graphical Admin interface for managing Kong and we can set it up simply with the following manifest.

# We can create a file containing this manifest

# or pipe it to `kubectl`.

apiVersion: extensions/v1beta1

kind: Deployment

metadata:

name: konga

namespace: kong

spec:

replicas: 1

template:

metadata:

labels:

name: konga

app: konga

spec:

containers:

- name: konga

image: pantsel/konga

ports:

- containerPort: 1337

env:

- name: NO_AUTH

value: "true"

---

apiVersion: v1

kind: Service

metadata:

name: konga-svc

namespace: kong

spec:

type: NodePort

ports:

- name: kong-proxy

port: 1337

targetPort: 1337

nodePort: 30338

protocol: TCP

selector:

app: konga

Since we’ll be looking at Konga through our browser, let’s get that address.

> export KONGA_ADDR=$(minikube service -n kong konga-svc --url | head -1)

Opening $KONGA_ADDR in the browser, we’ll be faced with having to set up the connection to the Kong Admin HTTP API.

We can give that connection a name and set the URL to $ADMIN_ADDR.

Loading up the Konga dashboard

5. Installing an HTTP application

Since we’re mostly interested in the HTTP requests, we can use the simple echo-server application which just prints the HTTP requests back to us with some additional details about pods and whatnot.

We’ll put that application behind Kong and use Kong’s ingress class.

> curl -sL bit.ly/echo-server | kubectl apply -f -

service/echo created

deployment.apps/echo created

Using the Kong proxy

Create an Ingress rule to proxy the echo-server created previously:

> echo "

apiVersion: extensions/v1beta1

kind: Ingress

metadata:

name: demo

annotations:

# This annotation is optional since the ingress class will

# default to "kong" behind the scene.

# You can validate the right ingress is being used by the

# `X-Kong-Upstream-Latency` header being added to requests.

kubernetes.io/ingress.class: "kong"

spec:

rules:

- http:

paths:

- path: /foo

backend:

serviceName: echo

servicePort: 80

" | kubectl apply -f -

ingress.extensions/demo created

Verify the ingress is working:

(Note the X-Kong-* response headers that the Kong Ingress sent back.)

> curl -i $PROXY_ADDR/foo

HTTP/1.1 200 OK

Content-Type: text/plain; charset=UTF-8

Transfer-Encoding: chunked

Connection: keep-alive

Date: Mon, 25 Nov 2019 17:19:13 GMT

Server: echoserver

X-Kong-Upstream-Latency: 4

X-Kong-Proxy-Latency: 4

Via: kong/1.4.0

Hostname: echo-599d77c5c7-m9zbh

Pod Information:
  node name: minikube
  pod name: echo-599d77c5c7-m9zbh
  pod namespace: default
  pod IP: 172.17.0.15

Server values:
  server_version=nginx: 1.12.2 - lua: 10010

Request Information:
  client_address=172.17.0.14
  method=GET
  real path=/
  query=
  request_version=1.1
  request_scheme=http
  request_uri=http://192.168.99.110:8080/

Request Headers:

accept=*/*

connection=keep-alive

host=192.168.99.110:30400

user-agent=curl/7.65.3

x-forwarded-for=172.17.0.1

x-forwarded-host=192.168.99.110

x-forwarded-port=8000

x-forwarded-proto=http

x-real-ip=172.17.0.1

Request Body:

-no body in request-

6. Setting up Authorization

ACL (Access-control List)

Restrict access to a Service or a Route by whitelisting or blacklisting consumers using arbitrary ACL group names. This plugin requires an authentication plugin to have been already enabled on the Service or Route.

JWT (JSON Web Tokens)

Verify requests containing HS256 or RS256 signed JSON Web Tokens (as specified in RFC 7519). Each of your Consumers will have JWT credentials (public and secret keys) which must be used to sign their JWTs.
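As a mental model, what the JWT plugin checks on each request can be sketched in a few lines of Python. This is a simplified illustration using only the standard library, not Kong’s actual implementation (the real plugin is written in Lua and can also validate registered claims such as exp):

```python
import base64
import hashlib
import hmac
import json


def b64url_decode(data: str) -> bytes:
    # JWT segments use unpadded base64url; restore padding before decoding.
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))


def verify_hs256(token: str, secret: str) -> dict:
    """Check an HS256 signature and return the payload claims."""
    header_b64, payload_b64, signature_b64 = token.split(".")
    expected = hmac.new(
        secret.encode(),
        f"{header_b64}.{payload_b64}".encode(),
        hashlib.sha256,
    ).digest()
    if not hmac.compare_digest(expected, b64url_decode(signature_b64)):
        raise ValueError("invalid signature")
    return json.loads(b64url_decode(payload_b64))
```

Kong looks up the consumer whose JWT credential key matches the token’s iss claim, verifies the signature with that credential’s secret, and the ACL plugin then checks the consumer’s groups against the whitelist.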

To make this work, we must complete five steps:

1. Create a consumer named hello.
2. Add hello to a group, which we’ll name allowed.
3. Add a JWT credential to the consumer.
4. Add the JWT plugin to the route.
5. Add the ACL plugin to the route.

So, let’s get started by creating a consumer:

Creating a new consumer using Konga

Then, we associate our user with a group that we create:

Groups are found under the Consumer panel

Then, we create a JWT for that consumer; the defaults are fine, otherwise you’ll have to play with keys and claims:

Do nothing, just press submit

Great. The consumer setup is completed. Now let’s move to the route.

We add the ACL plugin and whitelist the group we created earlier:

Then we add the JWT plugin:

(This is the single critical step. You absolutely need the authorization header name; otherwise, setting the token in the HTTP header won’t work.)

Make sure to add authorization to the header names

That’s it. We’ve set up our route to use ACL + JWT. Now all we have to do is use it. Let’s test the endpoint:

> curl -i $PROXY_ADDR/foo

HTTP/1.1 401 Unauthorized

Date: Tue, 26 Nov 2019 14:11:27 GMT

Content-Type: application/json; charset=utf-8

Connection: keep-alive

Content-Length: 26

X-Kong-Response-Latency: 8

Server: kong/1.4.0

{"message":"Unauthorized"}

Great. So now we need to get the token.

Going to jwt.io and using the marvellous tool:

1. In the payload, set iss to the value of the key found in the JWT credentials for your consumer.
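If you prefer the command line to jwt.io, a token can also be minted with a few lines of Python using only the standard library. Here `hello-key` and `hello-secret` are placeholders: substitute the key and secret from your own consumer’s JWT credential.

```python
import base64
import hashlib
import hmac
import json


def b64url(data: bytes) -> str:
    # JWT segments are base64url-encoded without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_hs256_jwt(key: str, secret: str) -> str:
    """Build an HS256 JWT whose iss claim is the credential key."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({"iss": key}).encode())
    signature = b64url(hmac.new(
        secret.encode(),
        f"{header}.{payload}".encode(),
        hashlib.sha256,
    ).digest())
    return f"{header}.{payload}.{signature}"


# Placeholder credentials; use your consumer's actual key and secret.
print(make_hs256_jwt("hello-key", "hello-secret"))
```

The resulting token can then be sent on the request, e.g. `curl -i $PROXY_ADDR/foo -H "Authorization: Bearer <token>"`.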