In this article we are going to show how to configure and deploy the KrakenD API Gateway in a Kubernetes environment.

We will use Minikube for the demonstration, so you can test it on your own local machine.

Let’s get started!

Setting up a local Kubernetes

We will run Kubernetes locally thanks to Minikube. Just follow the README in the project for the installation, plus its Quickstart section. Installing Minikube is quick and easy.

$ minikube start
Starting local Kubernetes v1.7.5 cluster...
Starting VM...
SSH-ing files into VM...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.

Once Minikube is installed and running you can see the graphical interface with:

$ minikube dashboard

Now we are ready, with a Kubernetes cluster waiting for us to start pushing containers!

Building the Docker images

Download here all the example files mentioned in this post.

Local Backend (Your API)

If your API is already packaged as a container you can skip this step. Otherwise, we will create here a backend API that will provide the data to KrakenD. The easiest way to build an ultra-fast backend for our demonstration purposes is to use a server named Lwan. The krakend-playground contains several dummy responses that we have used in other demos, and the static files used for the responses are also included in the example repository.

The backend Dockerfile is under the backend folder and contains these 2 simple lines:

backend/Dockerfile

FROM jaxgeller/lwan
ADD ./data/. /lwan/wwwroot/.
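For reference, the static files under data/ are plain JSON documents that Lwan serves verbatim. Here is a trimmed, illustrative sketch of what data/shop/campaigns.json looks like, reconstructed from the /splash output shown later in this post (the full file is in the example repository):

```json
{
  "campaigns": [
    {
      "id_campaign": 1,
      "end_date": "2017/02/15",
      "discounts": [
        { "id_product": 1, "discount": 0.15 },
        { "id_product": 2, "discount": 0.50 },
        { "id_product": 3, "discount": 0.25 }
      ]
    }
  ]
}
```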

Now we are going to build this image against the Docker daemon Minikube uses. To do so, you need to export the required environment variables before building, as follows:

$ cd backend
$ eval $(minikube docker-env)
$ docker build -t fake-api -f Dockerfile .

Now the Docker daemon associated with Minikube contains the fake-api image. If you have executed the eval, the image list should look like this:

$ docker images
REPOSITORY                                 TAG       IMAGE ID       CREATED              SIZE
fake-api                                   latest    4db4257837ee   About a minute ago   439MB
k8s.gcr.io/kube-proxy-amd64                v1.10.0   bfc21aadc7d3   2 months ago         97MB
k8s.gcr.io/kube-scheduler-amd64            v1.10.0   704ba848e69a   2 months ago         50.4MB
k8s.gcr.io/kube-controller-manager-amd64   v1.10.0   ad86dbed1555   2 months ago         148MB
k8s.gcr.io/kube-apiserver-amd64            v1.10.0   af20925d51a3   2 months ago         225MB
k8s.gcr.io/etcd-amd64                      3.1.12    52920ad46f5b   2 months ago         193MB
k8s.gcr.io/kube-addon-manager              v8.6      9c16409588eb   3 months ago         78.4MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64     1.14.8    c2ce1ffb51ed   5 months ago         41MB
k8s.gcr.io/k8s-dns-sidecar-amd64           1.14.8    6f7f2dc7fab5   5 months ago         42.2MB
k8s.gcr.io/k8s-dns-kube-dns-amd64          1.14.8    80cc5ea4b547   5 months ago         50.5MB
k8s.gcr.io/pause-amd64                     3.1       da86e6ba6ca1   5 months ago         742kB
k8s.gcr.io/kubernetes-dashboard-amd64      v1.8.1    e94d2f21bc0c   5 months ago         121MB
k8s.gcr.io/kube-addon-manager              v6.5      d166ffa9201a   6 months ago         79.5MB
gcr.io/k8s-minikube/storage-provisioner    v1.8.0    4689081edb10   7 months ago         80.8MB
gcr.io/k8s-minikube/storage-provisioner    v1.8.1    4689081edb10   7 months ago         80.8MB
k8s.gcr.io/k8s-dns-sidecar-amd64           1.14.4    38bac66034a6   11 months ago        41.8MB
k8s.gcr.io/k8s-dns-kube-dns-amd64          1.14.4    a8e00546bcf3   11 months ago        49.4MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64     1.14.4    f7f45b9cb733   11 months ago        41.4MB
k8s.gcr.io/etcd-amd64                      3.0.17    243830dae7dd   15 months ago        169MB
k8s.gcr.io/pause-amd64                     3.0       99e59f495ffa   2 years ago          747kB
jaxgeller/lwan                             latest    526243cbb205   2 years ago          439MB

You will quickly realize that the terminal session you are in is no longer using your own Docker host, but Minikube's: minikube docker-env prints export statements for DOCKER_HOST and the related TLS variables pointing at the Minikube VM, and the eval applied them to your shell.

KrakenD

With your backend ready, it is time to set up the image for the API gateway, which is also very easy. All it takes is to write the KrakenD settings in krakend.json and add the file to the image, as follows:

krakend.json

{
  "version": 2,
  "name": "KrakenD on k8s",
  "port": 8080,
  "cache_ttl": "3600s",
  "timeout": "3s",
  "host": [ "https://jsonplaceholder.typicode.com" ],
  "endpoints": [
    {
      "endpoint": "/debug",
      "backend": [
        {
          "host": [ "http://krakend-service:8000" ],
          "url_pattern": "/__debug/debug"
        }
      ]
    },
    {
      "endpoint": "/combination/{id}",
      "backend": [
        {
          "url_pattern": "/posts?userId={id}",
          "is_collection": true,
          "mapping": { "collection": "posts" }
        },
        {
          "url_pattern": "/users/{id}",
          "mapping": { "email": "personal_email" }
        }
      ]
    },
    {
      "endpoint": "/splash",
      "backend": [
        {
          "host": [ "http://fake-api:8080" ],
          "url_pattern": "/shop/campaigns.json",
          "whitelist": [ "campaigns" ]
        },
        {
          "host": [ "http://fake-api:8080" ],
          "url_pattern": "/shop/products.json",
          "whitelist": [ "products" ]
        }
      ]
    }
  ]
}
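To make the /combination/{id} behavior concrete, here is a rough simulation of what the configuration above declares. This is plain Python for illustration only, not KrakenD source code: the gateway merges the two backend responses into one object, mapping renames top-level keys, and the is_collection array ends up under the mapped "posts" key.

```python
# Rough simulation (plain Python, NOT KrakenD internals) of the merge
# declared by the /combination/{id} endpoint above.

def apply_mapping(obj, mapping):
    """Rename top-level keys according to a `mapping` section."""
    return {mapping.get(key, key): value for key, value in obj.items()}

def merge_combination(posts_response, user_response):
    merged = {}
    # Backend 1: /posts?userId={id} returns an array (`is_collection`),
    # which is wrapped under "collection" and renamed to "posts" by `mapping`.
    merged.update(apply_mapping({"collection": posts_response},
                                {"collection": "posts"}))
    # Backend 2: /users/{id} returns an object; "email" -> "personal_email".
    merged.update(apply_mapping(user_response, {"email": "personal_email"}))
    return merged

# Tiny excerpts of the jsonplaceholder payloads seen later in the post:
merged = merge_combination(
    [{"id": 91, "title": "aut amet sed", "userId": 10}],
    {"id": 10, "name": "Clementina DuBuque", "email": "Rey.Padberg@karina.biz"},
)
print(merged["personal_email"])  # → Rey.Padberg@karina.biz
print(len(merged["posts"]))      # → 1
```

This matches the aggregated response we will see when testing the /combination/10 endpoint below.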

And now let’s create the API Gateway image with this configuration. The following Dockerfile is in the root folder.

Dockerfile

FROM devopsfaith/krakend:0.4.2
COPY krakend.json /etc/krakend/krakend.json

And this is how you build it:

$ docker build -t k8s-krakend:0.0.1 .

Deploying in Kubernetes

At this point we have two images ready to use in the Docker registry: fake-api and k8s-krakend. We now need to deploy them in k8s.

Deploy the backend API in k8s

With two kubectl calls you can create a Kubernetes deployment and a service, so our fake backend is started and published in the service discovery:

$ kubectl run fake-api --image=fake-api:latest --port=8080 --image-pull-policy='Never'
$ kubectl expose deployment fake-api --type=NodePort
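If you prefer declarative manifests over kubectl run, a roughly equivalent Deployment for the backend would look like the sketch below. This is an assumption for illustration: the exact manifest kubectl run generates varies by Kubernetes version (older versions use a run: fake-api label, for instance):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fake-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fake-api
  template:
    metadata:
      labels:
        app: fake-api
    spec:
      containers:
      - name: fake-api
        image: fake-api:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
```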

Deploy the API gateway in k8s

With the backend running and the custom image ready, let’s go with the YAML way of defining resources in Kubernetes. Place the KrakenD deployment definition in a file called deployment-definition.yaml. Note the imagePullPolicy: Never setting: it tells Kubernetes to use the image we just built in Minikube’s Docker daemon instead of trying to pull it from a remote registry:

deployment-definition.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: krakend-deployment
spec:
  selector:
    matchLabels:
      app: krakend
  replicas: 2
  template:
    metadata:
      labels:
        app: krakend
    spec:
      containers:
      - name: krakend
        image: k8s-krakend:0.0.1
        ports:
        - containerPort: 8080
        imagePullPolicy: Never
        command: [ "/usr/bin/krakend" ]
        args: [ "run", "-d", "-c", "/etc/krakend/krakend.json", "-p", "8080" ]

The KrakenD service definition goes in a file called service-definition.yaml. The NodePort type publishes the service on a node port chosen by Kubernetes, forwarding it to the service port 8000 and, from there, to the container port 8080:

service-definition.yaml

apiVersion: v1
kind: Service
metadata:
  name: krakend-service
spec:
  type: NodePort
  ports:
  - name: http
    port: 8000
    targetPort: 8080
    protocol: TCP
  selector:
    app: krakend

And then register both with the same kubectl create command:

$ kubectl create -f deployment-definition.yaml
$ kubectl create -f service-definition.yaml

You are done, both services are now running in Kubernetes!

Testing the services

After exposing the krakend-service you can test it by asking Minikube for the assigned URL with:

$ minikube service krakend-service

This will open your default browser pointing at the krakend-service. It will return a 404, as we haven’t configured any endpoint at the root /. Keep the URL and let’s query the gateway through the endpoints we have defined.

Check the regular features work as expected:

$ curl -i http://192.168.99.101:32064/combination/10
HTTP/1.1 200 OK
Cache-Control: public, max-age=3600
Content-Type: application/json; charset=utf-8
X-Krakend: Version 0.4.2
Date: Mon, 23 Apr 2018 19:16:40 GMT
Transfer-Encoding: chunked

{"address":{"city":"Lebsackbury","geo":{"lat":"-38.2386","lng":"57.2232"},"street":"Kattie Turnpike","suite":"Suite 198","zipcode":"31428-2261"},"company":{"bs":"target end-to-end models","catchPhrase":"Centralized empowering task-force","name":"Hoeger LLC"},"id":10,"name":"Clementina DuBuque","personal_email":"Rey.Padberg@karina.biz","phone":"024-648-3804","posts":[{"body":"libero voluptate eveniet aperiam sed
sunt placeat suscipit molestias
similique fugit nam natus
expedita consequatur consequatur dolores quia eos et placeat","id":91,"title":"aut amet sed","userId":10},{"body":"aut et excepturi dicta laudantium sint rerum nihil
laudantium et at
a neque minima officia et similique libero et
commodi voluptate qui","id":92,"title":"ratione ex tenetur perferendis","userId":10},{"body":"dolorem quibusdam ducimus consequuntur dicta aut quo laboriosam
voluptatem quis enim recusandae ut sed sunt
nostrum est odit totam
sit error sed sunt eveniet provident qui nulla","id":93,"title":"beatae soluta recusandae","userId":10},{"body":"aspernatur expedita soluta quo ab ut similique
expedita dolores amet
sed temporibus distinctio magnam saepe deleniti
omnis facilis nam ipsum natus sint similique omnis","id":94,"title":"qui qui voluptates illo iste minima","userId":10},{"body":"earum voluptatem facere provident blanditiis velit laboriosam
pariatur accusamus odio saepe
cumque dolor qui a dicta ab doloribus consequatur omnis
corporis cupiditate eaque assumenda ad nesciunt","id":95,"title":"id minus libero illum nam ad officiis","userId":10},{"body":"in non odio excepturi sint eum
labore voluptates vitae quia qui et
inventore itaque rerum
veniam non exercitationem delectus aut","id":96,"title":"quaerat velit veniam amet cupiditate aut numquam ut sequi","userId":10},{"body":"eum non blanditiis soluta porro quibusdam voluptas
vel voluptatem qui placeat dolores qui velit aut
vel inventore aut cumque culpa explicabo aliquid at
perspiciatis est et voluptatem dignissimos dolor itaque sit nam","id":97,"title":"quas fugiat ut perspiciatis vero provident","userId":10},{"body":"doloremque ex facilis sit sint culpa
soluta assumenda eligendi non ut eius
sequi ducimus vel quasi
veritatis est dolores","id":98,"title":"laboriosam dolor voluptates","userId":10},{"body":"quo deleniti praesentium dicta non quod
aut est molestias
molestias et officia quis nihil
itaque dolorem quia","id":99,"title":"temporibus sit alias delectus eligendi possimus magni","userId":10},{"body":"cupiditate quo est a modi nesciunt soluta
ipsa voluptas error itaque dicta in
autem qui minus magnam et distinctio eum
accusamus ratione error aut","id":100,"title":"at nam consequatur ea labore ea harum","userId":10}],"username":"Moriah.Stanton","website":"ambrose.net"}

Now check the Kubernetes out-of-the-box host resolution by requesting data from our debug endpoint. The krakend-service hostname in the configuration is resolved by the cluster DNS to the service we just registered:

$ curl -i http://192.168.99.101:32064/debug
HTTP/1.1 200 OK
Cache-Control: public, max-age=3600
Content-Type: application/json; charset=utf-8
X-Krakend: Version 0.4.2
Date: Mon, 23 Apr 2018 19:17:10 GMT
Content-Length: 18

{"message":"pong"}

Finally, the actual test using the dummy backends:

$ curl -i http://192.168.99.101:32064/splash
HTTP/1.1 200 OK
Cache-Control: public, max-age=3600
Content-Type: application/json; charset=utf-8
X-Krakend: Version 0.4.2
Date: Mon, 23 Apr 2018 19:17:53 GMT
Transfer-Encoding: chunked

{"campaigns":[{"discounts":[{"discount":0.15,"id_product":1},{"discount":0.50,"id_product":2},{"discount":0.25,"id_product":3}],"end_date":"2017/02/15","id_campaign":1,
...

Hurray!
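The /splash response also shows the whitelist setting at work: only the listed top-level keys of each backend response reach the client. Here is a small simulation of that filter, again in plain Python rather than KrakenD internals, with made-up "internal_debug" and "secret" fields standing in for anything the backends expose that we want hidden:

```python
# Simulation (not KrakenD source) of the `whitelist` filter used by /splash:
# only the listed top-level keys of each backend response survive the merge.

def whitelist(response, allowed):
    return {key: value for key, value in response.items() if key in allowed}

# Illustrative backend payloads; the "hidden" fields are hypothetical.
campaigns_backend = {"campaigns": [{"id_campaign": 1}], "internal_debug": "hidden"}
products_backend = {"products": [{"id_product": 1}], "secret": "hidden"}

merged = {}
merged.update(whitelist(campaigns_backend, ["campaigns"]))
merged.update(whitelist(products_backend, ["products"]))
print(sorted(merged))  # → ['campaigns', 'products']
```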

Conclusion

In this post we have seen how quickly you can get an API Gateway running on Kubernetes without requiring any extra modules or third-party components. In under 5 minutes you can have a high-performance API Gateway that doesn’t require coding the endpoints and that anyone can use.

Start doing your own experiments now by downloading the source code of these examples.

Thanks for reading! If you like our product don’t forget to star our project!