We wrote in depth about setting up microservices in one of our previous posts. In this post we are going to talk about building a simple microservice, containerizing it with Docker, and scaling those containers using Kubernetes.

It is assumed that the reader has first-hand experience with Flask, Redis, and Docker.

The source repository can be found here.

Building a Microservice using Flask and Redis

In this post, let's get started with writing a microservice with Flask. The microservice has a single entry point: the user creates a resource with a PUT request to a URL of their choice, and a GET request to the same URL serves it back. Stale items are deleted after a year.

The crux of the code that will help us build this microservice is as follows:

import time

import flask
from flask import Flask, request
from redis import Redis

app = Flask(__name__)
db = Redis(host='redis', port=6379)
ttl = 31104000  # one year (360 days) in seconds

@app.route('/')
def hello():
    db.incr('count')
    return 'Count is %s.' % db.get('count')

@app.route('/<path:path>', methods=['PUT', 'GET'])
def home(path):
    if request.method == 'PUT':
        event = request.json
        event['last_updated'] = int(time.time())
        event['ttl'] = ttl
        db.delete(path)  # remove old keys
        db.hmset(path, event)
        db.expire(path, ttl)
        return flask.jsonify(event), 201
    if not db.exists(path):
        return "Error: thing doesn't exist"
    event = db.hgetall(path)
    event["ttl"] = db.ttl(path)
    return flask.jsonify(event), 200

If a PUT request is sent to any URL with an additional key-value pair, it gets saved in Redis along with the current timestamp. If we hit the base URL, we get a count of how many times it has been hit.
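For reference, the one-year TTL that shows up in the responses later on (31104000 seconds) works out to 360 days. A minimal sketch of how the PUT handler decorates an incoming payload (the helper name is illustrative, not from the service code):

```python
import time

# 360 days expressed in seconds -- matches the ttl value the service returns
ttl = 360 * 24 * 60 * 60

def make_event(payload):
    """Attach the bookkeeping fields the way the PUT handler does."""
    event = dict(payload)
    event['last_updated'] = int(time.time())
    event['ttl'] = ttl
    return event

event = make_event({'hello': 999})
print(ttl)  # 31104000
```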

Containerizing the Microservice

Containerizing Redis

We will use Docker Compose to containerize our microservice. Our microservice consists of a simple Flask application that interacts with the Redis database.

The Compose file will essentially create two different containers: one for the Flask application and one for Redis.

web:
  build: .
  command: python app.py
  ports:
    - "5000:5000"
  volumes:
    - .:/code
  links:
    - redis

redis:
  image: redis

The Dockerfile for the Flask application is fairly simple: it creates a working directory and installs all the requirements.

FROM python:2.7
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt

We are all set to run the containers:

docker-compose up -d

Once the containers are running, we can test that the code works by issuing a PUT request:

curl -H "Content-Type: application/json" -X PUT -d '{"hello":999}' http://localhost:5000/testurl

We receive an output as follows:

{
  "last_updated": 1485246805,
  "ttl": 31104000,
  "hello": 999
}

On making a GET request to the same URL:

curl http://localhost:5000/testurl

We receive a successful response as follows:

{
  "last_updated": "1485246805",
  "ttl": 31103997,
  "hello": "999"
}
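Notice that the values in the GET response come back as strings ("999") even though the PUT response echoed the original types. That is Redis behaviour, not a bug in our code: hash fields and values are stored as strings. A toy stand-in illustrates it (these helpers are hypothetical simplifications of the real client, which talks to a Redis server):

```python
def hmset(store, key, mapping):
    # every field and value is coerced to a string on the way in, as Redis does
    store[key] = {str(f): str(v) for f, v in mapping.items()}

def hgetall(store, key):
    return dict(store[key])

store = {}
hmset(store, 'testurl', {'hello': 999, 'last_updated': 1485246805})
event = hgetall(store, 'testurl')
print(event['hello'])  # '999' -- a string, matching the GET output above
```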

A fairly straight forward microservice!

Why Kubernetes?

Now that we have built our service and the containers are running, let's dive into production scenarios. We often treat containers as if they were faster, simpler VMs; in other words, each container executes a specific piece of functionality with a unique lifecycle and dependencies on other containers.

Such an approach is useful in the early stages and improves the development environment; however, it creates significant difficulties when we migrate to a production environment.

We start running into challenges like:

How do multiple containers talk to each other?

How do I horizontally scale my container service?

How do multiple container services discover each other?

How do we secure our container network?

How do we deploy new containers and roll back previous ones?

Kubernetes attempts to solve these problems. Let's dig into it.

Kubernetes 101

Kubernetes is a system, developed at Google, for managing containerized applications in a clustered environment. Most modern applications follow a microservices pattern; Kubernetes helps manage these different services as related components on the same host, with trivial configuration.

Primarily, Kubernetes helps with:

Replication of components

Auto-scaling

Load balancing

Rolling updates

Logging across components

Monitoring and health checking

Service discovery

Authentication

Core Kubernetes Concepts

The controlling services in a Kubernetes cluster are called the master or control plane components. They operate as the main management points and provide a cluster-wide system for worker nodes. Node servers have a few requirements: they must be able to communicate with the master components, configure the networking for containers, and run the workloads assigned to them.

The Kubernetes API offers numerous primitives to work with; the more important ones for our purposes are the following:

Pods : The basic unit of a cluster. A pod is a group of one or more Docker containers that are scheduled together and controlled as a single "application".

Services : A logical grouping of a set of pods that perform the same function and constitute a single entity. The IP address of a service remains stable, irrespective of the number/state/health of the underlying pods. Services enable communication between pods.

Labels : Key-value pairs that serve as arbitrary tags to associate with one or more Kubernetes resources. A resource can carry multiple labels, but each key must be unique on that resource. These tags can then be selected for management purposes and action targeting.
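Conceptually, selecting resources by label is just dictionary matching. A minimal sketch of an equality-based selector (the pod names and labels here are made up for illustration):

```python
def matches(selector, labels):
    """An equality-based selector: every selector pair must appear in the labels."""
    return all(labels.get(key) == value for key, value in selector.items())

pods = [
    {'name': 'flask-1', 'labels': {'app': 'flask'}},
    {'name': 'flask-2', 'labels': {'app': 'flask'}},
    {'name': 'redis-1', 'labels': {'app': 'redis'}},
]

# a service with selector {'app': 'flask'} would target these pods:
selected = [pod['name'] for pod in pods if matches({'app': 'flask'}, pod['labels'])]
print(selected)  # ['flask-1', 'flask-2']
```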

Deployments : They ensure that a specified number of pods (of a specific kind) are running at any given time. They are a framework for defining pods that are meant to be horizontally scaled, maintaining a constant number of pods. If a pod goes down, another will be started.
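The "maintain a constant number of pods" behaviour is a control loop: compare the desired state with the observed state and act on the difference. A toy sketch of one pass of such a loop (not the actual controller code):

```python
def reconcile(desired, running):
    """One pass of the control loop: decide what to do about the pod count."""
    if running < desired:
        return 'start %d pod(s)' % (desired - running)
    if running > desired:
        return 'stop %d pod(s)' % (running - desired)
    return 'steady state'

# a deployment asks for 3 replicas and one pod has just died:
print(reconcile(desired=3, running=2))  # start 1 pod(s)
```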

A detailed description can be found in the official docs.

Running Kubernetes locally

Kubernetes is one of the best tools for managing containerized applications and has been production-ready for some time; however, it has been difficult for developers to test things locally. Minikube is the missing piece of the puzzle, serving as a local development platform.

Minikube is not designed for scalability or resiliency. It helps you get started with the Kubernetes CLI and API tools on a small single-node cluster. There is no easier way to test your first few Kubernetes commands.

Minikube starts a virtual machine locally and runs the necessary Kubernetes components. The VM then gets configured with Docker and Kubernetes via a single binary called localkube, resulting in a local endpoint that can be used with the Kubernetes client, kubectl.

Install Minikube locally

Quickstart Minikube

Working with Minikube

Let's go ahead and get started by bringing up our local Kubernetes cluster:

$ minikube start
Starting local Kubernetes cluster...
Kubernetes is available at https://192.168.99.101:443

Deploy Redis

We first have to create a Redis deployment file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  name: redis
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: redis
    spec:
      containers:
      - image: redis
        name: redis
        resources: {}
      restartPolicy: Always
status: {}

We then use kubectl to create the deployment:

$ kubectl create -f redis-deployment.yaml

Expose the Redis Deployment by creating a service

Here is the service file :

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: redis
  name: redis
spec:
  clusterIP: None
  ports:
  - name: headless
    port: 55555
    targetPort: 0
  selector:
    app: redis  # must match the labels on the redis pods
status:
  loadBalancer: {}

And we expose the service as follows :

$ kubectl create -f redis-service.yaml

Deploy Flask

Similarly, we need to create another deployment for Flask. Here is the deployment file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  name: flask
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: flask
    spec:
      containers:
      - args:
        - python
        - app.py
        name: flask
        ports:
        - containerPort: 5000
        resources: {}
        volumeMounts:
        - mountPath: /code
          name: "."
      restartPolicy: Always
      volumes:
      - name: "."
        persistentVolumeClaim:
          claimName: "."
status: {}

The deployment can be created as follows:

$ kubectl create -f flask-deployment.yaml

Expose the Flask Service

We first create the service file:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: flask
  name: flask
spec:
  type: NodePort  # exposes the service on a port on every node
  ports:
  - name: "5000"
    port: 5000
    targetPort: 5000
  selector:
    app: flask  # must match the labels on the flask pods
status:
  loadBalancer: {}

And then we create the service:

$ kubectl create -f flask-service.yaml

kubectl responds with a note that the service is now reachable from outside the cluster:

You have exposed your service on an external port on all nodes in your cluster. If you want to expose this service to the external internet, you may need to set up firewall rules for the service port(s) (tcp:30321) to serve traffic.

$ kubectl get service <service_name> --output='jsonpath={.spec.ports[0].nodePort}'
30321%
$ port=$(kubectl get service <service_name> --output='jsonpath={.spec.ports[0].nodePort}')

We construct the address for ease of reference later on:

$ address="$(minikube ip):$port"
$ echo $address
192.168.99.101:31637
$ open "http://${address}/?start=23&stop=31"
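The same address construction, expressed in Python for clarity (the IP and port are the illustrative values from the shell session above):

```python
def service_address(minikube_ip, node_port):
    """Join the Minikube VM's IP with the service's NodePort."""
    return '%s:%s' % (minikube_ip, node_port)

address = service_address('192.168.99.101', 31637)
print('http://%s/' % address)  # http://192.168.99.101:31637/
```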

I hope the article was of help. Do mention in the comments what you would like to hear more about.