There’s a new buzzword in the cloud industry: Serverless.

In the past few years, the cloud industry has become more and more popular and has brought us new approaches and technologies. In this post I will talk about a serverless implementation on top of a Kubernetes cluster.

What is Serverless?

Actually, there is no such thing as “serverless”. The name suggests that you can write a piece of code, let it “float” somewhere, and “automagically” your app will work. That is not the case; it just means that someone else is doing the job for you. When using serverless computing, there is no need to worry about server creation, maintenance, hardware, blackouts, support, or staffing resources. Simply take advantage of the cloud platform and its resources and let it handle all of the problems I’ve outlined above. Serverless computing takes this to the next level by letting developers focus on their code only.

There are a lot of frameworks that give you the serverless option. I will be talking about a Kubernetes-native serverless framework — kubeless.

What is kubeless?

It is an open source serverless framework running on top of Kubernetes. It allows you to deploy a small bit of code without having to worry about underlying infrastructure. It uses Kubernetes resources to provide auto-scaling, routing, monitoring, and troubleshooting.

All you need to do is create and deploy a function that will be exposed via one of three possible event/trigger mechanisms:

PubSub triggered.

HTTP triggered.

Schedule triggered.

PubSub events are managed by a Kafka cluster, which is a built-in component of the kubeless installation package (a basic Kafka cluster with one broker and Zookeeper), while HTTP triggers are exposed via Kubernetes services, and a scheduled function translates to a cron job.

Currently Python, NodeJS, Ruby and .NET Core are supported.
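To make the programming model concrete, here is a minimal sketch of what a kubeless function body can look like. The single-argument handler signature shown here is an assumption based on the Python runtime of this kubeless release; newer releases changed the signature, so check the runtime docs for your version.

```python
# hello.py -- a minimal kubeless-style function (illustrative sketch).
# The handler receives a context object describing the trigger event
# (assumed signature for this release) and returns the response body.
def hello(context):
    return "Hello world!"
```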

How does it work?

Kubeless creates a Custom Resource Definition, called Function, so that we can create a function as a normal Kubernetes resource. In the background a controller is created; it watches over these custom resources and launches runtimes on demand.
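Under the hood, a deployed function is stored as a Function custom resource. The exact schema varies between releases; the fragment below is an illustrative guess at its shape for this version (the `k8s.io/v1` apiVersion is inferred from the `functions.k8s` CRD name shown in the sanity check later, and the field names are assumptions), not a definitive manifest.

```yaml
apiVersion: k8s.io/v1
kind: Function
metadata:
  name: hello
spec:
  runtime: python2.7       # which runtime image to launch
  handler: hello.hello     # <module>.<function> (assumed field layout)
  function: |              # the code itself, inlined in the resource
    def hello(context):
        return "Hello world!"
```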

Kubeless Architecture

Demo

Before we begin, some assumptions:

Basic knowledge of Kubernetes concepts is required.

A working Kubernetes cluster; I will be using Minikube for this tutorial.

The kubectl CLI installed.

Let’s get our hands dirty

We will create two very basic services. The main service will listen to message events from Kafka and display them on screen. The second service will be our worker; let’s assume it is triggered every time our imaginary app saves some data to a database, and that it notifies our main service, so in the end we should see all of the transactions on screen.

First, create a namespace for kubeless components.

> kubectl create ns kubeless

Next, download and deploy kubeless into your cluster. I will be using the latest version at the time of writing:

> export RELEASE=v0.2.3

> kubectl create -f https://github.com/kubeless/kubeless/releases/download/$RELEASE/kubeless-$RELEASE.yaml

After a minute, let’s do some sanity checks:

> kubectl get pods -n kubeless

NAME                                   READY     STATUS    RESTARTS   AGE
kafka-0                                1/1       Running   0          1m
kubeless-controller-1046320385-4dp09   1/1       Running   0          1m
zoo-0                                  1/1       Running   0          1m

> kubectl get customresourcedefinition

NAME            KIND
functions.k8s   CustomResourceDefinition.v1beta1.apiextensions.k8s

Now we are ready to deploy some code.

Run the command:

kubectl apply -f https://rawgit.com/idobry/kubeless/master/deploy-main.yaml

This will create our main Python-Flask pod and a service that will be responsible for displaying the messages it gets from our kubeless service.

Python-main.py
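The gist embedded above does not render here, so as a stand-in, here is a minimal sketch of what main.py might look like. All we know from this post is that the service is Flask-based, serves “Hello world!” on port 5000, and displays messages pushed by the worker; the /notify endpoint name and the in-memory message list are my assumptions, not the author’s actual code.

```python
# main.py -- illustrative sketch of the main Flask service (stand-in
# for the original gist). The /notify endpoint name is hypothetical.
from flask import Flask, request

app = Flask(__name__)
messages = []  # messages received from the worker, kept in memory


def render_page(msgs):
    # The sanity-check greeting, followed by every message so far.
    return "<br>".join(["Hello world!"] + msgs)


@app.route("/")
def index():
    return render_page(messages)


@app.route("/notify", methods=["POST"])  # hypothetical endpoint
def notify():
    messages.append(request.get_data(as_text=True))
    return "ok"


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```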

There are many ways to access our service; I will use the Kubernetes proxy command:

kubectl proxy

Now, we can access our pod via the address:

http://127.0.0.1:8001/api/v1/proxy/namespaces/kubeless/services/main-svc:5000

Sanity check: you should see “Hello world!” in your browser.

Now we are ready to talk about our kubeless service. This service will be triggered each time new data is saved into our fake app’s database. We will simulate this use case using a kubeless CLI command:

> kubeless topic publish --topic <topic> --data <data>

Download two more files from here; you should see the requirements file and our worker’s code:

Python-worker.py
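As with the main service, the worker gist does not render here, so below is an illustrative sketch of what worker.py might contain. The handler name `run` matches the `--handler worker.run` flag used in the deploy command below; everything else, including the main service URL, the /notify endpoint, and the use of `requests` (which would be what requirements.txt pulls in), is an assumption.

```python
# worker.py -- illustrative sketch of the kubeless worker function
# (stand-in for the original gist). URL and endpoint are hypothetical.
MAIN_URL = "http://main-svc.kubeless:5000/notify"  # hypothetical


def build_messages(name):
    # The three lines shown on screen for every saved item.
    return ["New data saved : %s" % name,
            "Doing something with it...",
            "Sent %s to backup" % name]


def run(context):
    # Assumed: the Kafka message body arrives via the handler argument.
    import requests  # assumed to be installed via --dependencies requirements.txt
    for line in build_messages(str(context)):
        requests.post(MAIN_URL, data=line)
```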

What else do we need in order to create our new service? A Docker image? A deployment? A service? No! That’s it; all we need is our code.

Let’s create it using the kubeless CLI:

> kubeless function deploy <name> --runtime <runtime> \

--from-file <path/to/file> \

--handler <file_name.function_name> \

--trigger-topic <topic_to_trigger> \

--namespace <namespace> \

--dependencies <path/to/req_file>

--------------------------------------------------------------------

> kubeless function deploy python-worker --runtime python2.7 \

--from-file worker.py \

--handler worker.run \

--trigger-topic start \

--namespace kubeless \

--dependencies requirements.txt

We will also use the kubeless CLI to easily create topics:

> kubeless topic create start
Created topic "start".

> kubeless topic create toScreen
Created topic "toScreen".

Go back to your browser and refresh the proxy address from the sanity check above.

Now, our main service is listening for messages.

Let’s trigger our worker:

> kubeless topic publish --topic start --data "image.png"

> kubeless topic publish --topic start --data "vid.wav"

In the browser we should see the data stream:

New data saved : image.png
Doing something with it...
Sent image.png to backup

New data saved : vid.wav
Doing something with it...
Sent vid.wav to backup

Bonus: Kubeless-UI

We can make things even easier with a complete graphical user interface for interacting with kubeless.

Simply run the command:

> kubectl create -f https://raw.githubusercontent.com/kubeless/kubeless-ui/master/k8s.yaml

serviceaccount "ui-acct" created
clusterrole "kubeless-ui" created
clusterrolebinding "kubeless-ui" created
deployment "ui" created
service "ui" created

We will now access the kubeless-ui service in a different way. First we need to retrieve our Kubernetes master URL:

> kubectl cluster-info

Kubernetes master is running at https://192.168.99.100:8443



To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Now, we need to find the node port that was assigned to the service:

> kubectl -n kubeless get svc

NAME            CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
broker          None         <none>        9092/TCP            7h
kafka           10.0.0.42    <none>        9092/TCP            7h
main-svc        10.0.0.64    <none>        5000/TCP            7h
python-worker   10.0.0.235   <none>        8080/TCP            7h
ui              10.0.0.61    <nodes>       3000:30632/TCP      18m
zoo             None         <none>        9092/TCP,3888/TCP   7h
zookeeper       10.0.0.253   <none>        2181/TCP            7h

As you can see, I need to use port 30632. Finally, we can access the UI via the address http://192.168.99.100:30632.

Kubeless-ui

Conclusion

We only covered a very basic use of kubeless, but I’m sure you got a clear sense of how quick and easy it is for a developer, a DevOps engineer, or anyone working with a Kubernetes cluster to create fully functional services.

This project is quite new, so stay tuned for news and updates.