Microsoft recently released the Bot Framework, which lets developers publish intelligent bots to many services, including Skype, Slack, Messenger, the web, and SMS.

Most of the documentation and guides around publishing Bot Framework bots use Microsoft Azure App Services to create and deploy to a platform-as-a-service. In this post, we are going to look at how to deploy a bot as a microservice on a Kubernetes (k8s) cluster running on Azure Container Service (ACS). While this guide focuses on Azure, the steps can be adapted to Amazon Web Services (AWS) or Google Cloud Platform (GCP) in a similar way.

Creating a bot

First, let’s create a very simple bot in NodeJS. This bot will echo back whatever the user types.

app.js

Deploying the bot to a local Docker container

Let’s get the bot running in a local Docker container first. In this step, we’ll create the container and test it locally to make sure it works as expected. Make sure you have the Docker Engine installed on your system.

To create a container, we need a Dockerfile to assemble the container image.

Dockerfile
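The Dockerfile itself is not shown here; a minimal one, assuming the entry point is app.js and dependencies are listed in package.json, could look like this (the exact Node base image tag is an assumption):

```dockerfile
# Base on the official Node.js image
FROM node:6

# App directory inside the container
WORKDIR /usr/src/app

# Install dependencies first to take advantage of Docker layer caching
COPY package.json .
RUN npm install

# Bundle the bot source
COPY . .

# The bot listens on 3978
EXPOSE 3978

CMD ["node", "app.js"]
```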

Let’s build it…

BOTNAME=echobot
docker build -t $BOTNAME ~/path/to/bot

and run it!

HOSTPORT=3978
CONTAINERPORT=3978
docker run -p 127.0.0.1:$HOSTPORT:$CONTAINERPORT --name $BOTNAME -t $BOTNAME

Now, open up the Bot Framework Emulator and connect to http://localhost:3978/api/messages

And you should see this:

Updating bot with SSL certificates

Bot Framework requires all published bots to communicate with valid SSL certificates.

In this part, we’ll learn how to get a free SSL certificate from Let’s Encrypt. You’ll also need a domain name; you can get a free one from Freenom. Of course, I suggest getting a paid certificate and domain when going to production.

Let’s include the auto-sni module, which makes this process much easier:

npm install auto-sni --save

and modify app.js to look like the following, making sure to edit the email and domain name.

modified app.js

Creating our Kubernetes cluster

Now that we’ve run and tested our bot locally, we need to run it in the cloud so others can access it.

Kubernetes is an open-source orchestration platform for automating deployment, operations, and scaling of applications across multiple hosts. It targets applications composed of multiple Docker containers, such as distributed microservices, and provides ways for containers to find and communicate with each other. You can get more information and work through the bootcamp at https://kubernetesbootcamp.github.io/kubernetes-bootcamp/index.html

First, we need to create a Kubernetes cluster in Azure Container Service (ACS).

Pre-requirements:

Make sure you have installed Azure CLI 2.0 Preview from: https://github.com/azure/azure-cli#installation

Make sure you have an SSH public key at ~/.ssh/id_rsa.pub

To get started, we need to create a resource group, so that everything lives under it.

At the time of this post, I had some issues deploying the Kubernetes cluster to West US, but the issue might have been resolved already.

RESOURCE_GROUP=my-resource-group
LOCATION=eastus
az group create --name=$RESOURCE_GROUP --location=$LOCATION

Now that we have our resource group, let’s create our cluster. This will take some time to fully deploy.

DNS_PREFIX=some-unique-value
SERVICE_NAME=any-acs-service-name
az acs create --orchestrator-type=kubernetes --resource-group=$RESOURCE_GROUP --name=$SERVICE_NAME --dns-prefix=$DNS_PREFIX

Keep in mind that, by default, Azure deploys a k8s cluster as 1 master and 3 agents, all running Ubuntu 16.04 LTS (Xenial) on Standard_D2_v2 instances. At the time of this post, you cannot scale the number of instances up or down.

If you want to customize your deployment, you can use the Azure Resource Manager templates available at https://github.com/Azure/azure-quickstart-templates/blob/master/101-acs-kubernetes/

Controlling the Kubernetes cluster manager

Kubernetes has a handy CLI tool called kubectl that is used to run commands against your cluster.

You can install kubectl by:

az acs kubernetes install-cli

While I suggest using the Azure CLI tool to install kubectl, you can also get it via Homebrew (macOS) or Chocolatey (Windows), or download it from https://kubernetes.io/docs/user-guide/prereqs/ depending on your platform.

Configuring kubectl

az acs kubernetes get-credentials

This will retrieve the master cluster configuration and store it under ~/.kube/config for use with kubectl.

Let’s check if we can connect to our cluster successfully:

kubectl get nodes

Pushing the bot to Docker Hub or Azure Container Registry

Now that we’ve created our cluster, we need to push our container image somewhere k8s can pull it from. We can host it in either Azure Container Registry (ACR) or Docker Hub, depending on your preference. For example, Docker Hub’s free tier gives you one private and unlimited public repositories, while ACR provides unlimited private repositories and charges only for storage and data transfer.

Setting up for Docker Hub:

Make sure we are logged into Docker Hub so we can push to Docker registry

docker login

Setting up ACR:

REGISTRY_NAME=your-registry-name
az acr create -g $RESOURCE_GROUP -n $REGISTRY_NAME -l eastus
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/${SUBSCRIPTION_ID}"
docker login -u <app-id> -p <password> registry-microsoft.azurecr.io

After logging in to either Docker Hub or ACR, you can push with:

USERNAME=your-username
docker images
docker tag <container-id> $USERNAME/$BOTNAME:latest
docker push $USERNAME/$BOTNAME

Deploying to Kubernetes on Azure Container Service (ACS)

To deploy our bot to the cluster, we need to create a deployment YAML file to specify configuration details:

echobot-deployment.yaml
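The original deployment file is not shown here; a minimal sketch, assuming the image was pushed as your-username/echobot:latest and using the extensions/v1beta1 API group that was current for Deployments at the time, could look like:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echobot
spec:
  replicas: 2                # run two pods of the bot
  template:
    metadata:
      labels:
        app: echobot         # label used by the service selector later
    spec:
      containers:
      - name: echobot
        image: your-username/echobot:latest   # the image pushed in the previous step
        ports:
        - containerPort: 3978
```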

Save this as echobot-deployment.yaml and then:

kubectl create -f echobot-deployment.yaml --record

To see a list of all the pods:

kubectl get pods

If you want details of a specific pod:

PODNAME=name-of-pod
kubectl describe pods $PODNAME

Creating and exposing a service

In this part, we’ll learn how to expose a service and set up a load balancer.

Let’s start by building a configuration file:
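The service file itself is missing from this copy; a sketch of echobot-service.yaml, assuming the deployment’s pods carry the label app: echobot and that port 80 forwards to an assumed HTTP port 8080 in the container (adjust to match your app.js), could look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echobot
spec:
  type: LoadBalancer     # provisions an Azure load balancer with a public IP
  selector:
    app: echobot         # must match the deployment's pod label
  ports:
  - name: http
    port: 80
    targetPort: 8080     # assumption: the bot's plain-HTTP port
  - name: https
    port: 443
    targetPort: 3978     # NodeJS serves HTTPS on 3978 (unprivileged)
```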

Here, we are defining a service called echobot with a load balancer exposing 2 external ports, 80 (HTTP) and 443 (HTTPS). It’s important to note that we use 443 as the external port and direct traffic to internal target port 3978, since NodeJS cannot bind to low ports like 80 or 443 unless it is running in privileged mode.

Let’s create our service now:

kubectl create -f echobot-service.yaml --record

Since we are creating a load balancer with a public IP address, it might take a while to fully set up the service.

Checking the status of all services:

kubectl get svc

Until the public IP is assigned, the service’s EXTERNAL-IP column will show pending.