Last Friday, we held Redgate’s 2nd Open Space Event, following the success of the first one 🎉 After getting over the sadness of having missed that first event, I wanted to contribute to the 2nd one by sharing my ongoing findings on Azure’s new managed Kubernetes service: AKS. In this post, I’ll put that knowledge into written words 📝

A bit of background… 🤔

You can skip this section if you don’t care about how I ended up looking into Kubernetes and AKS!

For the last couple of months, we’ve been researching an area at work which might require us to architect, develop and maintain a cloud-hosted service. As you might guess, this means deploying against a live system, and introducing downtime during deployments is really not something I want, even at the start of the product’s life!

I have a passion for zero-downtime deployments, as I believe they’re one of the strong complements to continuous delivery. I have previously made successful attempts to solve this problem in various ways, but the solutions were always dirty and hard to communicate! If you’re interested in this topic, I’d encourage you to watch my talk at NDC Oslo 2016 (shameless plug, oops!) where I demonstrated a few sample scenarios with Docker and HAProxy on how to achieve zero-downtime deployments:

Zero-downtime deployments are hard but not impossible!

I had a few conclusions and things I cared about with this potential need to do deployments:

Zero-downtime deployments are hard (no shit!)

I didn’t want to solve it manually by coupling ourselves to a specific cloud provider

I wanted something that the entire team can understand and have a chat about. In other words, I wanted good abstractions with clear concepts ✨

When I saw that a few of my awesome colleagues from Foundry were looking into Kubernetes, I started looking into it too, and I was impressed with its clear concepts, portable nature and good adoption in the community!

But, wait! There was still a problem 🤦🏻‍ I’ll just leave the below picture here for you to guess what that was:

Oh, yes! hmm, route tables and all that!

Yes, you guessed correctly! I really don’t want to set up all that infrastructure in the cloud in a secure way, and then maintain it (e.g. scaling, upgrading, etc.).

Azure Container Service (AKS) to the rescue 🚀

This is how I ended up looking into Azure Container Service (AKS)! AKS is a managed Kubernetes container orchestration service in Azure. It removes the complexity of implementing, installing, maintaining and securing Kubernetes in Azure. As it’s still plain Kubernetes that you interact with at the end of the day, you also avoid being locked into any one vendor or resource. In other words, you get the best of both worlds 👊 Finally, you only pay for the resources you consume; there are no per-cluster charges, which is really great!

The service is still in its preview stage, but I strongly believe Azure is heading in the right direction with AKS this time, after a few “unsuccessful” (OK, let’s maybe call them not-that-successful) attempts in this area with Service Fabric and “the other” container service (yay, naming stuff is hard!).

I can summarise how you would go from zero to hero with AKS with the below flow chart:

From zero to hero with AKS

All of this can be done from the comfort of your terminal window with Azure CLI and kubectl!

Azure CLI and kubectl

Create an AKS cluster 👨🏻‍💻

From this point on, I assume that you have a functioning Azure CLI set up against your preferred subscription.

While AKS is in preview, creating new clusters requires a feature flag on your subscription. You can enable the service with the command below:

az provider register -n Microsoft.ContainerService
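Provider registration is asynchronous and can take a few minutes. If you want to confirm it has gone through before moving on, you can check the provider’s state (the snippet below guards the call, so it’s safe to paste even on a machine without Azure CLI installed):

```shell
# Check whether the Microsoft.ContainerService provider has finished registering.
PROVIDER=Microsoft.ContainerService

if command -v az >/dev/null 2>&1; then
  # Shows "Registered" once registration has completed.
  az provider show --namespace "$PROVIDER" --query registrationState --output tsv
fi
```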

Once the service is enabled on your subscription, you can create a resource group to stick AKS under:

az group create --name k8s-demo-1 --location westeurope

Now you can run the command to create the service:

az aks create --resource-group k8s-demo-1 --name uat-k8s --node-count 1 --generate-ssh-keys

There are a few more parameters that you can supply to customize the initial setup. Take a look at the documentation for the az aks create command to learn more about those. Alternatively, you can execute az aks create --help to bring these up.
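For illustration, a slightly more customized create might look like the sketch below. The --node-vm-size and --kubernetes-version flags do exist, but the specific values here are placeholders — check what your subscription and region actually offer before copying them:

```shell
# Illustrative only: the VM size and Kubernetes version values are examples.
RESOURCE_GROUP=k8s-demo-1
CLUSTER_NAME=uat-k8s

# Guarded so the snippet is harmless on a machine without Azure CLI installed.
if command -v az >/dev/null 2>&1; then
  az aks create \
    --resource-group "$RESOURCE_GROUP" \
    --name "$CLUSTER_NAME" \
    --node-count 3 \
    --node-vm-size Standard_DS2_v2 \
    --kubernetes-version 1.8.2 \
    --generate-ssh-keys
fi
```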

Once you execute this command, it will take about 5 to 10 minutes for the service to be up and running. Then, you’ll have a fully functioning Kubernetes cluster running in Azure 🤘

At this point, you cannot yet talk to this Kubernetes cluster through kubectl, because kubectl doesn’t know where the cluster lives or what credentials to use when communicating with it. However, the AKS Azure CLI commands make this easy to configure, too! Running the az aks get-credentials command below will get this set up.

az aks get-credentials --resource-group k8s-demo-1 --name uat-k8s

At this stage, you should hopefully be able to run kubectl commands against the AKS cluster:

“kubectl get nodes” command lists all the nodes available in Kubernetes cluster
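Beyond kubectl get nodes, a few plain-kubectl sanity checks are handy right after get-credentials (nothing here is AKS-specific):

```shell
# az aks get-credentials names the kubectl context after the cluster.
CONTEXT_NAME=uat-k8s

if command -v kubectl >/dev/null 2>&1; then
  kubectl config current-context   # confirm kubectl now points at the AKS cluster
  kubectl get nodes -o wide        # node list with Kubernetes versions and IPs
  kubectl cluster-info             # API server address and core service endpoints
fi
```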

Azure CLI also gives you a way to view the Kubernetes dashboard in a secure way through the az aks browse command, by tunnelling the connection to localhost.

To be perfectly honest, at this stage I’m really not sure how this works under the hood, or whether there are alternative ways to do it, but it’s good to see that it has at least been thought through!
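For reference, this is the invocation I used; my assumption (unverified) is that it sets up something akin to a kubectl proxy tunnel under the hood:

```shell
RESOURCE_GROUP=k8s-demo-1
CLUSTER_NAME=uat-k8s

if command -v az >/dev/null 2>&1; then
  # Opens the Kubernetes dashboard in your browser through a localhost tunnel.
  az aks browse --resource-group "$RESOURCE_GROUP" --name "$CLUSTER_NAME"
fi
```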

Kubernetes dashboard for AKS cluster

AKS Cluster in Azure Portal 🖥

We have done all the work through the CLI so far but, as you might expect, all of this is accessible through the Azure Portal. In fact, once you have a look at the resources created and the wiring up that has been done between them, I bet you will appreciate the work Azure takes care of for you!

Looking under our resource group (k8s-demo-1), we can see the managed container service created for us (uat-k8s).

Drilling down into that managed container service should give you some information about the Kubernetes cluster, such as its version and API server address.

One interesting aspect of the service I have noticed is that it doesn’t create the resources under the resource group you created. Instead, it creates another one and sticks the resources under it. I’m really not sure yet why this is the case, but it could be related to being able to have resources across regions and wire them together. I’ll try to dig in and find out more about this.
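In my case the generated group followed an MC_&lt;resource-group&gt;_&lt;cluster&gt;_&lt;region&gt; naming pattern — an observation on my part, not something I’ve seen documented as a contract — so locating it looks roughly like this:

```shell
# Observed naming pattern (not a documented contract):
# MC_<resource-group>_<cluster>_<region>
NODE_RESOURCE_GROUP=MC_k8s-demo-1_uat-k8s_westeurope

if command -v az >/dev/null 2>&1; then
  # List every resource Azure created for the cluster under that group.
  az resource list --resource-group "$NODE_RESOURCE_GROUP" --output table
fi
```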

Once you locate what resource group that is, you will see the variety of resources created for you!

As you start creating the pods, services and scaling them up/down, you will see more resources being created and dropped under the dynamically created resource group. For instance, Azure Load Balancer is one of the new resource types you will see appearing if your application structure requires one.

Running and testing your applications on AKS 🔬

This stage is all about Kubernetes, and Azure pretty much gets out of your way, which is what I would expect! Standard operations you would run against a Kubernetes cluster are all applicable to AKS, such as using the kubectl create command to run an application from a Kubernetes manifest file, or manually changing the number of pods in a deployment with the kubectl scale command.
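As a quick sketch of those two operations (the manifest filename and deployment name below are placeholders, not anything from the AKS docs):

```shell
# Placeholder names: sample-app.yaml and sample-app are hypothetical.
MANIFEST=sample-app.yaml
DEPLOYMENT=sample-app

if command -v kubectl >/dev/null 2>&1; then
  # Create the resources described in the manifest (deployments, services, etc.).
  kubectl create -f "$MANIFEST"

  # Manually change the number of pods in the deployment to 5.
  kubectl scale deployment "$DEPLOYMENT" --replicas=5
fi
```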

For the sake of seeing the thing do what’s expected, I went ahead and followed the example given in the AKS docs, and I got what I wanted to see!

Kubernetes manifest file and applying it on the AKS cluster with kubectl create

Public IP address will eventually be assigned to your service and you can also see the deployed pods

Yay, the App (P.S. Obviously Dogs! 🐶)

Manage the AKS cluster through Azure CLI 🕹

You really benefit from, and appreciate, the power of AKS when you need to scale the number of nodes in your cluster or perform a Kubernetes upgrade across the nodes.

The official docs have done a great job of explaining the AKS cluster scaling feature, so I will quote directly from them:

It is easy to scale an AKS cluster to a different number of nodes. Select the desired number of nodes and run the az aks scale command. When scaling down, nodes will be carefully cordoned and drained to minimise disruption to running applications. When scaling up, the az command waits until nodes are marked Ready by the Kubernetes cluster.
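In concrete terms, scaling our demo cluster from 1 node up to 3 would be a single command:

```shell
RESOURCE_GROUP=k8s-demo-1
CLUSTER_NAME=uat-k8s

if command -v az >/dev/null 2>&1; then
  # Grow the agent pool to 3 nodes; scaling down works the same way.
  az aks scale --resource-group "$RESOURCE_GROUP" --name "$CLUSTER_NAME" --node-count 3
fi
```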

Similar to scaling, upgrading the AKS cluster is also made easy by the az aks upgrade command. Before upgrading a cluster, you can use the az aks get-versions command to check which Kubernetes releases are available to upgrade to. As with scaling, nodes are carefully cordoned and drained to minimise disruption to running applications during the upgrade process.
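Sketched out, those two steps look like this (the 1.8.2 target version is purely illustrative — pick one that get-versions actually lists for your cluster):

```shell
RESOURCE_GROUP=k8s-demo-1
CLUSTER_NAME=uat-k8s

if command -v az >/dev/null 2>&1; then
  # See which Kubernetes releases this cluster can upgrade to.
  az aks get-versions --resource-group "$RESOURCE_GROUP" --name "$CLUSTER_NAME" --output table

  # Upgrade to one of them (the version below is an example, not a recommendation).
  az aks upgrade --resource-group "$RESOURCE_GROUP" --name "$CLUSTER_NAME" --kubernetes-version 1.8.2
fi
```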

Some Resources 📚

If you’re in the same boat as I am, you’ll be reading and watching a lot about Kubernetes and Azure’s managed Kubernetes service. The official AKS docs and Kubernetes docs will definitely give you a good start.

Eddie Villalba also gives a really thorough introduction in his Connect(); 2017 session about Managed Kubernetes on Azure. Also, check out the AKS launch blog post, which should give you some idea of the thought process and expectations behind this service.