The Easiest and Cheapest Way to Try Kubernetes For Yourself

Save some money — and some of your sanity — by using Docker Desktop’s Kubernetes cluster.


The best way to learn something new is to try it out for yourself. Want to learn a new programming language? Download the runtime and create a few sample applications. Itching to try out a new microservice framework? Use your environment’s package manager to install the binaries, and create a simple CRUD REST service. Ready to immerse yourself in Docker? Download Docker Desktop and create a new image — maybe using your new microservice framework! — and run it as a container.

But how do we get hands-on experience with Kubernetes? To interact with Kubernetes, we first need a cluster — one or more nodes (or worker machines) plus a handful of services that make Kubernetes work (also known as the control plane). So, where can we actually find a cluster to play around with?

Options for accessing a Kubernetes cluster

There are a few common options:

Our company’s cluster

We might work for a large organization, with a large Operations group, that maintains its own Kubernetes cluster. If so, we might be granted permission to deploy our own images and Kubernetes objects.

Cloud providers

But most of us aren’t that lucky. So instead, we might turn to the major cloud providers, each of which offers its own Kubernetes service.

Google Cloud Platform (GCP). Kubernetes was born out of Google, so it makes sense that GCP has a strong Kubernetes offering, Google Kubernetes Engine (GKE). GCP has a reputation for being developer-friendly and easy to use, and with GKE it is relatively straightforward to set up a cluster. But it’s not cheap. The last time I set up a cluster on GCP, I racked up about $3 (US) per day, which isn’t a cost-effective way to learn.

Amazon Web Services (AWS). AWS is probably the most popular cloud provider these days. From the outset, AWS offered its own services and platforms, starting with EC2 and AMIs and, later, AWS Lambda and ECS. Owing to the growing popularity of Kubernetes, however, AWS grudgingly joined the bandwagon and now offers Elastic Kubernetes Service (EKS). EKS is a bit more difficult to work with than GKE, and it tends to be a bit more expensive (mainly because EKS charges for the control plane); my last EKS project wound up costing me a bit over $4 (US) per day.

Microsoft Azure. I’ve never been a Microsoft fan, but we can’t forget Azure’s cloud-based Kubernetes offering, Azure Kubernetes Service (AKS). I’ve never used Azure or AKS, but I’ve heard good things about the service’s usability. Still, it is no cheaper than either of the other cloud providers.

Using one of the aforementioned cloud providers can help you get going, but it can be a bit of a process to get everything set up. More importantly, it’s an expensive option, particularly if you’re setting up a cluster simply to learn Kubernetes.

Minikube

You can also run a cluster locally, on your own computer, using Minikube. Minikube is a tool that purports to make it easy to run Kubernetes locally, by running a single-node Kubernetes cluster inside a Virtual Machine (VM) on your laptop or desktop.

Minikube will automatically set up the cluster for you. But first, you must set up Minikube itself. And given the number of dependencies that Minikube has, that can be a frustrating experience. I spent a weekend battling with (and losing to) Minikube, and I am far from the only engineer who’s had that experience. Minikube can also be temperamental, working once and then subsequently, inexplicably, failing. Still, I know of other engineers who have had no significant issues with Minikube, so your mileage may vary.

But wait, there’s one more…

However, if you want a nearly foolproof, essentially cost-free option, there’s one more to consider. And you might already have it installed and ready to go: Docker Desktop ships with a single-node Kubernetes cluster. So if you’ve installed Docker Desktop, you’ve already installed Kubernetes.

How to use Kubernetes in Docker Desktop

If you’re interested in Kubernetes, then you’re likely interested in deploying Docker images and running them as containers. Which means that you likely already have Docker Desktop installed on your computer. But if you don’t, installation is a snap. Just follow the instructions on the Docker website.

Just launch your cluster

With Docker Desktop installed and running, you’re already halfway to running containers on your own cluster. You’ll just need to open Docker Desktop’s preferences by clicking on the Docker icon in your toolbar and selecting the Preferences… item:

When the preferences window opens, go to the Kubernetes tab and select the Enable Kubernetes checkbox:

Then select Apply & Restart. You’ll likely be prompted that the Kubernetes cluster installation will take a few minutes; that’s fine, just click the Install button. You’ll see in the Docker toolbar menu that your cluster is being created:

After a few minutes, your new cluster should be ready!

Use kubectl to try it out

kubectl is the tool that we use to communicate with a Kubernetes cluster. First, let’s validate that it’s already installed by heading to the command line and typing kubectl version. If we see something like the following, then we’re good to go:

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:16:51Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:07:57Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

If not, no problem! Just head over to the official kubectl installation page to grab the latest. (Note: if you need to find out what version of the Kubernetes server is installed with Docker Desktop, just access the Docker Desktop menu and select About Docker Desktop.)
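One more check worth making: if your kubeconfig already knows about other clusters (say, a Minikube or cloud cluster), kubectl may not be pointed at Docker Desktop’s cluster. Docker Desktop registers a context named docker-desktop, and we can switch to it explicitly:

```shell
# List every context kubectl knows about; the current one is marked with a *
kubectl config get-contexts

# Point kubectl at Docker Desktop's local cluster
kubectl config use-context docker-desktop

# Confirm which context is now active
kubectl config current-context
```

If docker-desktop is the only context you have, it will already be the current one and you can skip this step.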

Now let’s use kubectl to deploy a sample pod. First, we’ll create a Pod definition file (call it nginx-pod.yaml) like the following, which deploys the nginx server:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-server
  labels:
    app: nginx-server
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80

To deploy it, we’ll simply run kubectl create -f nginx-pod.yaml. Our Pod should be deployed. We can verify this by running kubectl get pod; we should see something like the following:

NAME           READY   STATUS    RESTARTS   AGE
nginx-server   1/1     Running   0          3s
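While the Pod is running, we can actually watch nginx respond by forwarding a local port to the container (port 8080 here is an arbitrary choice):

```shell
# Forward local port 8080 to port 80 on the nginx-server Pod
# (this runs in the foreground; stop it with Ctrl-C when done)
kubectl port-forward pod/nginx-server 8080:80

# In a second terminal, fetch the nginx welcome page
curl http://localhost:8080
```

The curl output should be the familiar “Welcome to nginx!” HTML page, served from inside your cluster.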

When we’re done, we can spin down our Pod via kubectl delete -f nginx-pod.yaml.
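Once you’re comfortable with a bare Pod, a natural next step is a Deployment, which keeps a desired number of replicas running and replaces them if they die. Here’s a minimal sketch using the same nginx image (the file name nginx-deployment.yaml is just a suggestion):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-server
  template:
    metadata:
      labels:
        app: nginx-server
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```

Create it with kubectl create -f nginx-deployment.yaml, and kubectl get pods should show two nginx Pods; try deleting one and watching the Deployment replace it.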

At this point, we’ve easily set up our own Kubernetes cluster. And it’s at no additional cost to us (save the extra bit of power it might take to run a local cluster). We’re ready to learn Kubernetes and play around with it to our hearts’ content!