Running Linux Applications as Unikernels with K8S

If you've read some of my prior articles you might've thought I'd never write this one, huh? :) Well, here goes.

A common question we get is "Can you use unikernels with k8s?" The answer is yes, but there are caveats. Namely, unikernels come packaged as virtual machines, and in many cases k8s is itself provisioned on the public cloud on top of virtual machines. You should also be aware that provisioning unikernels under k8s incurs security risks you would otherwise not need to deal with. These risks are greatly diminished because the guests are unikernels rather than full Linux guests, but they still exist.

Now, if you have your own servers, or you are running k8s on bare metal, this is how you'd go about running Nanos unikernels under k8s.

For this article you need a real physical machine and OPS. While you could use nested virtualization, I wouldn't, because you'd take a pretty significant performance hit. Google Cloud has this feature on some of their instances, and on Amazon you might be able to perform this example on the "metal" instances (I haven't checked). Keep in mind that both of these options will not be cheap compared to simply spinning up a t2.nano or t2.micro instance, which you can do easily with unikernels.

We are going to run a Go unikernel for this example, but you can use any OPS example to follow along. Here we have a simple Go webserver that listens on port 8083:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Welcome to my website!")
	})

	fs := http.FileServer(http.Dir("static/"))
	http.Handle("/static/", http.StripPrefix("/static/", fs))

	http.ListenAndServe(":8083", nil)
}

Ok - looks good. We can quickly build the image and make sure everything works, like so. We are using the nightly build option (-n) here:

ops run -n -p 8083 goweb

ops build works here as well, but run will also boot the image for you so you can verify it works locally first. Now we'll need to put it into a format for k8s to use. First, we compress it with xz (sudo apt-get install xz-utils):

cp .ops/images/goweb.img .
xz goweb.img
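If you want a quick sanity check that xz is behaving before running it on the real image, here's a throwaway round-trip (demo.img is just a scratch file, not the real disk image):

```shell
# Round-trip a scratch file through xz, mirroring what we do to goweb.img.
printf 'hello' > demo.img
xz demo.img          # produces demo.img.xz and removes the original
xz -t demo.img.xz    # test archive integrity
xz -d demo.img.xz    # decompress back to demo.img
cat demo.img         # prints: hello
```

Note that xz replaces the input file with the .xz archive, which is why we copy the image out of .ops/images first.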

From there we need to put it somewhere k8s can import it from. I tossed it into a cloud bucket and, to keep this article as simple as possible, left the bucket open. (Obviously, you don't want to do this in a real-life production scenario.)

Now let's install kubectl:

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client

Now let's install minikube. I'm using minikube here to hopefully minimize the number of steps you need from a fresh install, but feel free to use whatever you want.

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube
minikube start --vm-driver=kvm2

For the kvm2 driver to work, I needed to install the libvirt suite of tooling on this box:

sudo apt-get install libvirt-daemon-system libvirt-clients bridge-utils

Libvirt is a rather old and nasty library used to interact with KVM, although it has a ton of integrations and there aren't that many alternatives.

If you are having trouble after this step, you can run this quick validation check to ensure everything is set up:

virt-host-validate

Also, ensure you are in the right group to interact with KVM:

groups

After getting all of this installed you might find the need to reset your session (quickest way is to just logout/login again).

Next up - let's install the KubeVirt operator. This is what really ties the room together.

export KUBEVIRT_VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases | grep tag_name | grep -v -- - | sort -V | tail -1 | awk -F':' '{print $2}' | sed 's/,//' | xargs)
echo $KUBEVIRT_VERSION
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml
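That first pipeline is doing a fair bit of work, so here's a dry run against canned API output (the tag values are made up) showing how it picks the newest non-prerelease tag:

```shell
# Simulate a few lines of the GitHub releases JSON the real command curls.
releases='"tag_name": "v0.41.0",
"tag_name": "v0.42.0-rc.1",
"tag_name": "v0.42.0",'

# Same pipeline as above: drop prerelease tags (they contain "-"),
# version-sort, take the newest, and strip the JSON punctuation.
KUBEVIRT_VERSION=$(echo "$releases" | grep tag_name | grep -v -- - \
  | sort -V | tail -1 | awk -F':' '{print $2}' | sed 's/,//' | xargs)
echo $KUBEVIRT_VERSION   # v0.42.0
```

The sort -V (version sort) matters here: a plain lexical sort would rank v0.9.0 above v0.10.0.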

Then let's create a resource:

kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml

Now let's install virtctl. Are we getting tired yet?

curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/virtctl-${KUBEVIRT_VERSION}-linux-amd64
chmod +x virtctl

Then we'll import the disk image with CDI (the Containerized Data Importer).

wget https://raw.githubusercontent.com/kubevirt/kubevirt.github.io/master/labs/manifests/storage-setup.yml
kubectl create -f storage-setup.yml
export VERSION=$(curl -s https://github.com/kubevirt/containerized-data-importer/releases/latest | grep -o "v[0-9]\.[0-9]*\.[0-9]*")
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
kubectl get pods -n cdi
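The VERSION extraction works differently here - it scrapes the "latest release" page and pulls out the first version-shaped string. A dry run with a made-up page (the tag value is invented):

```shell
# Canned stand-in for the release page the real command curls.
page='<a href="/kubevirt/containerized-data-importer/releases/tag/v1.34.1">'
VERSION=$(echo "$page" | grep -o "v[0-9]\.[0-9]*\.[0-9]*")
echo $VERSION   # v1.34.1
```

grep -o prints only the matching portion of the line, which is what turns a page of HTML into a bare version string.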

Ok! Whew! If you got through all of that, we are almost at the finish line. Let's grab a template for our persistent volume claim:

wget https://raw.githubusercontent.com/kubevirt/kubevirt.github.io/master/labs/manifests/pvc_fedora.yml

Now, edit the storage.import.endpoint line to point at wherever you stuffed the compressed disk image. In my example it looks like this (again, this is just an example to keep things easy - you wouldn't/shouldn't do this in real life):

cdi.kubevirt.io/storage.import.endpoint: "https://storage.googleapis.com/totally-insecure/goweb.img.xz"
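For context, here's roughly what the edited PVC manifest ends up looking like. This is a sketch based on the lab manifest, so field values (labels, storage size) may differ from your copy:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fedora
  labels:
    app: containerized-data-importer
  annotations:
    cdi.kubevirt.io/storage.import.endpoint: "https://storage.googleapis.com/totally-insecure/goweb.img.xz"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

The annotation is what CDI watches for: it spawns an importer pod that downloads, decompresses, and writes the image into the claim.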

Let's create it:

kubectl create -f pvc_fedora.yml
kubectl get pvc fedora -o yaml

You can watch the import as it happens, but wait until you see the success message:

cdi.kubevirt.io/storage.pod.phase: Succeeded

Now we can create the actual VM:

wget https://raw.githubusercontent.com/kubevirt/kubevirt.github.io/master/labs/manifests/vm1_pvc.yml
kubectl create -f vm1_pvc.yml

Now if you:

kubectl get vmi

You should see your instance running.

If you have minikube running, you can now reach the webserver from inside the cluster and watch it respond on port 8083.

Wow! We just deployed a unikernel to K8S. Easy? Well, I'll let you decide that.

Of course, if you are using a public cloud like AWS or GCP and you don't want to go through all of this, these two commands will get the same webserver deployed with a lot less hassle, more security, and more performance with less waste:

ops image create -c config.json -a goweb
ops instance create -z us-west2-a -i goweb-image
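The config.json referenced there can stay minimal. A sketch of what I'd use for GCP - the ProjectID, BucketName, and Zone values are placeholders you'd swap for your own, and you should double-check the field names against your OPS version:

```json
{
  "CloudConfig": {
    "ProjectID": "my-project",
    "BucketName": "my-bucket",
    "Zone": "us-west2-a"
  },
  "RunConfig": {
    "Ports": ["8083"]
  }
}
```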

Until next time.
