Plugin for kubectl to run code in Kubernetes

Warp is a kubectl plugin which allows you to execute your local code directly in Kubernetes, without the slow image build process. It's MIT licensed and available on GitHub.

From time to time I run into a problem that I'm really curious to solve. This time I faced it while helping one of our customers at Polar Squad with Kubernetes:

How do I execute my local code directly “on Kubernetes”?

- Random developer

This might be something that developers are already used to when using Docker locally, but with Kubernetes, all containers run on some server in the cluster.

I already solved this once, while working on the Eliot project.

So all I needed to do was re-implement it on top of Kubernetes, and that's how I ended up creating the kubectl warp plugin.

…Sounds easy

Imagine the case where you want to develop software with "live-reload" in Kubernetes:

I have source code locally (maybe cloned from GitHub)…

… I want to transfer the code to a container in Kubernetes and …

… build and start the project and …

… make a change to the source files and …

… live-reload the code and see the result!

Not so easy…

To implement the above, you would need to go through the normal update process flow:

1. Build the source code
2. Create a Docker image
3. Push the image to the Docker registry
4. Deploy the new version to Kubernetes
5. Wait for the rolling update
6. See the result
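As a shell session, one round of that flow looks roughly like this (a sketch for illustration only; the image name, registry, and deployment are hypothetical placeholders):

```shell
# One full update round; every name below is a placeholder
npm run build                                        # 1. build the source code
docker build -t registry.example.com/myapp:v2 .      # 2. create a Docker image
docker push registry.example.com/myapp:v2            # 3. push it to the registry
kubectl set image deployment/myapp app=registry.example.com/myapp:v2  # 4. deploy
kubectl rollout status deployment/myapp              # 5. wait for the rolling update
curl https://myapp.example.com/                      # 6. see the result
```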

If you need to do this multiple times in a minute, your development cycle is waaay too slow, and you start to miss the days when there was only one big monolith application, which you could easily run locally. But to run today's "microservices" locally, you need to replicate all the dependencies (other services, databases, storage, etc.) to get a real "production-like" environment.

Why do I need this?

Telepresence solves the problem nicely by running your project locally and tunneling all the traffic from/to Kubernetes to your local process.

This works nicely enough in many cases, but what if you want something truly "production-like"? What if you need to match memory/CPU limits? CPU architecture? Attached disks? Sensors? Suddenly, having the capability to run the project in Kubernetes starts to sound good.

This is the case when you're building something a little more complex than a simple web application, especially when you're working in a special environment, optimizing for high load, or building an IoT solution.

How does it work?

Ok, let's cut the chitchat and take a look at how it works.

Warp speeds up the process by skipping the slow steps

I wanted it to be as simple as possible. For instance, you could implement the same thing with a shared NFS (this is really common in IoT development), but setting it up and tearing it down is way too complex a process (just guess how many IoT devices are out there with a Samba server still running?). There must be an easier way if we have Kubernetes under the hood.

The kubectl warp command runs your command inside a container, the same way kubectl run does, but before executing the command, it synchronizes all your files into the container.

kubectl warp = kubectl run + rsync

For example, to run and live-reload a NodeJS project in Kubernetes, you can:

$ kubectl warp -i -t --image node -- npm run watch
# Start editing files and it live-reloads the changes!

kubectl warp creates a Pod where, in addition to the actual container, an sshd container is also running (with a throwaway SSH key), so we can synchronize files to the Pod with rsync.
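Under the hood, the Pod that warp creates looks conceptually like this (a hand-written sketch, not warp's actual output; the names, the sshd image, and the paths are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: warp-demo                  # hypothetical name
spec:
  volumes:
  - name: src                      # shared volume: rsync writes here, the app reads here
    emptyDir: {}
  initContainers:
  - name: sync                     # sshd with a throwaway key; completes after the first rsync
    image: example/sshd            # placeholder image
    ports:
    - containerPort: 22
    volumeMounts:
    - name: src
      mountPath: /app
  containers:
  - name: app                      # the actual container running your command
    image: node
    command: ["npm", "run", "watch"]
    workingDir: /app
    volumeMounts:
    - name: src
      mountPath: /app
```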

Challenges

Implementing it was more complex than I thought.

Initially, I was planning to implement this with just a simple bash script 😀

Initial synchronization

First, I needed to synchronize the files once before the actual container starts, so that when the actual container starts up with the defined command, the files are already available.

So I needed the sshd container to be an init container that completes when the first synchronization is done. That was easy, thanks to the sshd -d debugging flag (in debug mode, sshd serves a single connection and exits when it closes), except it raised another problem…

Accessing the init-container

I needed a way to access the sshd init container, but Kubernetes routes traffic to a Pod only when it's in the Ready state, and while an init container is running, the Pod is still in the Init state.

Luckily, I found out that you can port-forward to an init container through the Kubernetes API (kubectl port-forward blocks this only on the client side). I created a GitHub issue; hopefully, in the future, we can port-forward to an init container with plain kubectl port-forward.

This ended up being even better than starting to mess with Ingress configurations, which would be really environment-specific.

After creating the Pod, warp starts port-forwarding from a random local port to the sshd container's port 22 and executes the rsync command locally. When the first synchronization is done, the init container completes successfully and the actual container starts. Awesome! 🎉
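Done by hand, that last step would look roughly like this (the pod name, local port, and key path are made up; also note that plain kubectl port-forward refuses to target an init container on the client side, so warp calls the port-forward API directly, and the first line only stands in for that):

```shell
# Forward a local port (here 50022) to the sshd init container's port 22
kubectl port-forward pod/warp-demo 50022:22 &

# Sync the local files over the tunnel; when rsync finishes and the SSH
# connection closes, sshd -d exits and the init container completes
rsync -az -e "ssh -p 50022 -i /tmp/throwaway_key" ./ user@localhost:/app/
```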

Getting started

On macOS with Homebrew (for other platforms, see the docs), you can install it with

brew install rsync ernoaapa/kubectl-plugins/warp

Get the example NodeJS project from here and just run

kubectl warp -i -t --image node demo -- sh -c 'npm install && npm run watch'

Conclusion

I'm looking forward to getting feedback about kubectl warp. It's an early working version: it doesn't implement all the features of kubectl run, and for example, the capability to use PersistentVolumes for even faster starts still needs to be implemented to make it even better.

kubiot.io — Kubernetes for IoT

This is just one open source part of Kubiot — Kubernetes for IoT.

Sign up in kubiot.io for early access!