Lightweight Kubernetes Platform with a Minimal PaaS for Edge, IoT and Telco Platforms

Edge resources are often limited. Kubernetes' bare-minimum requirement of 2–4 GB of RAM is rarely a problem in data center or public cloud environments, but it becomes a limiting factor for edge computing, where users cannot run a heavy Kubernetes distro with multiple package dependencies.

This is where Rancher’s k3s comes to the rescue. k3s is a fully functional Kubernetes distribution packaged as a single binary of less than 40 MB that requires just 512 MB of RAM, which makes it practical to deploy Kubernetes even in the most resource-constrained and low-power computing environments. k3OS is another offering based on k3s, where the solution is packaged as an operating system completely managed by Kubernetes. K3s omits many features that bloat most Kubernetes distributions, such as rarely used plug-ins, and consolidates the various functions of a Kubernetes distribution into a single process.

k3s addresses the infrastructure layer, but a major set of challenges remains around deploying and managing applications, since Kubernetes itself is not a traditional, all-inclusive PaaS (Platform as a Service). Deploying microservice-based applications without a PaaS can be simple with a few geographically centralized data centers, but the scenario is entirely different with thousands of edge sites.

In a traditional PaaS setting, developers would develop an application locally, isolated from where it was going to be deployed; the application and its infrastructure were not connected. That model does not fit a Kubernetes environment, where each piece of an application is its own container (in short, microservices). μPaaS represents a shift in the way applications are developed and deployed: rather than being developed in isolation from its infrastructure, an application is developed inside its infrastructure.

Rancher’s Rio is a MicroPaaS that can be layered on top of any standard Kubernetes cluster. It consists of a few Kubernetes custom resources and a CLI to enhance the user experience; with it, users can easily deploy services to Kubernetes and automatically get continuous delivery, DNS, HTTPS, routing, monitoring, autoscaling, canary deployments, git-triggered builds, and much more. The work on k3s started as a component of Rio and was later split out, so the two are now separate first-class open source projects.
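As a sketch of that workflow (the image name below is a placeholder, and exact flags may vary across Rio releases):

```shell
# Install Rio's components into an existing Kubernetes cluster:
rio install

# Deploy a service; Rio automatically wires up DNS, HTTPS, routing
# and monitoring for it (image name is a placeholder):
rio run -p 80:8080 --name demo ibuildthecloud/demo:v1
```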

K3s

K3s is a new distribution of Kubernetes designed for users who need to deploy applications quickly and reliably to resource-constrained environments (for example, edge sites running on single-board computers), and for production workloads in unattended, remote locations or inside IoT appliances.

K3s achieves its lightweight goal by stripping a number of features out of the Kubernetes binaries (e.g. legacy, alpha, and cloud-provider-specific features), replacing Docker with containerd, and using SQLite as the default datastore (instead of etcd, which is still supported as an option). K3s needs only a minimal kernel and cgroup mounts. As a result, this lightweight Kubernetes consumes only 512 MB of RAM and 200 MB of disk space.
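For illustration, the datastore is chosen at server startup; ‘--datastore-endpoint’ is the documented flag for pointing k3s at an external datastore, though its availability depends on the k3s version (the etcd endpoint below is a placeholder):

```shell
# SQLite is the default datastore and needs no extra flags:
sudo k3s server

# Alternatively, point the server at an external datastore such as etcd
# (the endpoint below is a placeholder):
sudo k3s server --datastore-endpoint='https://etcd.example.com:2379'
```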

K3s bundles the Kubernetes components (kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy) into combined processes that are presented as a simple server and agent model. Running ‘k3s server’ starts the Kubernetes server and automatically registers the local host as an agent. k3s also supports a multi-node model, where additional nodes join the cluster using the ‘node-token’ generated at server startup. By default k3s runs both the server and an agent (which combines the kubelet, kube-proxy and flannel agent processes); this can be controlled with the ‘--disable-agent’ flag, so that the server and agents (master and nodes in Kubernetes terminology) run on separate hosts.
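Assuming a server is already running, the multi-node join flow sketched above looks roughly like this (the hostname ‘myserver’ and the NODE_TOKEN variable are placeholders):

```shell
# On the server: the join token is generated at startup and stored here:
sudo cat /var/lib/rancher/k3s/server/node-token

# On each additional node: point the agent at the server's URL with that
# token to register it with the cluster:
sudo k3s agent --server https://myserver:6443 --token ${NODE_TOKEN}
```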

server initiation:
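A minimal sketch, following the documented quick-start (get.k3s.io is the official install-script endpoint; inspect the script before piping it to sh in production):

```shell
# Install k3s and start it as a service in one step:
curl -sfL https://get.k3s.io | sh -

# Or, with the standalone binary, start the server directly; this also
# registers the local host as an agent:
sudo k3s server &

# kubectl is bundled with k3s, and the kubeconfig is written to
# /etc/rancher/k3s/k3s.yaml:
sudo k3s kubectl get nodes
```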