Customer and market demands often require IoT solutions to deploy new features and updates quickly. Kubernetes provides a unified deployment model that lets users at the edge quickly and automatically deploy new services and manage their lifecycle using native Kubernetes primitives, which brings a great deal of management capability. Kubernetes supports zero-downtime deployments in the form of rolling updates, allowing mission-critical IoT solutions to be kept up to date with no impact on end users (customers). Apart from rolling updates, Kubernetes provides a rich feature set including high availability, autoscaling, and ingress.
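As a sketch of the zero-downtime rolling update mentioned above, a Kubernetes Deployment can set `maxUnavailable: 0` so old pods are removed only once their replacements are ready. The manifest below is expressed as a Python dict for illustration; the service name `iot-telemetry` and the image are hypothetical examples, not part of any Azure product:

```python
# Illustrative Kubernetes Deployment manifest, built as a Python dict.
# The name "iot-telemetry" and the image reference are hypothetical.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "iot-telemetry"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "iot-telemetry"}},
        "strategy": {
            "type": "RollingUpdate",
            "rollingUpdate": {
                "maxUnavailable": 0,  # never drop below the desired replica count
                "maxSurge": 1,        # bring up one extra pod during the rollout
            },
        },
        "template": {
            "metadata": {"labels": {"app": "iot-telemetry"}},
            "spec": {
                "containers": [
                    {"name": "telemetry", "image": "example.azurecr.io/telemetry:v2"}
                ]
            },
        },
    },
}

print(deployment["spec"]["strategy"]["rollingUpdate"])
```

With `maxUnavailable: 0` and `maxSurge: 1`, a rollout always keeps at least the desired number of pods serving traffic, which is what makes the update invisible to end users.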

Many IoT solutions are considered business-critical systems that need to be reliable and available. For instance, an IoT solution critical to the operation of a factory needs to be available at all times. Kubernetes provides the tooling required to deploy highly available services. Its architecture also allows workloads to run independently; in addition, they can be restarted or recreated with no effect on end users.

How is this different from the already existing Virtual Kubelet based Azure IoT Edge Connector for Kubernetes?

Azure already had a Virtual Kubelet based implementation to support IoT Edge deployments through Kubernetes. Azure IoT Edge Connector leverages the Virtual Kubelet project to provide a virtual Kubernetes node backed by an Azure IoT hub. It translates a Kubernetes pod specification to an IoT Edge Deployment and submits it to the backing IoT hub. The edge deployment contains a device selector query that controls which subset of edge devices the deployment will be applied to.
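The connector's translation from a pod specification to an IoT Edge deployment can be sketched as follows. This is a simplified illustration only, not the connector's actual code; the function name, field layout, and the `tags.environment` selector are assumptions made for the example:

```python
def pod_to_edge_deployment(pod_spec, target_condition):
    """Sketch: translate a (simplified) Kubernetes pod spec into an
    IoT Edge deployment-like structure. Field names are illustrative."""
    modules = {
        c["name"]: {"settings": {"image": c["image"]}}
        for c in pod_spec["spec"]["containers"]
    }
    return {
        # Device selector query controlling which subset of edge
        # devices the deployment will be applied to.
        "targetCondition": target_condition,
        "content": {"modulesContent": {"$edgeAgent": {"modules": modules}}},
    }

pod = {"spec": {"containers": [{"name": "sensor", "image": "contoso/sensor:1.0"}]}}
edge_deploy = pod_to_edge_deployment(pod, "tags.environment='factory-floor'")
print(edge_deploy["targetCondition"])
```

The key idea is the `targetCondition`: rather than scheduling onto cluster nodes, the virtual node fans the "pod" out to every registered edge device matching the selector query.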

Azure Kubernetes — Virtual Kubelet based IoT Edge Implementation

Here, the on-premises devices must run the IoT Edge runtime with Docker as the container runtime, and edge deployments are created as Docker containers.

IoT Edge Gateway on Kubernetes (IoT Edge deployed on Kubernetes as a gateway) enables users to deploy an Azure IoT Edge workload to an on-premises Kubernetes cluster without the need for a virtual node (Virtual Kubelet). IoT Edge registers a Custom Resource Definition (CRD) representing an edge deployment with the Kubernetes API server. Additionally, it provides an operator (the IoT Edge agent) that reconciles cloud-managed desired state with the local cluster state. With this solution, the edgeAgent talks to both the Docker and Kubernetes runtimes. This solution also eliminates the need for specific packages on the host (iotedge, iotedgectl, etc.), since the architecture interacts directly with the Kubernetes API and all control-plane components are deployed on Kubernetes as pods and containers.
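The operator's reconciliation described above follows the standard Kubernetes controller pattern: repeatedly compare desired state against observed state and act on the difference. A minimal sketch of one reconcile pass, with illustrative module names and structure (not the actual IoT Edge agent code):

```python
def reconcile(desired_modules, current_modules):
    """One pass of an operator-style reconcile loop: compute the actions
    needed to move the cluster from its current state to the desired
    (cloud-managed) state. Names and structure are illustrative."""
    actions = []
    for name, spec in desired_modules.items():
        if name not in current_modules:
            actions.append(("create", name, spec))
        elif current_modules[name] != spec:
            actions.append(("update", name, spec))
    for name in current_modules:
        if name not in desired_modules:
            actions.append(("delete", name, None))
    return actions

# Hypothetical module sets: the cloud wants sensor upgraded and "stale" gone.
desired = {"edgeHub": {"image": "mcr.microsoft.com/azureiotedge-hub:1.0"},
           "sensor": {"image": "contoso/sensor:2.0"}}
current = {"edgeHub": {"image": "mcr.microsoft.com/azureiotedge-hub:1.0"},
           "sensor": {"image": "contoso/sensor:1.0"},
           "stale": {"image": "contoso/old:0.9"}}

print(reconcile(desired, current))
```

Running the loop repeatedly makes the design self-healing: if a module crashes or drifts from the cloud-managed spec, the next pass recreates or updates it.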

This solution enables users to register Kubernetes clusters running on edge devices as IoT Edge devices in Azure and to manage application deployments to those distributed edge devices from the central Azure portal. All edge modules are seamlessly translated into native Kubernetes objects, allowing users to operate them directly from a Kubernetes spec.
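The module-to-Kubernetes translation can be sketched as the inverse of the earlier connector direction: each edge module becomes a native object such as a Deployment. The exact mapping used by IoT Edge on Kubernetes may differ; the label key and structure below are assumptions for illustration:

```python
def module_to_k8s_deployment(module_name, module_spec):
    """Sketch: translate an IoT Edge module spec into a native Kubernetes
    Deployment object. The label key "iotedge-module" and the overall
    shape are illustrative assumptions, not the actual mapping."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": module_name,
                     "labels": {"iotedge-module": module_name}},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"iotedge-module": module_name}},
            "template": {
                "metadata": {"labels": {"iotedge-module": module_name}},
                "spec": {"containers": [
                    {"name": module_name,
                     "image": module_spec["settings"]["image"]}
                ]},
            },
        },
    }

dep = module_to_k8s_deployment("sensor", {"settings": {"image": "contoso/sensor:1.0"}})
print(dep["kind"], dep["spec"]["template"]["spec"]["containers"][0]["image"])
```

Because the result is an ordinary Deployment, users can inspect and manage modules with standard tooling (`kubectl get deployments`, labels, selectors) rather than IoT Edge-specific commands.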

Architecture and Components

The traditional Azure IoT Edge runtime consists of two components. The first, the IoT Edge agent (edgeAgent), instantiates modules, manages their lifecycle and monitoring, and reports module status back to IoT Hub; edgeAgent uses its module twin to store this configuration data. The second, the IoT Edge hub, acts as a local proxy for IoT Hub by exposing the same protocol endpoints as IoT Hub. This consistency means that clients (whether devices or modules) can connect to the IoT Edge runtime just as they would to IoT Hub.

With the previous approaches (Virtual Kubelet, Docker as the container runtime, etc.), credentials are usually managed through the iotedge runtime configuration on the host OS. With this approach, a new entity, "iotedged", performs credential creation and cloud connectivity.