Private Kubernetes Service, with a public endpoint — inlets/inlets-operator

Kubernetes cloud providers embed the knowledge and context of each public and private cloud platform into the core Kubernetes components. With these providers it is easy to expose Kubernetes Services running on a specific platform using that platform's native load-balancing constructs. If a user is deploying to an EC2 instance or a DigitalOcean Droplet, they get a public IPv4 address; but when working behind a corporate firewall, behind NAT, or within a VM or container, this simply doesn't work.

There are multiple other scenarios where this might be an issue:

Unable to expose a localhost application directly to the internet without a DMZ and other network configuration

Limited resources in the private data centers where development is carried out

Unable to share websites for testing purposes

Developing services behind an enterprise firewall that consume webhooks (HTTP callbacks). For example, a developer's code in a private network has no routable IP address, so GitHub simply has no way to deliver a message.

Temporarily sharing a website that is running only on a developer's machine

Tools like ngrok are well known for tunnelling services that expose localhost to the web. These tools implement a multiplatform tunnelling reverse proxy that establishes secure tunnels from a public endpoint on the internet to a locally running network service over a WebSocket. A WebSocket is a naturally full-duplex, bidirectional, single-socket connection. With WebSockets, the initial HTTP request becomes a single request to open a WebSocket connection, and the same connection is then reused from the client to the server and from the server to the client.

A client runs in the internal network (alongside the applications) and connects to a remote server over HTTP WebSockets. The server then forwards requests to the client over one of the established WebSockets.

Websocket Tunnelling

Alex Ellis' inlets combines a reverse proxy and WebSocket tunnels to expose internal and development endpoints to the public internet via an exit-node. An exit-node is a publicly reachable server on any public-cloud platform running the inlets server process.

Inlets — Exit Node on Public Cloud Platforms

Similar tools such as ngrok or Cloudflare's Argo Tunnel are closed-source, have limited built-in features, and can work out expensive. ngrok is also often banned by corporate firewall policies, which can make it unusable. Inlets aims to dynamically bind your local services to DNS entries, with automated TLS certificates, on a public IP address over a WebSocket tunnel.

Without inlets, a user might have to configure the required firewall rules for webhooks to reach applications in the internal network. This can be a daunting task, as keeping track of all the dynamic connections is practically impossible.

Scenario — Incoming Webhooks without Inlets

With inlets, all requests are sent from a publicly hosted exit-node (the inlets server) to an inlets client residing on-premises. Inlets acts as a traffic sink and is thus not prone to the usual kinds of abuse: it won't relay any requests out to the public internet. Instead, inlets suffers from the opposite problem of bringing traffic in.

Scenario — Incoming Webhooks with Inlets

As mentioned, by default a LoadBalancer Service is only functional on cloud-provider-backed Kubernetes clusters, not on privately hosted ones. The cleanest way to get traffic into a cluster is a load balancer, but it requires an external service, usually provided by GCP or AWS, that doesn't come with Kubernetes itself. With the inlets-operator, users can seamlessly get a public LoadBalancer for private Kubernetes Services without having to manage a full-fledged cluster on a public cloud platform.
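To make this concrete, a minimal sketch of such a Service is shown below. The name, labels, and ports are illustrative; on a private cluster without the operator this Service would simply sit in a pending state.

```yaml
# Hypothetical example: an nginx Service of type LoadBalancer.
# On a private cluster the external IP normally stays <pending>;
# with the inlets-operator installed, an exit-node is provisioned
# and its public IP is attached to this Service.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx        # assumes a matching nginx Deployment exists
  ports:
    - port: 80
      targetPort: 80
```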

inlets-operator

In a Kubernetes setting, inlets is implemented as an operator that makes use of a CRD (Custom Resource Definition) to manage tunnels and their components. With the inlets-operator, users can dynamically create an exit server on any of the supported platforms, such as DigitalOcean, Packet, or GCP. For each Service of type LoadBalancer, a dedicated server running the inlets server process is created on the chosen public cloud platform.

Inlets-Operator on Kubernetes

The inlets CRD shown below enables users to operate on (create/delete/get/describe) tunnels as an extension of the Kubernetes API.

Inlets-Operator Custom Resource Definition
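Because tunnels are ordinary custom resources, they can be inspected with the standard kubectl verbs. The tunnel name below is hypothetical; the operator derives real names from the Services it manages.

```sh
# Tunnels behave like any other Kubernetes resource once the CRD is installed.
kubectl get tunnels
kubectl describe tunnel nginx-tunnel   # "nginx-tunnel" is an illustrative name
kubectl delete tunnel nginx-tunnel    # tears the tunnel object down
```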

In this walkthrough, the operator and sample applications are deployed on a Kubernetes cluster running on an Intel NUC connected to a home private network. The inlets-operator is deployed as a Kubernetes Deployment and constantly polls the Kubernetes API for any Services of type LoadBalancer.

Operator Deployment

Operator Controller and Kube-apiserver


The inlets-operator supports multiple platforms (DigitalOcean, Packet, Scaleway, GCP); on these platforms users can make use of auto-provisioning of the required components to seamlessly initiate an exit-node. Trying out the inlets-operator with DigitalOcean:

Creating an API key for authentication:
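The token itself is generated in the DigitalOcean control panel and then handed to the operator as a Kubernetes secret. The secret name and file path below follow the inlets-operator README at the time of writing; verify them against the operator version you deploy.

```sh
# Save the personal access token from the DigitalOcean dashboard
# (API > Tokens/Keys > Generate New Token) into a local file, then
# store it as a secret the operator can read. The file name is illustrative.
kubectl create secret generic inlets-access-key \
  --from-file=inlets-access-key=./do-access-token.txt
```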