Rancher Labs have released Submariner, a new open-source project to enable network connectivity between Kubernetes clusters. This project aims to connect the overlay networks of individual Kubernetes clusters to form a "multicluster", which in turn facilitates inter-cluster communication and synchronisation between applications and distributed data stores.

All current Kubernetes deployments implement network virtualization, which enables applications that run on multiple nodes within the same cluster to communicate with each other over the cluster's internal networking space. Applications running on Kubernetes are packaged in containers and deployed within Pods, where all containers within a Pod share a local network namespace. Applications deployed within Pods can be made accessible throughout a cluster via Services, which expose applications via a ClusterIP, a NodePort or a load balancer. Pods that require network access to an application or service located within a different Kubernetes cluster must currently communicate through ingress controllers, external load balancers, or NodePorts.
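As a concrete illustration of the in-cluster exposure described above, the following is a minimal Service manifest; the names, labels and ports are hypothetical:

```yaml
# Illustrative only: a ClusterIP Service exposing Pods labelled app=api
# at a cluster-internal virtual IP on port 80.
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: ClusterIP        # the default; NodePort or LoadBalancer expose traffic externally
  selector:
    app: api             # selects the Pods backing this Service
  ports:
    - port: 80           # port served on the ClusterIP
      targetPort: 8080   # containerPort within the selected Pods
```

Changing `type` to `NodePort` or `LoadBalancer` is what allows traffic from outside the cluster, and is currently how cross-cluster communication is typically achieved.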

According to a recent blog post by Matthew Laufe Scheer, marketing manager at Rancher Labs, Submariner was launched to provide network connectivity for "microservices deployed in multiple Kubernetes clusters that need to communicate with each other", and enable "a host of new multi-cluster implementations, such as database replication within Kubernetes across geographic regions and deploying service mesh across clusters". Submariner provides a centralised "submariner broker" that manages the necessary gateways, network tunnels, and routes that are required to enable containers in different Kubernetes clusters to connect directly.

The basic architecture diagram for Submariner is as follows:

Rancher Labs Submariner Architecture (image from Submariner GitHub repo)

According to the release blog post, the key features of Submariner include:

Compatibility and connectivity with existing clusters: Users can deploy Submariner into existing Kubernetes clusters, adding Layer 3 network connectivity between pods in different clusters.

Secure paths: Encrypted network connectivity is implemented by default using IPsec tunnels (with pluggable connectivity mechanisms on the future roadmap).

Flexible service discovery: Submariner provides service discovery across multiple Kubernetes clusters.

Container Networking Interface (CNI) compatibility: Works with popular CNI drivers such as Flannel and Calico.

The two primary Submariner components that must be deployed within connected clusters are: "submariner" (operated as a Deployment), and "submariner-route-agent" (deployed as a DaemonSet). The submariner pods are run on "gateway nodes", and will perform leader election between the nodes to elect an active IPsec endpoint. The submariner-route-agent is run on every node, and is aware of the current gateway node leader. Upon startup, the submariner pod that is elected leader will perform a reconciliation process that ensures it is the sole endpoint for this cluster. Upon failure, another Submariner pod (on one of the other gateway hosts) will gain leadership and perform reconciliation to ensure it is the active leader.

Submariner consists of several components that use or integrate with Kubernetes Custom Resource Definitions (CRDs). CRDs are entities designed to facilitate the extension of functionality beyond that provided by Kubernetes itself. Submariner uses a central broker to facilitate the exchange of information and to synchronise CRDs between clusters. The "datastoresyncer" runs as a controller within the leader-elected submariner pod, and is responsible for performing a two-way synchronisation of Submariner CRDs between the central datastore and the local cluster.
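For readers unfamiliar with CRDs, the sketch below shows the general shape of a CustomResourceDefinition as it would have been registered against the apiextensions API at the time of writing. Note that this is a hypothetical example to illustrate the mechanism; it is not Submariner's actual schema, and the group and kind names are invented:

```yaml
# Hypothetical CRD sketch (not Submariner's real schema): registering a
# namespaced "Cluster" resource type under an invented API group.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusters.example.io     # must be <plural>.<group>
spec:
  group: example.io
  version: v1
  scope: Namespaced
  names:
    plural: clusters
    singular: cluster
    kind: Cluster
```

Once such a definition is registered, components like the datastoresyncer can create, watch and synchronise instances of the new resource type through the standard Kubernetes API machinery.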

Submariner currently has several prerequisites for deployment:

The running of at least three Kubernetes clusters, each with unique cluster IDs, and one of which must be designated to serve as the central broker that is accessible by all of the connected clusters.

Different cluster/service CIDR block usage between clusters (as well as different Kubernetes DNS suffixes), which is required in order to prevent traffic selector/policy/routing conflicts.

Direct IP connectivity between instances through the internet (or on the same network if not running Submariner over the internet). Submariner supports 1:1 NAT setups, but with a few caveats and provider-specific configuration instructions.

Knowledge of each cluster's network configuration.

A Helm (Kubernetes package manager) installation with a version that supports the crd-install hook (v2.12.1+).

When running in AWS, it is also necessary to disable source/destination checking on the instances that act as gateway hosts, to allow the instances to pass traffic for remote clusters.
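Once the prerequisites are met, installation is driven by Helm. The commands below are a hedged sketch of the flow only, assuming Helm v2 (hence the `--name` flag): the chart repository URL, release names and `--set` value keys are illustrative placeholders, so consult the Submariner README for the exact, current values.

```shell
# Sketch only -- repository URL and chart value names are placeholders.

# 1. On the cluster designated as the central broker:
helm repo add submariner-latest https://releases.rancher.com/submariner-charts/latest
helm install submariner-latest/submariner-k8s-broker \
  --name submariner-k8s-broker --namespace submariner-k8s-broker

# 2. On each connected cluster, supplying that cluster's unique ID and its
#    (non-overlapping) cluster and service CIDR blocks:
helm install submariner-latest/submariner \
  --name submariner --namespace submariner \
  --set submariner.clusterId="cluster-west" \
  --set submariner.clusterCidr="10.42.0.0/16" \
  --set submariner.serviceCidr="10.43.0.0/16"
```

The second step is repeated per cluster, which is also where the submariner Deployment (on labelled gateway nodes) and the submariner-route-agent DaemonSet described earlier are created.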

The announcement of Submariner also complements the recent release of "multi-cluster applications" in a preview version of the Rancher 2.2 platform. This new feature simultaneously deploys and upgrades copies of the same application across any number of Kubernetes clusters. Use cases for this include operators replicating applications for redundancy by running a Kubernetes cluster per cloud availability zone, as well as edge or IoT deployments where multiple copies of the same application run in each cluster.

InfoQ recently sat down with Sheng Liang, co-founder and CEO at Rancher Labs, and explored the motivations behind the creation of Submariner.

InfoQ: Can you explain how the Submariner release relates to the earlier "Multi-Cluster Kubernetes Applications" announcement, please?

Sheng Liang: Submariner is a natural extension to the earlier "Multi-Cluster Kubernetes Applications" technology. As users deploy applications across multiple Kubernetes clusters, they encounter the need to have pods in different clusters to communicate with each other. Without Submariner, pods in one cluster would have to connect to pods in another cluster through ingress controllers or node ports. Submariner simplifies pod-to-pod connectivity so that pods can directly connect to each other across different Kubernetes clusters.

InfoQ: How does Submariner compare to a service mesh like Istio, Linkerd or Consul Connect?

Liang: Submariner is not a service mesh. It only provides layer 3 network connectivity. It integrates with service meshes such as Istio, however. With Submariner, for example, Istio can be deployed across multiple Kubernetes clusters. Istio multi-cluster support requires pod-to-pod connectivity across clusters, the exact capability provided by Submariner. For details see the prerequisites section here: https://istio.io/docs/setup/kubernetes/multicluster-install/

InfoQ: How important will cross-cluster/cross-DC connectivity be for a typical enterprise organization?

Liang: Enterprise organizations often deploy separate Kubernetes clusters in multiple data centers and clouds. As they deploy applications and services across these clusters, cross-cluster network connectivity becomes a foundational requirement for application deployment. Cross-cluster networking connectivity simplifies application deployment because operators no longer need to set up complex network routing or load balancer rules between application components. Consider, for example, when an admin sets up a MySQL master in one cluster and a MySQL slave in another cluster. Submariner will allow these two pods to communicate with each other directly, which is required for setting up MySQL HA.

Liang also stated that Submariner could be used to provide multicluster networking within "edge" or IoT Kubernetes deployments. Deploying Kubernetes clusters at the edge is becoming increasingly popular; for example, Chick-fil-A talked about this at QCon New York in "Milking the Most out of 1000's of K8s Clusters", and a recent CNCF Technical Oversight Committee (TOC) meeting included a presentation that proposed the KubeEdge project for inclusion within the CNCF sandbox.

The Submariner GitHub repository README notes that Submariner should not yet be used for production purposes, and a timeline for a full release is not yet available. The Rancher Labs team "welcomes usage/experimentation with [Submariner, although] it is quite possible that you could run into severe bugs with it". Feedback can be provided via GitHub issues, Rancher forums or on Slack.