Service interruptions caused by outages have severe consequences that range from harming a provider’s reputation to frustrating a customer to the point that they move to a competitor, so it’s extremely important to build and run resilient systems. Resiliency must be implemented at multiple levels from the bottom infrastructure layer to the application. For our customers, it’s a necessity that the EKS clusters they deploy their production applications onto be highly available.

With Pipeline, our customers can provision EKS clusters in any AWS region across multiple Availability Zones by creating node pools in subnets that belong to different Availability Zones.

AWS Regions provide multiple physically separated and isolated Availability Zones, which are connected with low-latency, high-throughput, and highly redundant networking.

The EKS control plane is configured to run on multiple subnets, which are located in at least two AZs to ensure high availability. With this setup, if an outage hits an entire AZ:

- the EKS control plane is not affected, as its nodes in the affected AZ are automatically started in the unaffected AZs.

- workloads running on worker nodes in the affected AZ are rescheduled to healthy Kubernetes nodes that belong to node pools in the unaffected AZs.
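Rescheduling restores capacity only after the outage is detected, so to keep serving traffic during a zone failure an application's replicas should already be spread across AZs. A minimal sketch, assuming a hypothetical Deployment named my-app (the zone label is the same one the worker nodes carry):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # Prefer placing replicas in different AZs, so a zone outage
          # takes down at most one of them.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: my-app
              topologyKey: failure-domain.beta.kubernetes.io/zone
      containers:
      - name: my-app
        image: nginx
```

With this preference in place, the scheduler spreads the replicas across the three node pools, one per AZ.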

Note: the worker nodes of a node pool are assigned to a single subnet rather than to multiple subnets in different AZs. The reasoning behind this is that the Kubernetes cluster autoscaler does not support Auto Scaling Groups which span multiple AZs.

Resiliency levels can be further increased through the multi-cluster, multi-cloud, and service mesh solutions offered by Pipeline, which make it possible to deploy and operate production applications across multiple regions and cloud providers. These solutions can be used in isolation, but they can also be combined to implement complex high availability plans.

Install the banzai CLI in order to run the example commands listed below for setting up multi-AZ EKS clusters.

Create a Multi-AZ EKS into existing VPC and Subnets 🔗︎

The command below creates an EKS cluster with worker nodes distributed across existing subnets, and supports various advanced use cases that require spreading worker nodes across multiple subnets and AZs. The subnets must have routing configured that allows worker nodes outbound access to the internet.

```shell
banzai cluster create <<EOF
{
  "name": "eks-cluster",
  "location": "us-east-2",
  "cloud": "amazon",
  "secretName": "eks-aws-secret",
  "properties": {
    "eks": {
      "version": "{{eks-version}}",
      "nodePools": {
        "pool1": {
          "spotPrice": "0.2",
          "count": 1,
          "minCount": 1,
          "maxCount": 2,
          "autoscaling": true,
          "instanceType": "t2.medium",
          "subnet": {
            "subnetId": "{{subnet-us-east-2a-zone}}"
          }
        },
        "pool2": {
          "spotPrice": "0.2",
          "count": 1,
          "minCount": 1,
          "maxCount": 2,
          "autoscaling": true,
          "instanceType": "c5.large",
          "subnet": {
            "subnetId": "{{subnet-us-east-2b-zone}}"
          }
        },
        "pool3": {
          "spotPrice": "0.2",
          "count": 1,
          "minCount": 1,
          "maxCount": 2,
          "autoscaling": true,
          "instanceType": "c5.xlarge",
          "subnet": {
            "subnetId": "{{subnet-us-east-2c-zone}}"
          }
        }
      },
      "vpc": {
        "vpcId": "{{vpc-id}}"
      },
      "subnets": [
        {
          "subnetId": "{{subnet-us-east-2a-zone}}"
        },
        {
          "subnetId": "{{subnet-us-east-2b-zone}}"
        },
        {
          "subnetId": "{{subnet-us-east-2c-zone}}"
        }
      ]
    }
  }
}
EOF
```

The created EKS cluster makes use of the subnets listed under the subnets section. This list should also contain subnets that might be used in the future (e.g. additional subnets used by new node pools that the cluster is later expanded with). Node pools are created in the subnet that is specified for the node pool in the payload. If no subnet is specified for a node pool, it is created in one of the subnets from the subnets list. Multiple node pools can share the same subnet.
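For instance, a node pool declared without a subnet field is placed into one of the subnets from that list. A hypothetical fragment (pool4 is illustrative and not part of the payload above):

```json
"nodePools": {
  "pool4": {
    "spotPrice": "0.2",
    "count": 1,
    "minCount": 1,
    "maxCount": 2,
    "autoscaling": true,
    "instanceType": "t2.medium"
  }
}
```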

Create a Multi-AZ EKS into new VPC and Subnets 🔗︎

The command below creates an EKS cluster with worker nodes distributed across multiple subnets created by Pipeline. This functionality only differs from Create a Multi-AZ EKS into existing VPC and Subnets in that the EKS cluster is not created on existing infrastructure; instead, Pipeline provisions the infrastructure for the cluster.

```shell
banzai cluster create <<EOF
{
  "name": "eks-cluster",
  "location": "us-east-2",
  "cloud": "amazon",
  "secretName": "eks-aws-secret",
  "properties": {
    "eks": {
      "version": "{{eks-version}}",
      "nodePools": {
        "pool1": {
          "spotPrice": "0.2",
          "count": 1,
          "minCount": 1,
          "maxCount": 3,
          "autoscaling": true,
          "instanceType": "t2.medium",
          "subnet": {
            "cidr": "192.168.64.0/20",
            "availabilityZone": "us-east-2a"
          }
        },
        "pool2": {
          "spotPrice": "0.2",
          "count": 1,
          "minCount": 1,
          "maxCount": 2,
          "autoscaling": true,
          "instanceType": "c5.large",
          "subnet": {
            "cidr": "192.168.80.0/20",
            "availabilityZone": "us-east-2b"
          }
        },
        "pool3": {
          "spotPrice": "0.2",
          "count": 1,
          "minCount": 1,
          "maxCount": 2,
          "autoscaling": true,
          "instanceType": "c5.xlarge",
          "subnet": {
            "cidr": "192.168.96.0/20",
            "availabilityZone": "us-east-2c"
          }
        }
      },
      "vpc": {
        "cidr": "192.168.0.0/16"
      },
      "subnets": [
        {
          "cidr": "192.168.64.0/20",
          "availabilityZone": "us-east-2a"
        },
        {
          "cidr": "192.168.80.0/20",
          "availabilityZone": "us-east-2b"
        }
      ]
    }
  }
}
EOF
```

The VPC and all subnet definitions are collected from the payload and created by Pipeline with the provided parameters.
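Since Pipeline creates the VPC and subnets exactly as specified, it is worth sanity-checking the CIDR layout before submitting the payload: every subnet CIDR must fall inside the VPC CIDR, and no two subnets may overlap. A quick local check with the values used above (this snippet is illustrative and not part of the banzai CLI):

```shell
# Sanity-check the CIDRs from the payload above: every subnet must fall
# inside the VPC CIDR and no two subnets may overlap.
python3 - <<'EOF'
from ipaddress import ip_network

vpc = ip_network("192.168.0.0/16")
subnets = [ip_network(c) for c in
           ("192.168.64.0/20", "192.168.80.0/20", "192.168.96.0/20")]

assert all(s.subnet_of(vpc) for s in subnets), "subnet outside VPC"
assert not any(a.overlaps(b)
               for i, a in enumerate(subnets)
               for b in subnets[i + 1:]), "overlapping subnets"
print("CIDR layout OK")
EOF
```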

Verify that worker nodes are in different AZs 🔗︎

```shell
$ banzai cluster shell --cluster-name eks-cluster
INFO[0002] Running /usr/local/bin/bash
[eks-cluster] $
```

The following command lists the failure-domain.beta.kubernetes.io/region and failure-domain.beta.kubernetes.io/zone labels of the nodes:

```shell
[eks-cluster] $ kubectl get nodes -o json | jq -r '.items[].metadata | "\(.name)\t\(.labels["failure-domain.beta.kubernetes.io/region"])\t\(.labels["failure-domain.beta.kubernetes.io/zone"])"'
ip-192-168-77-16.us-east-2.compute.internal     us-east-2   us-east-2a
ip-192-168-81-164.us-east-2.compute.internal    us-east-2   us-east-2b
ip-192-168-97-83.us-east-2.compute.internal     us-east-2   us-east-2c
```

As we can see, the worker nodes are located in different AZs of the us-east-2 region.
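This check can also be scripted, for instance to gate a deployment pipeline. The jq filter below counts distinct zone labels; here it runs against a trimmed-down sample of the kubectl output rather than a live cluster:

```shell
# Count distinct AZs among the nodes; a multi-AZ cluster yields 2 or more.
# (Against a live cluster, replace the heredoc with: kubectl get nodes -o json)
cat <<'JSON' | jq '[.items[].metadata.labels["failure-domain.beta.kubernetes.io/zone"]] | unique | length'
{"items":[
  {"metadata":{"labels":{"failure-domain.beta.kubernetes.io/zone":"us-east-2a"}}},
  {"metadata":{"labels":{"failure-domain.beta.kubernetes.io/zone":"us-east-2b"}}},
  {"metadata":{"labels":{"failure-domain.beta.kubernetes.io/zone":"us-east-2c"}}}
]}
JSON
```

Note that on Kubernetes 1.17 and later, the failure-domain.beta.kubernetes.io labels are deprecated in favor of topology.kubernetes.io/region and topology.kubernetes.io/zone.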

About Banzai Cloud Pipeline 🔗︎

Banzai Cloud’s Pipeline provides a platform for enterprises to develop, deploy, and scale container-based applications. It leverages best-of-breed cloud components, such as Kubernetes, to create a highly productive, yet flexible environment for developers and operations teams alike. Strong security measures — multiple authentication backends, fine-grained authorization, dynamic secret management, automated secure communications between components using TLS, vulnerability scans, static code analysis, CI/CD, and so on — are default features of the Pipeline platform.