With Kubernetes gaining traction, more and more teams are looking to use it. Recently, AWS announced the release of Amazon EKS (Elastic Kubernetes Service), which means we can now deploy Kubernetes in AWS more or less as a managed service. I say more or less because AWS takes care of managing the Kubernetes control plane (the master nodes), but you have to manage the worker nodes (which you can launch as EC2 instances in one or more Auto Scaling Groups).

Launching an AWS EKS cluster has quite a few steps, since you have to first create a VPC, subnets, IAM roles and other AWS resources.

Simplifying Kubernetes cluster creation in AWS EKS

In order to quickly spin up Kubernetes clusters (in a repeatable and automated fashion), we can use an open source tool created by Adobe named ops-cli, along with Terraform from HashiCorp. Terraform supports deploying a Kubernetes cluster in AWS via Amazon EKS. We use ops-cli to add a templating layer on top of this AWS EKS terraform module, so that we can re-use it and deploy multiple Kubernetes clusters across different regions/environments.

Once the Kubernetes cluster is up and running, we want to install some common packages before deploying our own apps. These can include: cluster-autoscaler, logging (eg. Fluentd), metrics (eg. Prometheus), tracing (eg. New Relic), continuous deployment (eg. Spinnaker) and so forth. Luckily, these are all already available, packaged as Helm charts (https://github.com/helm/charts/tree/master/stable).

What’s nice about this is that we can use Terraform to deploy Helm charts inside our newly created Kubernetes cluster. This can be achieved via the Helm Terraform provider (https://github.com/terraform-providers/terraform-provider-helm). ops-cli comes in handy to minimize code duplication when deploying these common Helm packages via Terraform.
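To give a flavour of what that looks like, here is a minimal, hypothetical sketch of deploying one such chart through the Helm Terraform provider. The kubeconfig path and chart values below are purely illustrative; in the example repo this wiring is generated by ops-cli.

# Point the Helm provider at the newly created cluster (illustrative kubeconfig path).
provider "helm" {
  kubernetes {
    config_path = "~/.kube/my-kubernetes-cluster.config"
  }
}

# Deploy the cluster-autoscaler chart from the stable repository into kube-system.
resource "helm_release" "cluster_autoscaler" {
  name       = "cluster-autoscaler"
  repository = "https://kubernetes-charts.storage.googleapis.com"
  chart      = "cluster-autoscaler"
  namespace  = "kube-system"

  # Chart values are passed via set blocks (this value is illustrative).
  set {
    name  = "autoDiscovery.clusterName"
    value = "my-kubernetes-cluster"
  }
}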

There’s a fully working example on the Adobe GitHub page, which deploys a Kubernetes cluster in AWS using ops-cli + terraform + helm, along with the aforementioned services inside the Kubernetes cluster itself: https://github.com/adobe/ops-cli/tree/master/examples/aws-kubernetes

The example follows the official terraform guide (https://learn.hashicorp.com/terraform/aws/eks-intro) for spinning up an EKS cluster, on top of which we add a layer of templating to make it easy to create multiple clusters (eg. in multiple environments). Furthermore, we use the terraform helm provider to install some common services in the Kubernetes cluster (examples include kube2iam, dashboard, metrics etc.).

git clone https://github.com/adobe/ops-cli.git
cd ops-cli/examples/aws-kubernetes

# For MacOS/Linux, installs prerequisites (helm, terraform, kubectl etc.)
./update.sh

Configure AWS profile

aws configure --profile my-aws-profile
# AWS Access Key ID [None]:
# AWS Secret Access Key [None]:
# Default region name [None]: us-east-1

Creating a new Kubernetes cluster

You can customize the cluster definition (eg. cluster name, region to deploy to etc.) via the conf file:

vim clusters/my-kubernetes-cluster.yaml

The following commands create the AWS EKS cluster and the worker Auto Scaling Group (ASG). Under the hood, ops invokes terraform, which makes the API calls to AWS that bring up the EKS control plane and the worker nodes.

ops clusters/my-kubernetes-cluster.yaml terraform --path-name aws-eks plan

ops clusters/my-kubernetes-cluster.yaml terraform --path-name aws-eks apply

At the time of this writing, it takes up to 15 minutes for AWS to create the Kubernetes resources.
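For reference, here is a heavily trimmed, hypothetical sketch of the two core resources behind this step: the EKS control plane and the worker Auto Scaling Group. The real aws-eks composition in the example repo also creates the VPC, subnets, IAM roles, security groups and the worker launch configuration, which are assumed to exist here as input variables.

variable "cluster_name" {}
variable "master_role_arn" {}
variable "worker_launch_config" {}

variable "subnet_ids" {
  type = "list"
}

# The EKS control plane (master nodes), managed by AWS.
resource "aws_eks_cluster" "k8s" {
  name     = "${var.cluster_name}"
  role_arn = "${var.master_role_arn}"

  vpc_config {
    subnet_ids = "${var.subnet_ids}"
  }
}

# The worker nodes: EC2 instances in an Auto Scaling Group.
resource "aws_autoscaling_group" "workers" {
  name                 = "${var.cluster_name}-workers"
  launch_configuration = "${var.worker_launch_config}"
  vpc_zone_identifier  = "${var.subnet_ids}"
  min_size             = 1
  max_size             = 4
}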

At the end of this step, terraform generates two outputs:

a ConfigMap used by worker nodes to authenticate to the K8s master

a kube config file used from your local machine to connect to the Kubernetes cluster

Check that kubectl works with the new cluster

The previous step should have generated a kube config file for the new Kubernetes cluster. Check that it exists (and that it points to the right cluster).

cat `pwd`/clusters/kubeconfigs/stage-mykubernetescluster.config
export KUBECONFIG=`pwd`/clusters/kubeconfigs/stage-mykubernetescluster.config

kubectl get pods --all-namespaces
# NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
# kube-system   coredns-7bcbfc4774-4md27   0/1     Pending   0          9m
# kube-system   coredns-7bcbfc4774-nrd7p   0/1     Pending   0          9m

Check that the worker nodes have joined the Kubernetes cluster:

kubectl get nodes
# NAME                           STATUS   ROLES    AGE   VERSION
# ip-10-91-56-36.ec2.internal    Ready    <none>   2m    v1.11.5
# ip-10-91-57-197.ec2.internal   Ready    <none>   2m    v1.11.5

Add Kubernetes components (via Helm charts)

To configure additional services inside the Kubernetes cluster, we'll use the Helm package manager. The example will install the following helm charts:

cluster autoscaler (for the worker nodes)

metrics (kube-state-metrics + dashboard)

kube2iam (for associating AWS IAM roles with Kubernetes pods)

You can add your own helm charts quite easily; just add a terraform file for each chart in the helm/ folder (an example is sketched further below).

a. Install Helm (Tiller) inside the Kubernetes cluster

Tiller is the Helm daemon that will run in the Kubernetes cluster, in its own pod. The following helm-init plan/apply will install it.

ops clusters/my-kubernetes-cluster.yaml terraform --path-name helm-init plan

ops clusters/my-kubernetes-cluster.yaml terraform --path-name helm-init apply
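If you are curious what a Tiller install via Terraform can look like, below is a hypothetical sketch of one common approach; the actual helm-init composition in the example repo may wire this differently, and the resource names plus the kubeconfig path are illustrative.

provider "kubernetes" {
  config_path = "~/.kube/my-kubernetes-cluster.config"
}

# Service account for Tiller, bound to cluster-admin (the RBAC can be made stricter).
resource "kubernetes_service_account" "tiller" {
  metadata {
    name      = "tiller"
    namespace = "kube-system"
  }
}

resource "kubernetes_cluster_role_binding" "tiller" {
  metadata {
    name = "tiller"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    kind      = "ServiceAccount"
    name      = "tiller"
    namespace = "kube-system"
  }
}

# Pre-Helm 3 versions of the terraform helm provider can install Tiller themselves.
provider "helm" {
  install_tiller  = true
  service_account = "tiller"

  kubernetes {
    config_path = "~/.kube/my-kubernetes-cluster.config"
  }
}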

b. Install helm charts

ops clusters/my-kubernetes-cluster.yaml terraform --path-name helm plan

ops clusters/my-kubernetes-cluster.yaml terraform --path-name helm apply

Note that you can easily add helm charts that you want installed in your Kubernetes cluster (eg. prometheus, grafana, splunk, etc.). Just add these in the compositions/generic/helm folder.
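For instance, a hypothetical prometheus.tf dropped into that folder could be as small as the following; the helm provider itself is already configured by the existing composition, and the chart value shown is illustrative.

# Deploy the prometheus chart from the stable repository.
resource "helm_release" "prometheus" {
  name       = "prometheus"
  repository = "https://kubernetes-charts.storage.googleapis.com"
  chart      = "prometheus"
  namespace  = "kube-system"

  # Example of overriding a chart value (illustrative).
  set {
    name  = "server.persistentVolume.enabled"
    value = "false"
  }
}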

At this point you should be up and running!

$ helm list
# NAME                 REVISION  UPDATED               STATUS    CHART                       APP VERSION  NAMESPACE
# cluster-autoscaler   1         Feb 2 16:54:16 2019   DEPLOYED  cluster-autoscaler-0.9.0    1.12.0       kube-system
# dashboard            1         Feb 2 16:54:16 2019   DEPLOYED  kubernetes-dashboard-0.8.0  1.10.0       kube-system
# kube-state-metrics   1         Feb 2 16:54:16 2019   DEPLOYED  kube-state-metrics-0.12.1   1.4.0        kube-system
# kube2iam             1         Feb 2 16:54:16 2019   DEPLOYED  kube2iam-0.9.1              0.10.0       kube-system

Cluster decommissioning

To decommission an existing cluster, issue terraform destroy commands via ops. It is very important to destroy the helm resources before destroying the underlying AWS worker nodes and the AWS EKS control plane; this way, external AWS resources created by helm for Kubernetes consumption (eg. load balancers) also get destroyed.

ops clusters/my-kubernetes-cluster.yaml terraform --path-name helm destroy

ops clusters/my-kubernetes-cluster.yaml terraform --path-name aws-eks destroy

— —

Let me know if you enjoyed this article by hitting the applause button or leaving a comment below. It would mean a lot.