In this series, we will deploy a Java system with InfluxDB and Grafana on a Kubernetes cluster with Helm. The system is designed to reflect some real-world characteristics of cloud systems but at the same time will be simplified enough to serve this step-by-step series. The deployment will be able to run locally and on AWS with minimal code changes.

This is part 1 of the Helm by Example series. We will create a Kubernetes cluster running Helm that works both locally with minikube and on the AWS cloud with kops.

Introduction

The DevOps world evolves fast. Every few months, new tools show up, replacing other new tools. In this landscape, with every piece of DevOps work we do at Crossword, we realize we can do something differently, better. We have been using Kubernetes for quite some time now (see our article on how to set up a Kubernetes cluster on a bare-metal server) and we are impressed by the pace and direction in which Kubernetes is developing.

Recently, we decided to adopt the Helm tool. Before Helm, we used custom Fabric scripts that managed Kubernetes YAML files. Though the custom scripts worked fine for us and we had built a robust solution for our needs, we found good reasons to try Helm:

It’s a tool for Kubernetes, designed to integrate with this technology specifically.

Helm’s level of abstraction is closer to DevOps — it’s natural to reason about upgrades, scaling, deployments, backups, and platform verification when operating at the Helm abstraction level.

It offers a standardized way to install and set up a cluster and to perform certain operations.

Helm charts are a method for sharing application deployments and deployment patterns.

Before we proceed to deploying the system with Helm, we need a working Kubernetes cluster. We will create the cluster on minikube first, and later on AWS.

Local Cluster on minikube

I assume you have kubectl installed and working (see the official guide for installing kubectl). The next step is to install minikube; the procedure differs depending on the platform. For macOS users, I recommend using Homebrew:

$ brew cask install minikube





Next, start minikube:

$ minikube start





This command also creates a kubectl context named minikube that enables cluster access. Let's instruct kubectl to use this context:

$ kubectl config use-context minikube





Inspect cluster pods by running:

$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-5rhvr                1/1     Running   0          5m26s
kube-system   coredns-576cbf47c7-6cftt                1/1     Running   0          5m26s
kube-system   etcd-minikube                           1/1     Running   0          4m43s
kube-system   kube-addon-manager-minikube             1/1     Running   0          4m19s
kube-system   kube-apiserver-minikube                 1/1     Running   0          4m47s
kube-system   kube-controller-manager-minikube        1/1     Running   0          4m43s
kube-system   kube-proxy-vstsx                        1/1     Running   0          5m26s
kube-system   kube-scheduler-minikube                 1/1     Running   0          4m24s
kube-system   kubernetes-dashboard-5bff5f8fb8-zmm2s   1/1     Running   0          5m24s
kube-system   storage-provisioner                     1/1     Running   0          5m24s





At this point, we have a working local Kubernetes cluster, and we can access the Kubernetes dashboard with the minikube dashboard command.

The next step is to install Helm. Following these instructions, on macOS use:

$ brew install kubernetes-helm





Now we can install Helm’s server side into our Kubernetes cluster:

$ helm init
$HELM_HOME has been configured at /Users/jan.broniowski/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!





Helm relies on the Tiller pod, which is the server side of Helm. Let's verify that Tiller is running:

$ kubectl get pod -n kube-system | grep tiller
tiller-deploy-845cffcd48-z2mqz   1/1   Running   0   13m





Now we can use Helm commands as described in the documentation. For example, we can display the client and server versions:

$ helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}





At this stage, the Kubernetes cluster is configured on minikube and Helm is working. We have a local environment ready, so let's move on to the AWS part.

AWS Cluster

To create a Kubernetes cluster on AWS, we will use the kops tool.

Recently, AWS started offering a managed Kubernetes cluster — EKS. This solution looks promising, but the cost is significant: $0.20 per hour for the control plane, which comes to roughly $145 a month before you add any worker nodes. For smaller deployments, clusters created with kops work perfectly fine. So unless you run a big cluster or need specific features offered by EKS, a cluster created with kops is enough.

On macOS, kops is installed with Homebrew:

$ brew install kops





Verify that kops was installed correctly:

$ kops version
Version 1.10.0





Apart from kops, we will need the AWS CLI. Install the tool and set up your credentials by following the official AWS CLI guide. With the AWS CLI configured, we can start setting up the cluster.

User and IAM

First, let's create a dedicated IAM group for kops. I will name it kops.

$ aws iam create-group --group-name kops
{
    "Group": {
        "Path": "/",
        "GroupName": "kops",
        "GroupId": "AGP...XRS",
        "Arn": "arn:aws:iam::2...3:group/kops",
        "CreateDate": "2017-1..."
    }
}





The group will require the following policies:

$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops





Now that we have a group, let's create a user named kops-user dedicated to kops operations:

$ aws iam create-user --user-name kops-user
{
    "User": {
        "UserName": "kops-user",
        "Path": "/",
        "CreateDate": "2017-...Z",
        "UserId": "A...P",
        "Arn": "arn:aws:iam::4...2:user/kops-user"
    }
}





Now all that is left is to add the newly created user to the kops group:

$ aws iam add-user-to-group --user-name kops-user --group-name kops





The next step is to create a SecretAccessKey and an AccessKeyId for kops-user:

$ aws iam create-access-key --user-name kops-user
{
    "AccessKey": {
        "UserName": "kops-user",
        "Status": "Active",
        "CreateDate": "2017-...Z",
        "SecretAccessKey": "{secret-access-key}",
        "AccessKeyId": "{access-key-id}"
    }
}





Now it's time to decide in which AWS region you want to locate your cluster. Almost any region is fine, though if you are interested in exploring different load-balancing options later in the series, you will need at least two availability zones. Also, choosing a newer region with fewer available services, like Paris, might entail some limitations. For this tutorial, I will use the eu-central-1 region.

Now, configure your AWS CLI to use the generated access key. I recommend using a dedicated profile, which I named kops-profile, so that your default profile is not affected:

$ aws configure --profile kops-profile
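To make both the aws CLI and kops pick up the dedicated profile in your current shell, you can export the AWS_PROFILE variable, which is honored by the AWS CLI and by the AWS SDK that kops uses:

```shell
# Point subsequent aws and kops commands at the dedicated profile,
# leaving the default profile untouched.
export AWS_PROFILE=kops-profile
```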





We're getting closer — there's one more thing we should do before creating a cluster.

By default, kops will use the default ssh key from your environment. This is probably not what you want, and I advise creating a dedicated ssh key pair just for your kops user.

Let's generate an ssh key pair using ssh-keygen. I named the key kops-rsa-key:

$ ssh-keygen -t rsa -C "kops-rsa-key" -f kops-rsa-key
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in kops-rsa-key.
Your public key has been saved in kops-rsa-key.pub.
The key fingerprint is:
SHA...3 kops-rsa-key
The key's randomart image is:
...





We will use this key later in the tutorial.

For now, that's it when it comes to IAM and access. We can move on to a more interesting part — creating an actual cluster on AWS.

Cluster Creation

kops keeps the whole cluster configuration and state in S3 storage on AWS. We need to create an S3 bucket for this purpose:

$ aws s3api create-bucket --bucket cluster-state-store --region eu-central-1 --create-bucket-configuration LocationConstraint=eu-central-1
{
    "Location": "http://cluster-state-store.s3.amazonaws.com/"
}



As you can see, I named the bucket cluster-state-store. Keep in mind that the selected region, eu-central-1, is the region I chose for my cluster in this tutorial. In addition, let's enable versioning on this bucket:

$ aws s3api put-bucket-versioning --bucket cluster-state-store --versioning-configuration Status=Enabled





Now, I will create two environment variables that the kops commands will use. The first one is the path to the kops state store we just created:

export KOPS_STATE_STORE=s3://cluster-state-store





The second one is the cluster name:

export NAME=kops-cluster.k8s.local





I named the cluster kops-cluster.k8s.local. If you are reluctant to create the NAME variable, you can provide kops-cluster.k8s.local using the --name option when running kops commands.

The suffix .k8s.local is very important, because we will leverage a kops feature called a gossip-based cluster. This means the kops cluster will not need a DNS service for node discovery and other operations related to etcd (don't worry if you don't understand this part). Instead, the cluster will use a gossip communication pattern for distributed systems. The details are out of the scope of this tutorial, but you can read the short kops notes about gossip support. Internally, kops uses mesh as the distributed communication mechanism.

Finally, having the variables defined, we can proceed to cluster creation. As mentioned before, I will create the cluster in two zones, because we will need them in the next parts of this series; for now, you can proceed with only one zone. First, I have to check the existing availability zones in my region:

$ aws ec2 describe-availability-zones --region eu-central-1
{
    "AvailabilityZones": [
        {
            "State": "available",
            "ZoneName": "eu-central-1a",
            "Messages": [],
            "RegionName": "eu-central-1"
        },
        {
            "State": "available",
            "ZoneName": "eu-central-1b",
            "Messages": [],
            "RegionName": "eu-central-1"
        },
        {
            "State": "available",
            "ZoneName": "eu-central-1c",
            "Messages": [],
            "RegionName": "eu-central-1"
        }
    ]
}





There are three zones in my region; I will use eu-central-1a and eu-central-1b.

Finally, let's proceed with cluster creation:

$ kops create cluster --cloud=aws --zones=eu-central-1a,eu-central-1b --ssh-public-key=./kops-rsa-key.pub ${NAME}





kops will generate quite a long output containing all the details about the objects being created. At the end, you should see the following:

Must specify --yes to apply changes

Cluster configuration has been created.

Suggestions:
 * list clusters with: kops get cluster
 * edit this cluster with: kops edit cluster kops-cluster.k8s.local
 * edit your node instance group: kops edit ig --name=kops-cluster.k8s.local nodes
 * edit your master instance group: kops edit ig --name=kops-cluster.k8s.local master-eu-central-1a

Finally configure your cluster with: kops update cluster kops-cluster.k8s.local --yes





Let's follow the kops suggestion and execute:

$ kops update cluster ${NAME} --yes





The output will contain information similar to the following:

Cluster is starting. It should be ready in a few minutes.

Suggestions:
 * validate cluster: kops validate cluster
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.kops-cluster.k8s.local
 * the admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
 * read about installing addons at: https://github.com/kubernetes/kops/blob/master/docs/addons.md

Congratulations, you created your Kubernetes cluster on AWS!



The next step is to make sure all is fine with the cluster.

Cluster Validation

Validate the cluster by issuing a simple kops command:

$ kops validate cluster
Using cluster from kubectl context: kops-cluster.k8s.local

Validating cluster kops-cluster.k8s.local

INSTANCE GROUPS
...
NODE STATUS
...

Your cluster kops-cluster.k8s.local is ready





I shortened the output, but the last line is important. If you don't see that information, wait 5-10 minutes and try again (cluster creation can take a while).
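If you prefer to script this wait instead of re-running the command by hand, a small polling helper works. This is a sketch of my own: wait_for is a made-up name, and the interval and attempt counts are arbitrary choices, not anything kops provides:

```shell
# Retry the given command until it succeeds, up to 20 attempts.
# WAIT_INTERVAL (seconds) defaults to 30; override it for faster polling.
wait_for() {
  attempts=0
  until "$@"; do
    attempts=$((attempts + 1))
    [ "$attempts" -ge 20 ] && return 1
    sleep "${WAIT_INTERVAL:-30}"
  done
}

# Usage against a real cluster:
#   wait_for kops validate cluster
```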



Now, let's use Kubernetes commands to verify the cluster. Remember to set up the kubectl context properly:

$ kubectl config current-context
kops-cluster.k8s.local





We can check the status of cluster nodes:

$ kubectl get nodes
NAME                                   STATUS   ROLES    AGE   VERSION
ip-....eu-central-1.compute.internal   Ready    node     3m    v1.10.3
ip-....eu-central-1.compute.internal   Ready    master   3m    v1.10.3
ip-....eu-central-1.compute.internal   Ready    node     3m    v1.10.3





And the status of all pods in the cluster:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   dns-controller                             1/1     Running   0          4m
kube-system   etcd-server-events-.....compute.internal   1/1     Running   0          4m
kube-system   etcd-server-ip-....compute.internal        1/1     Running   0          3m
kube-system   kube-apiserver-....compute.internal        1/1     Running   1          4m
kube-system   kube-controller-manager-....internal       1/1     Running   0          2m
kube-system   kube-dns-...                               3/3     Running   0          2m
kube-system   kube-dns-...                               3/3     Running   0          4m
kube-system   kube-dns-autoscaler-...                    1/1     Running   0          4m
kube-system   kube-proxy-ip-....compute.internal         1/1     Running   0          2m
kube-system   kube-proxy-ip-....compute.internal         1/1     Running   0          3m
kube-system   kube-proxy-ip-....compute.internal         1/1     Running   0          2m
kube-system   kube-scheduler-....compute.internal        1/1     Running   0          3m





You should see Ready and Running statuses.



Now you have a fully functional Kubernetes cluster operating on AWS. You can navigate to the AWS EC2 console, where two worker nodes and one master node will be visible.

Helm on AWS

Thanks to Kubernetes, our local cluster on minikube and our cloud cluster on AWS are very similar. The installation process for Helm is exactly the same:

$ helm init
...
Happy Helming!





Verify the installation with $ kubectl get pod -n kube-system | grep tiller and with $ helm version. For more details, refer to the minikube part of this tutorial.

At this point, we have reached the same state as we did with the local minikube cluster, which was the purpose of this guide. In the next sections, I will describe one useful addon and show you how to clean up and delete the cluster when needed.

Cluster Addons

There are a lot of useful addons for Kubernetes clusters. This topic is out of scope for now, but I will mention a very popular and official extension — the Kubernetes dashboard. Follow the official guide on how to install the dashboard.

In essence, you have to install the dashboard:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.10.0.yaml





Then, serve the dashboard with:

$ kubectl proxy
Starting to serve on 127.0.0.1:8001





The dashboard will be available under https://<api-server-host>/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard (where <api-server-host> is your cluster's API server address) or, with kubectl proxy running, under http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ .

Since Kubernetes 1.8, RBAC is enabled by default, and in order to log in you have to provide a token or a kubeconfig file. Follow this guide for instructions on token generation and obtaining access to the dashboard.
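For reference, here is a rough sketch of the token flow wrapped in a helper function. This is my own illustration, not the guide's exact steps: the service account name dashboard-admin and the generate_dashboard_token helper are made up, and binding cluster-admin is convenient for a tutorial but far too broad for production use:

```shell
# Sketch: create a service account, grant it cluster-admin, and print
# the token of the secret Kubernetes generates for it. All names here
# are illustrative, not from the official dashboard guide.
generate_dashboard_token() {
  kubectl create serviceaccount dashboard-admin -n kube-system
  kubectl create clusterrolebinding dashboard-admin \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:dashboard-admin
  # The generated secret is named dashboard-admin-token-<suffix>;
  # grab the first matching name and show its details (incl. the token).
  secret=$(kubectl -n kube-system get secret | awk '/dashboard-admin/ {print $1; exit}')
  kubectl -n kube-system describe secret "$secret"
}
```

Paste the printed token into the dashboard login screen.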

Cleaning Up

Keeping up a running cluster that is not doing any meaningful work makes no sense.

If you plan to follow the next parts of the tutorial soon, you can scale down the worker nodes instead. To do that, issue:

$ kops edit ig --state=s3://cluster-state-store nodes





In the editor, set the min and max number of instances to 0. Then, update the cluster with the new configuration:

$ kops update cluster kops-cluster.k8s.local --state=s3://cluster-state-store --yes
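For reference, the fields you set in the editor correspond roughly to this fragment of the instance group spec (field names per the kops InstanceGroup API; surrounding fields omitted):

```yaml
spec:
  maxSize: 0
  minSize: 0
```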





You can verify that worker nodes were removed with:

$ kubectl get nodes





Disabling worker nodes makes sense if you plan to follow up on the tutorial soon. Otherwise, just delete the cluster.

To delete the cluster, issue:

$ kops delete cluster --name ${NAME}





where ${NAME} is my cluster name, kops-cluster.k8s.local. After inspecting the output and making sure that these are the things you want to delete, confirm the deletion of the cluster with:

$ kops delete cluster --name ${NAME} --yes





The delete operation can take a while, and kops will try to delete some resources a couple of times; this is normal and expected, so don't be surprised. The list of deleted resources should match the output from the cluster installation process.

Summary of Part 1

Congratulations, you managed to create a Kubernetes cluster with a working Helm tool, both locally and in the AWS cloud. You learned how to create the cluster on AWS with kops. Having an almost identical local cluster and remote cluster can be very useful, and we will rely on these clusters later in the series when we start to deploy and configure applications. In part 2, I will present the system we will be developing, and we will perform the first deployments to the clusters.