In this blog post, we will take a look at VMware Wavefront, a recent acquisition and a leading SaaS-based metrics and monitoring solution for Cloud Native Applications. Wavefront supports monitoring Kubernetes (K8S) along with many other applications, but what is really neat is that it not only gives us deeper metrics about our K8S infrastructure (clusters, pods, namespaces and containers), it can also help developers instrument their applications for metric collection and monitoring, providing a complete end-to-end view. The Wavefront team also recently published this blog post, which includes a nice video demo of their integration with VMware PKS.

If you missed any of the previous articles, you can find the complete list here:

Step 1 - Sign up for a free 30-day trial of Wavefront here and sign in after you receive your email invitation.



Step 2 - Once logged in, you will be taken to the "Getting Started" page where you will need to select and configure an integration type. Go ahead and click on the Kubernetes icon, which should be at the top of the screen.



Step 3 - On this page, you will be given instructions for setting up a Wavefront proxy, proxy service and Heapster, all of which run as Pods within the PKS-managed K8S Cluster that you wish to monitor. Since the Wavefront proxy runs within the K8S Cluster, you will need to make sure the nodes can actually connect out to the internet and reach the Wavefront service. If you do not have direct internet access (which most customers do not in a Production environment), you can set up a proxy host which does have access. For more details, please see the documentation here.

The nice thing about the Wavefront K8S integration is that it provides the YAML snippets as well as the commands needed to deploy the Pods for your setup. To easily identify your K8S Cluster by name within the Wavefront UI, update the clusterName parameter (e.g. wavefront:wavefront-proxy.default.svc.cluster.local:2878?clusterName=k8s-cluster-01&includeLabels=true) when creating the Heapster Pod.
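For reference, that sink URL is passed to the Heapster container as a --sink flag. Below is a rough sketch of the relevant portion of heapster.yaml; the container name and image tag are illustrative placeholders, so use the exact snippet the Wavefront UI generates for your setup:

```yaml
# Illustrative fragment of heapster.yaml -- only the sink flag is the point here
containers:
- name: heapster                              # name/image are placeholders
  image: k8s.gcr.io/heapster-amd64:v1.5.2
  command:
  - /heapster
  - --source=kubernetes.summary_api:''
  # clusterName tags every metric so this cluster is easy to find in the Wavefront UI
  - --sink=wavefront:wavefront-proxy.default.svc.cluster.local:2878?clusterName=k8s-cluster-01&includeLabels=true
```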



When deploying the Wavefront Heapster Pod, I found that the default K8S Cluster already had an unused Heapster Pod instance running, and you may need to run the following three commands to get it deployed successfully:

kubectl create -f heapster.yaml

kubectl delete -f heapster.yaml

kubectl create -f heapster.yaml
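If you prefer a single command over the create/delete/create sequence, kubectl's replace with the --force flag deletes and then recreates the resources in one step. Note this assumes the objects already exist (i.e. the first create has been run), so verify the behavior in your own environment:

```shell
# Force-replace: deletes the existing Heapster objects defined in the
# manifest and recreates them, equivalent to the delete/create pair above
kubectl replace --force -f heapster.yaml
```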

Step 4 - To verify that the Wavefront proxy has successfully connected to the Wavefront service, we can check the logs of the Wavefront Pod. First, we need to retrieve the ID by running the following command:

kubectl get pod

To view the logs, simply run the following command along with the Wavefront Pod Id:

kubectl logs [WAVEFRONT-POD-ID]
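If you would rather not copy the Pod ID by hand, the name can be looked up and fed to kubectl logs in one line. The app=wavefront-proxy label here is an assumption, so adjust the selector to match the labels used in your proxy YAML:

```shell
# Grab the first Pod matching the (assumed) wavefront-proxy label and show its logs
kubectl logs $(kubectl get pod -l app=wavefront-proxy -o jsonpath='{.items[0].metadata.name}')
```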

What you are looking for is the following entry in the logs which indicates a successful connection was made:

2018-04-20 22:27:07,947 INFO [agent:readOrCreateDaemonId] Proxy Id created: 3c1c3cd5-c4d4-4193-9a60-ae71468f8379

Step 5 - After we have confirmed a successful connection, we can head back to the Wavefront UI to view our data. Click on the Dashboards tab; once metrics have been received, you should see a new link called "Kubernetes Metrics" which you can launch to view your data. It may take a few minutes for the link to show up, and I had to refresh a few times before I saw it.



Here is a screenshot of the data from my K8S Cluster, which as you can see goes all the way down to the application that I had deployed, pretty cool!



Similar to both vRLI and vROps, the Wavefront integration is optional and can be configured as a post-deployment task as outlined above. I can imagine that in the future, the workflow could be as simple as toggling a checkbox to enable the Wavefront configuration. Customers would only need to specify the Wavefront URL (whether that points directly to the Wavefront SaaS service or to an on-premises Wavefront Proxy Appliance which can reach the service) and PKS would automatically deploy the respective Wavefront Pods and wire everything up. That surely would be a fantastic user experience, right? 🙂