This past week and a half, I have been spending quite a bit of time familiarizing myself with the recently released VMware Pivotal Container Service solution, also referred to as VMware PKS for short (yes, that is a K not a C, which is a nod to Google's container scheduler Kubernetes). VMware PKS is part of a project that I am currently working on, and I figured I would share the process and steps I took to deploy VMware PKS in my own personal lab, in case other folks are interested in trying out this neat and powerful solution for deploying Cloud Native Apps using Kubernetes, which was co-developed between VMware, Pivotal and Google.

If you would like to learn more about this first release of VMware PKS and the benefits it provides to both developers (consumers) and operators (admins/SRE) for Kubernetes infrastructure, check out this blog post here. Merlin Glynn, one of the Product Managers for PKS, also did an awesome lightboard video overview of VMware PKS if you want the CliffsNotes version. If you simply want to give PKS a try without deploying anything, the CNA folks have also published a PKS HOL which you can find here. Another useful resource is the Getting Started with Kubernetes-as-a-Service post from Michael West, who works on the CNA team and built the PKS HOL.



This will be the first in a series of articles outlining my VMware PKS deployment and configuration, which hopefully can benefit others, as it took me several attempts while learning about the solution. Although the first few articles will include manual guidance, rest assured, there will be some cool automation towards the end, but I figure folks may want to go through this once by hand to get a good understanding of all the different components and how they interact with each other. Plus, some of the PKS-specific automation is still being worked on by the product team, and hopefully I will be able to share some of that real soon.

If you missed any of the previous articles, you can find the complete list here:

Components:

Compute - vSphere 6.5+ (vCenter Server + ESXi) and Enterprise Plus license

Storage - VSAN or other vSphere Datastores + Project Hatchway for Persistent Storage Volumes

Networking & Security - NSX-T 2.1 (License included as part of PKS SKU)

PKS - Composed of three primary Virtual Machines, plus an optional fourth:

Ops Manager - Web interface for deploying and managing the BOSH Director and PKS Control Plane VMs

BOSH Director - Deploys and manages Kubernetes Clusters

PKS Control Plane - North-bound API for interacting with PKS for K8S cluster creation, deletion & resizing

Harbor (Optional) - Enterprise-class container registry for Docker images
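To give a feel for how the PKS Control Plane's north-bound API is consumed, here is a rough sketch of a PKS CLI session. This is illustrative only: the endpoint, credentials, cluster name, hostname and plan are all placeholders, and exact flags may vary by PKS CLI version.

```shell
# Authenticate against the PKS Control Plane (endpoint/credentials are placeholders)
pks login -a pks.example.com -u pksadmin -p 'VMware1!' -k

# Request a new K8S cluster (name, hostname and plan are example values)
pks create-cluster k8s-cluster-01 \
    --external-hostname k8s-cluster-01.example.com \
    --plan small

# Check provisioning status and, once ready, fetch the kubeconfig for kubectl
pks cluster k8s-cluster-01
pks get-credentials k8s-cluster-01

# Later: resize or tear down the cluster through the same API
pks resize k8s-cluster-01 --num-nodes 4
pks delete-cluster k8s-cluster-01
```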



Software Download:

Here is the complete list of software that needs to be downloaded to deploy PKS. I found myself jumping through multiple documents trying to find the right packages on Pivotal's website, so I figured having a single place for this would be useful, not only for me but for anyone else going through a deployment.

NSX-T - https://my.vmware.com/web/vmware/details?productId=673&downloadGroup=NSX-T-210

PKS - https://network.pivotal.io/products/pivotal-container-service#/releases/43085

PKS CLI - https://network.pivotal.io/products/pivotal-container-service#/releases/43085/file_groups/848

Kubectl CLI - https://network.pivotal.io/products/pivotal-container-service#/releases/43085/file_groups/847

Pivotal Ops Manager for vSphere - https://network.pivotal.io/products/ops-manager

Stemcell for vSphere - https://network.pivotal.io/products/stemcells

Lab Environment:

For my deployment, I will be making use of my single physical ESXi host, which will serve as my "Management" Cluster running all the infrastructure VMs including PKS. On that Management Cluster, I will also be running three Nested ESXi VMs configured with VSAN, which then serve as my "Compute" Cluster for my PKS workload. You can also run PKS on just a single combined Management and Compute Cluster, however that setup is much more complex as a number of NAT rules will be required. The deployment model below is the preferred and recommended approach when deploying PKS into Production.



Below are the resource requirements to deploy a fully functional PKS deployment. For NSX-T Manager and the Controllers, you may be able to reduce the CPU and Memory footprint for lab and education purposes, as well as deploy only a single Controller. However, for the NSX-T Edge, there is a hard requirement of 8 vCPU and 16GB memory, which is pre-checked as part of the NSX-T Container Plugin (NCP) configuration, or the PKS configuration will fail to complete.

Compute/Storage

VM                                CPU  MEM   DISK
NSX-T Manager                     4    16GB  140GB
NSX-T Controller x 3              4    16GB  120GB
NSX-T Edge x 1                    8    16GB  120GB
Ops Manager                       1    8GB   160GB
BOSH Director                     2    8GB   103GB
PKS Control Plane                 1    1GB   22GB
Harbor                            2    8GB   169GB
PKS Client (for PKS/Kubectl CLI)  1    1GB   16GB
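If you want to sanity-check whether your host can fit the full stack, a quick back-of-the-envelope tally of the table above (per-VM counts included) can be sketched like this:

```python
# Tally the total lab footprint from the resource table above.
# Tuples are (name, vm_count, vCPU, memory_GB, disk_GB), copied from the table.
vms = [
    ("NSX-T Manager",     1, 4, 16, 140),
    ("NSX-T Controller",  3, 4, 16, 120),
    ("NSX-T Edge",        1, 8, 16, 120),
    ("Ops Manager",       1, 1,  8, 160),
    ("BOSH Director",     1, 2,  8, 103),
    ("PKS Control Plane", 1, 1,  1,  22),
    ("Harbor",            1, 2,  8, 169),
    ("PKS Client",        1, 1,  1,  16),
]

total_cpu  = sum(count * cpu  for _, count, cpu, _, _   in vms)
total_mem  = sum(count * mem  for _, count, _, mem, _   in vms)
total_disk = sum(count * disk for _, count, _, _, disk  in vms)

print(f"{total_cpu} vCPU, {total_mem}GB RAM, {total_disk}GB disk")
# → 31 vCPU, 106GB RAM, 1090GB disk
```

Note that disk is thin-provisioned in most lab setups, so actual consumption will typically be lower than the 1TB+ total.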

Networking

Defined within your physical or virtual network infrastructure

Management Network (172.30.51.0/24) - This is where the management VMs will reside, including the four PKS VMs. PKS can be deployed behind an NSX-T Logical Switch just like the K8S workload, in either routed or NAT mode. For my lab, I chose to keep all my infrastructure VMs on the same management network.

Intermediate Network (172.30.50.0/24) - This network is required to route between our management network and both the K8S Cluster Management network and the K8S Workload networks via the NSX-T T0 Router. You only need two IPs: one for the gateway, which should exist by default, and one for the uplink, which will reside on the T0. Static routes will be used to reach the NSX-T networks. In a Production or non-lab environment, BGP would be used to peer the T0 to your physical network.
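For a lab using static routes rather than BGP, the upstream gateway needs routes for the NSX-T networks (detailed in the next section) pointing at the T0 uplink on the intermediate network. As a config sketch only, on a Linux-based lab router, and assuming 172.30.50.2 is the T0 uplink IP (a placeholder), it would look something like:

```shell
# Static routes on the upstream lab router so the management network can reach
# the NSX-T networks behind the T0. 172.30.50.2 is a placeholder for the
# T0 uplink IP on the Intermediate Network (172.30.50.0/24).
ip route add 10.10.0.0/24 via 172.30.50.2   # K8S Cluster Management network
ip route add 10.20.0.0/24 via 172.30.50.2   # Load Balancer IP Pool
ip route add 172.16.0.0/16 via 172.30.50.2  # K8S Service Network IP Block
```

The equivalent static routes can of course be configured on a physical switch/router instead; the point is simply that all three NSX-T ranges must be reachable via the T0 uplink.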

Defined within NSX-T

K8S Cluster Management Network (10.10.0.0/24) - This network is used for the K8S management POD, which includes services like the Node Agent for monitoring liveness of the cluster and the NSX-T Container Plugin (NCP), to name a few

Load Balancer IP Pool (10.20.0.0/24) - This network pool will provide addresses for when load-balancing services are required as part of an application deployment within K8S

K8S Service Network IP Block (172.16.0.0/16) - This block is used when an application is deployed into a new K8S namespace. A /24 network is taken from this IP Block and allocated to that specific K8S namespace, allowing network isolation and policies to be applied at a much more granular level between tenants. This is a unique capability that VMware PKS offers which no other solution does today
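To make the per-namespace carving concrete, here is a small sketch using Python's standard library showing how many /24 networks the 172.16.0.0/16 block yields, each of which NSX-T can hand to a new K8S namespace:

```python
import ipaddress

# The K8S Service Network IP Block from which per-namespace /24s are carved
block = ipaddress.ip_network("172.16.0.0/16")

# Splitting the /16 into /24s yields 2^(24-16) = 256 candidate networks
subnets = list(block.subnets(new_prefix=24))

print(len(subnets))   # → 256 (one /24 per namespace, up to 256 namespaces)
print(subnets[0])     # → 172.16.0.0/24 (e.g. the first namespace created)
print(subnets[1])     # → 172.16.1.0/24 (the next namespace, fully isolated)
```

Because each namespace lands on its own /24, NSX-T can attach distinct firewall policies per tenant network rather than relying on overlay-wide rules.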

Here is a logical diagram of my planned VMware PKS deployment:



That is all for now, stay tuned for Part 2 where we will start configuring a PKS "Client" VM that will include all the various CLI tools for managing and requesting K8S Clusters as well as deploying applications on top of the infrastructure.