Continuing with our PKS installation, we are now going to finish up by configuring and deploying the PKS Control Plane Tile, which provides a frontend API that Cloud/Platform Operators use to easily provision and manage (create, delete, list, scale up/down) Kubernetes (K8S) Clusters. Once a K8S Cluster has been successfully deployed through PKS, operators simply hand their developers the external hostname of the K8S Cluster and the kubectl configuration file, and the developers can immediately start deploying applications without knowing anything about PKS or how it works! If an application a developer is deploying requires an external load balancer service, they can easily declare that in the application's deployment YAML file; behind the scenes, PKS will automatically provision an NSX-T Load Balancer on demand to service the application. This is completely seamless and does not require any additional assistance from the operator.
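To give you an idea of what that developer experience looks like, here is a minimal sketch of requesting an external load balancer from a K8S Cluster. The application name and labels are purely illustrative; with PKS and NSX-T, the type: LoadBalancer request below is what triggers the on-demand NSX-T Load Balancer provisioning:

```shell
# Illustrative only: deploy a sample app, then expose it via a
# Service of type LoadBalancer (this is what triggers the NSX-T LB)
kubectl run nginx --image=nginx --replicas=2
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer
  selector:
    run: nginx
  ports:
  - port: 80
    targetPort: 80
EOF
# Once NSX-T allocates an address, it shows up under EXTERNAL-IP
kubectl get svc nginx-lb
```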

If you missed any of the previous articles, you can find the complete list here:

Step 1 - If you have not already downloaded PKS (pivotal-container-service-*.pivotal), please see Part 1 for the download URL. To import the PKS Tile, go to the home page of Ops Manager and click "Import a Product" and select the PKS package to begin.
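If you prefer the command line over the Ops Manager UI, the tile can also be uploaded with the "om" CLI. This is just an optional sketch; the target, credentials and filename below are placeholders for my lab environment:

```shell
# Optional alternative to the UI import, using the "om" CLI
# (target, credentials and filename are lab placeholders)
om --target https://opsmgr.primp-industries.com \
   --username admin --password 'VMware1!' \
   --skip-ssl-validation \
   upload-product --product pivotal-container-service-1.0.0.pivotal
```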



Once the PKS Tile has been successfully imported, go ahead and click on the "plus" symbol to add the PKS Tile, which will make it available for us to start configuring, similar to what we did for the BOSH Tile. After that, click on the PKS Tile so we can begin the configuration.



Step 2 - This first section defines the AZ and Networks that will be used to deploy the PKS Control Plane VM as well as the K8S Management PODs. These were all previously defined when we had configured BOSH.

- Singleton Jobs: AZ-Management
- Balance Jobs: AZ-Management
- Network: pks-mgmt-network
- Service Network: k8s-mgmt-cluster-network



Step 3 - This next section is for the PKS API endpoint, and a certificate will be generated based on your DNS domain. In my environment, the domain is primp-industries.com, and you will need to add a wildcard (*.) in front, as shown in the screenshot below.
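Once the tile has been deployed, you can optionally sanity-check that the generated certificate covers the wildcard domain. The hostname and port below are assumptions for my lab (8443 being the default PKS API port):

```shell
# Optional: inspect the certificate presented by the PKS API endpoint
# (hostname/port are lab assumptions) and check the SAN entries
openssl s_client -connect api.pks.primp-industries.com:8443 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"
```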



Step 4 - The next two sections (Plan 1 and Plan 2) are used to configure the size and resources of each VM type in a K8S Cluster. During K8S deployment, you can specify one of these "plans" to decide how large a given VM instance is for different deployment scenarios. For now, you can leave the defaults (you can always come back later and modify them); you simply need to assign the AZ for placement, which in our case is AZ-Compute, and you will need to do this for both Plans.



Step 5 - In this section, we need to specify "vSphere" as our IaaS and provide the credentials for our Compute vCenter Server, along with the datastore to which persistent disks will be deployed by PKS. Behind the scenes, when an application requests a persistent disk (the default is ephemeral), the Project Hatchway plugin intercepts the request and uses these credentials to create a persistent VMDK and make it available back to the application. This is all done seamlessly and on demand, without any interaction between the developer deploying the application and the Cloud/Platform Operator. For the "Stored VM Folder" field, be sure to use the same value that you specified during the BOSH deployment. If you are unsure, refer to blog post Part 4, Step 4 to see what you had selected before proceeding.
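From the developer's side, requesting a persistent disk is just a standard PersistentVolumeClaim. The sketch below is illustrative of that workflow; behind the scenes, the vSphere volume plugin creates a VMDK on the datastore you configured above:

```shell
# Illustrative developer-side request for persistent storage.
# PKS/Hatchway translates this claim into a VMDK on the datastore
# configured in this tile section.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
EOF
kubectl get pvc demo-pvc
```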



Step 6 - In this next section, we will provide the NSX-T configurations for the networks that we had created earlier, which will be used by the K8S Clusters. Start off by selecting "NSX-T" as the network type and then provide the credentials to your NSX-T Manager. If you have replaced the NSX-T SSL Certificate, you will need to provide it; alternatively, you can disable SSL verification, which I have done for testing purposes. Next, you will need to provide the name of the Compute vSphere Cluster which has been prepped for NSX-T; in my environment, that is PKS-Cluster.

For the next three fields, you will need to use the NSX-T UI (this can also be queried programmatically through the NSX-T REST API) to obtain the UUIDs for the T0 Router, IP Block and Load Balancer IP Pool.

- T0 Router ID - Navigate to Routing->Routers, select T0-LR and click on the ID to retrieve the UUID as shown in the screenshot below
- IP Block ID - Navigate to DDI->IPAM, select PKS-IP-Block and click on the ID to retrieve the UUID as shown in the screenshot below
- Floating IP Pool ID - Navigate to Inventory->Groups->IP Pools, select Load-Balancer-Pool and click on the ID to retrieve the UUID as shown in the screenshot below
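As mentioned, these UUIDs can also be pulled over the NSX-T REST API rather than clicking through the UI. The hostname and credentials below are placeholders for my lab; python is only used to filter the JSON output by display_name:

```shell
# Query NSX-T Manager for the same UUIDs via its REST API
# (hostname/credentials are lab placeholders)
NSX=nsx.primp-industries.com
curl -sk -u 'admin:VMware1!' https://$NSX/api/v1/logical-routers | \
  python -c 'import json,sys; print([r["id"] for r in json.load(sys.stdin)["results"] if r["display_name"]=="T0-LR"])'
curl -sk -u 'admin:VMware1!' https://$NSX/api/v1/pools/ip-blocks
curl -sk -u 'admin:VMware1!' https://$NSX/api/v1/pools/ip-pools
```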



Note: A pre-check validation of the NSX-T object UUIDs is performed when you click save, so if you made a mistake, the UI will alert you.

Step 7 - In this section, we will configure the User Account and Authentication (UAA) endpoint, which we will use to manage users for PKS. You just need to provide a DNS entry mapped to the same DNS domain you configured earlier, as the generated certificate will need to match. In my example, I used uaa.primp-industries.com. Once the PKS VM has been deployed, update your DNS Server so this hostname points back to the IP selected for the PKS Control Plane VM, or update the /etc/hosts file on the PKS Client VM for testing purposes.
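For the /etc/hosts approach on the PKS Client VM, the entry is a one-liner. The IP below is a placeholder until the PKS Control Plane VM has been deployed and you can see its actual address in Ops Manager:

```shell
# Testing shortcut on the PKS Client VM: map the UAA hostname to the
# PKS Control Plane VM (IP is a placeholder until deployment completes)
echo "10.10.0.10  uaa.primp-industries.com" | sudo tee -a /etc/hosts
```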



Step 8 - In this section, we simply just need to enable NSX-T validation and we can leave the rest alone.



Step 9 - In the very last section, you may need to import an updated Stemcell VM, which we had downloaded from blog post Part 1. If you are not prompted to, then you can move on to the next step.

Step 10 - To begin the PKS Control Plane VM deployment, go ahead and navigate back to the Ops Manager home page and click "Apply Changes" to start the deployment.

This will take some time; in my environment, it took ~30 minutes to complete. This is a good time to take a coffee or beer break, depending on the hour of the day 😀



Step 11 - If everything was successfully deployed, you can head over to your vCenter Server, where you should see another new VM named vm-[UUID], denoting the PKS Control Plane VM. Similar to the BOSH VM, we can look at the instance_group Custom Attribute to tell the role of this particular VM.
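If you have the govc utility pointed at your Compute vCenter Server, you can also spot these BOSH-deployed VMs from the CLI. This is just a convenience sketch; the Custom Attribute itself is easiest to read in the vSphere Client:

```shell
# Optional: locate the BOSH-deployed vm-[UUID] VMs via govc
# (assumes GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD are already set)
govc find / -type m -name 'vm-*'
govc vm.info 'vm-*'
```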



Another way to easily identify either the PKS Control Plane or BOSH VM is simply to click on the tile in Ops Manager and then select the "Status" tab, which gives you not only the VM Display Name in vCenter Server but also the IP Address that was automatically allocated from the PKS Management Network we had specified within BOSH.

If you recall, earlier we specified our PKS API endpoint to be uaa.primp-industries.com; now we can take the IP Address from below and create a DNS entry, which we will use in the next article to set up a new PKS user. If you do not have DNS in your environment, you can alternatively add an entry to /etc/hosts on the PKS Client VM.



In our next article, we will demonstrate how to interact with PKS using the PKS CLI: requesting a new K8S Cluster as an Operator and then walking through a sample application deployment on top of the newly created K8S Cluster, just as a Developer normally would.
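As a teaser, the operator workflow we will cover looks roughly like the sketch below. The cluster name, external hostname, credentials and plan name are all illustrative placeholders:

```shell
# Preview of the PKS CLI operator workflow (all values illustrative)
pks login -a uaa.primp-industries.com -u pks-admin -p 'VMware1!' -k
pks create-cluster k8s-cluster-01 \
    --external-hostname k8s-cluster-01.primp-industries.com \
    --plan small
pks clusters
```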