It’s been a number of weeks since I last looked at vSphere Integrated Containers. When I last looked, at v0.4.0, one of the outstanding issues was that port mapping didn’t work. This was a bit of a drag: for web servers running in containers, you’d definitely want this to function. One of the most common container demos is to run the Nginx web server in a container and map its port back to the container host, so that you can point at the IP of the container host and connect to the web server. I recently got access to v0.6.0, which has a whole bunch of improvements, including working port mapping. So to demonstrate this, I thought I’d show off Nginx running in VIC.

Bridge Network Considerations

First of all, you need to verify that your bridge network is set up correctly. When using vCenter Server as a target for VIC, a distributed port group must be created on a distributed switch (DVS). You must also make sure that the DVS on which the port group resides has correctly configured uplinks and VLAN settings, both so that containers on different hosts can communicate with each other (if necessary) and so that containers can communicate back to the Virtual Container Host (VCH). This can be confusing, because if the DVS or distributed port group is misconfigured, containers on the same host can still communicate, but containers on different hosts cannot. This has caught a few folks out.
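One way to sanity-check cross-host container communication is to start two containers on the bridge network and ping one from the other. The sketch below assumes a busybox image is reachable from your registry, substitutes my VCH endpoint address (use your own), and uses standard Docker CLI flags that may not all be wired up in every VIC build:

```shell
# Convenience alias for the VCH endpoint (substitute your own address).
DOCKER="docker -H 10.27.51.32:2376 --tls"

# Start a long-running container and look up its bridge-network IP.
$DOCKER run -d --name c1 busybox sleep 3600
C1_IP=$($DOCKER inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' c1)

# Launch a second container that pings the first. If the two containers
# land on different ESXi hosts and the ping fails (while same-host pings
# succeed), suspect the DVS uplink/VLAN configuration on the bridge
# port group.
$DOCKER run busybox ping -c 3 "$C1_IP"
```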

Deploying the Virtual Container Host

I’m doing a simple deployment here with minimal options. In the example below, I specify the bridge network (the distributed port group), an image store for container images, a compute resource (typically a resource pool), and the vCenter Server target with its credentials. There are obviously a lot more settings available for the VCH, but this is the simplest possible deployment. As you can see below, there are 3 x ESXi hosts in my cluster.

# ./vic-machine-linux create --bridge-network Bridge-DPG \
     --image-store isilion-nfs-01 \
     -t 'administrator@vsphere.local:VMware123!@10.27.51.103' \
     --compute-resource Mgmt
INFO[2016-09-21T13:20:52Z] ### Installing VCH ####
INFO[2016-09-21T13:20:52Z] Generating certificate/key pair - private key in ./virtual-container-host-key.pem
INFO[2016-09-21T13:20:53Z] Validating supplied configuration
INFO[2016-09-21T13:20:53Z] vDS configuration OK on "Bridge-DPG"
INFO[2016-09-21T13:20:53Z] Firewall status: DISABLED on "/CNA-DC/host/Mgmt/10.27.51.10"
INFO[2016-09-21T13:20:53Z] Firewall status: DISABLED on "/CNA-DC/host/Mgmt/10.27.51.8"
INFO[2016-09-21T13:20:53Z] Firewall status: DISABLED on "/CNA-DC/host/Mgmt/10.27.51.9"
INFO[2016-09-21T13:20:53Z] Firewall configuration OK on hosts:
INFO[2016-09-21T13:20:53Z]   "/CNA-DC/host/Mgmt/10.27.51.10"
INFO[2016-09-21T13:20:53Z]   "/CNA-DC/host/Mgmt/10.27.51.8"
INFO[2016-09-21T13:20:53Z]   "/CNA-DC/host/Mgmt/10.27.51.9"
INFO[2016-09-21T13:20:53Z] License check OK on hosts:
INFO[2016-09-21T13:20:53Z]   "/CNA-DC/host/Mgmt/10.27.51.10"
INFO[2016-09-21T13:20:53Z]   "/CNA-DC/host/Mgmt/10.27.51.8"
INFO[2016-09-21T13:20:53Z]   "/CNA-DC/host/Mgmt/10.27.51.9"
INFO[2016-09-21T13:20:53Z] DRS check OK on:
INFO[2016-09-21T13:20:53Z]   "/CNA-DC/host/Mgmt/Resources"
INFO[2016-09-21T13:20:54Z] Creating virtual app "virtual-container-host"
INFO[2016-09-21T13:20:54Z] Creating appliance on target
INFO[2016-09-21T13:20:54Z] Network role "client" is sharing NIC with "external"
INFO[2016-09-21T13:20:54Z] Network role "management" is sharing NIC with "external"
INFO[2016-09-21T13:20:55Z] Uploading images for container
INFO[2016-09-21T13:20:55Z]   "appliance.iso"
INFO[2016-09-21T13:20:55Z]   "bootstrap.iso"
INFO[2016-09-21T13:20:58Z] Registering VCH as a vSphere extension
INFO[2016-09-21T13:21:06Z] Waiting for IP information
INFO[2016-09-21T13:21:20Z] Waiting for major appliance components to launch
INFO[2016-09-21T13:21:26Z] Initialization of appliance successful
INFO[2016-09-21T13:21:26Z]
INFO[2016-09-21T13:21:26Z] vic-admin portal:
INFO[2016-09-21T13:21:26Z]   https://10.27.51.32:2378
INFO[2016-09-21T13:21:26Z]
INFO[2016-09-21T13:21:26Z] DOCKER_HOST=10.27.51.32:2376
INFO[2016-09-21T13:21:26Z]
INFO[2016-09-21T13:21:26Z] Connect to docker:
INFO[2016-09-21T13:21:26Z]   docker -H 10.27.51.32:2376 --tls info
INFO[2016-09-21T13:21:26Z] Installer completed successfully
#

I now have my Docker API endpoint – 10.27.51.32:2376 – so I can begin to deploy containers. Let’s start with an Nginx container. The “-d” flag runs the container in the background (detached), and “-p 80:80” maps port 80 on the container to port 80 on the container host (the VCH); in other words, if you connect to port 80 on the VCH, you reach port 80 on the container.

# docker -H 10.27.51.32:2376 --tls run -d -p 80:80 nginx
Unable to find image 'nginx:latest' locally
Pulling from library/nginx
a3ed95caeb02: Pull complete
8ad8b3f87b37: Pull complete
c6b290308f88: Pull complete
f8f1e94eb9a9: Pull complete
Digest: sha256:c22da5920a912f40b510c65b34c4fcd0fb75e6ad9085ea4939bcda2a6231a036
Status: Downloaded newer image for library/nginx:latest
9f3570e1859aec2c2fec1816754cbe387c7223412819830b062baf50ac22ab11

And let’s see if it is running successfully:

# docker -H 10.27.51.32:2376 --tls ps -a
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS    PORTS   NAMES
9f3570e1859a   nginx   "nginx -g daemon off;"   26 seconds ago   Running           condescending_jones
#
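The standard Docker CLI can also confirm the mapping directly. A quick sketch, assuming the `docker port` command is supported by this VIC build and using the auto-generated container name from the output above:

```shell
# Ask the endpoint which host ports are mapped to the container's ports.
docker -H 10.27.51.32:2376 --tls port condescending_jones
```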

All looks good so far. Now let’s see if we can connect to the web server (Nginx) running in the container using the VCH’s IP address and port. This is the same IP address as the Docker endpoint, but the web server has been mapped to port 80, whereas the Docker endpoint listens on port 2376.

The final test is to ensure that we can connect to the web server. The easiest way is to launch a browser, and in my case point to http://10.27.51.32:80. You should see the following if port mapping is working correctly:
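If you prefer the command line to a browser, a quick curl against the VCH address serves the same purpose (a sketch, assuming the Nginx container from earlier is still running and substituting your own VCH IP):

```shell
# Fetch the default Nginx welcome page via the mapped port on the VCH.
curl -s http://10.27.51.32:80 | grep -i "Welcome to nginx"

# Or just check the HTTP status code: 200 means the mapping is working.
curl -s -o /dev/null -w '%{http_code}\n' http://10.27.51.32:80
```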