In this post I will show you the steps involved in creating a Docker Swarm configuration using docker-machine with the Photon Controller driver plugin. In previous posts, I showed how you can set up Photon OS to deploy Photon Controller, and how to build docker-machine for Photon Controller. Note that there are many ways to deploy Swarm. Since I was given a demonstration that used “Consul” for cluster membership and discovery, that is the mechanism I am going to use here. A couple of weeks back, we looked at deploying Docker Swarm using the “cluster” mechanism also available in Photon Controller, which used “etcd” for discovery, configuration, and so on. In this example, we are going to deploy Docker Swarm from the ground up, step by step, using docker-machine with the photon driver, but this time using “Consul”, which does something very similar to “etcd”.

*** Please note that at the time of writing, Photon Controller is still not GA ***

The steps to deploy Docker Swarm with docker-machine on Photon Controller can be outlined as follows:

1. Deploy Photon Controller (link above)
2. Build the docker-machine driver for Photon Controller (link above)
3. Set up the necessary PHOTON environment variables in the environment where you will be deploying Swarm
4. Deploy the Consul machine and the Consul tool
5. Deploy a Docker Swarm master
6. Deploy one or more Docker Swarm slaves (we provision two)
7. Deploy your containers

Because we wish to use Photon Controller for the underlying framework, we need to ensure that we are using the photon driver for the docker-machines (step 2 above), and that the PHOTON environment variables are in place (step 3 above). I am running this deployment from an Ubuntu 16.04 VM. Here is an example of the environment variables taken from my setup:

PHOTON_DISK_FLAVOR=DOCKERDISKFLAVOR
PHOTON_ISO_PATH=/home/cormac/docker-machine/cloud-init.iso
PHOTON_SSH_USER_PASSWORD=tcuser
PHOTON_VM_FLAVOR=DOCKERFLAVOR
PHOTON_SSH_KEYPATH=/home/cormac/.ssh/id_rsa
PHOTON_PROJECT=0e0de526-06ad-4b60-9d15-a021d68566fe
PHOTON_ENDPOINT=http://10.27.44.34
PHOTON_IMAGE=051ba0d7-2560-4533-b90c-77caa4cd6fb0
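For these to be visible to docker-machine, they need to be exported in the shell (or dropped into a file that you source before running anything). A minimal sketch using the values from my setup; substitute your own endpoint, project, image and flavor IDs:

```shell
# Export the Photon Controller settings so that the docker-machine
# photon driver can pick them up from the environment.
export PHOTON_ENDPOINT=http://10.27.44.34
export PHOTON_PROJECT=0e0de526-06ad-4b60-9d15-a021d68566fe
export PHOTON_IMAGE=051ba0d7-2560-4533-b90c-77caa4cd6fb0
export PHOTON_VM_FLAVOR=DOCKERFLAVOR
export PHOTON_DISK_FLAVOR=DOCKERDISKFLAVOR
export PHOTON_ISO_PATH=/home/cormac/docker-machine/cloud-init.iso
export PHOTON_SSH_KEYPATH=/home/cormac/.ssh/id_rsa
export PHOTON_SSH_USER_PASSWORD=tcuser
```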

Once those are in place, the docker-machines can be deployed. You could do this manually, one docker-machine at a time. However, my good pal Massimo provided me with the script he created when this demo was run at DockerCon ’16 recently. Here is the script; note that the driver option to docker-machine is “photon”.

#!/bin/bash

DRIVER="photon"
NUMBEROFNODES=3

echo
echo "*** Step 1 - deploy the Consul machine"
echo
docker-machine create -d ${DRIVER} consul-machine

echo
echo "*** Step 2 - run the Consul tool on the Consul machine"
echo
docker $(docker-machine config consul-machine) run -d -p "8500:8500" -h "consul" \
    progrium/consul -server -bootstrap

echo
echo "*** Step 3 - Create the Docker Swarm master node"
echo
docker-machine create -d ${DRIVER} --swarm --swarm-master \
    --swarm-discovery="consul://$(docker-machine ip consul-machine):8500" \
    --engine-opt="cluster-store=consul://$(docker-machine ip consul-machine):8500" \
    --engine-opt="cluster-advertise=eth0:2376" \
    swarm-node-1-master

echo
echo "*** Step 4 - Deploy 2 Docker Swarm slave nodes"
echo
i=2
while [[ ${i} -le ${NUMBEROFNODES} ]]
do
    docker-machine create -d ${DRIVER} --swarm \
        --swarm-discovery="consul://$(docker-machine ip consul-machine):8500" \
        --engine-opt="cluster-store=consul://$(docker-machine ip consul-machine):8500" \
        --engine-opt="cluster-advertise=eth0:2376" \
        swarm-node-${i}
    ((i=i+1))
done

echo
echo "*** Step 5 - Display swarm info"
echo
docker-machine env --swarm swarm-node-1-master

And here is an example output from running the script. This is the start of the script where we deploy “Consul”. Here you can see the VM being created with the initial cloud-init ISO image, the VM network details being discovered and then the OS image being attached to the VM (in this case it is Debian). You then see the certs being moved around locally and copied remotely to give us SSH access to the machines. Finally you see that docker is up and running. In the second step, you can see that “Consul” is launched as a container on that docker-machine.

cormac@cs-dhcp32-29:~/docker-machine-scripts$ ./deploy-swarm.sh

*** Step 1 - deploy the Consul machine

Running pre-create checks...
Creating machine...
(consul-machine) VM was created with Id: 7086eecb-a23f-48e0-87a8-13be5f5222f1
(consul-machine) ISO is attached to VM.
(consul-machine) VM is started.
(consul-machine) VM IP: 10.27.33.112
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with debian...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env consul-machine

*** Step 2 - run the Consul tool on the Consul machine

Unable to find image 'progrium/consul:latest' locally
latest: Pulling from progrium/consul
c862d82a67a2: Pull complete
0e7f3c08384e: Pull complete
0e221e32327a: Pull complete
09a952464e47: Pull complete
60a1b927414d: Pull complete
4c9f46b5ccce: Pull complete
417d86672aa4: Pull complete
b0d47ad24447: Pull complete
fd5300bd53f0: Pull complete
a3ed95caeb02: Pull complete
d023b445076e: Pull complete
ba8851f89e33: Pull complete
5d1cefca2a28: Pull complete
Digest: sha256:8cc8023462905929df9a79ff67ee435a36848ce7a10f18d6d0faba9306b97274
Status: Downloaded newer image for progrium/consul:latest
2ade0f6a921dc208e2cb4fc216278679d3282ca96f4a1508ffdbe95da8760439

Now we come to the section that is specific to Docker Swarm. Many of the steps are similar to what you will see above, but once the OS image is in place, we see the Swarm cluster getting initialized. First we have the master:

*** Step 3 - Create the Docker Swarm master node

Running pre-create checks...
Creating machine...
(swarm-node-1-master) VM was created with Id: 27e28089-6e39-4450-ba37-cde388f427c2
(swarm-node-1-master) ISO is attached to VM.
(swarm-node-1-master) VM is started.
(swarm-node-1-master) VM IP: 10.27.32.103
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with debian...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env swarm-node-1-master

Then we have the two Swarm slaves being deployed:

*** Step 4 - Deploy 2 Docker Swarm slave nodes

Running pre-create checks...
Creating machine...
(swarm-node-2) VM was created with Id: e44cc8a4-ca90-4644-9abc-a84311ec603b
(swarm-node-2) ISO is attached to VM.
(swarm-node-2) VM is started.
(swarm-node-2) VM IP: 10.27.33.114
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with debian...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env swarm-node-2
.
.

If you wish to deploy a slave manually, simply run the command below. This deploys one of the slave nodes by hand; you can use it to add additional slaves to the cluster later on.

cormac@cs-dhcp32-29:~/docker-machine-scripts$ docker-machine create -d photon \
    --swarm --swarm-discovery="consul://$(docker-machine ip consul-machine):8500" \
    --engine-opt="cluster-store=consul://$(docker-machine ip consul-machine):8500" \
    --engine-opt="cluster-advertise=eth0:2376" swarm-node-3
Running pre-create checks...
Creating machine...
(swarm-node-3) VM was created with Id: 2744e118-a16a-43ba-857a-472d87502b85
(swarm-node-3) ISO is attached to VM.
(swarm-node-3) VM is started.
(swarm-node-3) VM IP: 10.27.33.118
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with debian...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env swarm-node-3
cormac@cs-dhcp32-29:~/docker-machine-scripts$

Now both the slaves and the master have been deployed. The final step just gives info about the Swarm environment.

*** Step 5 - Display swarm info

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://10.27.32.103:3376"
export DOCKER_CERT_PATH="/home/cormac/.docker/machine/machines/swarm-node-1-master"
export DOCKER_MACHINE_NAME="swarm-node-1-master"
# Run this command to configure your shell:
# eval $(docker-machine env --swarm swarm-node-1-master)

To show all of the docker-machines, run docker-machine ls:

cormac@cs-dhcp32-29:/etc$ docker-machine ls
NAME                  ACTIVE      DRIVER   STATE     URL                       SWARM                          DOCKER    ERRORS
consul-machine        -           photon   Running   tcp://10.27.33.112:2376                                  v1.11.2
swarm-node-1-master   * (swarm)   photon   Running   tcp://10.27.32.103:2376   swarm-node-1-master (master)   v1.11.2
swarm-node-2          -           photon   Running   tcp://10.27.33.114:2376   swarm-node-1-master            v1.11.2
swarm-node-3          -           photon   Running   tcp://10.27.33.118:2376   swarm-node-1-master            v1.11.2
cormac@cs-dhcp32-29:/etc$

This displays the machine running the “Consul” container, as well as the master node and two slave nodes in my Swarm cluster. Now we can examine the cluster setup in more detail with docker info, after we run the eval command highlighted in the output above to configure our shell:

cormac@cs-dhcp32-29:~/docker-machine-scripts$ eval $(docker-machine env --swarm swarm-node-1-master)
cormac@cs-dhcp32-29:~/docker-machine-scripts$ docker info
Containers: 4
 Running: 4
 Paused: 0
 Stopped: 0
Images: 3
Server Version: swarm/1.2.3
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 3
 swarm-node-1-master: 10.27.32.103:2376
  └ ID: O5ZJ:RFDJ:RXUY:CQV6:2TDL:3ACI:DWCP:5X7A:MKCP:HUAP:4TUD:FE4P
  └ Status: Healthy
  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 2.061 GiB
  └ Labels: executiondriver=, kernelversion=3.16.0-4-amd64, operatingsystem=Debian GNU/Linux 8 (jessie), provider=photon, storagedriver=aufs
  └ UpdatedAt: 2016-06-27T15:39:51Z
  └ ServerVersion: 1.11.2
 swarm-node-2: 10.27.33.114:2376
  └ ID: MGRK:45KO:LATQ:DLCZ:ITFX:PSQC:6P4V:ZQYS:NZ35:SLSK:CDYH:5ZME
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 2.061 GiB
  └ Labels: executiondriver=, kernelversion=3.16.0-4-amd64, operatingsystem=Debian GNU/Linux 8 (jessie), provider=photon, storagedriver=aufs
  └ UpdatedAt: 2016-06-27T15:39:42Z
  └ ServerVersion: 1.11.2
 swarm-node-3: 10.27.33.118:2376
  └ ID: NL4P:YTPC:W464:43TA:PECO:D3M3:6EJG:DQOV:BPLW:CSBA:YUPK:JHSI
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 2.061 GiB
  └ Labels: executiondriver=, kernelversion=3.16.0-4-amd64, operatingsystem=Debian GNU/Linux 8 (jessie), provider=photon, storagedriver=aufs
  └ UpdatedAt: 2016-06-27T15:40:06Z
  └ ServerVersion: 1.11.2
Plugins:
 Volume:
 Network:
Kernel Version: 3.16.0-4-amd64
Operating System: linux
Architecture: amd64
CPUs: 3
Total Memory: 6.184 GiB
Name: 87a4cfa14275

And we can also query the membership in “Consul”. The following command will show the docker master and slave nodes:

cormac@cs-dhcp32-29:~/docker-machine-scripts$ docker run swarm list \
    consul://$(docker-machine ip consul-machine):8500
time="2016-06-27T15:43:22Z" level=info msg="Initializing discovery without TLS"
10.27.32.103:2376
10.27.33.114:2376
10.27.33.118:2376

Consul also provides a basic UI. If you point a browser at the docker-machine host running “Consul”, port 8500, this will bring it up. If you navigate to the Key/Value view, click on Docker, then Nodes, the list of members is once again displayed:
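If you prefer the command line to the UI, Consul's HTTP key/value API on the same port can list those keys directly. A quick sketch; the docker/nodes key prefix here simply mirrors the Docker → Nodes navigation in the UI, so adjust it if your Swarm version registers under a different path:

```shell
# Get the IP of the docker-machine running Consul
CONSUL_IP=$(docker-machine ip consul-machine)
# Ask Consul's KV store (HTTP API, port 8500) for the keys under
# docker/nodes - one key per registered Swarm member.
curl -s "http://${CONSUL_IP}:8500/v1/kv/docker/nodes?keys"
```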

Now you can start to deploy containers on the Swarm cluster, and you should once again see them being placed in a round-robin fashion on the slave machines.
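As a sketch of what that looks like, assuming the shell has been pointed at the Swarm manager with the eval command from step 5, something like the loop below launches a handful of containers and lets the scheduler place them (the nginx image and web-N names are just for illustration):

```shell
# Point the docker client at the Swarm manager first:
eval $(docker-machine env --swarm swarm-node-1-master)

# Launch four containers; each run lands on whichever slave
# the Swarm scheduler picks next.
for i in 1 2 3 4; do
    docker run -d --name web-${i} nginx
done

# The NAMES column is prefixed with the node each container
# landed on, e.g. swarm-node-2/web-1.
docker ps --format '{{.ID}}\t{{.Names}}'
```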

To look at the running containers on each of the nodes in the swarm cluster, you must first select the node you wish to examine:

root@cs-dhcp32-29:~# eval $(docker-machine env swarm-node-1-master)
root@cs-dhcp32-29:~# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED      STATUS      PORTS                              NAMES
6920cf9687c1   swarm:latest   "/swarm join --advert"   2 days ago   Up 2 days   2375/tcp                           swarm-agent
8b2148aeeab8   swarm:latest   "/swarm manage --tlsv"   2 days ago   Up 2 days   2375/tcp, 0.0.0.0:3376->3376/tcp   swarm-agent-master
root@cs-dhcp32-29:~# eval $(docker-machine env swarm-node-2)
root@cs-dhcp32-29:~# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED      STATUS      PORTS      NAMES
90af8db22134   swarm:latest   "/swarm join --advert"   2 days ago   Up 2 days   2375/tcp   swarm-agent
root@cs-dhcp32-29:~# eval $(docker-machine env swarm-node-3)
root@cs-dhcp32-29:~# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED      STATUS      PORTS      NAMES
9ee781ea717d   swarm:latest   "/swarm join --advert"   2 days ago   Up 2 days   2375/tcp   swarm-agent

To look at all the containers together, set DOCKER_HOST to the master's IP with port 3376:

root@cs-dhcp32-29:~# DOCKER_HOST=$(docker-machine ip swarm-node-1-master):3376
root@cs-dhcp32-29:~# export DOCKER_HOST
root@cs-dhcp32-29:~# docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED      STATUS      PORTS                                   NAMES
9ee781ea717d   swarm:latest   "/swarm join --advert"   2 days ago   Up 2 days   2375/tcp                                swarm-node-3/swarm-agent
90af8db22134   swarm:latest   "/swarm join --advert"   2 days ago   Up 2 days   2375/tcp                                swarm-node-2/swarm-agent
6920cf9687c1   swarm:latest   "/swarm join --advert"   2 days ago   Up 2 days   2375/tcp                                swarm-node-1-master/swarm-agent
8b2148aeeab8   swarm:latest   "/swarm manage --tlsv"   2 days ago   Up 2 days   2375/tcp, 10.27.33.169:3376->3376/tcp   swarm-node-1-master/swarm-agent-master
root@cs-dhcp32-29:~#

Next, run some simple containers. I have used the “hello-world” one:

root@cs-dhcp32-29:~# docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:
 https://hub.docker.com

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

Now examine the containers that have run with “docker ps -a”:

root@cs-dhcp32-29:~# docker ps -a
...   NAMES
...   swarm-node-3/trusting_allen
...   swarm-node-2/evil_mahavira
...   swarm-node-3/swarm-agent
...   swarm-node-2/swarm-agent
...   swarm-node-1-master/swarm-agent
...   swarm-node-1-master/swarm-agent-master
root@cs-dhcp32-29:~#

I trimmed the output to show just the NAMES column. Here we can see that the two hello-world containers (the first two in the output) have been placed on different Swarm slaves. The containers are being balanced across nodes in a round-robin fashion.

My understanding is that a number of improvements to Docker Swarm were announced at DockerCon ’16, including a better load-balancing mechanism. However, for the purposes of this demo, it is still round-robin.

So once again I hope this shows the flexibility of Photon Controller. Yes, you can quickly deploy Docker Swarm using the “canned” cluster format I described previously. But, if you want more granular control or you wish to use different versions or different tooling (e.g. “Consul” instead of “etcd”), then note that you now have the flexibility to deploy a Docker Swarm using docker-machine. Have fun!