Let’s get started.

Create three VirtualBox-powered nodes for the swarm. Here’s a quick one-liner:

$ for i in master node01 node02; do docker-machine create -d virtualbox $i; done

or a full set of commands:

$ docker-machine create -d virtualbox master
Running pre-create checks…
Creating machine…
…

$ docker-machine create -d virtualbox node01
Running pre-create checks…
Creating machine…
…

$ docker-machine create -d virtualbox node02
Running pre-create checks…
Creating machine…
…

This will take a minute or so to finish, depending on the speed of your laptop. Let’s check to make sure that your Docker VM hosts have booted up:

$ docker-machine ls
NAME     ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
master   -        virtualbox   Running   tcp://192.168.99.103:2376           v1.12.0
node01   -        virtualbox   Running   tcp://192.168.99.104:2376           v1.12.0
node02   -        virtualbox   Running   tcp://192.168.99.105:2376           v1.12.0

Once those VMs are running in VirtualBox, you’ll have three Docker engines at your disposal. We’ll use the Docker engines running on those VMs to create our swarm cluster.

First, let’s create the swarm mode master. We’ll use the Docker engine’s ability to run remotely by passing the output of `docker-machine config` as arguments. You’ll notice here as well that instead of switching your environment variables around, we’ll use Bash command substitution and Docker Machine’s config to address each of our servers.

$ docker $(docker-machine config master) swarm init \
    --advertise-addr $(docker-machine ip master):2377

Next, let’s use the second machine to join the swarm as a worker. We’re using the config output to execute the Docker command remotely again, as we did with the master node. We’ll also add in token extraction, which means executing a Docker engine command inside another command, using backticks:

`docker $(docker-machine config master) swarm join-token worker -q`

and splicing its output into our launch command.

$ docker $(docker-machine config node01) swarm join \
    --token `docker $(docker-machine config master) swarm join-token worker -q` \
    $(docker-machine ip master):2377

$ docker $(docker-machine config node02) swarm join \
    --token `docker $(docker-machine config master) swarm join-token worker -q` \
    $(docker-machine ip master):2377
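The nested-substitution pattern above can be sketched without Docker at all. This is a minimal illustration, with `get_token` standing in as a hypothetical substitute for the real `swarm join-token worker -q` call, which prints a join token on stdout:

```shell
# Hypothetical stand-in for `docker ... swarm join-token worker -q`,
# which prints a join token on stdout.
get_token() { echo "SWMTKN-1-abcdef"; }

# Backticks (or $(...)) run the inner command first and splice its
# stdout into the outer command line before the outer command runs.
echo "docker swarm join --token `get_token` 192.168.99.103:2377"
# prints: docker swarm join --token SWMTKN-1-abcdef 192.168.99.103:2377
```

Because the shell resolves the inner command first, the worker never needs the token pasted in by hand; it is fetched from the master at the moment the join command runs.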

Running `docker swarm join` accomplishes several things.

It sets the Docker engine on the node to swarm mode, requests a TLS certificate from the manager, sets the node name to the virtual machine’s hostname, joins the node to the swarm using the swarm token, and sets its availability to “Active.” Swarm also inserts the node into the swarm’s pre-existing ingress overlay network.

You can see that network by issuing the `docker network ls` command:

$ docker $(docker-machine config master) network ls
NETWORK ID     NAME              DRIVER    SCOPE
3d7f73f718e2   bridge            bridge    local
bd2c68b0b740   docker_gwbridge   bridge    local
a9ab72d43cb4   host              host      local
3n6c58p3qjyu   ingress           overlay   swarm
ca58df367644   none              null      local

You’ll notice the docker_gwbridge network, which allows the containers to have external connectivity outside of their cluster.

If you accidentally use the incorrect token and join your worker node as another manager, you’ll find that you have to destroy and re-create your cluster because of how the Raft protocol works. Here’s an example of what this looks like. Note that both nodes have a status under “MANAGER STATUS.”

$ docker $(docker-machine config master) node ls
ID                           HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
1f1be48ataio89evarv5xbk0y *  master     Ready    Active         Leader
a4e48u7wrtfmy935b98wf32yj    node01     Ready    Active         Reachable

You can try stripping the cluster back down to a single node, but you’ll see the following error.

$ docker $(docker-machine config node01) swarm leave

Error response from daemon: You are attempting to leave cluster on a node that is participating as a manager. Leaving the cluster will leave you with 1 managers out of 2. This means Raft quorum will be lost and your cluster will become inaccessible. The only way to restore a cluster that has lost consensus is to reinitialize it with `--force-new-cluster`. Use `--force` to ignore this message.

Let’s leave the cluster, initialize a new cluster on the remaining manager, and rejoin the node. You’ll need to use `--force-new-cluster` to reinitialize.

$ docker $(docker-machine config node01) swarm leave --force

$ docker $(docker-machine config master) swarm init --force-new-cluster --advertise-addr $(docker-machine ip master):2377

$ docker $(docker-machine config node01) swarm join \
    --token `docker $(docker-machine config master) swarm join-token worker -q` \
    $(docker-machine ip master):2377

Let’s also set up Mano Marks’ visualizer, so we can observe what’s going on in our cluster. We’ll start it on the master node, on a port that won’t conflict with our upcoming webapp service.

$ docker $(docker-machine config master) run -it -d -p 5000:5000 \
    -e HOST=`docker-machine ip master` \
    -e PORT=5000 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    manomarks/visualizer

A fresh visualizer

That container is going to be running on the master node, so let’s point our Docker client at the master’s Docker engine, and check to see what ports that container is advertising.

$ docker $(docker-machine config master) port `docker $(docker-machine config master) ps -ql`
5000/tcp -> 0.0.0.0:5000

We can now check the cluster status. Let’s set ourselves another Bash variable to save some typing.

$ master=$(docker-machine config master)

$ docker $master node ls
ID                           HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
1v1pbijat414p1clwynrsjgga    node01     Down     Active
6m1y28qnhagsmaqdg1i74mgz4 *  master     Ready    Active         Leader
99eyb7120373kfic8kcjct7kr    node02     Ready    Active
9wdiv6bf4t4sj4p68hjvcxebh    node01     Ready    Active
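The reason `docker $master node ls` works is ordinary shell word splitting. Here is a minimal sketch, with the variable holding a hypothetical, abridged version of the flags that `docker-machine config` prints (the real output also includes certificate paths):

```shell
# Hypothetical, abridged stand-in for `docker-machine config master`:
# TLS flag plus the engine address, stored in a plain shell variable.
master="--tlsverify -H=tcp://192.168.99.103:2376"

# Expanding the variable UNQUOTED lets the shell split it back into
# separate arguments, which is exactly what `docker $master ...` needs.
set -- $master
echo "argument count: $#"
# prints: argument count: 2
```

Note that quoting the expansion (`docker "$master"`) would defeat this, handing docker one giant argument instead of separate flags.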

Excellent! Let’s also remove the broken node.

$ docker $(docker-machine config master) node rm 1v1pbijat414p1clwynrsjgga

$ docker $master node ls
ID                           HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
6m1y28qnhagsmaqdg1i74mgz4 *  master     Ready    Active         Leader
99eyb7120373kfic8kcjct7kr    node02     Ready    Active
9wdiv6bf4t4sj4p68hjvcxebh    node01     Ready    Active

Once we’re done switching back and forth between Docker engines, let’s set our configuration to work with the master node.
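The usual way to do that is `eval $(docker-machine env master)`, which evaluates the `export` statements that `docker-machine env` prints. A minimal sketch of the mechanism, with `machine_env` as a hypothetical stand-in for the real command (whose actual output also includes `DOCKER_CERT_PATH` and `DOCKER_MACHINE_NAME`):

```shell
# Hypothetical stand-in for `docker-machine env master`, which prints
# shell `export` statements targeting the chosen engine.
machine_env() {
  echo 'export DOCKER_HOST="tcp://192.168.99.103:2376"'
  echo 'export DOCKER_TLS_VERIFY="1"'
}

# eval executes those exports in the CURRENT shell, so every bare
# `docker` command that follows talks to the master's engine.
eval "$(machine_env)"
echo "$DOCKER_HOST"
# prints: tcp://192.168.99.103:2376
```

After the real `eval $(docker-machine env master)`, you can drop the `$(docker-machine config master)` prefix entirely and run plain `docker node ls` against the master.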