If you’re following my blog on a regular basis, you’ve probably read some of my articles on Docker in combination with vRealize Automation (here and here). As a next step in this series, I thought it would be nice to automate the deployment of a Docker swarm. Docker swarm mode allows you to create a cluster of one or more Docker engines. A swarm consists of one or more nodes: physical or virtual machines that run the Docker engine.

In a Docker swarm there are two types of nodes: manager and worker nodes. A manager node maintains the cluster state, runs the scheduling service and serves as an HTTP API endpoint. Worker nodes have a single purpose: running containers.

A typical Docker swarm consists of 3, 5 or 7 manager nodes (other configurations are possible but not advised) and a number of worker nodes depending on the required capacity.
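The odd numbers aren't arbitrary: swarm managers use the Raft consensus algorithm, which requires a majority (quorum) of managers to be available. A quick back-of-the-envelope sketch (my own helper function, not part of the deployment) shows why an even manager count adds no fault tolerance over the odd count below it:

```python
def manager_fault_tolerance(n):
    """Raft needs a majority of n managers (n // 2 + 1) to keep a quorum,
    so a swarm survives the loss of n - quorum = (n - 1) // 2 managers."""
    quorum = n // 2 + 1
    return n - quorum

for n in range(1, 8):
    print(f"{n} managers -> tolerates {manager_fault_tolerance(n)} failure(s)")
# 3, 5 and 7 managers tolerate 1, 2 and 3 failures;
# 4 and 6 managers tolerate no more than 3 and 5 do.
```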

Deploying a Docker swarm involves installing a master node (this is the first manager node) and then adding additional manager and worker nodes. I’m using vRealize Automation to automatically deploy the swarm; let’s have a closer look at what you will need for this.

First of all we need host virtual machines that will run the Docker engine; in this example I will use CentOS for this. The CentOS VM needs the vRA guest agent (gugent) installed: there are numerous articles available on how to do this.

The general installation workflow will look like this:

1. Deploy the VMs;
2. Configure the Docker-CE yum repository;
3. Install the Docker engine;
4. Configure the Docker swarm master;
5. Configure additional manager and worker nodes.

Ok, let’s have a look at the required scripts. The first script (and thus software component) adds the Docker-CE yum repository:

#!/bin/bash
#Install yum utils
/usr/bin/yum install -y yum-utils
#add docker repository
/usr/bin/yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
#refresh yum repo
/usr/bin/yum makecache fast -y

The second script installs the Docker-CE engine:

#!/bin/bash
#Install docker-ce
/usr/bin/yum -y install docker-ce
#configure docker service
/usr/bin/systemctl enable docker
/usr/bin/systemctl start docker

The following script configures the first Docker swarm manager:

#!/bin/bash
#initialize the swarm on this first manager node
docker swarm init --advertise-addr $(hostname -i):$swarmPort --listen-addr $(hostname -i):$swarmPort
#capture the join tokens for the additional manager and worker nodes
tokenManager=$(docker swarm join-token manager -q)
tokenWorker=$(docker swarm join-token worker -q)

To successfully add additional manager and worker nodes we’ll need a token. This token is used in the docker swarm join command on the additional nodes. The tokens are stored in a property that’s part of the vRA blueprint.
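For reference, a swarm join token is an opaque dash-separated string of the form SWMTKN-1-&lt;cluster digest&gt;-&lt;random secret&gt;. If you want a quick sanity check before the token lands in the blueprint property, a tiny validator could look like this (a hypothetical helper of my own, not part of the vRA blueprint, and the example token value is made up):

```python
def looks_like_swarm_token(token):
    # Swarm join tokens have the shape SWMTKN-1-<cluster digest>-<secret>,
    # i.e. exactly four dash-separated parts with a fixed prefix.
    parts = token.split("-")
    return len(parts) == 4 and parts[:2] == ["SWMTKN", "1"]

print(looks_like_swarm_token("SWMTKN-1-0123abcd-4567efgh"))  # True
print(looks_like_swarm_token("not-a-token"))                 # False
```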

Now the scripts for the additional nodes. For the additional managers we’ll use:

#!/bin/bash
docker swarm join --token $tokenManager $masterIp:$swarmPort --advertise-addr $(hostname -i):$swarmPort --listen-addr $(hostname -i):$swarmPort

And for the additional workers:

#!/bin/bash
docker swarm join --token $tokenWorker $masterIp:$swarmPort --advertise-addr $(hostname -i):$swarmPort --listen-addr $(hostname -i):$swarmPort

All the scripts and required virtual machines come together in the following blueprint:

We can do an actual request of Docker swarm as a service:

After the request has been submitted, the required steps are executed automatically. It will take about 15 minutes to deploy a Docker swarm cluster:

Running docker node ls will give you an overview of the available nodes in the swarm.

docker node ls
ID                          HOSTNAME        STATUS  AVAILABILITY  MANAGER STATUS
4cawkudzpqnaqmomu6ghxq0r5   dockerhost-055  Ready   Active
6s0lu1qh11pk1soxt0dtk4yyq   dockerhost-050  Ready   Active
c42b6i6qbfok8ai7phy7hl0j9   dockerhost-048  Ready   Active        Reachable
ks4bbmxvvt4b7427gbq1n8shg   dockerhost-052  Ready   Active
lqxj4cynhx9lotaq1h81fzyzy * dockerhost-051  Ready   Active        Leader
tm5jpt4fwwy9wocrr1ru3go0a   dockerhost-054  Ready   Active
weskqky69dr3z5q2rslyw7zef   dockerhost-053  Ready   Active
xmcijogmk6kql5e92kr8rb6bx   dockerhost-049  Ready   Active        Reachable
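If you want to check programmatically that all managers came up, you could parse that output with a rough sketch like the one below (my own helper, not part of the blueprint; in practice docker node ls --format or the Docker API is more robust than scraping column text):

```python
# Sample output as shown above; manager nodes carry a MANAGER STATUS value
# (Leader, Reachable or Unreachable), workers leave that column empty.
SAMPLE = """\
ID                          HOSTNAME        STATUS  AVAILABILITY  MANAGER STATUS
4cawkudzpqnaqmomu6ghxq0r5   dockerhost-055  Ready   Active
6s0lu1qh11pk1soxt0dtk4yyq   dockerhost-050  Ready   Active
c42b6i6qbfok8ai7phy7hl0j9   dockerhost-048  Ready   Active        Reachable
ks4bbmxvvt4b7427gbq1n8shg   dockerhost-052  Ready   Active
lqxj4cynhx9lotaq1h81fzyzy * dockerhost-051  Ready   Active        Leader
tm5jpt4fwwy9wocrr1ru3go0a   dockerhost-054  Ready   Active
weskqky69dr3z5q2rslyw7zef   dockerhost-053  Ready   Active
xmcijogmk6kql5e92kr8rb6bx   dockerhost-049  Ready   Active        Reachable
"""

def summarize_nodes(node_ls_output):
    """Return (managers, workers) based on the MANAGER STATUS column."""
    managers = workers = 0
    for line in node_ls_output.strip().splitlines()[1:]:  # skip the header
        if line.split()[-1] in ("Leader", "Reachable", "Unreachable"):
            managers += 1
        else:
            workers += 1
    return managers, workers

print(summarize_nodes(SAMPLE))  # (3, 5): three managers, five workers
```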

That’s it for now. If you want to play with this blueprint yourself, you can download it here. Use the CloudClient to import the blueprint into vRealize Automation.