Docker today released its promised orchestration toolkit, announced in December at DockerCon Europe. The three orchestration tools aim to demonstrate Docker’s “100 percent portability” across hosting infrastructures, including enabling hybrid cloud architectures to run containerized, distributed applications.

Indirectly responding to initial community concerns when the orchestration toolkit was announced in Amsterdam last year, Docker is stressing the freedom that developers will have to control how multi-container, multi-host applications are built, shipped and run in production.

The three orchestration tools are: Docker Machine, Docker Swarm and Docker Compose. Machine and Swarm are beta releases, while Compose is released as version 1.1.

Docker Machine

Docker Machine enables one-command automation to provision a host infrastructure and install Docker Engine. It’s well-suited to a hybrid environment, says David Messina, vice president of enterprise marketing at Docker, Inc. Sysadmins and Ops “don’t have to learn a separate set of commands to get a Docker container application up” with each data hosting infrastructure provider. With Docker Machine, Messina says, users can use one uniform command that cuts across infrastructure. Twelve drivers are available with the beta release, including Amazon EC2, Google Compute Engine, DigitalOcean and VMware.
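The one-command workflow Messina describes looks roughly like the following. This is a hedged sketch based on the beta release: driver names and flags may differ between versions, and the DigitalOcean token is a placeholder, not a real credential.

```shell
# Provision a local VirtualBox-backed Docker host named "dev"
docker-machine create --driver virtualbox dev

# The same command shape provisions a cloud host; only the driver changes.
# (The access token below is an assumed placeholder.)
docker-machine create --driver digitalocean \
    --digitalocean-access-token=<YOUR_TOKEN> \
    staging

# Point the local docker client at the new remote host and run a container
eval "$(docker-machine env staging)"
docker run -d nginx
```

The point of the uniform interface is that switching from a laptop VM to a cloud provider is a one-flag change, which is what makes the hybrid scenario practical.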

Docker is as yet unable to quantify the number of its users that might be looking to orchestrate distributed applications across a hybrid public cloud architecture, but Messina admits that “ultimately, that’s the goal more broadly: Machine is the key enabler of that, for sure.”

Docker Swarm

Docker Swarm is a clustering and scheduling tool that will automatically optimize a distributed application’s infrastructure based on the application’s lifecycle stage, container usage and performance needs.

“Swarm has multiple models for determining scheduling, including understanding how specific containers will have specific resource requirements — compute and memory being the most obvious examples,” Messina said. “Working with a scheduling algorithm, Swarm will then determine which engine and host it should be running on.” Messina gives an example where, in some applications, affinity may be an important consideration — where certain containers might be best run on the same hosts.

“The core aspect of Swarm is that as you go to multi-host, distributed applications, you want to maintain the developer experience and enable complete portability. Swarm provides that continuity, but you also want to have flexibility: for example, the ability to use a specific cluster solution for an application you are working with. This ensures cluster capabilities are portable all the way from the laptop to the production environment.”

Keeping Swarm Flexible

The Swarm release also comes with a Swarm API for ecosystem partners to create alternative or additional orchestration tools that override Docker’s Swarm optimization algorithm with something better tuned to particular use cases.

This is what Docker has been calling their “batteries-included-but-swappable” approach. Some users may be comfortable with using Docker Swarm to identify optimized clustering of a multi-container, distributed application’s architecture. Others will want to use the clustering and scheduling part of Swarm to set their own parameters, while still others will look to an ecosystem partner’s alternative orchestration optimization product to recommend the best cluster mix.

Apache Mesos’s corporate sponsor, Mesosphere, has been the first Docker ecosystem partner to create an alternative optimization product using the Swarm API. Others are expected from Amazon, Google, Joyent and Microsoft Azure.

“After Swarm was first announced, Mesosphere and Docker got together because engineers at both companies immediately saw how the two projects could work together,” said Matt Trifiro, vice president of marketing at Mesosphere.

Docker founder and CTO Solomon Hykes singled out Mesosphere’s technology as the gold standard for running containers at scale at DockerCon EU (see 35 minutes into his keynote).

Trifiro says that for distributed applications running at a large scale, Mesosphere’s orchestration tool is better suited to identifying optimized cluster and scheduling orchestration than the “batteries-included” version of Swarm.

He said there are two things to emphasize with the Mesosphere Swarm integration:

Hyperscale: For any company looking to run containers at large scale in a highly automated environment across hundreds or thousands of servers, either on premises or in the cloud, Mesosphere’s technology is the only publicly available container orchestration system proven at scale — running millions of containers at companies like Twitter, Groupon and Netflix, as well as at some of the largest consumer electronics and financial services companies.

Multitenant Diversity of Workloads: Mesosphere’s technology is the only way for an organization to run a Docker Swarm workload in a highly elastic way on the same cluster as other types of workloads. For example, you can run Cassandra, Kafka, Storm, Hadoop and Docker Swarm workloads alongside each other on a single Mesosphere cluster, all sharing the same resources. This makes much more efficient use of cluster resources and greatly reduces operational cost and complexity.

Docker Compose

Multi-container applications running on Swarm can also be built using Docker’s new Compose tool. The Compose tool uses a declarative YAML file to maintain a logical definition of all application containers and the links between them. Compose-built distributed applications can then be dynamically updated without impacting other services in the orchestration chain.
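A Compose definition of the kind described above might look like the following. This is a hedged sketch in the v1 file format current around Compose 1.1; the service names and images are illustrative assumptions, not taken from the announcement.

```shell
# Write a minimal two-service Compose file: a web container linked to a db
cat > docker-compose.yml <<'EOF'
web:
  build: .
  ports:
    - "8000:8000"
  links:
    - db
db:
  image: postgres
EOF

# Bring the whole application up with one command
docker-compose up -d

# Recreate just the web service, leaving its linked db untouched
docker-compose up -d --no-deps web
```

The `--no-deps` flag is what enables the dynamic updates mentioned above: one service can be rebuilt and restarted without disturbing the rest of the orchestration chain.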

The Subtext: We Can Play Well With Others

The way Docker has released Swarm alongside its Swarm API serves to allay some of the community fears that arose when Docker first proposed these solutions in December. Docker has been instrumental in creating an ecosystem economy for community partners who have built products that enhance the DevOps, monitoring, continual improvement, QA and other processes that need to be addressed in a Dockerized, distributed application environment. The initial fear amongst some members of the community was that the move towards creating orchestration tools directly from Docker would too strictly enforce a “Docker way of doing things”. Community members feared that, instead of being able to create an integration product as Mesosphere has done, competing orchestration tools would need to use an elaborate workaround in order to offer an alternative to what Docker was putting on the table.

Docker has repeatedly pushed the 100% portability aspect of the orchestration announcement and the “batteries-included-but-swappable” nature of the Swarm API to subtly address the issue.

“If you look at the orchestration announcement, with all of these working and planned integrations, the reality is that Docker orchestration tools are specifically open to collaboration with ecosystem partners,” Messina said. “Docker communities need to build distributed applications that are multi-container and multi-host — that’s been an absolute mandate from our community. The tools are structured in a way that they are incredibly flexible, and are set up with APIs that allow partners to develop advanced and enriched services. The whole idea here is that we want to maintain the dev experience and 100% portability. How the community wants to land those containers and optimize clustering is set up for freedom of choice.”

Feature image via Flickr Creative Commons.