Reading Time: 5 minutes

One of the features of systemd is its ability to defer the start-up of networked applications until their services are actually requested, a process referred to as socket activation. This isn’t really a new idea; systemd borrowed it from launchd, which has been in OS X since Tiger’s release in 2005, and the venerable Unix inetd has implemented a simpler version of it since the 1980s. Socket activation has a number of advantages over scripted or event-based start-up systems; in particular it decouples the application start-up order from the service description, and can even resolve circular application dependencies. Docker containers (and other types such as systemd’s nspawn or LXC) are almost exclusively network processes, so it would be useful to extend socket activation to them.

Socket activation

Socket activation works by having the systemd daemon open listening sockets on behalf of the application and only starting it when a connection comes in. It then hands the socket to the newly started application, which takes responsibility for it.
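The hand-off itself follows a very small protocol: systemd sets the environment variables LISTEN_PID and LISTEN_FDS in the started process, and the inherited sockets occupy file descriptors from 3 upwards. A minimal sketch of just the environment side of that convention (no real socket is passed here):

```shell
# systemd sets LISTEN_PID to the started process's PID (so the variables
# can't accidentally be inherited further down) and LISTEN_FDS to the
# number of passed sockets, which occupy fds 3, 4, ... onwards.
LISTEN_PID=$$ LISTEN_FDS=1 sh -c 'echo "inherited $LISTEN_FDS socket(s), starting at fd 3"'
```

An activation-aware daemon checks these variables on start-up and, if present, accepts connections on the inherited descriptors instead of binding its own.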

But one of its limitations is that it requires the activated application to be aware that it may be socket-activated; the process of accepting an existing socket, while simple, is different from creating a listening socket from scratch. Consequently a lot of widely used applications (e.g. nginx) don’t support it. The trend of containerisation exacerbates this situation by adding another layer that needs activation support. However, socket activation of containers would solve the problem for any containerised application by deferring the activation handling to the container.

Socket activation and Docker

If you Google for “docker container systemd socket activation” you will find quite a few discussions of how to achieve this, most of them concluding that it is not possible without explicit support from Docker. While Docker support would be the optimal solution, this is not the whole story. The systemd developers knew it would take some time for activation support to appear everywhere, and in version 209 they introduced systemd-socket-proxyd, a small TCP and Unix domain socket proxy. This does understand activation, and will sit between the network and our container, transparently forwarding traffic between the two. So with a few crafted units (systemd’s configuration files) we can create a socket-activation framework for Docker containers.

Caveat: The current release of Ubuntu ships systemd 208, which does not include systemd-socket-proxyd. To try the example below you need version 209 or higher; the pre-release of Debian Jessie works, as should any recent RedHat-based distribution, e.g. Fedora.

How it works

As is so often the case, an illustration may simplify things:

What is going on here:

We create a socket that listens on the port that will eventually be served by the proxy/container combination. On the first connection to the socket, systemd activates the proxy service and hands it the socket. There is also a (passive) service for the container, on which the proxy has a dependency, so when the proxy is started the container is brought up first. The proxy then shuttles all traffic between the container and the network.
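In outline (using the unit names we will create below), the chain looks like this:

```
incoming connection
        │
        ▼
nginx-proxy.socket      (systemd listening on port 80)
        │  first connection activates
        ▼
nginx-proxy.service     (systemd-socket-proxyd, inherits the socket)
        │  Requires= / After=
        ▼
nginx-docker.service    (docker start nginx8080, serving port 8080)
```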

While there are a few tricks required, conceptually this is quite simple. So let’s put it into practice with systemd and an nginx Docker container.

Getting it running

Creating the container

The first thing we’re going to want to do is create the target container. In this case we’re going to create an empty nginx container using the official nginx image:

docker create --name nginx8080 -p 8080:80 nginx

This is the point at which you would perform any configuration of the container. Note that we use create and not run; we don’t want the container to start up yet. We also name the container appropriately so we can start it later.

The only trick here is that we need to bind the container to a port other than the real target port (8080 instead of 80 in this case). This is because that port will be owned by the socket/proxy, and two processes can’t bind to the same port on the same interface.

The socket descriptor

Now we have a container, we can construct our activation pipeline. The first piece we need is the initial listening socket. This is called a socket unit and the behavior of the socket is highly configurable, but for our case the setup is quite simple:

[Socket]
ListenStream=80

[Install]
WantedBy=sockets.target

The [Socket] section describes a simple TCP socket listening on port 80. The [Install] section tells systemd when to start the socket, in this case along with any other sockets configured on the system. We place this unit into a file called /etc/systemd/system/nginx-proxy.socket. This is the only part of the chain that is active after boot, so we need to tell systemd to start it:
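As an aside, ListenStream= accepts more than a bare port number; if, for instance, you wanted the service reachable only on the loopback interface, the socket section could instead read (a variant sketch, not needed for this example):

```
[Socket]
ListenStream=127.0.0.1:80
```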

systemctl enable nginx-proxy.socket
systemctl start nginx-proxy.socket

The proxy service

When systemd receives a connection on the socket it will automatically look for a service with the same name and start it. So we need to create the service file /etc/systemd/system/nginx-proxy.service:

[Unit]
Requires=nginx-docker.service
After=nginx-docker.service

[Service]
ExecStart=/lib/systemd/systemd-socket-proxyd 127.0.0.1:8080

The [Unit] section describes the dependencies of the service. In this case we are telling systemd that before starting the proxy we need the actual container to be up. We’ll configure that service below. The [Service] section is responsible for starting the proxy process; there are many powerful things that can be configured here, such as what to do if the process fails, but for our purposes a simple start is OK. Note that we forward to 8080, where the container will be listening, and that we don’t tell it to listen on port 80; it just uses the socket it’s handed by systemd. Also notice the absence of an [Install] section; this service is not started by default, only when the socket activates it.
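One refinement worth knowing about, although it requires a much newer systemd than this example (version 246 or later): systemd-socket-proxyd can be told to exit again after a period with no connections, handing the port back to the socket unit until the next request arrives. A sketch of the alternative ExecStart line:

```
[Service]
ExecStart=/lib/systemd/systemd-socket-proxyd --exit-idle-time=5min 127.0.0.1:8080
```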

Starting the Docker container

As mentioned above, the proxy triggers the start-up of the container via the Requires=/After= dependencies on the service configured in the file /etc/systemd/system/nginx-docker.service, which looks like:

[Unit]
Description=nginx container

[Service]
ExecStart=/usr/bin/docker start -a nginx8080
ExecStartPost=/bin/sleep 1
ExecStop=/usr/bin/docker stop nginx8080

The basic concept is the same as the proxy service above. The ExecStart line tells systemd how to start the container; systemd prefers processes that don’t run in the background, so we add the -a flag, which runs the container in the foreground and has the added advantage of forwarding the nginx logs to systemd’s journald logger. The ExecStop line tells systemd how to stop the container should someone issue systemctl stop nginx-docker.

The tricky bit here is the ExecStartPost line; this is run by systemd immediately after the main process is started, and in this case it sleeps for 1 second before continuing. This is necessary because, while Docker can start containers very quickly, the process inside the container may take slightly longer to initialise. systemd is also very fast, so the proxy may forward the initial connection before nginx is ready to receive it. We therefore add a small delay to give Docker/nginx a chance to start; it’s hacky, but it works (but see below).

Done (for now)

And that’s it; on boot systemd will start a socket, and on the first connection a cascade of dependencies will result in an nginx Docker container responding to requests via a proxy. Easy.

Improving the container service

Anyone who’s spent some time trying to optimise system start-up times comes to hate arbitrary sleeps in code or configuration. They either needlessly delay start-up on fast systems, or cause random hard-to-debug failures. The sleep in the ExecStartPost above grates on my nerves, so I’d like to remove it. What we’d really like to do is check whether the port is up, and sleep only if it is not. We would also like to fail if the port takes too long to come up. To do this I’m going to use netcat and a wrapper script:

#!/bin/bash
host=$1
port=$2
tries=600

for i in $(seq $tries); do
    if /bin/nc -z "$host" "$port" > /dev/null; then
        # Ready
        exit 0
    fi
    /bin/sleep 0.1
done

# FAIL
exit 1

This script takes a host and port, checks whether it is responding, and will retry every 10th of a second for up to 1 minute before failing. We install this under /usr/local/bin/waitport (and make it executable). Now our nginx-docker.service file will look like:

[Unit]
Description=nginx container

[Service]
ExecStart=/usr/bin/docker start -a nginx8080
ExecStartPost=/usr/local/bin/waitport 127.0.0.1 8080
ExecStop=/usr/bin/docker stop nginx8080

And that’s it; the waitport script can be tuned to whatever parameters make sense on your systems, and will usually return immediately if the container is quick to start.