🐳 24 random docker tips

Csaba Palfi, Dec 2014

We love docker and have had it in production since 0.8 at TES Global (my current client). A couple of us were able to attend the training at DockerCon EU thanks to Contino. Here are some of the tips and tricks that will hopefully be useful for anyone who is already familiar with docker basics.

Just pipe docker ps output to less -S so that the table rows are not wrapped:

docker ps -a | less -S

docker logs won't follow the logs by default unless you use the -f flag:

docker logs -f <containerid>

docker inspect spits out a lot of JSON by default. You can use jq to extract a single key, or you can use the built-in Go templating in docker inspect like below:

docker inspect --format '{{.State.Running}}' $(docker ps -lq)
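The jq route mentioned above might look like the following sketch (assuming jq is installed; note that docker inspect returns a JSON array, so we index into the first element):

```shell
# extract a single key from the inspect output of the latest container
docker inspect $(docker ps -lq) | jq '.[0].State.Running'
```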

This one is pretty well-known if you follow the docker releases. exec was introduced in 1.3 and allows you to run a new process within a running container. There's no need for running sshd in the container or installing nsenter on the host anymore.
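For example, a common use of exec is getting an interactive shell in a running container (the container name my_container here is hypothetical, and the container needs bash installed):

```shell
# start an interactive bash session inside an already running container
docker exec -it my_container /bin/bash
```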

You can not only build images from local Dockerfiles but can simply give docker build a git repo URL and it takes care of the rest.
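A sketch of building straight from a repository URL (the repo here is illustrative; docker clones it and builds from the Dockerfile in its root):

```shell
# clone the repo and build from its Dockerfile in one step
docker build -t myimage https://github.com/example/example-app.git
```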

Default images (e.g. ubuntu) don't include package lists to keep them smaller hence the apt-get update in pretty much any base Dockerfile.

Be careful with package installations as those commands are cached as well, meaning you may get different versions when busting the cache, or lag behind on security updates when caching them for too long.
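One common way to keep the package list update and the installation in the same cache layer is to chain them in a single RUN instruction (the package name here is just an example):

```dockerfile
FROM ubuntu
# update and install in one layer so a stale cached package list
# is never reused with a newer install command
RUN apt-get update && apt-get install -y curl
```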

There's an official, truly empty docker image on Docker Hub. It's called scratch. If you want, you can start your images FROM scratch. Most of the time you're better off starting from alpine if you want a really small base image (a few MBs), as it has a shell and a nice package manager, too.

If you don't specify a tag then FROM will just use latest. Be careful with that and make sure you specify a version if you can.

There are a few places where you can specify commands in your Dockerfile (e.g. CMD, RUN). docker supports two ways of doing that. If you just write the command then docker will wrap it in sh -c. You can also write it as an array of strings (e.g. [ "ls", "-a" ]). The array notation doesn't need a shell to be available within the container (as it uses Go's exec) and that's the preferred syntax according to the docker guys.
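A minimal illustration of the two forms side by side:

```dockerfile
# shell form: docker runs this as /bin/sh -c "ls -a"
RUN ls -a

# exec form: no shell involved, the preferred syntax
RUN ["ls", "-a"]
```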

Both ADD and COPY add local files when building a container, but ADD does some additional magic like fetching remote files and unpacking compressed tar archives. Only use ADD if you understand this difference.

Each command creates a new temporary image and runs in a new shell, hence a cd somedir or an export var=value in your Dockerfile won't carry over to the next command. Use WORKDIR to set your working directory across multiple commands and ENV to set environment variables.
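A short sketch of the difference (paths are illustrative):

```dockerfile
# this cd only affects this single RUN command and is then forgotten
RUN cd /tmp

# WORKDIR and ENV persist across all the commands that follow
WORKDIR /app
ENV NODE_ENV production

# this runs in /app with NODE_ENV set
RUN pwd
```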

CMD is the default command to execute when an image is run. The default ENTRYPOINT is /bin/sh -c and CMD is passed into that as an argument. We can override ENTRYPOINT in our Dockerfile and make our container behave like an executable taking command line arguments (with default arguments in CMD in our Dockerfile).

ENTRYPOINT ["/bin/ls"]
CMD ["-a"]

docker run training/ls -l

ADD invalidates your cache if files have changed. Don't invalidate the cache by adding frequently changing stuff too high up in your Dockerfile. Add your code last, libraries and dependencies first. For node.js apps that means adding your package.json first, running npm install and only then adding your code.
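For a node.js app that layering might look like the following sketch (paths and image name are illustrative):

```dockerfile
FROM node
WORKDIR /app

# dependencies first: this layer is only rebuilt when package.json changes
COPY package.json /app/package.json
RUN npm install

# code last: frequent changes here don't bust the npm install cache
COPY . /app
```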

Docker has an internal pool of IPs which it uses for container IP addresses. These are invisible to the outside by default and accessible via a bridge interface.

docker run accepts explicit port mappings as parameters or you can specify -P to map all ports automatically. The latter has the advantage of preventing conflicts and looking up the assigned ports can be done as follows:

docker port <containerId> <portNumber>
docker inspect --format '{{.NetworkSettings.Ports}}' <containerId>

Each container has its IP in a private subnet (which is 172.17.0.0/16 by default). The IP can change across restarts but can be looked up should you need it:

docker inspect --format '{{.NetworkSettings.IPAddress}}' <containerId>

docker tries to detect conflicts and will use a different subnet if needed.

docker run --net=host allows reusing the network stack of the host. Don't do this.

Volumes are a way to bypass the copy-on-write filesystem for a directory or a single file with close to zero overhead (bind mounting).

Data written to a volume at build time isn't committed to the image, so there's not much point in writing to your volumes when the image is built.

Volumes are read-write by default, but there's an :ro flag to mount them read-only.

Volumes exist independently of containers and remain available as long as at least one container references them. They can be shared between containers with --volumes-from.
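A sketch of these volume flags in practice (paths, image, and container names are hypothetical):

```shell
# bind mount a host directory into a container, read-only
docker run -v /host/config:/etc/myapp:ro --name app1 myimage

# share all of app1's volumes with a second container
docker run --volumes-from app1 --name app2 myimage
```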

You can just mount the host's docker.sock to give a container access to the docker API. You can then run docker commands from within that container; this way a container can even kill itself. There's no need to run the docker daemon within a container.
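A sketch of mounting the socket (the image name is hypothetical and needs the docker client installed for this to be useful):

```shell
# mount the host's docker socket so the container can talk to the daemon
docker run -v /var/run/docker.sock:/var/run/docker.sock -it myimage /bin/sh
```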

...and treat it accordingly. Docker API access gives full root access: you can map / as a volume and read or write anything, or just take over the host's network with --net=host. Don't expose the docker API publicly, or use TLS if you do.

By default docker runs everything as root, but you can use USER in Dockerfiles. There's no user namespacing in docker, so the container shares uids with the host but not user names; hence you need to add the users inside the container.
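A sketch of dropping root in a Dockerfile (the user name is illustrative):

```dockerfile
FROM ubuntu
# create the user inside the container; only the uid is shared with the host
RUN useradd -m appuser
USER appuser
CMD ["whoami"]
```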

There was no access control on the docker API until 1.3, when they added TLS. They use mutual authentication: the client and server both have a key. Treat those keys like root passwords.

Boot2docker has TLS on by default since 1.3 and also generates the keys for you.

Otherwise generating keys requires OpenSSL 1.0.1, and the docker daemon needs to be run with the --tlsverify flag; it will then listen on the secure docker port (2376).

We're hopefully getting more granular access control soon instead of all or nothing.