You can parametrize this build process, but the lack of mounts or volumes exposes you to some pretty annoying slowness if, for example, you have to build external packages. This problem is still pervasive in Python: most of what's on PyPI is source packages only. Even though you can now publish Linux binaries on PyPI, it will be years until most packages publish those manylinux1 wheels. And even if we had wheels for everything, there's still the question of network slowness. Setting up caching proxies is inconvenient.

Most Dockerfiles I've seen have something like this:
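A minimal sketch of that pattern (the base image and file names here are assumptions, not the exact original snippet):

```dockerfile
FROM python:2.7

# copy the pinned dependency list and install everything in one step
COPY requirements.txt /tmp/requirements.txt
RUN pip install --requirement /tmp/requirements.txt

# then copy the application itself
COPY . /app
WORKDIR /app
```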

Now for simple projects this is fine, because you only have a handful of dependencies. But for larger projects, hundreds of dependencies are the order of the day. Changing them or upgrading versions (and you should always pin versions) will introduce serious delays in build times. Because the container running the build process is pretty insulated (no volumes or mounts, remember?), pip can't really cache anything.

Staging the build process

One way to solve this is to have a "builder image" that you run to build wheels for all your dependencies. When you run an image, you can use volumes and mounts.

Before jumping in, let's look briefly at the file layout. I like to have a docker directory with another level for each kind of image, quite similar to the layout the builder for the official images uses. And no weird filenames, just Dockerfile everywhere:

docker
├── base
│   └── Dockerfile
├── builder
│   └── Dockerfile
├── deps
│   └── Dockerfile
├── web
│   └── Dockerfile
├── worker
│   └── Dockerfile
└── ...

In this scenario we'd deploy two images: web and worker. The inheritance chain would look like this:

buildpack-deps:xenial ➜ builder

ubuntu:xenial ➜ deps ➜ base ➜ web

ubuntu:xenial ➜ deps ➜ base ➜ worker

In which:

builder has development libraries, compilers and other stuff we don't want in production.

deps only has Python and the dependencies installed.

base has the source code installed.

web and worker have specific customizations (like installing Nginx or different settings).

And in .dockerignore we'd have:

# just ignore that directory, we don't need that stuff when we have "." as the context
docker

This layout might seem artificial, but it's not:

Both the worker and web images need the same source code.

The deps and base images are separate because their contexts are distinct: one needs a bunch of wheels and the other only needs the sources. This setup allows us to skip building the deps image if the requirement files did not change.

The web and worker images do not need to have the source code in the context. This makes for faster build times. For development purposes we can just mount the source code. More about that later.
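That "skip building deps if the requirements did not change" check can be done with a small wrapper around the build script. A sketch, assuming a stamp file next to the wheel cache (the stamp path and the pinned requirement are made up, and the docker invocation is replaced with an echo here):

```shell
#!/bin/sh
set -eu
# work in a throwaway directory with a stand-in requirements file
workdir=$(mktemp -d)
cd "$workdir"
echo "Django==1.9.5" > requirements.txt      # hypothetical pinned requirement

requirements_changed() {
    stamp=".dockercache/requirements.sha"
    new=$(sha256sum requirements.txt | cut -d' ' -f1)
    if [ -f "$stamp" ] && [ "$(cat "$stamp")" = "$new" ]; then
        return 1        # checksum unchanged - skip the build
    fi
    mkdir -p .dockercache
    printf '%s\n' "$new" > "$stamp"
    return 0
}

if requirements_changed; then
    echo "would run: docker build --tag=app-deps docker/deps"
else
    echo "skipping app-deps build"
fi
```

On the first run the stamp is missing, so the build runs and the checksum is recorded; subsequent runs with an unchanged requirements.txt skip it.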

In builder/Dockerfile there would be:

# we start from an image with build deps preinstalled
FROM buildpack-deps:xenial

# seems acceptable for building, note the notes above
# about C.UTF-8 - it's not really good for running apps
ENV LANG C.UTF-8

# we'd add all the "-dev" packages we need here
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        python2.7 python2.7-dbg python2.7-dev libpython2.7 \
        strace gdb lsof locate \
    && rm -rf /var/lib/apt/lists/*

ENV PYTHON_PIP_VERSION 8.1.1
RUN set -eEx \
    && curl -fSL 'https://bootstrap.pypa.io/get-pip.py' | python2.7 - \
        --no-cache-dir --upgrade pip==$PYTHON_PIP_VERSION

ARG USER
ARG UID
ARG GID

# we set some default options for pip here
# so we don't have to specify them all the time
# this will make pip additionally look for available wheels here
ENV PIP_FIND_LINKS=/home/$USER/wheelcache
# and this is the default output dir when we run `pip wheel`
ENV PIP_WHEEL_DIR=/home/$USER/wheelcache
ENV PIP_TIMEOUT=60
# one network request less, we don't care about latest version
ENV PIP_DISABLE_PIP_VERSION_CHECK=true

RUN echo "Creating user: $USER ($UID:$GID)" \
    && groupadd --system --gid=$GID $USER \
    && useradd --system --create-home --gid=$GID --uid=$UID $USER \
    && mkdir /home/$USER/wheelcache

WORKDIR /home/$USER

The interesting part here is the USER, UID and GID build arguments. Unless you do something special, the processes inside the container run as root. This is fine, right? That's the whole point of using a container - processes in the container actually have all sorts of limitations, so it doesn't matter what user runs inside. However, if you mount something from the host inside the container, then the owner of any new file inside that mount is going to be the same user the container runs with. The result is that you're going to get a bunch of files owned by root on the host. Not nice.

Because I don't do development with a root account, and because user namespaces are surprisingly inconvenient to use, I have resorted to recreating my user inside the container. It needs to have the exact same uid and gid, otherwise I get files owned by an account that doesn't exist.

Similarly to what was shown before, deps/Dockerfile would have:

FROM ubuntu:xenial

RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8

RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        ca-certificates curl \
        strace gdb lsof locate net-tools htop \
        python2.7-dbg python2.7 libpython2.7 \
    && rm -rf /var/lib/apt/lists/*

ENV PYTHON_PIP_VERSION 8.1.1
RUN set -eEx \
    && curl -fSL 'https://bootstrap.pypa.io/get-pip.py' | python2.7 - \
        --no-cache-dir --upgrade pip==$PYTHON_PIP_VERSION

COPY .wheels /wheels
RUN pip install --force-reinstall --ignore-installed --upgrade \
        --no-index --use-wheel --no-deps /wheels/* \
    && rm -rf /wheels

And base/Dockerfile :

FROM app-deps

# copy the application files and add them on the import path
RUN mkdir /app
WORKDIR /app
COPY setup.py /app/
COPY src /app/src
# there are other ways (like a .pth file) but this one allows
# for easy setup of various bin scripts the app might need
RUN python2.7 setup.py develop

# create a user for the application and install basic tools
# to change user (pysu) and wait for services (holdup)
ARG USER=app
ENV USER=$USER
RUN echo "Creating user: $USER" \
    && groupadd --system $USER \
    && useradd --system --create-home --gid=$USER --base-dir=/var $USER \
    && pip install pysu==0.1.0 holdup==1.0.0 \
    && pysu $USER id  # this last one just tests that pysu works
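For reference, the .pth alternative mentioned in the comment works by dropping a file listing extra paths into a site directory. A self-contained sketch using a throwaway directory (all paths here are made up):

```python
import os
import site
import sys
import tempfile

# stand-ins for the image's site directory and source directory
d = tempfile.mkdtemp()
src = os.path.join(d, "src")
os.makedirs(src)

# a .pth file contains one path per line; the site machinery
# appends each line that names an existing directory to sys.path
with open(os.path.join(d, "app.pth"), "w") as f:
    f.write(src + "\n")

site.addsitedir(d)  # what Python does for site-packages at startup
print(src in sys.path)  # → True
```

The downside, as the Dockerfile comment notes, is that a bare .pth file doesn't set up the console scripts that `setup.py develop` generates.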

For web/Dockerfile we can have something like:

FROM app-base

RUN apt-get update \
    && apt-get install -yq --no-install-recommends nginx-core supervisor \
    && rm -rf /var/lib/apt/lists/*

COPY site.conf /etc/nginx/sites-enabled/default
COPY supervisor.conf /etc/supervisor/conf.d/
COPY entrypoint.sh /
RUN echo "daemon off;" >> /etc/nginx/nginx.conf

EXPOSE 80
CMD ["/entrypoint.sh"]
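The supervisor.conf copied above could look something like this sketch (the program names and the app command are assumptions, not the original file):

```ini
; run both nginx and the application under supervisord
[program:nginx]
command=/usr/sbin/nginx
autorestart=true

[program:app]
; drop privileges with pysu, then start the app (hypothetical entry point)
command=pysu app python2.7 -m app.wsgi
autorestart=true
```

Note that nginx runs in the foreground here - that's why the Dockerfile appends "daemon off;" to nginx.conf.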

To build the images we can run this:

#!/bin/sh
set -eux

docker build --tag=app-builder \
    --build-arg USER=$USER \
    --build-arg UID=$(id --user $USER) \
    --build-arg GID=$(id --group $USER) \
    docker/builder

# we run this image two times, once to prime a wheel cache
mkdir -p .dockercache
docker run --rm \
    --user=$USER \
    --volume="$PWD/requirements.txt":/requirements.txt:ro \
    --volume="$PWD/.dockercache":/home/$USER \
    app-builder \
    pip wheel --requirement=/requirements.txt

# and the second time to create the final wheel set
rm -rf "$PWD/docker/deps/.wheels"
mkdir "$PWD/docker/deps/.wheels"
docker run --rm \
    --user=$USER \
    --volume="$PWD/requirements.txt":/requirements.txt:ro \
    --volume="$PWD/.dockercache":/home/$USER \
    --volume="$PWD/docker/deps/.wheels":/home/$USER/wheels \
    app-builder \
    pip wheel --wheel-dir=wheels \
        --requirement=/requirements.txt

# and now there are going to be tons of wheels in "docker/deps/.wheels/"
docker build --tag=app-deps docker/deps
docker build --tag=app-base --file=docker/base/Dockerfile .
# this is why we simply ignore "docker/" in .dockerignore -- it
# would inflate the context a lot and we don't need the wheels in this step
docker build --tag=app-web docker/web