Docker-first Python development

In this article, we'll talk about how to start using Docker for Python development.

I've always been a bit annoyed at how difficult it can be to avoid shipping test code and dependencies with Python applications. A typical build process might look something like:

- create a virtual environment
- install service dependencies
- install test dependencies
- run tests
- package up code and dependencies into an RPM
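As a concrete sketch of that flow (the two requirements files are created empty here so the snippet is self-contained; a real project would list actual dependencies):

```shell
# Traditional venv-based flow: service and test dependencies end up
# side by side in the same virtual environment.
: > requirements.txt        # stand-in for real service dependencies
: > test_requirements.txt   # stand-in for real test dependencies

python3 -m venv .venv
.venv/bin/pip install -r requirements.txt        # service deps
.venv/bin/pip install -r test_requirements.txt   # test deps
.venv/bin/pip freeze > installed.txt             # both sets, intermingled
```

Packaging `.venv` at this point would ship the test dependencies too, which is exactly the problem described here.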

At this point, my service dependencies and test dependencies are intermingled in the virtual environment. To disentangle them, I have to destroy the venv and create a new one, reinstalling only the service dependencies.

Regardless of the packaging method, I don't want to pull down dependencies when I deploy my service.

At Twilio, we are in the process of embracing container-based deployments. Docker containers are great for Python services: you no longer have to worry about juggling multiple Python versions or virtual environments. You just use an image with exactly the version of Python your service needs and install your dependencies directly into the system.
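For instance (the version tag here is illustrative), a service image can pin its interpreter and install dependencies straight into the system site-packages, no venv required:

```dockerfile
# Pin the exact interpreter version the service needs...
FROM python:3.7-slim
COPY requirements.txt ./
# ...and install dependencies into the system site-packages.
RUN pip install -r requirements.txt
```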

One thing I've noticed is that while many services are built and packaged as Docker images, few use exclusively Docker-based development environments. Virtual environments and pyenv .python-version files abound!

I recently started writing a new Python service with the knowledge that this would be exclusively deployed via containers. This felt like the right opportunity to go all in on containers and build out a strategy for Docker-first localdev. I set out with the following goals:

- don't ship tests and test dependencies with the final image
- tests run as part of the Docker build
- failing tests will fail the build
- IDE (PyCharm) integration

A bit of research (aka Googling) suggested that multi-stage builds might be useful in this endeavor. Eventually I ended up with a Dockerfile that looks something like this:

```dockerfile
FROM python:3 as builder
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY src ./src

FROM builder as tests
COPY test_requirements.txt ./
RUN pip install -r test_requirements.txt
COPY tests ./tests
RUN pytest tests

FROM builder as service
COPY docker-entrypoint.sh ./
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3000
```

When building an image from this Dockerfile, Docker will build three images, one for each of the `FROM` statements in the Dockerfile. If you've worked with Dockerfiles before, you know that statement ordering is critical for making efficient use of layer caching, and multi-stage builds are no different. Docker builds each of the images in the order they are defined. All of the intermediate stages are ephemeral; only the last image is output by the build process.
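Concretely, a plain build of this file (the image name `myservice` is a placeholder, and this needs a running Docker daemon, so it's just a sketch):

```shell
# Builds builder, then tests, then service, in that order;
# only the final (service) stage ends up tagged as myservice.
docker build -t myservice .
```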

In this case, the first stage ( `builder` ) builds an image with all the service dependencies and code. The second stage ( `tests` ) installs the test requirements and test code, and runs the tests. If the tests pass, the build process continues on to the next stage; if the tests fail, the entire build fails. This ensures that only images with passing tests are built! Finally, the last stage ( `service` ) builds on top of our `builder` image, adding the entrypoint script, defining the entrypoint command, and exposing port 3000.

So how did I do with respect to the initial goals?

- don't ship tests and test dependencies with the final image ✓
- tests run as part of the Docker build ✓
- failing tests will fail the build ✓
- IDE (PyCharm) integration ❌

I've met most of the goals, but what about the actual development experience? If I open up PyCharm and import my source code, it complains that I have unsatisfied dependencies :( Fortunately, PyCharm Professional can use a Python interpreter from inside a Docker image! Cool, but I have to build the image before I can use its interpreter. And thanks to goal #3, if my tests are failing, I can't build my image...

Lucky for us, we can tell Docker to build one of our intermediate stages explicitly, stopping the build after the desired stage. Now if I run `docker build --target builder .`, I can select the interpreter from the `builder` image.

Uh oh! The builder image doesn't include my test dependencies! Of course, that's the whole point of the builder image. Let's add another stage we can use for running and debugging our tests.

```dockerfile
FROM python:3 as builder
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY src ./src

FROM builder as localdev
COPY test_requirements.txt ./
RUN pip install -r test_requirements.txt
COPY tests ./tests
CMD ["pytest", "tests"]

FROM localdev as tests
RUN pytest tests

FROM builder as service
COPY docker-entrypoint.sh ./
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3000
```

With the `localdev` stage, I can build an image with all my service and test code and dependencies. I can even make the localdev container run the tests by default when the container is run. By using the interpreter from this image, I can now debug my failing tests.
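Putting that together (the image name is a placeholder, and this needs a Docker daemon to actually run):

```shell
# Build only up to the localdev stage, then run it;
# the default CMD runs the test suite inside the container.
docker build --target localdev -t myservice:localdev .
docker run --rm myservice:localdev
```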

Let's take a look again at the initial goals:

- don't ship tests and test dependencies with the final image ✓
- tests run as part of the Docker build ✓
- failing tests will fail the build ✓
- IDE (PyCharm) integration ✓

Hooray!

Except there's one thing still bothering me: changes to the service code trigger a reinstallation of our test dependencies. Because the localdev stage builds on top of a stage that copies in the source, any change to `src` invalidates every cached layer after it, including the test-dependency install. Yuck! Let's take another whack at our Dockerfile:

```dockerfile
FROM python:3 as base
COPY requirements.txt ./
RUN pip install -r requirements.txt

FROM base as builder
COPY src ./src

FROM base as localdev
COPY test_requirements.txt ./
RUN pip install -r test_requirements.txt
COPY src ./src
COPY tests ./tests
CMD ["pytest", "tests"]

FROM localdev as tests
RUN pytest tests

FROM builder as service
COPY docker-entrypoint.sh ./
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3000
```

OK, that seems pretty complicated; here's a graph of our image topology:

```
python:3
  └─ base
      ├─ builder ─── service
      └─ localdev ── tests
```

I don't love that the `builder` and `localdev` stages both copy over the source directory, but the real question is: does this still meet our initial goals while avoiding excessive re-installs of test dependencies? Yeah, it seems to work pretty well. Thanks to Docker's layer caching, we rarely have to re-install dependencies.

Originally published by [Jeremy Moore](https://dev.to/jeremywmoore) at dev.to
