Local Development with Docker Compose

Docker Compose is a tool for defining and running multi-container Docker applications. In this article you’ll learn why Docker Compose is great for local development, how to push your Docker images to Heroku for deployment, and some Compose tips and tricks.

Introduction to Docker Compose

Let’s start with a simple Python-based multi-container application. This example app comprises a web frontend, Redis for caching, and Postgres as the database. With Docker, the web frontend, Redis, and Postgres each run in a separate container.

You can use Docker Compose to define your local development environment, including environment variables, the ports you need accessible, and the volumes to mount. Everything is defined in docker-compose.yml, which is used by the docker-compose CLI.

The following is the docker-compose.yml for the application:

version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    env_file: .env
    depends_on:
      - db
    volumes:
      - ./webapp:/opt/webapp
  db:
    image: postgres:latest
    ports:
      - "5432:5432"
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"

The web service

The first section defines the web service. It opens port 5000, sets environment variables defined in .env, and mounts our local code directory as a volume.

services:
  web:
    build: .
    ports:
      - "5000:5000"
    env_file: .env
    depends_on:
      - db
    volumes:
      - ./webapp:/opt/webapp

The db service

The next service is the Postgres database, which opens port 5432 and uses the latest official Postgres image on Docker Hub.

db:
  image: postgres:latest
  ports:
    - "5432:5432"

The redis service

This section defines our Redis service, which opens port 6379 and uses the official Redis image on Docker Hub.

redis:
  image: redis:alpine
  ports:
    - "6379:6379"

Now that the local development environment is defined in docker-compose.yml, you can spin up all three services with one command:

$ docker-compose up

The following command confirms that all three containers are running:

$ docker ps
CONTAINER ID        IMAGE               COMMAND
8e422ff92239        python_web          "/bin/sh -c 'python a"
4ac9ecc8a2a3        python_db           "/docker-entrypoint.s"
2cbc8febd074        redis:alpine        "docker-entrypoint.sh"

The benefits of Docker Compose

Using Docker and defining your local development environment with Docker Compose provides you with a number of benefits:

By running Redis and Postgres in Docker containers, you don’t have to install or maintain the software on your local machine.

Your entire local development environment can be checked into source control, making it easier for other developers to collaborate on a project.

You can spin up the entire local development environment with one command: docker-compose up

Pushing your containers to Heroku

When you’re satisfied with the build, you can then push the web frontend directly to the Heroku container registry for deployment (popular CI/CD tools are also supported).

$ heroku container:push web

The Python application depends on Postgres and Redis, which you do not push to Heroku. Instead, use Heroku add-ons in production.

Use Heroku add-ons in production

For local development: use official Docker images, such as Postgres and Redis.

For staging and production: use Heroku add-ons, such as Heroku Postgres and Heroku Redis.

Using official Docker images locally and Heroku add-ons in production provides you with the best of both worlds:

Parity: You get parity by using the same services on your local machine as you do in production.

Reduced ops burden: By using add-ons, Heroku (or the add-on provider) takes on the ops burden of replication, availability, and backup.

Docker Compose tips and tricks

When using Docker Compose for local development, there are a few tips and tricks we think can help make you more successful.

Create a .env file to avoid checking credentials into source code control

By using Docker and Docker Compose, you can check your local development environment setup into source code control. To handle sensitive credentials, create a .env environment file with your credentials and reference it within your Compose YAML. Your .env should be added to your .gitignore and .dockerignore files so it is not checked into source code control or included in your Docker image, respectively.

services:
  web:
    env_file: .env
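As a sketch, the .env file itself might look like the following; the variable names and values here are hypothetical and should be replaced with your own credentials:

DATABASE_URL=postgres://db:5432/webapp
REDIS_URL=redis://redis:6379
SECRET_KEY=replace-me-locally

Each line becomes an environment variable inside the web container, so your application code can read configuration without hard-coding secrets.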

Mount your code as a volume to avoid image rebuilds

Any time you make a change to your code, you need to rebuild your Docker image, which is a manual and potentially time-consuming step. To avoid this, mount your code as a volume: rebuilds are then no longer necessary when code changes.

services:
  web:
    volumes:
      - ./webapp:/opt/webapp

Use hostnames to connect to containers

By default Compose sets up a single network for your app. When you name a service in your Compose YAML, it creates a hostname that you can then use to connect to the service.

Our services in Compose YAML:

services:
  web:
  redis:
  db:

Our connection strings:

postgres://db:5432
redis://redis:6379
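To illustrate, here is a minimal Python sketch that parses these connection strings. The URL values are the ones above; the helper name is an assumption for illustration. Inside the Compose network, the hostnames are simply the service names, which Docker’s embedded DNS resolves for you:

```python
from urllib.parse import urlparse

# Hypothetical connection URLs: the hostnames ("db", "redis") are
# just the Compose service names, resolvable inside the app's network.
POSTGRES_URL = "postgres://db:5432"
REDIS_URL = "redis://redis:6379"

def service_endpoint(url):
    """Return the (hostname, port) pair a client would connect to."""
    parts = urlparse(url)
    return parts.hostname, parts.port

print(service_endpoint(POSTGRES_URL))  # ('db', 5432)
print(service_endpoint(REDIS_URL))     # ('redis', 6379)
```

Your database or Redis client library would consume these URLs directly; the point is that no IP addresses or links configuration are needed.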

Running Compose in background mode

When you execute docker-compose up, your project runs in the foreground, displaying the output of your services. You can shut the services down gracefully with Ctrl+C.

A lesser-known option is docker-compose up -d, which starts your containers in the background (i.e., detached mode); you can tear down the Compose setup with docker-compose down. You can check the logs of services running in detached mode with docker-compose logs.

Multi-dockerfile project structure

When you have multiple services, we suggest creating a subdirectory for each Docker image in your project, with the Dockerfile stored in each respective directory:

/web/Dockerfile
/redis/Dockerfile
/db/Dockerfile
/worker/Dockerfile

We don’t recommend storing all Dockerfiles in the project home directory, since it makes distinguishing between services harder.
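With this layout, each service’s build context in docker-compose.yml points at its own subdirectory. A sketch (the worker service here is hypothetical):

services:
  web:
    build: ./web
  worker:
    build: ./worker

Services that use stock images, such as db and redis above, can keep using image: and need no build context at all.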

Run your containers as a non-root user

It’s a good security practice to run your containers as a non-root user; but more importantly, containers you push to Heroku will run without root access. By testing your containers locally as a non-root user, you can ensure they will work in your Heroku production environment. Learn more in the container registry documentation.
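As a sketch, the end of a Dockerfile might create and switch to a non-root user before starting the app; the username, paths, and start command here are assumptions for illustration:

FROM python:3
WORKDIR /opt/webapp
COPY . .
# Create an unprivileged user and hand it ownership of the app directory
RUN useradd -m webapp && chown -R webapp /opt/webapp
USER webapp
CMD ["python", "app.py"]

Everything after the USER instruction, including the container’s main process, runs as the unprivileged user, mirroring how the container will run on Heroku.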

Further reading