In the previous article about Microservices with Python, I talked about how to create a basic API in a few simple steps. You can run that directly with Python, but let's say we have to integrate it with other systems, such as a database, Elasticsearch or RabbitMQ.

As I mentioned in the previous article, you can find all the code I am writing for these articles in this GitHub repo: https://github.com/ssola/python-flask-microservice

The third part is available at this link.

What is Docker?

Docker is the world’s leading software container platform. Developers use Docker to eliminate “works on my machine” problems when collaborating on code with co-workers. Operators use Docker to run and manage apps side-by-side in isolated containers to get better compute density.

Docker allows us to have a separate container for each of our dependencies. In this example we are going to include two dependencies in our project:

Elasticsearch

RabbitMQ

But before starting, I need to explain some concepts:

Image

A Docker image is a read-only template with instructions for creating a Docker container. For example, an image might contain an Ubuntu operating system with Apache web server and your web application installed.

Container

A Docker container is a runnable instance of a Docker image. You can run, start, stop, move, or delete a container using the Docker API or CLI commands. When you run a container, you can provide configuration metadata such as networking information or environment variables.

Registry

A Docker registry is a library of images. A registry can be public or private, and can be on the same server as the Docker daemon or Docker client, or on a totally separate server.

Creating your docker-compose

From this step onwards I am assuming you have Docker installed on your machine. In case you do not have it installed, just follow this link.

It is a common practice to put the docker-compose.yml and Dockerfile in the root of your project. With this approach, you can share your development environment with anyone cloning the project.

Docker Compose allows you to define the many containers needed for your service. For instance, in the Docker Compose file we can declare that we need a MySQL instance and an Elasticsearch instance up and running. We can set both in the same file, and then with a single command we can bring those services up or down.

A Dockerfile allows you to create the recipe for a new container. In this case, let's imagine we need to create a new container to run our application. With a Dockerfile we can define how to:

Install Python 3.6

Clone my project

Make it run

Understanding the Dockerfile

The Dockerfile allows you to define a recipe to build your image. Starting from a given base image, like alpine, you can set a list of commands to be executed to reach some desired state, for instance, running a Python app.

This is the definition of our recipe:

Recipe for a Python 3.5 image
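As a sketch, a recipe along those lines could look like the following. The directory path, requirements file and entry-point script are assumptions for illustration, not necessarily the names used in the repo:

```dockerfile
# Start from a minimal Alpine image that already ships Python 3.5
FROM python:3.5-alpine

# Create a directory for our application code (path is an assumption)
RUN mkdir -p /opt/app
WORKDIR /opt/app

# Install the Python dependencies first so Docker can cache this layer
COPY requirements.txt /opt/app/
RUN pip install -r requirements.txt

# Copy the rest of the application
COPY . /opt/app

# Run the Flask application (entry-point name is an assumption)
CMD ["python", "app.py"]
```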

It basically takes an alpine image with Python 3.5 already installed. This is nice because it saves us some time: we only need to create a directory for our files, install the dependencies of our application, and that is it.

This is a simple example, but for a production-ready image we need to think about:

Setting environment variables depending on the production/staging environment.

Tuning the image with production-ready settings for Flask.

Storing the image in a private registry to be able to do immutable deployments.
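To illustrate the first point, here is a minimal, hypothetical sketch of picking Flask settings based on an environment variable. The variable name `APP_ENV` and the config values are assumptions for illustration, not part of the original project:

```python
import os

# Hypothetical settings per environment; values are illustrative only
CONFIGS = {
    "production": {"DEBUG": False, "TESTING": False},
    "staging": {"DEBUG": True, "TESTING": False},
    "development": {"DEBUG": True, "TESTING": True},
}

def get_config(env=None):
    """Pick a config based on APP_ENV (an assumed variable name)."""
    env = env or os.environ.get("APP_ENV", "development")
    # Fall back to development settings for unknown environments
    return CONFIGS.get(env, CONFIGS["development"])
```

In the Dockerfile we could then set, for example, `ENV APP_ENV=production`, and the Flask app would call `app.config.update(get_config())` at startup.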

Defining my dependencies

Open the docker-compose.yml file. Now, we are going to define the dependencies I stated above, Elasticsearch and RabbitMQ.

Most of the time you do not need to create your images. Fortunately, we have many of them publicly available in DockerHub.

Let's start with the Elasticsearch one. Be careful when looking for an image in DockerHub: check the version of the application you want to install. I found many Elasticsearch 1.7 images when the current version is 5.2.0.

In this case, I chose the official image from Elastic, elasticsearch:5-alpine.

Your first question is probably: what is Alpine? Alpine is a base image built on Alpine Linux, a super minimal distribution that allows us to create very small containers. It is a good idea to search for images built on top of this base image.

In our docker-compose.yml we are going to add these lines:
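A sketch of that service definition could look like the following. The image tag comes from the article; the port mappings follow Elasticsearch defaults, and the heap-size setting is an assumed tuning value:

```yaml
version: '2'
services:
  elasticsearch:
    image: elasticsearch:5-alpine
    ports:
      - "9200:9200"   # HTTP API
      - "9300:9300"   # transport protocol
    environment:
      # Heap size is an assumption; tune it to your machine
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
```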

And the RabbitMQ dependency too, after the Elasticsearch one:
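A sketch of the RabbitMQ service, to be added under the same `services:` key. The image tag and the credentials are assumptions; the ports are RabbitMQ defaults:

```yaml
  rabbitmq:
    image: rabbitmq:3-management-alpine
    ports:
      - "5672:5672"     # AMQP
      - "15672:15672"   # management UI
    environment:
      # Credentials are assumptions; change them for real deployments
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest
```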

With these few lines we have a composition of two containers. We can bring both services up or down at the same time. But before building and running our services we should understand what we just did.

Image: It defines which image should be used to build this service.

Environment: You can define environment variables that will be used by the image. For instance, we can define the user and password for RabbitMQ.

Ports: You can define the port forwarding from the image to your machine.

Command: If needed, you can execute a command after starting the image.

Volumes: You can define a mapping between the image filesystem and your host filesystem. This is useful if you want to share the Elasticsearch content between containers.

Now we can execute the command docker-compose up -d. This will bring up two containers, one with Elasticsearch and the other with RabbitMQ. The -d flag detaches the process from your session.