This article covers building a markdown editor application written in Django and running it in Docker. Docker takes the best aspects of a traditional virtual machine, such as a self-contained system isolated from your development machine, and removes many of the drawbacks, such as system resource drain, setup time, and maintenance.

When building web applications, you have probably reached a point where you want to run your application in a fashion that is closer to your production environment. Docker allows you to set up your application runtime in such a way that it runs in exactly the same manner as it will in production, on the same operating system, with the same environment variables, and any other configuration and setup you require.

By the end of the article you’ll be able to:

Understand what Docker is and how it is used,

Build a simple Python Django application, and

Create a simple Dockerfile to build a container running a Django web application server.

Set up a Continuous Integration and Delivery (CI/CD) pipeline to test and build the Docker image automatically.

What is Docker, Anyway?

Docker’s homepage describes Docker as follows:

“Docker is an open platform for building, shipping and running distributed applications. It gives programmers, development teams, and operations engineers the common toolbox they need to take advantage of the distributed and networked nature of modern applications.”

Put simply, Docker gives you the ability to run your applications within a controlled environment, known as a container, built according to the instructions you define. A container leverages your machine’s resources much like a traditional virtual machine (VM). However, containers differ greatly from traditional virtual machines in terms of system resources. Traditional virtual machines operate using Hypervisors, which manage the virtualization of the underlying hardware to the VM. This means they are large in terms of system requirements.

Docker doesn’t require the often time-consuming process of installing an entire OS to a virtual machine such as VirtualBox or VMWare.

You create a container with a few commands and then execute your applications on it via the Dockerfile.

Docker manages the majority of the operating system virtualization for you, so you can get on with writing applications and shipping them as you require in the container you have built.

Dockerfiles can be shared for others to build containers and extend the instructions within them by basing their container image on top of an existing one.

The containers are also highly portable and will run in the same manner regardless of the host OS they are executed on. Portability is a massive plus side of Docker.

Prerequisites

Before you begin this tutorial, ensure the following is installed on your system:

Python 3.7 or greater,

Python Pip, the package manager,

Docker,

Git and a GitHub account.

Setting Up a Django Web Application

Let’s jump directly to the application that we’ll dockerize. We’ll start from the Martor project, which implements a live markdown editor for Django:

Go to the django-martor-editor repository.

Use the Fork button:

Click on Clone or download and copy the URL:

Clone the code to your machine using git:

$ git clone YOUR_REPOSITORY_URL
$ cd django-markdown-editor

Let’s take a look at the project structure, I’ve omitted some files and folders we won’t be visiting today:

.
├── requirements.txt     # < Python module list
└── martor_demo          # < Django Project root
    ├── app              # < App code
    │   ├── admin.py
    │   ├── apps.py
    │   ├── forms.py
    │   ├── migrations
    │   ├── models.py
    │   ├── templates
    │   ├── urls.py
    │   └── views.py
    ├── manage.py        # < Django management tool
    └── martor_demo      # < Django main settings
        ├── settings.py
        ├── urls.py
        └── wsgi.py

You can read more about the structure of Django on the official website. You control the application for development purposes using the manage.py script.

Before we can run it though, we’ll need to download and install all the dependencies.

First, create a Python virtual environment:

$ python -m venv venv
$ echo venv/ >> .gitignore
$ source venv/bin/activate

Next, add some of the Python modules we’ll need:

Gunicorn: gunicorn is an HTTP server. We’ll use it to serve the application inside the Docker container.

Martor: Martor is a Markdown plugin for Django.

$ echo martor >> requirements.txt
$ echo gunicorn >> requirements.txt

Install all the modules using:

$ pip install -r requirements.txt

Push the change to GitHub:

$ git add .gitignore requirements.txt
$ git commit -m "added martor and gunicorn"
$ git push origin master

Start the development server. Once it’s running, you can visit your application at http://127.0.0.1:8000:

$ cd martor_demo
$ python manage.py runserver

If you check the output of the previous command, you’ll see this message:

You have 17 unapplied migration(s). Your project may not work properly until you
apply the migrations for app(s): admin, auth, contenttypes, sessions.
Run 'python manage.py migrate' to apply them.

Django prints this warning because it has detected that the database has not been initialized.

To initialize a local test database and get rid of the message run:

$ python manage.py makemigrations
$ python manage.py migrate

Testing in Django

In this section, let’s add some tests to the application. Tests are our first line of defense against bugs.

Django uses the standard unittest library, so we can get started writing tests right away.

Create a file called app/testPosts.py:

# app/testPosts.py
from django.test import TestCase

from app.models import Post


class PostTestCase(TestCase):
    def testPost(self):
        post = Post(title="My Title", description="Blurb", wiki="Post Body")
        self.assertEqual(post.title, "My Title")
        self.assertEqual(post.description, "Blurb")
        self.assertEqual(post.wiki, "Post Body")

The code is illustrative of a normal unit test:

Import the Post model from the application.

Create a post object with some initial values.

Check that the values match expectations.

To run the test case:

$ python manage.py test
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
.
----------------------------------------------------------------------
Ran 1 test in 0.001s

OK
Destroying test database for alias 'default'...

Django also supplies deployment checklists. These are checks that look for potentially dangerous security settings.

To run the checklist:

$ python manage.py check --deploy

You’ll likely see some warnings. For demo purposes, we can live with them. Once you go to production, you might want to take a closer look at the messages and what they mean.

Static vs Dynamic Files

We just need to make one modification before we can continue. Django has the concept of static files. These are files without any Python code; they are usually images, CSS stylesheets, or JavaScript.

The distinction between static and dynamic is important once we release to production. Dynamic files have code that must be evaluated on each request, so they are expensive to serve. Static files don’t need any execution; they require few resources to serve and can be cached with proxies and CDNs.

To configure the static file location:

Edit the file martor_demo/settings.py

Locate the STATIC_ROOT and MEDIA_ROOT variables and replace the lines with these:

# martor_demo/settings.py

. . .

STATIC_ROOT = os.path.join(BASE_DIR, "static")
MEDIA_ROOT = os.path.join(BASE_DIR, "media")

Django collects all static files in one directory:

$ python manage.py collectstatic

Push all modifications to GitHub:

$ git add martor_demo/settings.py app/testPosts.py
$ git add static
$ git commit -m "add unit test and static files"
$ git push origin master

Continuous Integration

With an initial application and some tests in place, it’s time to focus on using Continuous Integration (CI) to build and test the code in a clean, reproducible environment.

Setting up a CI/CD pipeline in Semaphore takes only a few minutes. Once it’s in place, Semaphore will run the tests for you on every update and, if there are no bugs, build the Docker image automatically.

Visit Semaphore and sign up for a free account using the Sign up with GitHub button.

Use the + (plus sign) button next to Projects to find your GitHub repository:

Click on Choose next to your repository:

Select the option: Customize it first

This will open the Workflow Builder:

The main elements of the builder are:

Pipeline: a pipeline is made of blocks that are executed from left to right. Pipelines usually have a specific goal, such as testing or building.

Block: blocks group jobs that can be executed in parallel. Jobs in a block usually have similar commands and configurations. Once all jobs in a block complete, the next block begins.

Job: jobs define the commands that do the work. They inherit their configuration from the parent block.

Promotions: we can define multiple pipelines and connect them with promotions to get complex multi-stage workflows.

The first block has to download the Python modules and build the project:

Click on the first block and set its name to “Build”

On the job commands block type the following:

sem-version python 3.7
checkout
mkdir .pip_cache
cache restore
pip install --cache-dir .pip_cache -r requirements.txt
cache store

Click on Run the Workflow.

Set the branch to master.

Click on Start.

We have three commands in Semaphore’s built-in toolbox:

sem-version activates a specific version of one of the supported languages. In the case of Python, it also sets up a virtual environment.

checkout uses git to clone the correct code revision.

cache stores and restores files in the project-wide cache. Cache can figure out which files and directories it needs to keep. We can use it to avoid having to download Python packages each time.

The initial CI pipeline will start immediately. A few seconds later, it should complete without error:

Add a second block to run the tests:

Click on Edit Workflow.

Click on + Add Block.

Set the name of the block to “Test”.

Open the Prologue section and type the following commands. The prologue is executed before each job in the block:

sem-version python 3.7
checkout
cache restore
pip install --cache-dir .pip_cache -r requirements.txt

Set the name of the first job in the block to “Unit tests”.

Type the following commands:

cd martor_demo
python manage.py makemigrations
python manage.py migrate
python manage.py test

Add a second job called “Checklist” and add the following commands:

cd martor_demo
python manage.py check --deploy

This is a good place to add some style checking. Add a third job called “Style check” with the following commands. We’re using flake8 to check the style of the code:

pip install flake8
flake8 martor_demo/ --max-line-length=127

Click on Run the Workflow and Start:

It’s likely that the Style Check will fail. Flake8 is very particular about how the code should look. If you run into Style errors:

Click on the Style Check job and review the log.

Pull the code to your machine: git pull origin master

Fix the errors.

Use git to add, commit, and push the fixed code.

The CI pipeline will start again. Click on the master branch near the top to see the new pipeline running.

Dockerizing the Application

You now have a simple web application that is ready to be deployed. So far, you have been using the built-in development web server that Django ships with.

It’s time to set up the project to run the application in Docker using a more robust web server that is built to handle production levels of traffic:

Gunicorn: Gunicorn is an HTTP server for Python. This web server is robust and built to handle production levels of traffic, whereas the included development server of Django is meant for testing purposes on your local machine only. It will handle all dynamic files.

Nginx: a general-purpose HTTP server. We’ll use it as a reverse proxy to serve static files.

On a regular server, setting up the application would be hard work; we would need to install and configure Python and Nginx, then open the appropriate ports in the firewall. Docker saves us all this work by creating a single image with all the files and services configured and ready to use. The image we’ll create can run on any system running Docker.

Installing Docker

One of the key goals of Docker is portability, and as such it can be installed on a wide variety of operating systems.

On Windows and macOS, install Docker Desktop.

For Linux, Docker is almost universally found in all major distributions.

Writing the Dockerfile

The next stage is to add a Dockerfile to your project. This will allow Docker to build the image it will run as a container. Writing a Dockerfile is rather straightforward and has many elements that can be reused and/or found on the web. Docker provides a lot of the functions that you will require to build your image. If you need to do something more custom on your project, Dockerfiles are flexible enough for you to do so.

The structure of a Dockerfile can be considered a series of instructions on how to build your container/image. For example, the vast majority of Dockerfiles will begin by referencing a base image provided by Docker. Typically, this will be a plain vanilla image of the latest Ubuntu release or other Linux OS of choice. From there, you can set up directory structures, environment variables, download dependencies, and many other standard system tasks before finally executing the process which will run your web application.

Start the Dockerfile by creating an empty file named Dockerfile in the root of your project. Then, add the first line to the Dockerfile that instructs which base image to build upon. You can create your own base image and use that for your containers, which can be beneficial in a department with many teams wanting to deploy their applications in the same way.

We’ll create the Dockerfile in the root of our project, go one directory up:

$ cd ..

Create a new file called nginx.default. This will be our configuration for Nginx. We’ll listen on port 8020, serve the static files from the /opt/app/martor_demo/static directory, and forward the rest of the connections to port 8010, where Gunicorn will be listening:

# nginx.default

server {
    listen 8020;
    server_name example.org;

    location / {
        proxy_pass http://127.0.0.1:8010;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /static {
        root /opt/app/martor_demo;
    }
}

Create a server startup script called start-server.sh. This is a Bash script that starts Gunicorn and Nginx:

#!/usr/bin/env bash
# start-server.sh
if [ -n "$DJANGO_SUPERUSER_USERNAME" ] && [ -n "$DJANGO_SUPERUSER_PASSWORD" ] ; then
    (cd martor_demo; python manage.py createsuperuser --no-input)
fi
(cd martor_demo; gunicorn martor_demo.wsgi --user www-data --bind 0.0.0.0:8010 --workers 3) &
nginx -g "daemon off;"

The gunicorn command takes martor_demo.wsgi as its first argument. This references the wsgi file Django generated for us; WSGI (Web Server Gateway Interface) is the Python standard interface between web applications and servers. Without delving too much into WSGI, the file simply defines the application variable, and Gunicorn knows how to interact with that object to start the web server.
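To make the WSGI contract concrete, here is a minimal stand-alone sketch using only the standard library. This illustrates the interface, not the actual file Django generates (which builds the callable with get_wsgi_application()); the hello-world body and helper names are ours:

```python
# Minimal WSGI application: a callable, conventionally named
# `application`, that a server such as Gunicorn can drive.
from wsgiref.util import setup_testing_defaults


def application(environ, start_response):
    # The server supplies the request environment and a callback
    # used to send the status line and response headers.
    body = b"Hello, WSGI"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    # The return value is an iterable of byte strings.
    return [body]


# Drive the callable the way a server would:
environ = {}
setup_testing_defaults(environ)
captured = {}


def start_response(status, headers):
    captured["status"] = status


result = b"".join(application(environ, start_response))
print(captured["status"])  # 200 OK
```

Gunicorn does essentially this on every request: it builds the environ dict from the HTTP request, calls the application object, and streams the returned bytes back to the client.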

You then pass two flags to the command: --bind attaches the running server to port 8010, where Nginx will forward incoming requests. Finally, --workers specifies the number of worker processes that will handle the requests coming into your application. Gunicorn recommends setting this value to (2 x $num_cores) + 1. You can read more on the configuration of Gunicorn in their documentation.
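As a quick sanity check of that formula, here is a hypothetical helper (ours, not part of Gunicorn) that computes the recommended worker count:

```python
# Gunicorn's suggested worker count is (2 x CPU cores) + 1.
# Hypothetical helper for illustration; in start-server.sh the
# value is simply hard-coded with --workers 3.
import multiprocessing


def recommended_workers(cores=None):
    if cores is None:
        cores = multiprocessing.cpu_count()
    return 2 * cores + 1


print(recommended_workers(1))  # 3, matching --workers 3 on a 1-core machine
```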

Make the script executable:

$ chmod 755 start-server.sh

Create the Dockerfile:

# Dockerfile

# FROM directive instructing base image to build upon
FROM python:3.7-buster

. . .

It’s worth noting that we are using a base image created specifically for Python 3.7 applications, which includes a set of instructions that run automatically before the rest of your Dockerfile.

Next, add the Nginx installation commands and COPY the configuration file inside the container:

. . .

RUN apt-get update && apt-get install nginx vim -y --no-install-recommends
COPY nginx.default /etc/nginx/sites-available/default
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log

. . .

It’s time to copy the source files and scripts inside the container. We can use the COPY command to copy files and the RUN command to execute programs at build time.

We’ll also copy the Python packages and install them. Finally, we ensure all the files have the correct owner:

. . .

RUN mkdir -p /opt/app
RUN mkdir -p /opt/app/pip_cache
RUN mkdir -p /opt/app/martor_demo
COPY requirements.txt start-server.sh /opt/app/
COPY .pip_cache /opt/app/pip_cache/
COPY martor_demo /opt/app/martor_demo/
WORKDIR /opt/app
RUN pip install -r requirements.txt --cache-dir /opt/app/pip_cache
RUN chown -R www-data:www-data /opt/app

. . .

The server will run on port 8020 . Therefore, your container must be set up to allow access to this port so that you can communicate to your running server over HTTP. To do this, use the EXPOSE directive to make the port available:

. . .

EXPOSE 8020
STOPSIGNAL SIGTERM
CMD ["/opt/app/start-server.sh"]

The final part of your Dockerfile is to execute the start script added earlier, which will leave your web server running on port 8020 waiting to take requests over HTTP. You can execute this script using the CMD directive.

With all this in place, your final Dockerfile should look something like this:

# Dockerfile

FROM python:3.7-buster

# install nginx
RUN apt-get update && apt-get install nginx vim -y --no-install-recommends
COPY nginx.default /etc/nginx/sites-available/default
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log

# copy source and install dependencies
RUN mkdir -p /opt/app
RUN mkdir -p /opt/app/pip_cache
RUN mkdir -p /opt/app/martor_demo
COPY requirements.txt start-server.sh /opt/app/
COPY .pip_cache /opt/app/pip_cache/
COPY martor_demo /opt/app/martor_demo/
WORKDIR /opt/app
RUN pip install -r requirements.txt --cache-dir /opt/app/pip_cache
RUN chown -R www-data:www-data /opt/app

# start server
EXPOSE 8020
STOPSIGNAL SIGTERM
CMD ["/opt/app/start-server.sh"]

You are now ready to build the container image, and then run it to see it all working together.

Building and Running the Container

Building the container is straightforward once you have Docker on your system. The following command will look for your Dockerfile and download all the necessary layers required to get your container image running. Afterward, it will run the instructions in the Dockerfile and leave you with an image that is ready to start.

To build your container, you will use the docker build command and provide a tag or a name for the container, so you can reference it later when you want to run it. The final part of the command tells Docker which directory to build from.

$ mkdir -p .pip_cache
$ docker build -t django-markdown-editor .
Sending build context to Docker daemon  5.41MB
Step 1/16 : FROM python:3.7-buster
 ---> b567432174fe
Step 2/16 : RUN apt-get update && apt-get install nginx vim -y --no-install-recommends
 ---> Using cache
 ---> 46f647379df9
Step 3/16 : COPY nginx.default /etc/nginx/sites-available/default
 ---> Using cache
 ---> 53af0e710b52
Step 4/16 : RUN ln -sf /dev/stdout /var/log/nginx/access.log && ln -sf /dev/stderr /var/log/nginx/error.log
 ---> Using cache
 ---> f54cb89076b8
Step 5/16 : RUN mkdir -p /opt/app
 ---> Using cache
 ---> e2c82bfe9c0f
Step 6/16 : RUN mkdir -p /opt/app/pip_cache
 ---> Using cache
 ---> 9417b14a5d18
Step 7/16 : RUN mkdir -p /opt/app/martor_demo
 ---> Using cache
 ---> b61eee144f7a
Step 8/16 : COPY requirements.txt start-server.sh /opt/app/
 ---> Using cache
 ---> 44cff1209ee7
Step 9/16 : COPY .pip_cache /opt/app/pip_cache/
 ---> Using cache
 ---> f3e5ea5e138c
Step 10/16 : COPY martor_demo /opt/app/martor_demo/
 ---> 5fcfa2e6c025
Step 11/16 : WORKDIR /opt/app
 ---> Running in 5e0e8b4ac0a7
Removing intermediate container 5e0e8b4ac0a7
 ---> 0d4990ce6e4c
Step 12/16 : RUN pip install -r requirements.txt --cache-dir /opt/app/pip_cache
 ---> Running in 93fcff680e23
Collecting Django
  Downloading https://files.pythonhosted.org/packages/55/d1/8ade70e65fa157e1903fe4078305ca53b6819ab212d9fbbe5755afc8ea2e/Django-3.0.2-py3-none-any.whl (7.4MB)
Collecting Markdown
  Downloading https://files.pythonhosted.org/packages/c0/4e/fd492e91abdc2d2fcb70ef453064d980688762079397f779758e055f6575/Markdown-3.1.1-py2.py3-none-any.whl (87kB)
Collecting requests
  Downloading https://files.pythonhosted.org/packages/51/bd/23c926cd341ea6b7dd0b2a00aba99ae0f828be89d72b2190f27c11d4b7fb/requests-2.22.0-py2.py3-none-any.whl (57kB)
Collecting martor
  Downloading https://files.pythonhosted.org/packages/f5/69/fc6c8d748dbc86e5dea58c592de821638f73bf1a147b243ff81093ca3dd9/martor-1.4.6.tar.gz (1.3MB)
Collecting gunicorn
  Downloading https://files.pythonhosted.org/packages/69/ca/926f7cd3a2014b16870086b2d0fdc84a9e49473c68a8dff8b57f7c156f43/gunicorn-20.0.4-py2.py3-none-any.whl (77kB)
Collecting pytz
  Downloading https://files.pythonhosted.org/packages/e7/f9/f0b53f88060247251bf481fa6ea62cd0d25bf1b11a87888e53ce5b7c8ad2/pytz-2019.3-py2.py3-none-any.whl (509kB)
Collecting asgiref~=3.2
  Downloading https://files.pythonhosted.org/packages/a5/cb/5a235b605a9753ebcb2730c75e610fb51c8cab3f01230080a8229fa36adb/asgiref-3.2.3-py2.py3-none-any.whl
Collecting sqlparse>=0.2.2
  Downloading https://files.pythonhosted.org/packages/ef/53/900f7d2a54557c6a37886585a91336520e5539e3ae2423ff1102daf4f3a7/sqlparse-0.3.0-py2.py3-none-any.whl
Requirement already satisfied: setuptools>=36 in /usr/local/lib/python3.7/site-packages (from Markdown->-r requirements.txt (line 2)) (44.0.0)
Collecting certifi>=2017.4.17
  Downloading https://files.pythonhosted.org/packages/b9/63/df50cac98ea0d5b006c55a399c3bf1db9da7b5a24de7890bc9cfd5dd9e99/certifi-2019.11.28-py2.py3-none-any.whl (156kB)
Collecting idna<2.9,>=2.5
  Downloading https://files.pythonhosted.org/packages/14/2c/cd551d81dbe15200be1cf41cd03869a46fe7226e7450af7a6545bfc474c9/idna-2.8-py2.py3-none-any.whl (58kB)
Collecting chardet<3.1.0,>=3.0.2
  Downloading https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl (133kB)
Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1
  Downloading https://files.pythonhosted.org/packages/e8/74/6e4f91745020f967d09332bb2b8b9b10090957334692eb88ea4afe91b77f/urllib3-1.25.8-py2.py3-none-any.whl (125kB)
Building wheels for collected packages: martor
  Building wheel for martor (setup.py): started
  Building wheel for martor (setup.py): finished with status 'done'
  Created wheel for martor: filename=martor-1.4.6-cp37-none-any.whl size=1365093 sha256=2364550163dc1a1711e4a5b455179f25ec4a735f4f760fc8ae6cf7b9a7eda111
  Stored in directory: /opt/app/pip_cache/wheels/56/89/c8/3e9e03817d710195864f4fa373ccb847dbc1deed86b104da9d
Successfully built martor
Installing collected packages: pytz, asgiref, sqlparse, Django, Markdown, certifi, idna, chardet, urllib3, requests, martor, gunicorn
Successfully installed Django-3.0.2 Markdown-3.1.1 asgiref-3.2.3 certifi-2019.11.28 chardet-3.0.4 gunicorn-20.0.4 idna-2.8 martor-1.4.6 pytz-2019.3 requests-2.22.0 sqlparse-0.3.0 urllib3-1.25.8
WARNING: You are using pip version 19.3.1; however, version 20.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Removing intermediate container 93fcff680e23
 ---> 0d207d72a9ab
Step 13/16 : RUN chown -R www-data:www-data /opt/app
 ---> Running in 8eb0eac504e7
Removing intermediate container 8eb0eac504e7
 ---> 38c7f2eba7b0
Step 14/16 : EXPOSE 8020
 ---> Running in 1843c524da90
Removing intermediate container 1843c524da90
 ---> facc56f2a7a8
Step 15/16 : STOPSIGNAL SIGTERM
 ---> Running in d7ce9739eb46
Removing intermediate container d7ce9739eb46
 ---> 1052bc6459eb
Step 16/16 : CMD ["/opt/app/start-server.sh"]
 ---> Running in ba62a08ed2c1
Removing intermediate container ba62a08ed2c1
 ---> 927b8785ea27
Successfully built 927b8785ea27
Successfully tagged django-markdown-editor:latest

In the output, you can see Docker processing each one of your commands before outputting that the build of the container is complete. It will give you a unique ID for the container, which can also be used in commands alongside the tag.

The final step is to run the container you have just built using Docker:

$ docker run -it -p 8020:8020 \
    -e DJANGO_SUPERUSER_USERNAME=admin \
    -e DJANGO_SUPERUSER_PASSWORD=sekret1 \
    -e DJANGO_SUPERUSER_EMAIL=admin@example.com \
    django-markdown-editor

Superuser created successfully
[2020-01-24 00:00:47 +0000] [8] [INFO] Starting gunicorn 20.0.4
[2020-01-24 00:00:47 +0000] [8] [INFO] Listening at: http://0.0.0.0:8010 (8)
[2020-01-24 00:00:47 +0000] [8] [INFO] Using worker: sync
[2020-01-24 00:00:47 +0000] [15] [INFO] Booting worker with pid: 15
[2020-01-24 00:00:47 +0000] [16] [INFO] Booting worker with pid: 16
[2020-01-24 00:00:47 +0000] [17] [INFO] Booting worker with pid: 17

The command tells Docker to run the container and forward the exposed port 8020 to port 8020 on your local machine. With -e we set environment variables that automatically create an admin user.

After you run this command, you should be able to visit http://localhost:8020 and http://localhost:8020/admin in your browser to access the application.

Continuous Deployment

After manually verifying that the application is behaving as expected in Docker, the next step is the deployment.

We’ll extend our CI Pipeline with a new one that runs the build commands and uploads the image to Docker Hub.

You’ll need a Docker Hub login to continue:

Head to Docker Hub.

Use the Get Started button to register.

Go back to your Semaphore account.

On the left navigation menu, click on Secrets under Configuration:

Click on Create New Secret.

Create a secret called “dockerhub” with the username and password of your Docker Hub account:

Click on Save Secret.

Semaphore secrets store your credentials and other sensitive data outside your GitHub repository and make them available as environment variables in your jobs when activated.

Dockerize Pipeline

Open the CI pipeline on Semaphore and click on Edit Workflow again.

Use the + Add First Promotion dotted button to create a new pipeline connected to the main one:

Call the new pipeline: “Dockerize”

Ensure the option Enable automatic promotion is checked so the new pipeline can start automatically:

Click on the first block on the new pipeline. Set its name to “Docker build”.

Open the Prologue and type the following commands. The prologue restores the packages from the cache and prepares the database:

sem-version python 3.7
checkout
cache restore
mkdir -p .pip_cache
pip install --cache-dir .pip_cache -r requirements.txt
cd martor_demo
python manage.py makemigrations
python manage.py migrate
cd ..

In the job command box type the following commands. The job pulls the latest image (if it exists), builds a newer version, and pushes it to Docker Hub. The --cache-from option tells Docker to try to reuse an older image to speed up the process:

echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
docker pull $DOCKER_USERNAME/django-markdown-editor:latest || true
docker build --cache-from=$DOCKER_USERNAME/django-markdown-editor:latest -t $DOCKER_USERNAME/django-markdown-editor:latest .
docker push $DOCKER_USERNAME/django-markdown-editor:latest

In the Secrets section, check the dockerhub secret:

Click on Run the workflow and Start.

The CI/CD pipelines start automatically. Once all tests pass, the Dockerize pipeline will create a new Docker image and push it to Docker Hub.

You can pull the image to your machine and run it as usual:

$ docker pull YOUR_DOCKER_USERNAME/django-markdown-editor

Next Steps

We’ve prepared a Docker image with everything needed to try out the application. You can run this image on any machine or cloud service that offers Docker workloads (they all do).

The next step is to choose a persistent database. Our Docker image uses a local SQLite file, as a result, each time the container is restarted all data is lost.

There are many options:

Use a managed database service from a cloud provider.

Run the database inside a VM.

Create a second container with the database and use volumes to persist the data.

Regardless of the option you choose, you will have to:

Configure Django to connect to the database.

Create a new secret on Semaphore with the database connection password.

Pass the database connection parameters as environment variables when starting the Docker container.
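To illustrate the last two points, the DATABASES setting in martor_demo/settings.py could read its connection parameters from the environment, falling back to the local SQLite file when none are set. The variable names below (DB_ENGINE, DB_NAME, and so on) are hypothetical choices, not Django conventions:

```python
# Hypothetical sketch: build Django's DATABASES setting from
# environment variables passed to the container via `docker run -e`.
import os

DATABASES = {
    "default": {
        "ENGINE": os.environ.get("DB_ENGINE", "django.db.backends.sqlite3"),
        "NAME": os.environ.get("DB_NAME", "db.sqlite3"),
        "USER": os.environ.get("DB_USER", ""),
        # In CI, the password would come from a Semaphore secret;
        # at runtime, from a `docker run -e DB_PASSWORD=...` flag.
        "PASSWORD": os.environ.get("DB_PASSWORD", ""),
        "HOST": os.environ.get("DB_HOST", ""),
        "PORT": os.environ.get("DB_PORT", ""),
    }
}
```

With this in place, switching to a managed database is a matter of setting the environment variables when the container starts; no image rebuild is needed.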

Conclusion

In this tutorial, you have learned how to build a simple Python Django web application, wrap it in a production-grade web server, and create a Docker container to execute your web server process.

If you enjoyed working through this article, feel free to share it and if you have any questions or comments leave them in the section below. We will do our best to answer them, or point you in the right direction.

Having your application running in Docker is the first step on the way to Kubernetes. With Kubernetes, you can run your applications at scale and provide zero-downtime updates.
