A couple of weeks ago, after almost six months in beta, Atlassian released Bitbucket Pipelines. Pipelines provide a continuous delivery pipeline within Bitbucket Cloud, which should be very interesting to a lot of developers who use Bitbucket to host their git repositories. It gives you some of that Heroku magic: you commit your changes, push them to the git repository, and a couple of seconds later (ok, maybe a minute) those changes are live on your website or API.

Unlike Heroku, Pipelines require some configuration through a YAML file (and a Docker image if you need a custom build environment), but once you set it up, it’s magic!

Depending on your development flow, Pipelines are easily configurable to do any combination of building code, running tests and deploying code to staging or production servers.

Pipelines already have pretty good documentation. We quickly got started with the basic Python examples and the default Bitbucket Pipelines Docker image.

Testing Existing Docker Images

Next, we wanted to use a custom Docker image that matches our production environment: CentOS7 and PostgreSQL 9.4. Here we ran into our first problems, mainly because of our inexperience with Docker and the lack of official CentOS7 Docker images with PostgreSQL 9.4. We experimented with several official Docker images, but we couldn’t customize them to our liking. We wanted to install system dependencies (PostgreSQL 9.4, pip, gcc, etc.) through the Docker image and keep the bitbucket-pipelines.yml file as clean as possible. That proved to be the harder route, but now we have cleanly separated responsibilities:

The Docker image comes with preinstalled system dependencies, which allows us to reuse it across multiple projects and speeds up Pipelines execution times

The bitbucket-pipelines.yml file contains only project-specific configuration, runs tests and optionally deploys code

Custom Docker Images

Pipelines do not support Docker Compose or testing against multiple Docker containers at the moment. Therefore, you have to build your full testing infrastructure inside a single Docker container. Although this is a Docker antipattern, it’s doable and currently the only way to run the multiple services required by your tests.

We are using the official CentOS7 Docker image as a base for our images.

Since there is quite a lot of work required just to install PostgreSQL, we decided to build a custom image which adds only PostgreSQL 9.4 on top of CentOS7 (revolucija/centos7-postgresql9.4). The built image is available on Studio Revolution’s Docker Hub, and the source files required for building it are available in Studio Revolution’s dockerfiles git repo.

Once we had built the CentOS7 PostgreSQL 9.4 image, we decided to use it as a base for a separate Django image (revolucija/centos7-postgresql9.4-django). This approach has both pros and cons, and in our case the pros outweigh the cons.

Pros:

Clear separation of operating system and Django app dependencies.

Following Docker’s philosophy of easily extendable images.

Giving something useful back to the community. For some developers, CentOS7 + PostgreSQL 9.4 combo without Django might also be helpful.

Cons:

Maintenance and changes are soooo slooooow. If you are extending an image and want to change something in the base image, you have to rebuild both the base image and the extended image. That’s fine when you have a stable base image. However, if you’re just starting with Docker, this flow will be too slow for experimenting and learning Docker’s intricacies.

The Django image extends the CentOS7 PostgreSQL 9.4 image and installs system dependencies used by most Django projects:

pip

python-devel

Pillow dependencies: zlib-devel, libjpeg-devel, gcc

Psycopg2 dependencies: libpqxx-devel, gcc

openssh-server, openssh-clients (required by Fabric to forward SSH connections from the Docker image to our hosting server and the Bitbucket git repo)

tox

Fabric

Django itself is not installed in the Docker container. Tox installs it from the standard requirements.txt file when it runs.

Pipelines Configuration

First, you have to enable Pipelines in your Bitbucket repo settings.

Then, you need to add a bitbucket-pipelines.yml file to the root folder of your repo. The bitbucket-pipelines.yml file stores and manages your build configuration. It defines the Docker container used to run your builds and the default or branch-specific pipelines.

PRO TIP: Debug and test locally as much as you can! Run your Docker image locally, log in to the Docker container and check that all required dependencies are installed and working correctly. Finally, test the commands in your bitbucket-pipelines.yml file line by line.

You’ll probably want to start with a simple Pipelines config to test the Docker image and your app dependencies, and to make sure that your tests pass:

image: revolucija/centos7-postgresql9.4-django:1.0

pipelines:
  default:
    - step:
        script:
          # postgres has to be manually started
          - /./start_postgres.sh
          # make sure all tests pass
          - pip install -r requirements/production.txt
          - python pipelines_demo/manage.py test common
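If you’re wondering what that last command actually exercises, here is a minimal sketch of the kind of test that could live in the demo project’s common app. The test below is a hypothetical illustration, not code from the actual repo:

# pipelines_demo/common/tests.py (hypothetical example)
from django.db import connection
from django.test import TestCase


class SmokeTest(TestCase):
    """A trivial test that also checks the database connection."""

    def test_database_is_reachable(self):
        # Any query forces Django to talk to the PostgreSQL instance
        # started by start_postgres.sh inside the Pipelines container.
        with connection.cursor() as cursor:
            cursor.execute("SELECT 1")
            self.assertEqual(cursor.fetchone()[0], 1)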

Pipelines Environment Variables

Modern code deployment practices advocate storing config values in environment variables. Pipelines can pass environment variables to your Docker image through the Pipelines Environment Variables settings of your repository.
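On the application side, the Django settings module can then read those values at startup. Here is a minimal sketch, assuming hypothetical variable names such as DJANGO_SECRET_KEY and DB_PASSWORD (Pipelines itself doesn’t prescribe any names):

# settings.py (sketch): read configuration from environment variables
import os

SECRET_KEY = os.environ['DJANGO_SECRET_KEY']  # fail loudly if it's missing

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': os.environ.get('DB_NAME', 'pipelines_demo'),
        'USER': os.environ.get('DB_USER', 'postgres'),
        'PASSWORD': os.environ.get('DB_PASSWORD', ''),
        'HOST': os.environ.get('DB_HOST', 'localhost'),
    }
}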

Using Tox To Run Your Tests

You can improve the previous config by using tox to run your tests. Tox automates virtualenv-based testing: it can create a virtualenv, install dependencies and run tests.

Tox is configured via a tox.ini file:

[tox]
envlist = py27
skipsdist = true

[testenv]
passenv = *
deps = -r{toxinidir}/requirements/production.txt
commands = python pipelines_demo/manage.py test common

Now you can further simplify the bitbucket-pipelines.yml file:

image: revolucija/centos7-postgresql9.4-django:1.0

pipelines:
  default:
    - step:
        script:
          # postgres has to be manually started
          - /./start_postgres.sh
          # make sure all tests pass
          - tox

Auto Deploy Code When All Tests Pass

It’s nice to have automated tests on each commit, but the endgame of a Continuous Delivery pipeline is deployment to staging and production servers. We just need to add another line to our bitbucket-pipelines.yml file:

pipelines:
  default:
    - step:
        script:
          # postgres has to be manually started
          - /./start_postgres.sh
          # make sure all tests pass
          - tox
          - fab deploy:production

Pipelines execute the bitbucket-pipelines.yml script line by line and halt execution as soon as a line fails. If our tests fail, the fab line will not execute.

fab deploy runs a Fabric script which automates those boring, repetitive deployment steps nobody likes to do (a sketch of such a script follows the list below):

connects to remote server via SSH

navigates to project folder

pulls code from the git repo

exports environment variables

installs requirements

migrates any database changes

collects static files

restarts server
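The real fabfile lives in the demo repo; purely as an illustration, a deploy task along these lines could look like the following Fabric 1.x sketch. The hosts, paths and restart command below are hypothetical placeholders, and the environment variable export is omitted for brevity:

# fabfile.py (hypothetical sketch, Fabric 1.x style)
from fabric.api import cd, env, run

# hypothetical hosts; use your own servers here
HOSTS = {
    'production': 'username@server.example.com',
    'staging': 'username@staging.example.com',
}

def deploy(target='staging'):
    """Invoked as 'fab deploy:production' or 'fab deploy:staging'."""
    env.host_string = HOSTS[target]              # connect to the remote server via SSH
    with cd('/srv/pipelines_demo'):              # navigate to the project folder
        run('git pull')                          # pull code from the git repo
        run('pip install -r requirements/production.txt')              # install requirements
        run('python pipelines_demo/manage.py migrate')                 # migrate database changes
        run('python pipelines_demo/manage.py collectstatic --noinput') # collect static files
        run('sudo systemctl restart gunicorn')   # restart the app server (hypothetical service)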

Since Fabric uses SSH to connect to the web server, we need to configure SSH keys. Although it’s a straightforward process, there are still a few steps we have to do. Recommended practice is to use Deployment Keys, which have read-only access to the git repository.

1. Generate an SSH key pair:

ssh-keygen -t rsa -b 4096 -N '' -f pipelines_demo

2. Add the public key as a Deployment key to your Bitbucket repo (Settings -> Deployment keys).

3. Encode the private key and add it as a secure environment variable in the Pipelines settings:

base64 < pipelines_demo

4. Install the public key on the remote server. Verify that you can SSH into the remote server with your key without having to enter a password:

ssh -i pipelines_demo username@server.example.com

5. Add the public SSH key of the remote host to your project’s known_hosts file:

ssh-keyscan -t rsa server.example.com > known_hosts

6. Add the SSH config to the Docker image via the bitbucket-pipelines.yml file:

image: revolucija/centos7-postgresql9.4-django:1.0

pipelines:
  default:
    - step:
        script:
          - /./start_postgres.sh
          - tox
          # ssh config for Fabric
          - cat known_hosts >> ~/.ssh/known_hosts
          - (umask 077 ; echo $DEPLOYMENT_SSH_KEY | base64 --decode > ~/.ssh/id_rsa)
          - ssh-add ~/.ssh/id_rsa
          - fab deploy:production

Set Up Independent Pipelines Flows for Staging and Production

Our typical development flow is to deploy the master branch to the production server and the development branch to the staging server. It is possible to configure branch-specific flows in the bitbucket-pipelines.yml file:

pipelines:
  branches:
    master:
      - step:
          script:
            - /./start_postgres.sh
            - tox
            - cat known_hosts >> ~/.ssh/known_hosts
            - (umask 077 ; echo $DEPLOYMENT_SSH_KEY | base64 --decode > ~/.ssh/id_rsa)
            - ssh-add ~/.ssh/id_rsa
            - fab deploy:production
    development:
      - step:
          script:
            - /./start_postgres.sh
            - tox
            - cat known_hosts >> ~/.ssh/known_hosts
            - (umask 077 ; echo $DEPLOYMENT_SSH_KEY | base64 --decode > ~/.ssh/id_rsa)
            - ssh-add ~/.ssh/id_rsa
            - fab deploy:staging

Conclusion

Pipelines are a welcome addition to the Atlassian/Bitbucket ecosystem. We like the automation they bring to the development and deployment process. We’ll definitely try them out on one of our next projects to see how they work in practice on a real-world project.

We did experience some pain replicating our production environment in a custom Docker image. However, that was mostly due to our inexperience with Docker and the lack of official CentOS7 PostgreSQL Docker images.

We have decided to open source everything we’ve built for this demo so that developers sharing our stack can have a faster start with Pipelines than we did. :-)

Django demo app: https://bitbucket.org/revolucija/pipelines_demo

Dockerfiles: https://bitbucket.org/revolucija/dockerfiles

Docker images: https://hub.docker.com/u/revolucija/

We would love to hear your feedback on how to improve this setup. Pull requests are most welcome! :-)

Written by Ozren Lapčević, Senior Developer at Studio Revolution. Illustration by Suzana Košćak.

Studio Revolution is a design-driven product development studio for businesses that need creative digital experiences and solutions.