Deploying Python apps to production is fairly easy and straightforward. Well, it should be. The language itself offers ease of use and a logical approach, so why not the production stack configuration? Things get complicated when someone tries to seal all the backdoors, minimize the possibility of server downtime during deployment, scale the deployment across many servers, eliminate any single point of failure, and separate the servers from insecure parts of the Internet, third-party libs, etc.

Tools

The first tool I would like to introduce is the widely known Ansible. It’s an automation tool written in Python that executes remote commands on any server reachable over SSH, or on any number of servers at the same time. Ansible uses the YAML file format for storing configuration, which is a big advantage as it’s standardized, simple and widely used. Sets of configuration and commands are gathered into roles, which are then organized into playbooks that are eventually executed step by step on a remote machine. Every module, with its configuration, is translated into system-specific commands which are then executed. I say system-specific because an Ansible playbook can run on RPM- or DEB-based Linux systems, as well as on Unix or even on Windows servers (partially supported, but coverage is growing fast, version by version).

The second tool is so obvious that some of us don’t even know its full potential. PIP is “the PyPA recommended tool for installing Python packages” that probably every Python programmer uses. PIP is simple and straightforward, and it also provides additional, complex functionality beyond downloading a package.

The Problem

What I wanted to achieve in this deployment flow was to deny access from the production environment to external services like Git or PyPI. Why? Well, it’s fairly easy to explain.

When I was working with my GitLab, there was a problem with sharing access keys. Either every machine connecting to GitLab would have to share the same pregenerated pair of SSH keys, or every machine would have its own key. In the second scenario, every public key has to be added to GitLab as a deployment key separately, every time a new machine is added to the network. Conversely, I would need to remove every key that is no longer used. OK, I could access Git with a password, but then I would need to store such a user/pass pair somewhere and allow such access, which I prefer not to. Either way, this requires unnecessary extra work.

Another problem can occur with a vast network deployment: half of the servers sync their code with Git, then the Git server dies and the second half is never upgraded.

Scenarios can go on and on. Every connection to remote content during deployment is a liability. The second thing to consider is the code itself. I wouldn't want to accidentally deploy something like:

Untested or test-failing code

An unresolved merge, or any merge hell that broke loose and was committed for some reason

Code fetched while there is a problem with the Git server or the connection to it

That’s why I chose the way that is, in my opinion, the most secure one. I can download a Git repository locally and deploy the already-downloaded code. The code can be automatically tested and, if the tests succeed, deployed further.

But hey! There is another problem on the horizon. If I don’t trust GitLab/GitHub or any other VCS solution, then why should I trust the PyPI servers? Especially when someone could put bugged code into the PyPI repository by mistake. That’s why I chose not to use PIP on a remote server. Every package can be installed beforehand into a local temporary folder and then, after local tests, sent to the servers. Better still, use stable versions of the libraries and not necessarily the newest ones.

Extra

Fun fact: working with servers with unsynchronized time and date. The time issue is a marginal but still possible situation: PIP can fail if the server time mismatches greatly with PyPI. Even APT and YUM fail if the time is not synced correctly, which sometimes happens unintentionally, for example when a virtual machine is woken up from a long slumber.

The biggest issue here is how to send the whole required codebase. I wouldn't want to rsync a virtualenv, and it wouldn't work either (a virtualenv is tied to the paths and interpreter of the machine it was created on). What I do is wheel the whole application, including its dependencies, into wheel packages.

Official definition:

“Wheels are the new standard of Python distribution and are intended to replace eggs.”

Wheeling a package is well described on the Python Wheels homepage, but the key feature in this case is wheeling not only the app but its already-installed dependencies as well. What is achieved here is a package that can be deployed seamlessly on any system, as long as it is a pure Python package. Additionally, wheels use manifest files to select which files are included in a package, so writing the manifest appropriately lets a programmer ship only production-specific files, without any local scripts.

Another feature of my approach is keeping template configuration files in the app's Git repository. Jinja2 files can be placed in the repository, and after cloning it prior to deployment they can be rendered with the appropriate data stored safely in an Ansible vault. This approach guarantees that the config file has the proper structure the current build requires, while using data safely stored in a vault. The same template file can also be used for local or staging environments with other vaults, etc.
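As a sketch of that idea (the file names and variables here are invented for illustration, not taken from the actual repository), rendering such an in-repo template with vault data could look like:

```yaml
# Illustrative: render a Jinja2 config template kept in the app's repo,
# filling in secrets loaded from an Ansible vault. Paths are assumptions.
- name: Render application config from the in-repo template
  template:
    src: "{{ src_dir }}/deploy/settings.py.j2"
    dest: "{{ app_dir }}/settings.py"
```

The same `settings.py.j2` template can then be rendered against different vault files for local, staging, or production runs.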

The solution

OK, so how to do it?

The Ansible pip module currently (as of 2.3.1) does not support the wheeling process, but we can achieve the same result by executing shell commands. Like this:
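A minimal sketch of such a task (the variable names are illustrative, not the actual role's):

```yaml
# Sketch of a wheel-building shell task; paths and variables are assumptions.
- name: Wheel the application and all its dependencies
  shell: "{{ venv_dir }}/bin/pip wheel --wheel-dir={{ wheelbase_dir }} {{ src_dir }}"
  delegate_to: localhost
```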

The second most important step is to install all libraries from wheels on a remote server.
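A minimal sketch of that install command (`app_name` and `app_wheelbase_dir` are the placeholder names described below):

```shell
# Install only from local wheels: --no-index disables PyPI lookups,
# --find-links points PIP at the local wheelbase directory.
pip install --no-index --find-links=app_wheelbase_dir app_name
```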

where app_name is obviously our application and app_wheelbase_dir is the base directory for all the wheeled libraries. This command will install the app with all its dependencies. It’s super fast because there’s no need to download any dependencies from the internet.

The whole process is a bit more complex, so I would like to explain this flow in detail, step by step.

First of all, the wheel role is run only once, delegated to localhost. This role could be delegated to any other instance instead: for example, a build machine generating a specially prepared test ground, a system type other than localhost's, or even a Docker image. The wheel role consists of several commands that prepare the wheeled build: creating the directory structure, cloning the source code from the VCS, creating a virtualenv, and installing our application and its dependencies. After this step, we can launch tests or any other commands, such as CLI commands, to test our code. Once they have passed, the wheeling process starts. We came up with the idea of wheeling all packages and then storing those builds, to guard against possible errors and allow quick rollbacks. The whole wheelbase directory is emptied, then the app is wheeled by PIP. The whole wheelbase is then compressed into a single ZIP file with a timestamp and/or build number, etc. in its name, then copied and renamed to “latest.zip” for convenience. This finishes the wheel process.
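The steps above could be sketched as a role along these lines (a rough outline only; all variable names, paths and the pytest test command are assumptions, not the repository's actual tasks):

```yaml
# Illustrative wheel role, delegated to localhost; names are assumptions.
- name: Create the build directory structure
  file:
    path: "{{ item }}"
    state: directory
  with_items:
    - "{{ build_dir }}"
    - "{{ wheelbase_dir }}"

- name: Clone the application source code
  git:
    repo: "{{ app_repo_url }}"
    dest: "{{ src_dir }}"

- name: Create a virtualenv and install the app into it
  pip:
    name: "{{ src_dir }}"
    virtualenv: "{{ venv_dir }}"

- name: Run the test suite before wheeling
  shell: "{{ venv_dir }}/bin/python -m pytest"
  args:
    chdir: "{{ src_dir }}"

- name: Empty the wheelbase, then wheel the app with its dependencies
  shell: "rm -rf {{ wheelbase_dir }}/* && {{ venv_dir }}/bin/pip wheel --wheel-dir={{ wheelbase_dir }} {{ src_dir }}"

- name: Zip the wheelbase with a timestamp and keep a latest.zip copy
  shell: "cd {{ wheelbase_dir }}/.. && zip -r build-{{ ansible_date_time.epoch }}.zip {{ wheelbase_dir | basename }} && cp build-{{ ansible_date_time.epoch }}.zip latest.zip"
```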

Now, prior to deployment, when we have our already-tested app with its whole environment in a single ZIP container, we can continue with a standard Ansible deployment. Some server-related things happen, like creating users and installing packages and additional services such as nginx, Redis and a database server, which are not a part of this article. You can look them up in the attached GitHub repo; those roles work pretty well. Finally, the last part of the deployment is installing the application. Its Ansible role has to do only one thing — install the application and its dependencies from wheels into a virtualenv.

The easiest thing to do is to send the whole “latest.zip” package to the server and unzip it into an empty wheelbase folder. Then install it all with a single Ansible pip module command:

pip install app from wheel
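That single task could look roughly like this (a sketch; the variable names are illustrative):

```yaml
# Illustrative: install the app from the unpacked local wheels only;
# --no-index guarantees nothing is fetched from PyPI.
- name: Install the application and dependencies from wheels
  pip:
    name: "{{ app_name }}"
    virtualenv: "{{ venv_dir }}"
    extra_args: "--no-index --find-links={{ app_wheelbase_dir }}"
```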

The whole problem is reduced to two PIP commands.

After the installation, the standard deployment process occurs: copying the necessary configuration files, executing run scripts, or making DB migrations.

Conclusions

Every non-standard approach has its pros and cons. This one is no different, and I would like to point some out. Don’t hesitate to share your thoughts in the comments below. I would like to know your opinion.

Pros

Can be aborted before any actual deployment takes place

Safe

Easy to automatically test an APP prior to deployment

No need to grant any access to Git servers

No need to store any passwords or keys for Git; all other passwords can be stored in the Ansible vault

The same deployment script can be used for almost every Python APP with a similar architecture!

Easy to rollback to the previous version with all dependencies

Neither a pro nor a con, just somewhere in the middle

No speed gain when deploying to just one server

Overcomplicated for very simple apps

Cons

Requires creating a package structure in a project

Managing MANIFEST files

Not so easy when targeting a different OS type: requires Docker or another machine for wheeling extension-based Python code like uWSGI, or other packages listed as unsupported by http://pythonwheels.com/

For everybody who wants to analyze the whole playbook, I have prepared a sample repository: https://github.com/daftcode/python_deployment_on_wheels_ansible

Thanks for reading!

