If you’re new to Python web development with Django, there are some things that tutorials don’t teach. Deploying to a server once you finish your local development can be a frustrating task. Here is a rundown of the tools and workflow I use for deploying a Django-based website.

A few days ago, when I launched the new version of Notasbit.com, I ended up discussing deployment methods for Ruby on Rails and Django websites with a friend. He is used to Rails development and deployment, something I haven’t touched since Rails 1.8 (a looong time ago by now), and he insisted that the Ruby Gems approach is far easier, while Python deployment is a hassle. For people unfamiliar with the right tools, it can be: juggling different Python versions and library dependencies can make maintaining a Django website a nightmare.

Here is what I use for deploying Django websites on a live server:

The tools

Git

You cannot be serious about programming these days if you’re not using a version control system. In Linus Torvalds’ words: “if you’re not using Git, you’re an idiot”. I keep all my projects under version control, even when I’m the only developer and the code is for personal use only. It’s not only a good practice to keep; it also works as your backup. When my computers were stolen, I didn’t lose any project data because it was all backed up in several Git repositories at different places. For deployment tasks, you need to install Git on your server, so the server you’re deploying to will serve as a backup as well.

Virtualenv

All that version and dependency hell can be isolated using Python’s virtualenv tool. It creates an encapsulated environment with a given Python version and all the libraries you want to use in a project, without touching the system’s main versions. This way, you can have an old Django 1.4 project running on the same server as another project on a later Django 1.8 or 1.9 version, each with its own dependencies.
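A quick sketch of that isolation, using the stdlib `venv` module (the modern equivalent; with virtualenv itself the create step would simply be `virtualenv env`). The directory names are illustrative:

```shell
mkdir -p mainproject.com/django_project
cd mainproject.com

# Create the isolated environment next to (not inside) the project repo
python3 -m venv env

# Activate it: python and pip now resolve inside env/, not system-wide
. env/bin/activate
python -c 'import sys; print(sys.prefix)'

# Freeze the project's exact dependency versions so the server can reproduce them
pip freeze > django_project/requirements.txt
deactivate
```

The frozen `requirements.txt` is what the deploy task later installs on the server with `pip install -r requirements.txt`.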

Fabric

The only reason I still haven’t started using Python 3 on my projects is Fabric. This tool automates all the project maintenance tasks needed, including deployment. It is similar to a Makefile, but you write your tasks in Python code, so you don’t need to learn a whole new syntax.

Now to the implementation details…

On your local development machine the project structure will look like this:

Project directory structure. The idea is to keep the virtualenv outside the version-controlled directory. You can also use virtualenvwrapper and keep all your virtualenvs in a separate location; I like to have each one contained in the same folder as its project for now.

```
mainproject.com/
|
+-- env/  (virtualenv)
|
+-- django_project/
    |
    --- manage.py
    --- .git/
    --- fabfile.py
    --- deploy_tools/
    --- (all other Django project files & dirs)
```

Deployment scripts

In the deploy_tools directory we’ll place the files necessary to configure Apache, Nginx, Gunicorn, uWSGI, or whatever other server configuration scripts are needed for deployment.

Here’s an example of the Nginx configuration script I use:

```nginx
upstream mywebsite {                      # the upstream component nginx needs to connect to
    server unix:///tmp/mywebsite.sock;    # for a file socket
}

server {
    listen 80;                            # the port your site will be served on
    server_name www.mywebsite.com;        # the domain name it will serve for
    charset utf-8;
    client_max_body_size 250M;            # max upload size, adjust to taste

    location /static {
        alias /var/www/mywebsite/static;  # your Django project's static files - amend as required
    }

    location / {
        # Finally, send all non-media requests to the Django server.
        uwsgi_pass mywebsite;
        uwsgi_param QUERY_STRING $query_string;
        uwsgi_param REQUEST_METHOD $request_method;
        uwsgi_param CONTENT_TYPE $content_type;
        uwsgi_param CONTENT_LENGTH $content_length;
        uwsgi_param REQUEST_URI $request_uri;
        uwsgi_param PATH_INFO $document_uri;
        uwsgi_param DOCUMENT_ROOT $document_root;
        uwsgi_param SERVER_PROTOCOL $server_protocol;
        uwsgi_param HTTPS $https if_not_empty;
        uwsgi_param REMOTE_ADDR $remote_addr;
        uwsgi_param REMOTE_PORT $remote_port;
        uwsgi_param SERVER_PORT $server_port;
        uwsgi_param SERVER_NAME $server_name;
    }
}
```
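The `uwsgi_pass` above points at a Unix socket, so uWSGI has to be told to create it. A minimal companion `uwsgi.ini` might look like this — all paths and the module name are illustrative and must match your own layout:

```ini
; uwsgi.ini -- minimal companion to the Nginx config above (illustrative)
[uwsgi]
chdir = /var/www/mywebsite/django_project   ; where manage.py lives
module = mysite.wsgi:application            ; your project's WSGI entry point
home = /var/www/mywebsite/env               ; the project virtualenv
master = true
processes = 4
socket = /tmp/mywebsite.sock                ; must match the upstream in Nginx
chmod-socket = 666
vacuum = true                               ; remove the socket on exit
```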

Settings handling

There are many ways to solve the problem of settings management in Django applications. I like to have a general settings file and a local one for each environment. To achieve this, add a local_settings.py file and add it to the Git ignore list.

Then add the following at the end of your general settings.py file:

```python
# Import local settings
try:
    from local_settings import *
except ImportError:
    pass
```

On your remote server, create a local_settings.py file and add DEBUG = False or override any other setting you need specifically for that server. You can have different settings for local, staging, testing, production, or any other environment you need.
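For illustration, a production local_settings.py could be as simple as the following — all the names and credentials here are placeholders:

```python
# local_settings.py -- lives only on this server; listed in .gitignore
DEBUG = False

ALLOWED_HOSTS = ['www.mywebsite.com']

# Server-specific credentials stay out of version control entirely
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'MY-DATABASE-NAME',
        'USER': 'MY-DB-USER',
        'PASSWORD': 'MY-SERVER-PASSWORD',
        'HOST': 'localhost',
    }
}
```

Because settings.py imports this file last, anything defined here wins over the general defaults.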

Fabric tasks

For example, you might need to download a copy of your production database to test against during development. Instead of typing the same mysqldump command every time, you can automate it like this:

```python
def backup_db():
    """Gets a database dump from the remote server"""
    production1()  # a task (defined elsewhere in the fabfile) that sets the target host
    date = time.strftime('%Y-%m-%d-%H%M%S')
    dbname = 'MY-DATABASE-NAME'
    path = os.path.join(os.path.dirname(__file__), 'db_backups')
    fname = "{dbname}_backup_{date}.sql.gz".format(date=date, dbname=dbname)
    run("mysqldump -u {dbuser} -p'{password}' --add-drop-table -B {database} | gzip -9 > {filename}".format(
        database=dbname,
        dbuser='MY-DB-USER',
        password='MY-SERVER-PASSWORD',
        filename=os.path.join('/tmp', fname))
    )
    get(remote_path=os.path.join('/tmp', fname),
        local_path=os.path.join(path, fname))
    run("rm {filename}".format(filename=os.path.join('/tmp', fname)))
```

Likewise, you can create a deploy task that gets everything onto the server. Here’s a fabfile.py example:

```python
from fabric.api import *
from fabric.colors import green, red
import os
import sys
import time
from fabric.contrib import django
import datetime

sys.path.append(os.path.join(os.path.dirname(__file__), 'mysite'))
django.settings_module('mysite.settings')
from django.conf import settings

# Hosts
production = '[email protected]'

# Branch to pull from
env.branch = 'master'


@hosts(production)
def deploy():
    """Uploads files and runs deployment actions on a given server"""
    # path to the directory on the server where your vhost is set up
    path = "/var/www/mysite"
    # name of the application process
    process = "uwsgi"

    print green("Beginning Deploy:")
    with cd(path):
        run("pwd")
        print green("Pulling master from Git server...")
        run("git pull origin %s" % env.branch)

        # use the server's virtualenv for the following commands
        with prefix("source %s/env/bin/activate" % path):
            print green("Installing requirements...")
            run("pip install -r requirements.txt")

            print green("Collecting static files...")
            run("python mysite/manage.py collectstatic --noinput")

            print green("Migrating the database...")
            run("python mysite/manage.py migrate")

        print green("Restarting the uwsgi process...")
        sudo("service %s restart" % process)

    print green("DONE!")
```

With these tools, all you need to do when deploying new code to the server is to run one fabric command:

```shell
fab deploy
```

And you’re done.

I took most of these ideas from the book Test-Driven Development with Django. You can read it online for free to check out more details, and the parts I didn’t use in this example.

This is not a perfect solution for every case, but I hope it gives you some ideas for a workflow that fits yours. Share your own deployment workflow in the comments, or any suggestions to improve this one.




