Note: skip to the bottom for the juicy bits/code snippets

I will spare you the long road that led to Go (hint: it’s Python and gross amounts of Puppet) and just cut to the chase.

The Lifecycle of Our Go Code

1. Write some Go code, test it, commit it.
2. A Makefile compiles all of our code into a single executable with no real dependencies (read: things that don’t exist outside a modern Linux distribution).
3. The Makefile then packages the Go executable, along with assorted Upstart scripts and configuration, into a .deb package.
4. The Makefile then distributes the .deb package to a private S3 bucket.
5. Our servers, at a set interval (currently 10 seconds), perform HEAD requests against the .deb package.
6. If the package has changed, they download and install it.
7. Installation sends SIGUSR2 to our bubbli processes, which causes our web machines to gracefully restart immediately and our computer vision machines to restart when it’s next convenient.

There is really nothing revolutionary about anything I’ve mentioned here, but all of these steps when put together result in a great deployment experience.

With a little bit more effort you could set this up to continuously deploy with the proper git hooks.

The Good

Very few moving parts

We have one executable that does virtually everything our servers need to do (except computer vision, which is a C++ program that we call out to). In order to run our web server, you just need to scp it to a linux box, configure some options with environment variables, and run it. Oh, and it’s 1.6MB gzipped.

Catch errors early

Static typing is great, and while you can still shoot yourself in the foot with interfaces, if the code compiles, it’s much more likely to be correct than Python code that merely manages to boot a web server. Furthermore, since Go doesn’t need a huge set of surrounding components to run properly, you remove a whole class of failures due to issues in your deployment environment. Coming from Python, where we would run code on top of Werkzeug inside of Gunicorn inside of a Virtualenv inside Supervisord, all behind Nginx and configured with Puppet, this was a huge relief.

Deploying scales well

S3 is huge and reliable. Once your package is uploaded, you don’t generally need to worry about whether your distribution strategy is bottlenecked somewhere or dependent on you not losing your SSH connection mid-deploy. Coming from Puppet which consumed obscene amounts of CPU in doing essentially the same work on a bunch of machines, it’s great not to have to worry about a puppet-master as a single point of failure. Of course S3 can fail too, but all things considered, it is much more robust than anything we would maintain in-house.

A reader commented that you should always check MD5 hashes when copying your .deb file around, which is definitely the case. Fortunately, if you use the awscli to do so, this is done automatically!

Deploying is fast

Deploy speed is primarily determined by how often our autoupdate Upstart script checks for updated packages. Fortunately, S3 requests cost $0.004 per 10,000, so refreshing every ten seconds costs about $0.10 per month per server.

Deploying is seamless

Using the goagain package (https://github.com/rcrowley/goagain) it’s very straightforward to write net/http servers that seamlessly hand off a listening socket to a new process, wait a bit for existing requests to finish, and then terminate. For some reason the example uses a generic TCP socket, but you can hand a listener straight to net/http too; just replace

go serve(l)

with

go http.Serve(l, nil)

after you’ve set up your handlers. Restarting a running web process is as simple as sending it a SIGUSR2 signal.

Rollbacks are (theoretically) easy

You can enable versioning on your S3 bucket so that you can restore an old version of the package, which will automatically be picked up by your machines and installed. I’ve never actually needed to do this, however.

The Less Good

Must deploy from a binary-compatible machine

This is a minor inconvenience if you develop on a Mac, but probably better in the long run because deploying from one OS to another is just asking for trouble. I know that there are some tools in the golang community for cross-compiling, but I haven’t tried them out yet.

Configuration isn’t super flexible

At the moment, we ship a bunch of configuration in our Upstart scripts which are packaged into our .deb packages. This feels really dirty, but given how easy it is to deploy, it isn’t a huge issue. I know Instagram just refreshes configuration info from a redis server every 30 seconds, but something like serf (http://www.serfdom.io/) may work too.

Deployment doesn’t self-repair in the case of failure

When we deploy, I watch our aggregated logs (on http://papertrailapp.com) to make sure that the machines restart themselves properly and act accordingly if I see a flood of errors.