We are used to the traditional approach to software development: first write all the application code we can, only to realize later that it is not really production ready and that a lot of work remains before it can run in a production environment. Examples of this range from hardcoded secret keys to ad-hoc configuration that only works on localhost.

What triggered this thought?

A bit of background before I dive deeper into this. I have been working on the infrastructure team at my day job for the past few months, and I was writing a server management script in Golang. This gave me the freedom to compile the script into a single binary, without worrying about installing a runtime environment on the servers to execute it.

The script used cgo, and my local development environment was OS X while I needed to build for Linux. As a result, native cross compilation went out of the window[1]. So to build the binary I was doing the following:

1. Start a docker container with Golang installed.
2. Mount my project root on the $GOPATH of the docker container.
3. Run go build.
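The manual loop above can be sketched roughly as follows. The image tag, import path, and binary name mirror what the Makefile later in the post uses, but the exact invocation is my reconstruction; it is wrapped in a function here only so it is easy to reuse.

```shell
# Reconstruction of the manual build step: mount the project at its
# import path inside the official golang image and cross-compile a
# Linux binary with cgo enabled.
PROJECT_ROOT=/go/src/github.com/Instamojo/infra

build_in_docker() {
    docker run --rm -it \
        --volume "$PWD:$PROJECT_ROOT" \
        --workdir "$PROJECT_ROOT" \
        golang \
        bash -c "CGO_ENABLED=1 GOOS=linux go build -o run-manage-users"
}
```

Calling build_in_docker from the project root drops a Linux binary into the current directory, since the build runs inside the mounted volume.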

I had the binary ready, but I also wanted an easy way to ship it. I didn’t want to change my server configurations to include the binary on boot-up, since that would mean spinning up a new instance with the latest config each time I made the tiniest of changes to the script.

So I ensured that the servers ship with a tiny bash script that would download the binary from a preset S3 bucket and execute it. This made it possible for me to implement an on-the-fly approach where all I had to do was upload the binary to the S3 bucket and the bash script would download and execute it.
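A sketch of that bootstrap script is below. The bucket, binary name, and install path are placeholders of mine (the post doesn’t give the real values), and the fetch command is overridable so the sketch can be exercised locally without AWS credentials.

```shell
#!/usr/bin/env bash
# Sketch of the tiny bootstrap script baked into the servers.
set -euo pipefail

bootstrap() {
    local bucket="${BUCKET:-s3://mybucket}"       # preset S3 bucket (placeholder)
    local binary="${BINARY:-run-manage-users}"    # binary name (placeholder)
    local dest="${DEST:-/usr/local/bin/$binary}"  # install path (placeholder)
    local fetcher="${FETCHER:-aws s3 cp}"         # swap for `cp` in local tests

    $fetcher "$bucket/$binary" "$dest"  # download the latest build
    chmod +x "$dest"                    # make sure it is executable
    exec "$dest"                        # hand control over to the binary
}
```

On boot, the server simply calls bootstrap, so every new upload to the bucket is picked up on the next run.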

When I started off on the script it did very little, and I expected to keep making incremental changes to it.

This approach is fine when you are making changes once in a blue moon, but it was horrible while I was actively working on the script and testing my changes iteratively, one small piece at a time. Being the command line nerd that I am, I hated every minute of having to build the binary manually, open the AWS console, click through the options on S3 and upload the file.

That’s when I took a step back and started thinking about how I could automate this part.

What did I change?

I wrote a Makefile to achieve this. Here’s what it does:

1. Spin up a docker container with golang installed.
2. Mount the current working directory into the container to make the source files available.
3. Build the binary.
4. Use the AWS CLI to copy the binary into the S3 bucket.
5. Exit the container and clean up.

And here are the contents of the Makefile:

```makefile
PROJECT_ROOT = /go/src/github.com/Instamojo/infra

install:
	go get -v ./...

build: install
	CGO_ENABLED=1 GOOS=linux go build -a -installsuffix cgo -o $(GOPATH)/src/github.com/Instamojo/infra/run-manage-users

docker:
	docker run --workdir $(PROJECT_ROOT) --rm --volume $(PWD):$(PROJECT_ROOT) -it golang bash -c "make build"

ship: docker
	aws s3 cp run-manage-users s3://mybucket
```

As a result, each time I made a change I could now just run make ship, and the new binary would be created and uploaded to S3. Once this step was done, testing the script was trivial: run the bash script on the server, which downloads the binary and executes it.

End result

When I got the Makefile to work, it felt like I had automated away a time-consuming part of the process. make ship was a clearly faster and more seamless way to go about it.

The irony, however, is that I only thought about this approach after having written most of the script. In hindsight, writing the Makefile earlier would have saved me a lot of time and frustration.

But we generally do not approach software development like this, since we are way too accustomed to thinking about it in the traditional way: first write code for the local environment, then think about building it for staging or production.

As a result of this exercise, I realized that thinking about how we deploy code is just as important as thinking about program structure: it not only makes the development and testing processes faster, but also lets us ship the code earlier. By making deployments easy, you will have done your frustrated future self a huge favour when things seem to work on your machine but not on your colleague’s, or on staging/production environments.



Footnotes:

[1]: https://dave.cheney.net/2016/01/18/cgo-is-not-go
