CI (Continuous Integration) is something I hadn’t used until recently, and it’s something you’ll come across sooner or later. CI tools are really powerful, helping you automate your deployments and other tasks.

Most developers will be aware of some CI tool, whether it’s CircleCI, Travis CI, GitLab CI or, in our case, Jenkins.

At Boomerang Messaging we soon realised our method of SFTP releases wasn’t scalable or time-effective, and with all our new features, we needed to be able to release quickly, reliably and across multiple servers for dev testing, QA and production.

With bigger and more complex features requiring more files to be uploaded, deployments became really tricky: they could result in missing files, or in environments drifting out of sync by varying degrees and a comically long task of re-syncing. Deployments could take from a few minutes to hours, or even days for big version upgrades in our codebase. At scale, this becomes a huge problem in terms of developer time, debugging time and managing code effectively. We needed a solution for both deployment and how we managed Git.

Our main goals were:

Improve deployment speed

Reduce the number of bugs caused by missing files

Update multiple environments quickly

Avoid accidental releases of code that wasn’t production-ready

Before, we’d used Git and branches, but without any sort of branching structure we hadn’t leveraged its full benefits. Eventually we’d have various team members out of sync, incomplete updates, manual file merges and, at times, a general nightmare to work on.

I decided to take on this challenge and solve our issues in one quick swing. A month later…it wasn’t quick. It involved many painful moments, ‘ah ha!’ moments and wondering if I should move to the woods and never deal with Jenkins or Git ever again. By the end of it (and more swearing) we were done! We’ve seen a 97% improvement in deployment speed, fewer rogue releases and a better understanding of what’s in our codebase! We can easily export commit history directly to our QA teams and other developers, so we’re all aware of what was released to which environment.

The first task was restructuring our Git repo. From previous experience of it working well in any team of more than three developers, I opted to introduce Git Flow. We’d end up with master (a true reflection of production), develop (the collaboration branch) and feature/ branches. Undertaking this meant merging a lot of our existing branches into one, and merging that back into our current feature branches to align pre-existing branches and avoid merge issues later. Some branches were over 200 commits out of date with 40 file conflicts, so it was a pretty big task!
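The structure above can be sketched with plain Git commands; the repo path and the feature branch name here are hypothetical, just to show the shape:

```shell
# Git Flow structure sketch: master mirrors production, develop is the
# collaboration branch, and each feature lives on its own feature/* branch.
set -e
rm -rf /tmp/gitflow-demo && mkdir -p /tmp/gitflow-demo && cd /tmp/gitflow-demo
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
git commit -q --allow-empty -m "Initial commit"
git branch -M master                    # master: a true reflection of production
git checkout -qb develop                # develop: the collaboration branch
git checkout -qb feature/new-reports    # feature/*: one branch per feature
git commit -q --allow-empty -m "Add reports feature"
# When the feature is ready, it goes back into develop:
git checkout -q develop
git merge -q --no-ff feature/new-reports -m "Merge feature/new-reports"
```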

This alone didn’t solve the direct issue of making releases faster, more reliable and automatic, though, nor give us finer control over which features were released to which environments. Sometimes developers needed a real server to test code, or needed to push a feature to our staging server and eventually production. Each developer had their own live server, so this became quite a hassle to manage.

I knew I wanted Git to play a part in deployments and to chuck FTP into the barren lands, never to be seen again, but how I’d manage this was foreign to me at first.

That’s where Jenkins came in. I picked Jenkins over other CI tools because the availability of plugins, the community resources and its reputation made it stand out to me (it also has a much better name). Using Jenkins, I created a multibranch pipeline and a branch for each server we had, always prefixed with server/ and followed by the server’s name. master was tied to production. Using webhooks in GitHub, I created a scripted pipeline that handled the deployment process and used SSH details from Jenkins’s credentials manager.

I also opted to use a single Jenkinsfile across each branch. The only issue there was keeping the Jenkinsfile synced across branches that had already been created and that we couldn’t merge develop into without introducing unwanted features. To solve this we have a jenkins/config branch, and that’s merged into master and into whichever server branches we want to use the new config.
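A runnable sketch of that flow; the repo, the file contents and the server/staging branch are invented for illustration, while the jenkins/config branch name is from our setup:

```shell
# Propagating Jenkinsfile changes via a jenkins/config branch, without
# merging develop (and its unreleased features) anywhere.
set -e
rm -rf /tmp/jenkins-config-demo && mkdir -p /tmp/jenkins-config-demo && cd /tmp/jenkins-config-demo
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
echo "// pipeline v1" > Jenkinsfile
git add Jenkinsfile && git commit -qm "Initial Jenkinsfile"
git branch -M master
git checkout -qb server/staging master   # a per-server deployment branch
git checkout -qb jenkins/config master   # config-only branch
echo "// pipeline v2" > Jenkinsfile
git commit -qam "Update pipeline config"
# Merge the config change into every branch that should pick it up:
for branch in master server/staging; do
  git checkout -q "$branch"
  git merge -q jenkins/config -m "Sync Jenkinsfile"
done
```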

To get our repo source, I used the Git plugin to clone from our repo, and using an environment variable I could reference the branch name from the webhook Jenkins received from GitHub, making sure we cloned from the right branch each time.

Our simple script then needed stages, so we could track where builds might have failed and define our process with clarity. Each stage handled a different set of tasks related to its name, and first we’d start with Clone.

Clone Stage
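The original post showed this stage as an image; here’s a sketch of what such a clone stage looks like in a scripted pipeline, with the repo URL and credential ID being hypothetical:

```groovy
node {
    stage('Clone') {
        // In a multibranch pipeline, BRANCH_NAME is set from the webhook
        // payload, so each server/* branch (and master) deploys its own code.
        git branch: env.BRANCH_NAME,
            credentialsId: 'github-deploy-key',
            url: 'git@github.com:example/app.git'
    }
}
```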

After cloning, we needed to build the files into a tar.gz file, retaining correct file permissions (hence tar over zip). We could do this using Jenkins’s ability to run shell commands, and write SCP commands to transfer the files.
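Something like this; the target path and the SERVER_IP variable are assumptions of mine:

```groovy
stage('Build') {
    // tar preserves Unix file permissions, which zip wouldn't.
    sh 'tar -czf release.tar.gz --exclude=release.tar.gz .'
}
stage('Transfer') {
    // Copy the archive to the target server over SCP.
    sh "scp release.tar.gz deploy@${env.SERVER_IP}:/var/www/releases/"
}
```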

This was great, but we’d still have to SSH in, clear Laravel’s caches, re-run build processes and run other bash scripts I’d written to restart Laravel’s queue workers (events we might have added weren’t picked up without a restart of the worker). With a little more script, we could do this too. It saved us the headache of “did you clear the caches?!”, or of finding the SSH password, SSHing in, cd’ing to the directory and FINALLY running the command.
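A sketch of that remote housekeeping as a pipeline stage; the paths are hypothetical, and the artisan commands are the standard Laravel ones:

```groovy
stage('Post-deploy') {
    // Run the housekeeping we used to do by hand over SSH:
    // unpack, clear caches, and restart the queue workers so
    // newly added event listeners are picked up.
    sh """
        ssh deploy@${env.SERVER_IP} '
            cd /var/www/app &&
            tar -xzf /var/www/releases/release.tar.gz &&
            php artisan cache:clear &&
            php artisan config:clear &&
            php artisan queue:restart
        '
    """
}
```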

This worked great, but only on one server. The solution was to store the SSH credentials with Jenkins and, using an if statement, switch the IP and server username for each server we had, pulling the details from the credentials manager.
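A sketch of that per-server switch; the branch names follow the server/ convention from earlier, but the IPs, credential IDs and the use of password-based auth via sshpass are my assumptions:

```groovy
def serverIp, credsId
if (env.BRANCH_NAME == 'master') {
    serverIp = '203.0.113.10'; credsId = 'prod-ssh'        // hypothetical
} else if (env.BRANCH_NAME == 'server/staging') {
    serverIp = '203.0.113.20'; credsId = 'staging-ssh'     // hypothetical
}

withCredentials([usernamePassword(credentialsId: credsId,
                                  usernameVariable: 'SSH_USER',
                                  passwordVariable: 'SSH_PASS')]) {
    // \$ defers to the shell, so the secret isn't interpolated by Groovy.
    sh "sshpass -p \"\$SSH_PASS\" scp release.tar.gz \$SSH_USER@${serverIp}:/var/www/releases/"
}
```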

With some painful realisations about our setup on different servers, it worked! We had Jenkins going full steam! We had a new modular codebase, so we could release a batch of features at once via a tagged branch they were merged into, or we could selectively move new features onto different servers for testing without affecting the develop branch. Once we were happy, they’d go into develop!

The downside was that no-one else had Jenkins access, and we’re not giving it out like candy (we don’t even have free candy :( ). Jenkins has a nifty plugin for this, Extended E-mail Notification, to complement its pre-installed SMTP abilities. I wanted to notify developers of whether their deployment succeeded or failed, why, and on which server. To do this, I had to update our script to use a try/catch.
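The shape of that, using the plugin’s emailext step (recipient address hypothetical):

```groovy
node {
    try {
        // ... clone / build / deploy stages ...
        emailext subject: "Deployed ${env.BRANCH_NAME} (build ${env.BUILD_NUMBER})",
                 body: 'The deployment completed successfully.',
                 to: 'devs@example.com'
    } catch (err) {
        emailext subject: "Deployment of ${env.BRANCH_NAME} FAILED",
                 body: "Build ${env.BUILD_NUMBER} failed: ${err.message}",
                 to: 'devs@example.com'
        throw err   // re-throw so the build is still marked as failed
    }
}
```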

Now we’d get emails as soon as a build completed or failed, so we could see which server it was, or find out why it failed. We don’t yet run code tests, so usually it would be an SSH issue or a failure to run a script.

We also needed a way to hand our repo’s changelog and commit history to our QA team, so they knew what to test or what had been released, when and by whom. I edited the script to send the attached files, and only to QA, depending on the branch.
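One way to do that is to generate the log with git and attach it via emailext; the log format, branch name and address here are assumptions:

```groovy
stage('Changelog') {
    // Export recent commit history (hash, author, date, subject) for QA.
    sh 'git log --pretty=format:"%h %an %ad %s" --date=short -20 > changelog.txt'
    if (env.BRANCH_NAME == 'server/qa') {
        emailext subject: "Released to QA: ${env.BRANCH_NAME}",
                 body: 'Commit history attached.',
                 attachmentsPattern: 'changelog.txt',
                 to: 'qa@example.com'
    }
}
```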

Another evolution later and we finally had it: a fully working Jenkins pipeline, with builds under 30s including building assets, clearing caches and notifying the team depending on the server. Here’s the final script:
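The final script appeared as an image in the original post; as a stand-in, here’s a condensed sketch of what such a pipeline looks like end to end, with every server name, path, address and credential ID being hypothetical:

```groovy
node {
    try {
        stage('Clone') {
            git branch: env.BRANCH_NAME, credentialsId: 'github-deploy-key',
                url: 'git@github.com:example/app.git'
        }

        // Pick the target server from the branch name.
        def server = [
            'master'        : [ip: '203.0.113.10', creds: 'prod-ssh'],
            'server/staging': [ip: '203.0.113.20', creds: 'staging-ssh'],
            'server/qa'     : [ip: '203.0.113.30', creds: 'qa-ssh'],
        ][env.BRANCH_NAME]

        stage('Build') {
            sh 'tar -czf release.tar.gz --exclude=release.tar.gz .'
        }

        stage('Deploy') {
            withCredentials([usernamePassword(credentialsId: server.creds,
                                              usernameVariable: 'SSH_USER',
                                              passwordVariable: 'SSH_PASS')]) {
                sh "sshpass -p \"\$SSH_PASS\" scp release.tar.gz \$SSH_USER@${server.ip}:/var/www/releases/"
                sh "sshpass -p \"\$SSH_PASS\" ssh \$SSH_USER@${server.ip} " +
                   "'cd /var/www/app && tar -xzf /var/www/releases/release.tar.gz && " +
                   "php artisan cache:clear && php artisan queue:restart'"
            }
        }

        stage('Notify') {
            sh 'git log --pretty=format:"%h %an %s" -20 > changelog.txt'
            emailext subject: "Deployed ${env.BRANCH_NAME}",
                     body: 'Commit history attached.',
                     attachmentsPattern: 'changelog.txt',
                     to: 'devs@example.com'
        }
    } catch (err) {
        emailext subject: "Deployment of ${env.BRANCH_NAME} failed",
                 body: "${err.message}", to: 'devs@example.com'
        throw err
    }
}
```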

With more servers, it’s as simple as creating a new branch from develop prefixed with server/, a small script update and a set of SSH credentials, and in 10 minutes we’d have a whole new server ready to use.

It took a lot of time, pain and headaches, but as a team we agreed it was the right move and the ideal solution to our issues, both those specific to us and ones other teams face. We’ve got a modular codebase that we can drop new features into whenever we want with ease, and a full history of what’s happened. A happy dev team, a happy QA team and a much stronger product in the end.

Adding a defined Git Flow pattern and Jenkins has been a big task and a huge benefit to our team, and in my opinion it’s a must for any large-scale app. It’s even worth playing with in small teams and apps.

You can ask questions on here, or tweet to me @mrmonkeytech