Delivering and deploying software is hard. Continuously doing so, while dealing with ever-changing requirements and scenarios, in a secure and reproducible way? Even harder!

For most teams, it is very difficult to build fully structured, reliable, and yet flexible build and deployment processes that can scale to dozens of Linux distribution releases. But with the Open Build Service, it all becomes possible in a reasonably manageable fashion!

Let’s start at the beginning...

Datto, like most modern software development companies, uses Linux-based systems for its servers and customer equipment. However, the general focus is often on making the software rather than shipping it. When it came time to actually ship the software, Datto used to wrap everything in ZIP archives, push them out, and unpack them remotely onto the filesystem. This became difficult as more complex “setup” tasks were required.

This worked for some time, but as our software delivery requirements grew more complex, it became harder to keep things sane with our bespoke delivery system. It really stopped scaling after the second time we had to upgrade all the servers and devices to a newer base Linux distribution.

So, we transitioned to proper-ish Linux packaging for our software to handle this more smoothly. And like most small companies without much experience in this space, we built our own system for it. Of course, our system for building and publishing those packages was a custom monstrosity! It was also very limited in what it could do. As you can see below, the code for the actual package build was rather simple...

A sample of the code from our custom build system for building packages

There were a number of issues with the custom system that we mostly hand-waved away:

Dependent packages couldn’t build against each other (thus, no package build chains)

We were restricted to a single Linux distribution family (in our case, Ubuntu)

Setting up a package to be built was very arcane and easy to get wrong

We didn’t have good failure logging

But it worked for building packages for our servers and customer devices, until we introduced the Datto Linux Agent in 2015.

The Datto Linux Agent breaks the mold (and me!)

At launch, the Datto Linux Agent (DLA) supported Fedora, Red Hat Enterprise Linux/CentOS, Debian, and Ubuntu. And there were plans to add SUSE Linux Enterprise and openSUSE later! Moreover, DLA was actually broken up into multiple components that needed to be built in a specific chain for everything to build properly.

Unfortunately, the deficiencies in our existing package build tooling meant that I was forced to build every DLA release by hand, spending a week to build and validate it. At the time, that was 18 builds (9 for the kernel part, 9 for the userspace part)! It meant that when a DLA release was happening, I could not do anything else that week, and that was killing me!

I knew my manual package build process wasn’t going to be able to scale to handle building for more distributions in any kind of timely fashion. So, I started looking into what to do to make this less burdensome.

After looking into the problem and examining what others had done, I came up with three major alternatives:

Build a brand new custom system, possibly Buildbot-based with some extensions for handling dependency resolution using the Hawkey module

Adapt Fedora’s Mock and Koji to extend it to support Debian/Ubuntu builds

Use SUSE’s Open Build Service

So why the Open Build Service?

The Open Build Service (OBS) is a software solution created by SUSE to build and manage the openSUSE and SUSE Linux Enterprise distributions. However, it was designed from the beginning to support a wide variety of Linux-based platforms. Notably, it can build packages, repositories, and images for Red Hat/Fedora, SUSE, and Debian/Ubuntu systems. SUSE offers a hosted version as the openSUSE Build Service, and an appliance image is freely available from the OBS website for deploying your own instance.

openSUSE Build Service home page

The Open Build Service integrates with version control systems (such as Git, Mercurial, and Subversion) and can either pull code or receive pushed code regularly for building. Once it has the necessary inputs, it can build the software for all enabled targets. After the build, it can fire off events to trigger automated tests and/or review processes. Failures can also trigger events and notifications, depending on your configuration.

Some highlights of awesome features in OBS:

Source input flexibility through “source services” that allow scripted retrieval and processing of sources

Easy scaling of resources through OBS workers that auto-connect to the master

Multiple build engines for building software in a particular preferred way

Pre-install images can be defined for speeding up build environment setup

Automatic reverse dependency rebuilding on package updates to ensure dependencies are linked correctly

Fully customizable build environments using VMs or containers
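As an illustration of the first point, source retrieval and processing are declared through a `_service` file in the package. The sketch below is a hypothetical example (the repository URL and revision are placeholders, not our actual configuration) using the standard `obs_scm`, `tar`, and `recompress` services:

```xml
<!-- _service: hypothetical source service configuration -->
<services>
  <!-- clone the repository and produce an archive of the sources -->
  <service name="obs_scm">
    <param name="scm">git</param>
    <param name="url">https://git.example.com/dla.git</param>
    <param name="revision">master</param>
  </service>
  <!-- turn the archive into a tarball at build time -->
  <service name="tar" mode="buildtime"/>
  <!-- compress the tarball so the build recipe can reference a .tar.xz -->
  <service name="recompress" mode="buildtime">
    <param name="file">*.tar</param>
    <param name="compression">xz</param>
  </service>
</services>
```

With a configuration along these lines, OBS itself fetches the sources, so the build inputs are reproducible from the declared revision rather than from whatever someone uploaded by hand.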

We use many of these features to support reliably building our software to feed into our delivery pipeline.

Datto’s OBS instance and how we use it

Datto’s OBS instance was deployed using the official appliance installer provided on the website. The workers run in a VM without nested virtualization, so container build environments are used.

Though we primarily target Ubuntu in production, we build nearly all of our software from RPM spec files using OBS’s spec build engine and debbuild! Even though spec files drive the builds, the output packages are native, (mostly) proper Debian packages! In fact, most of our packages are built from the same spec file for an RPM distribution (usually Fedora) and Ubuntu, as a way to sanity-check everything and verify that we retain some degree of portability.
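To give a flavor of how a single spec can target both families, here is a minimal hypothetical sketch (the package name and dependencies are illustrative, not one of our actual packages). debbuild reports a `_vendor` of `debbuild`, which lets one spec carry distribution-specific conditionals:

```spec
Name:           hello-datto
Version:        1.0.0
Release:        1%{?dist}
Summary:        Hypothetical cross-distribution example package
License:        MIT
Source0:        %{name}-%{version}.tar.gz

# debbuild sets %%{_vendor} to "debbuild", so one spec can name
# the right build dependency on each distribution family
%if "%{_vendor}" == "debbuild"
BuildRequires:  zlib1g-dev
%else
BuildRequires:  zlib-devel
%endif

%description
Example demonstrating a single spec targeting both RPM and
Debian distribution families.

%prep
%autosetup

%build
%configure
%make_build

%install
%make_install

%files
%{_bindir}/hello-datto
```

On Fedora this produces an RPM; run through debbuild, the same recipe produces a native Debian package.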

Naturally, the Datto Linux Agent is a bit special. It has been built for over 25 Linux distribution targets across the Red Hat/Fedora, SUSE, and Debian/Ubuntu distribution families leveraging this strategy!

Our packaging workflow is designed around Git repositories being the source of truth. Every commit, including every proposed change via a pull request, gets built in our CI infrastructure as a so-called “scratch build”: the outputs are thrown away after verifying that the build succeeded.

A scratch build log for DLA

As you can see above, each scratch build is tied to a specific commit and identified by the branch it came from. These scratch builds are also often used for smoke tests and end-to-end integration tests of the software, in a form relatively close to how it would roll out in our production systems.

When we’re ready to release, we make a commit to bump the version of the software and create a Git tag for it. That tag triggers a pipeline that pushes it through our release workflow, which submits it to the build service for the “final” build.

DLA code for release being submitted

At that point, it gets built in our internal build service instance, which looks something like this:

OBS package home page for DLA

Once the builds are successful, they are pulled from the build service and pushed into our package repository server to be made available for consumption. At this point, the software is tested and then promoted to be incorporated into our systems and products.



And what about build chains? The OBS actually handles that for us automatically. Because it tracks the dependency chains of every package built in the system, dependency shifts (updated package builds or updates from the Linux distribution) automatically trigger rebuilds, sequenced in the correct order to minimize rebuild cycles.
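The chaining works because each project declares which repositories its builds draw packages from. A hypothetical project meta configuration (the project and repository names are illustrative, not our real setup) might look like:

```xml
<!-- hypothetical project meta: builds in datto:dla pull packages
     from datto:tools and from the base Ubuntu 20.04 repositories -->
<project name="datto:dla">
  <title>Datto Linux Agent (illustrative)</title>
  <repository name="xUbuntu_20.04">
    <path project="datto:tools" repository="xUbuntu_20.04"/>
    <path project="Ubuntu:20.04" repository="universe"/>
    <arch>x86_64</arch>
  </repository>
</project>
```

When a package in `datto:tools` is updated, OBS knows which packages in `datto:dla` consume it and schedules their rebuilds in dependency order, with no manual sequencing on our part.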

For heavier builds like the Linux Agent, we also set up pre-install images that OBS automatically (re)builds, so that constructing the build environment when the packages need to be built takes seconds instead of minutes. These cached build environment images are kept fresh through OBS’s dependency tracking, just like regular package builds!

A couple of other fun statistics:

Datto’s instance builds over 400 packages across nearly 300 projects, producing over 930 internal repositories!

The last mass rebuild took roughly five hours across six builders!

Summary

In this post, I described how Datto originally built and released its software, how that kept falling short of our needs, and what ultimately led us to put together a better build system. After investigating several alternatives, we landed on the Open Build Service for its power and flexibility, and we have leveraged many of its features to support our development and release process.

One very interesting property of how we build our software is that adding support for new distributions is generally very easy (on the order of days instead of weeks or months). If we wanted to, we could flip a switch and move to a different Linux distribution relatively quickly, because our tooling supports RPM and Debian distribution targets equally well.

And in the end, the key is that it all just works. Packages are built and pushed through our pipeline and released semi-automatically, making the process of getting new software out the door incredibly smooth and simple. This allows us to iterate rapidly enough that we can go from prototyping a new feature to full implementation within a matter of weeks, instead of months or years!