Is it me, or are Make and Puppet really quite similar beasts? Both of them essentially say, ‘give me a description of the end result, I’ll make sure the end result looks like that.’

Build systems like Make say, ‘given these source files, I’ll make sure that your compiled files are up-to-date.’ The ‘end result’ is one or more compiled files. The description is a set of source files and some rules for how they are to be combined. Configuration systems like Puppet say, ‘given this set of package names, configuration files, and other resource identifiers, I’ll make sure your system is up-to-date.’ The ‘end result’ is a configured system. The description is a list of package names, configuration file templates, and so on, with rules for how they are to be combined.

The ‘resources’ in Make are files identified by filepaths. The ‘resources’ in Puppet are just a little more abstract: files are one kind of resource, but so are ‘packages’, ‘users’, ‘services’, and other things. In Make, we describe what we want as a set of filepaths, like dist/apache. In Puppet, we describe what we want as a set of resource identifiers, like [File["/etc/hosts"], Package["apache"]].

Files in Make can have dependencies on other files, so we can say ‘dist/apache depends on apache.c’. We are declaring that dist/apache is a function of apache.c. Resources in Puppet can have dependencies on other resources, so we can say ‘Service["apache"] depends on Package["apache"]’. We are declaring that, in some sense, the running apache service is a function of the installed apache package.

Your Makefile represents a directed acyclic graph where vertices are files and edges are ‘is a function of’ relationships. Your Puppet configuration represents a directed acyclic graph where vertices are resources and edges are ‘must be configured before’ relationships.

When Make runs, it constructs this graph, and runs the rules in an order such that for each rule, the input files are built before the output files. (This is a white lie; Make skips much of this work by comparing file timestamps.) When Puppet runs, it constructs the dependency graph, and configures the resources in an order such that each resource’s dependencies are configured before the resource itself.
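The ordering that both tools compute is just a post-order depth-first walk of the dependency graph. Here’s a minimal sketch in Python; the target names and dependency map are invented for illustration, not taken from either tool:

```python
from typing import Dict, List

# A toy dependency graph in the shape of a Makefile: each target
# maps to the things it is a function of. (Invented for illustration.)
deps: Dict[str, List[str]] = {
    "hello": ["main.o", "factorial.o", "hello.o"],
    "main.o": ["main.c"],
    "factorial.o": ["factorial.c"],
    "hello.o": ["hello.c"],
    "main.c": [], "factorial.c": [], "hello.c": [],
}

def build_order(target: str) -> List[str]:
    """Post-order DFS: every dependency appears before its dependent."""
    order: List[str] = []
    seen: set = set()
    def visit(node: str) -> None:
        if node in seen:
            return
        seen.add(node)
        for dep in deps[node]:
            visit(dep)
        order.append(node)
    visit(target)
    return order

order = build_order("hello")
# Sources come out before object files, object files before the binary.
assert order.index("main.c") < order.index("main.o")
assert order.index("main.o") < order.index("hello")
assert order[-1] == "hello"
```

Puppet’s scheduling is the same walk with resources for vertices; the only real difference is what ‘building’ a vertex means.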

Okay, so build systems and configuration management systems are structurally similar. Who cares? I do, and here’s why you should.

In several personal projects, I have a Make build system that takes my source files and compiles them using a few different tools. Those tools are provided by packages in my operating system, and I use a separate Puppet file to ensure that those packages are installed on my development machine. A very similar setup exists at my workplace. Here’s a simple example. One Makefile:

hello: main.o factorial.o hello.o
	g++ main.o factorial.o hello.o -o hello

and a Puppet file:

package { "g++-4.9":
  ensure   => present,
  provider => 'apt',
}

An implicit rule is that developers on this project must run Puppet to configure the development machine and then run Make to build the project.

More importantly, the dependencies between the Makefile and the Puppet configuration are implicit. What tools have to be installed, and at what versions, for such-and-such a rule in the Makefile to run successfully? The Makefile declares its dependencies on the object files, but forgets to declare its dependency on g++. What happens when you update g++? Make doesn’t know about it, so it doesn’t recompile your application.
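This gap can be sketched with a toy version of Make’s timestamp check. All the mtimes below are invented; the point is only that an update to g++ is invisible until g++ is declared as a dependency:

```python
from typing import Dict, List

# Invented mtimes: the compiler (5) is newer than the hello binary (2),
# i.e. g++ was upgraded after the last build.
mtime: Dict[str, int] = {
    "main.o": 1, "factorial.o": 1, "hello.o": 1,
    "g++": 5, "hello": 2,
}

# hello declares its object files but NOT the compiler, mirroring the
# Makefile above.
deps: Dict[str, List[str]] = {"hello": ["main.o", "factorial.o", "hello.o"]}

def is_stale(target: str) -> bool:
    """Make's rule of thumb: rebuild if any dependency is newer."""
    return any(mtime[d] > mtime[target] for d in deps[target])

assert not is_stale("hello")  # g++ was updated, but Make can't see it

deps["hello"].append("g++")   # declare the compiler as a build input
assert is_stale("hello")      # now the update forces a rebuild
```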

Wouldn’t it be better if we could instead write:

g++:
	apt-get install g++-4.9

hello: main.o factorial.o hello.o g++
	g++ main.o factorial.o hello.o -o hello

Notice the benefits:

- g++ becomes a build input for the hello file. This is just right, isn’t it? Dependencies should be explicit!
- The developer just has to say make hello. They don’t have to care about which compiler is being used; that’s an implementation detail.
- We have reduced two separate systems to one! Hooray for simplicity!

Yes, there are nits to pick and precise semantics to work out, but stop being a naysayer.