In the previous post in this series, we learned how the pets vs. cattle metaphor teaches us a new way of approaching cloud server architecture. Instead of unique pets, which require constant, individual care, we focus on cattle: identical, homogeneous units that can be added en masse and removed with ease. Cattle servers are, in other words, fungible resources.

“It takes a family of three to care for a single puppy, but a few cowboys can drive tens of thousands of cows over great distances.” (Joshua McKenty, CTO of Piston Cloud)

This post explains more about the cattle mindset, sometimes called the noflake approach. In contrast to a pet server, which is looked after by an administrator beavering away at a console, we want cattle servers to be configured with no intervention at all.

Pet servers are administered manually. We ssh into them, tweak configurations, download and install new packages, and perform maintenance tasks such as checking memory or network usage to make sure there aren’t any runaway processes.

Not so with cattle servers. With pet servers, configuration happens afterwards, or post facto: we create the server, then administer it, changing its state over a period of time. With cattle servers, configuration happens beforehand, or ex-ante: before the server is created, we write down everything we would like to be done on the server after it boots. We then send that configuration to the server, and the actions are executed on our behalf.

The actions will be executed by a configuration management (CM) tool such as Puppet or Chef. The list of things that we want done after booting takes the form of configuration files that the CM tools understand, written in specialised, sometimes declarative, domain-specific languages.

If you are familiar with the Ruby programming language, you will be at home with either Puppet or Chef, as both allow a Ruby syntax option for their domain-specific languages. You can, for example, use Chef at Engine Yard, so take a look at our open source Chef recipes on GitHub to get a feel for what a Chef recipe looks like.
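For illustration, a minimal Chef recipe might look like the following. This is a sketch of the declarative style rather than a recipe from our cookbooks; the package, template, and service names are hypothetical:

```ruby
# Hypothetical recipe: we describe the desired state, and Chef makes it so.
package "nginx"                      # ensure the nginx package is installed

template "/etc/nginx/nginx.conf" do  # render the config file from a template
  source "nginx.conf.erb"
  owner  "root"
  mode   "0644"
  notifies :reload, "service[nginx]" # reload nginx whenever the config changes
end

service "nginx" do                   # ensure nginx is enabled and running
  action [:enable, :start]
end
```

Notice that each block declares a state ("this package is installed", "this service is running") rather than a sequence of commands, which is what lets the tool work out what, if anything, needs doing.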

Pros and Cons

To recap quickly:

Deployment   Configuration point   Server model
Manual       Post facto            Pet
Automatic    Ex-ante               Cattle

There are advantages and disadvantages to each approach. An obvious advantage of post facto administration is that it is reactive. If something went wrong during the setup process, the admin could rectify the problem interactively, in real time. On an ex-ante cattle server, the actions in the CM configuration files have to be correct before they are sent to the machine.

But once the configuration is correct, suddenly the cattle servers have a huge advantage: because the config is standardised, you can feed it to many servers, and the results should be identical across all servers, barring hardware malfunctions. This means that you can have high-speed, efficient replicability. With replicability also comes scalability and, should a server need to be replaced, easy reprovisioning.

Another advantage of using CM is that a team can work on the config files together. Files can be commented on, shared, versioned, diffed, and attributed. Keep them under version control using Git, for example, and you get the full history of changes. These features of CM allow the collaborative management of your production configuration across a team of arbitrary size. And because the configuration is independent of the present, arbitrary state of a specific pet server, it’s much easier to debug.

Configuration Lifecycle

Even if you’re using CM, there are a variety of options as to how you apply it. If you’re using virtualisation, there are two primary places where configuration can be applied: before or after the machine image is compiled.

After Compilation

One approach is to start out with a very basic machine image as a base for all of your servers. If you’re using something like AWS, there are plenty of these available already. If you’re rolling your own setup, you could use one of the many available OS images. (Most UNIX-like distros provide bootable ISOs which can effectively serve as base images.)

Your CM configuration is then responsible for updating all of the local packages, installing the new packages you need, and configuring the rest of the system. This sort of approach makes life easier for your development team, because the base image will very rarely change. New configurations can be tested locally, pushed to a repository, and you’re done.
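To make that concrete, a post-boot recipe typically refreshes the package index before installing anything. Here is a hedged sketch in Chef’s Ruby DSL, assuming an apt-based base image; the package names are placeholders:

```ruby
# Hypothetical Chef recipe fragment: bring a minimal base image up to date,
# then layer on the packages the application needs.
execute "apt-get-update" do
  command "apt-get update -y"
end

%w[build-essential libssl-dev].each do |pkg|
  package pkg   # install each required package if not already present
end
```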

But each time you boot an instance, the configuration needs to be run. This can be brittle and hard to debug if it goes wrong. And depending on how much configuration is needed, it may be a while before the new instance is ready to start accepting requests.

Before Compilation

To take the opposite approach, you could apply the configuration to the base image once, and then compile a new image. Whenever you want to spin up an instance, you can then boot that image. Each instance spun up in this way will be byte-for-byte identical, and good to go without the need for any additional configuration.

Naturally, instances booted like this are going to be able to start serving requests much faster, which is important if you’re scaling out a cluster to deal with a sudden surge of traffic.

One of the drawbacks of this method is that each change to your configuration requires a new image to be compiled and installed into your VM management system. And this can introduce delays and cause additional work for your development team. Additionally, if you already have a running cluster of instances that use an old image, you will need to replace them all so that they are using the new image. This can become very complex to manage.

Hybrid Approach

These are just two examples. There’s a whole spectrum of techniques that lie in between these two points. For instance, if your boot times are very slow, you can “warm” your cluster by spinning up new instances ahead of time. This can be especially useful before a big announcement, or in any other situation where you know that traffic will surge.

At Engine Yard, we take a hybrid approach. The bulk of our standard configuration is done before the image is compiled. And after the machine boots, we handle customer specific configuration, which can’t be done ahead of time. In fact, this whole process is neatly abstracted away behind an Apply button on your dashboard, which will re-run all of the Chef configurations across your cluster.

Fortunately, Chef configurations are designed to be idempotent, meaning you can re-run them as many times as you like and end up in the same state: applying the configuration to a server that is already correctly configured changes nothing.
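To see what idempotence buys you, here is a toy plain-Ruby sketch (not Chef itself) of a “resource” that describes a desired state, so applying it twice is the same as applying it once. The paths and file content are made up for illustration:

```ruby
require "fileutils"
require "tmpdir"

# A hypothetical resource in the style of a CM tool: it describes a desired
# state (a config file with known content exists) rather than an action,
# so applying it repeatedly is safe.
def apply_config(root)
  FileUtils.mkdir_p(File.join(root, "etc"))   # no-op if already present
  config_path = File.join(root, "etc", "app.conf")
  desired = "port=8080\n"
  # Only write when the current content differs from the desired state.
  unless File.exist?(config_path) && File.read(config_path) == desired
    File.write(config_path, desired)
  end
  config_path
end

Dir.mktmpdir do |dir|
  first  = File.read(apply_config(dir))
  second = File.read(apply_config(dir))  # re-running changes nothing
  puts first == second                   # prints true
end
```

Chef resources follow the same pattern at a larger scale: each resource checks the current state of the system and acts only when that state diverges from the declared one.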

Conclusion

The configure-before-you-boot method may seem unusual to those who are used to the pet model of server maintenance. In a cloud computing environment, however, clusters containing many server instances need to be spun up quickly and homogeneously, which makes the benefits of using CM for ex-ante server configuration obvious. And beyond what is required to make a cloud architecture seamless, you get additional advantages from keeping your configs in version control.

Configuration management can be complex to set up, depending on the size of your app. Fortunately, most platform providers automate this for you. It’s one of the benefits of going with a PaaS instead of an IaaS. A significant chunk of the hard work is done for you.

In the next post in this series, we’ll look at the constraints that are placed on app developers as they attempt to design for deployment on cattle servers.

Are you more used to the pet model, or the cattle model? And which form of deployment do you prefer: pre-baked images, post-boot configuration, or something else entirely? We’d love to hear from you in the comments. Give us your opinions!