Good team member.

Chef is a well-recognized configuration management tool which I use extensively at my current work. However, I keep pushing Sparrowdo – a Perl6 configuration management tool – and I find that the two tools play nicely together.

In this post I am going to give a few examples of how I use Sparrowdo to simplify and improve the Chef cookbook development workflow.

Running chef-client on a target host.

Here is the most useful scenario of how I use Chef and Sparrowdo together. In my working environment, Amazon EC2 instances are launched and then get configured by Chef. Instead of ssh-ing to an instance and running chef-client on it, I delegate this task to Sparrowdo using a wrapper called Sparrowdo::Chef::Client.

Let’s install the module:

$ zef install Sparrowdo::Chef::Client

And then create a simple Sparrowdo scenario:

$ cat sparrowfile

module_run 'Chef::Client', %(
  run-list => [
    "recipe[foo]",
    "recipe[baz]"
  ],
  log-level => 'info',
  force-formatter => True
);

Here we’re just running two recipes called foo and baz, and defining some chef-client settings, like the log level, and enabling the force-formatter option. Now we can run chef-client on a target host:

$ sparrowdo --host=$target_host
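For the record, the wrapper’s parameters map onto ordinary chef-client flags. As a rough sketch – this is my assumption about what the wrapper effectively runs on the host, not its actual code – the scenario above presumably boils down to something like:

```shell
# Hypothetical sketch: roughly the chef-client invocation that
# Sparrowdo::Chef::Client might issue on the target host.
# -o overrides the run list, -l sets the log level, --force-formatter
# forces formatter output; all three are standard chef-client flags.
run_list="recipe[foo],recipe[baz]"
cmd="chef-client -o $run_list -l info --force-formatter"
echo "$cmd"   # printed, not executed, for illustration
```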

Post deployments checks.

It is always a good idea to check a server’s state right after a deployment. There are reasons why I prefer not to keep such checks inside my Chef scenarios. And this seems to be a trend, as new monitoring and audit tools keep appearing on the open source market – InSpec and goss among them, to name a few.

Likewise Sparrowdo has some built-in facilities to quickly test an infrastructure.

Let me give you a few examples.

Check system processes.

Say we reconfigure an Nginx server using some Chef recipes. Sometimes Chef is not able to ensure that Nginx starts successfully after a deploy; and even when it can, I don’t want to grep huge chef-client logs ( sometimes there is a load of them ) to find out whether Nginx started successfully. Happily, there is a dead simple solution – Sparrowdo asserts:

$ cat healthcheck.pl6

proc-exists 'nginx';

This function checks that the file /var/run/nginx.pid exists and that the related process with the PID taken from that file is running. If you need to handle uncommon pid file paths, you can always set the path explicitly:

$ cat healthcheck.pl6

proc-exists-by-pid 'nginx server', '/var/run/nginx/nginx.pid';
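The pid file check can be sketched in plain shell; this is my assumption about the logic involved, not the actual Sparrowdo implementation:

```shell
# Sketch of a pid-file based process check: the file must exist and
# a process with the recorded PID must be alive (kill -0 only probes
# the process, it sends no signal).
check_proc_by_pidfile() {
  pid_file=$1
  [ -f "$pid_file" ] && kill -0 "$(cat "$pid_file")" 2>/dev/null
}

# Demonstrate with the current shell's own PID:
echo "$$" > /tmp/demo.pid
check_proc_by_pidfile /tmp/demo.pid && echo "process exists"
```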

Moreover, if you only know the “name” of the process ( well, technically speaking, a regular expression to match the process command ), simply use this:

$ cat healthcheck.pl6

proc-exists-by-footprint 'nginx web server', 'nginx\s+master';

Having created this simple Sparrowdo scenario, just run it against a target server to check that the Nginx process exists:

$ sparrowdo --host=$target_host --sparrowfile=healthcheck.pl6
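For the curious, the footprint matching can also be sketched in shell – again, an assumption about the general idea rather than the real implementation:

```shell
# Sketch of a "footprint" process check: grep the process table with
# an extended regular expression against the full command lines.
check_proc_by_footprint() {
  ps -eo args | grep -Eq "$1"
}

# Demonstrate with a short-lived background process; [[:space:]] in the
# pattern also keeps the grep from matching its own command line.
sleep 5 &
check_proc_by_footprint 'sleep[[:space:]]+5' && echo "found it"
```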

I always put Sparrowdo scenarios and Chef cookbook files together and commit them to the Git repository:

$ git add sparrowfile healthcheck.pl6
$ git commit -a -m 'sparrowdo scenarios for chef cookbook'

And finally, let me give an example of checking web application endpoints by sending http requests. Say we have a Chef recipe which deploys an application that should be accessible via an http GET / request. Sparrowdo exposes a handy http-ok assert to deal with such checks:

$ cat healthcheck.pl6

http-ok;

That is it! This is the simplest form of the http-ok function call, verifying that the web application is responsive by accepting requests on the GET / route. Under the hood it just:

- resolves the hostname as the one you run sparrowdo against
- issues an http request using the curl utility:

$ curl -f http://$target_host

There are various options for calling the http-ok function. For example, you may define an endpoint and set the http port:

$ cat healthcheck.pl6

http-ok( port => '8080', path => '/Foo/Bar' );
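Presumably – and this is my assumption based on how the plain GET / case works – the underlying curl call then becomes something like this:

```shell
# Hypothetical: the curl request http-ok would issue for
# port => '8080', path => '/Foo/Bar' against a target host.
target_host=example.com   # stands in for your real host
url="http://$target_host:8080/Foo/Bar"
echo "curl -f $url"       # printed, not executed, for illustration
```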

See the Sparrowdo documentation for a full description of the http assert functions.

Conclusion

Using Sparrowdo and Chef together can be an efficient approach when developing and testing server configuration scenarios. Sparrowdo has proven able to adapt to any requirements and to play nicely with other ecosystems and frameworks, being an intelligent “glue” that binds various tools together and gets your work done.