These days, the number of tools we use to manage our infrastructure and services is endless; it seems like a new batch pops up and takes over the world almost overnight. It’s hard to limit yourself to just one of them, since they all have their strengths and weaknesses.

That’s why every once in a while, you need a little help gluing things together. Today, we’re going to look at one approach to harmonize the relationship between Ansible and Chef.

Architecture

In this case, we’re going to have Chef do what it’s good at, provisioning the system, and have Ansible be responsible for orchestrating the hosts and keeping track of both environments’ state.

For now we’re going to examine doing this with a pre-existing Chef setup, and adding Ansible sugar on top. Eventually this will be migrated to using chef-solo and building images to be reused, but for now this will do.

Inventory

First, we’ll need to integrate the two systems by making sure the Ansible inventory looks at both the Chef server and the current state of AWS.

How do we do this, you ask? Simple: Ansible provides the ability to reference a folder of inventories and in our case, we’ll just toss the two dynamic inventory scripts that we need in there:

To simplify this a little, here’s a quick bash snippet:

```bash
mkdir -p inventory && pushd inventory
curl https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py -O
curl https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini -O
curl https://raw.githubusercontent.com/benner/ansible-dynamic-inventory-chef/fix/variable_name/chef_inventory.py -O
chmod +x *.py
popd
```

NOTE: There’s a bug in the version of chef_inventory.py that the maintainer provides, and since it hasn’t been fixed, the above script references the patched file instead of the canonical one.
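Under the hood, Ansible runs each executable inventory script in that folder with `--list` and merges the resulting JSON group by group. Here’s a rough Python sketch of that merge behavior (the group names and hosts are illustrative, and this is a simplification of what Ansible actually does, not its real implementation):

```python
# Sketch of how Ansible combines several dynamic inventory sources:
# the group -> hosts mappings are unioned across all sources.

def merge_inventories(*sources):
    """Union the group -> hosts mappings from each inventory source."""
    merged = {}
    for source in sources:
        for group, hosts in source.items():
            merged.setdefault(group, [])
            for host in hosts:
                if host not in merged[group]:
                    merged[group].append(host)
    return merged

# One source shaped like ec2.py output, one like chef_inventory.py output
ec2_groups = {"tag_role_web": ["10.0.1.5", "10.0.1.6"]}
chef_groups = {"chef_prod_env": ["10.0.1.5"]}

inventory = merge_inventories(ec2_groups, chef_groups)
print(inventory)
```

The upshot is that a host known to both sources shows up in groups from each, which is exactly what lets us combine them in host patterns later on.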

Next, we need to make sure that all of the required environment variables are set correctly:

```bash
# chef_inventory specific stuff
export CHEF_PEMFILE="~/.knife/chef-server-validation-key.pem"
export CHEF_USER="chef-username"
export CHEF_SERVER_URL="https://api.chef.io/organizations/myorg"

# ec2_inventory specific stuff
export AWS_ACCESS_KEY_ID="YOURAWSACCESSKEY"
export AWS_SECRET_ACCESS_KEY="YOURAWSSECRETKEY"
```

Now that you have everything ready to go, let’s test them both to ensure they work properly:

```bash
inventory/ec2.py --list            # should return a bunch of groups and hosts
inventory/chef_inventory.py --list # should also return a bunch of groups and hosts
```
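Both scripts should emit JSON in the dynamic-inventory shape: group names mapping to host lists (or to a dict with a `hosts` key), plus an optional `_meta` key holding per-host variables. If you want to sanity-check their output programmatically, a loose checker might look like this (the sample payload is made up):

```python
import json

def looks_like_inventory(raw):
    """Loose sanity check for dynamic-inventory --list output."""
    data = json.loads(raw)
    if not isinstance(data, dict) or not data:
        return False
    for group, value in data.items():
        if group == "_meta":
            continue  # per-host variables live under _meta.hostvars
        # groups are either a bare host list or {"hosts": [...], ...}
        hosts = value.get("hosts", []) if isinstance(value, dict) else value
        if not isinstance(hosts, list):
            return False
    return True

sample = '{"chef_prod_env": ["10.0.1.5"], "_meta": {"hostvars": {}}}'
print(looks_like_inventory(sample))  # True
```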

If that all works as intended, then you should be good to go! You’ll need to add `-i inventory` to each ansible run, or drop the following into `ansible.cfg`:

```ini
[defaults]
inventory = ./inventory
```

Instantiating and provisioning new hosts

So here’s where the tricky part comes in: how do we spin up a new node in AWS, while still being able to add it to the Chef server, register it as a new client, and provision it with whatever run list we want?

Here’s one way to approach it that I’ve found works well. It takes advantage of the knife command, so you’ll need chefdk installed on the machine this is being run from.

We do this in two steps, effectively:

1. Spin up the host using Ansible in the correct subnet, with the proper security groups, etc.

2. Use the reference to that host to bootstrap the node using the knife command.
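To make step two concrete, here’s roughly the command string the role below assembles for each instance the ec2 module returns (the instance dict here is a stand-in for the real registered result):

```python
def knife_bootstrap_cmd(instance, chef_role, initial_user, environment="chef_prod_env"):
    """Build the knife bootstrap command run once per new instance."""
    return (
        "knife bootstrap -y {ip} "
        "--environment {env} "
        "--node-name {name} "
        "--run-list 'role[{role}]' "
        "--ssh-user {user} --sudo"
    ).format(
        ip=instance["private_ip"],
        env=environment,
        name=instance["tags"]["Name"],
        role=chef_role,
        user=initial_user,
    )

instance = {"private_ip": "10.0.1.5", "tags": {"Name": "my-ansible-bootstrapped-server"}}
print(knife_bootstrap_cmd(instance, "my_system_role", "ubuntu"))
```

This is just an illustration of the shape of the command; in practice the role templates it directly in a shell task, as shown next.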

Here’s an example role to do just that:

```yaml
# role/create-server
- name: spin up server
  ec2:
    state: present
    image: "{{ ami }}"
    region: "{{ region }}"
    zone: "{{ availability_zone }}"
    group_id: "{{ security_groups }}"
    instance_type: "{{ instance_size }}"
    key_name: "{{ aws_keypair }}"
    vpc_subnet_id: "{{ subnet }}"
    tags:
      Name: my-ansible-bootstrapped-server
      role: "{{ chef_role }}"
  register: ec2_instances

- name: wait for instance ssh port to be up
  wait_for: port=22 host="{{ item.private_ip }}"
  with_items: "{{ ec2_instances.instances }}"

- name: knife bootstrap
  # chef_prod_env will be used later
  shell: >
    knife bootstrap -y {{ item.private_ip }}
    --environment chef_prod_env
    --node-name {{ item.tags.Name }}
    --run-list 'role[{{ chef_role }}]'
    --ssh-user {{ initial_user }}
    --sudo
  with_items: "{{ ec2_instances.instances }}"

- name: register hosts into new_hosts hostgroup
  add_host:
    groups: new_hosts
    name: "{{ item.private_ip }}"
  with_items: "{{ ec2_instances.instances }}"
```

Now that you have a simple role, you’ll need to run it from localhost, which has the right keys needed to talk to AWS and Chef:

```yaml
# create-web-server-play.yml
- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    availability_zone: us-east-1a
    security_groups: sg-12345678
    instance_size: t2.micro
    aws_keypair: my_keypair
    subnet: subnet-12345678
    region: us-east-1
    ami: ami-12345678
    initial_user: ubuntu
    chef_role: my_system_role
  roles:
    - role: create-server
```

Adding instances to an ELB

For now, I’m going to assume you already have an ELB set up and ready to go, since that’s a bit more of an involved process. I’m also going to assume you already have a role that adds instances to an ELB (if that’s something you’re interested in, let me know in the comments below!).

This is where it gets handy to have the environment info from Chef AND Ansible available to you:

```yaml
# add-hosts-to-elb-play.yml
# tag_role_my_system_role comes from the AWS inventory
# chef_prod_env is assumed to exist on the Chef server, and is used above
- hosts: "tag_role_my_system_role:&chef_prod_env"
  roles:
    - role: add-host-to-elb
```
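The `:&` host pattern targets the intersection of two groups: only hosts that appear in both `tag_role_my_system_role` (from the AWS inventory) and `chef_prod_env` (from the Chef inventory) get the role applied. In Python terms, with made-up host lists:

```python
# The ":&" host pattern selects hosts present in BOTH groups.
aws_group = {"10.0.1.5", "10.0.1.6", "10.0.1.7"}   # tag_role_my_system_role (EC2 inventory)
chef_group = {"10.0.1.5", "10.0.1.6", "10.0.2.9"}  # chef_prod_env (Chef inventory)

targets = sorted(aws_group & chef_group)
print(targets)  # ['10.0.1.5', '10.0.1.6']
```

This is why having both inventories in one directory pays off: neither source alone knows both the AWS tags and the Chef environment.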

There you go! Now you can reference hosts in both Chef and AWS.

Future steps