Note: This post assumes novice-level knowledge of Terraform.

I’m a huge fan of SaltStack, and one of the reasons why is the ability to drop into Python when a more powerful solution is necessary. I used this ability to reliably mount EBS and ephemeral volumes on AWS instances. If you’re not familiar with this problem, let me explain.

When launching an AWS instance and supplying it with a block device configuration, AWS creates this mapping within its API. We can query what AWS believes this mapping to be via the metadata service REST API. This API is accessible from the instance itself and can be used to query information about the instance from the perspective of AWS’s API. If you want a quick example, SSH into an AWS instance and try this:

```
root@ip-10-10-1-168:/home/ubuntu# curl 169.254.169.254/latest/meta-data/block-device-mapping/
ami
ebs1
ephemeral0
ephemeral1
ephemeral2
ephemeral3
ephemeral4
ephemeral5
ephemeral6
ephemeral7
root@ip-10-10-1-168:/home/ubuntu# curl 169.254.169.254/latest/meta-data/block-device-mapping/ebs1
sdf
```

This provides a great way for us to query data about the instance; however, this data is not always correct. When querying for block devices, the AWS API has its perceived mapping, but the Linux kernel can name the drive with a different prefix. We can always be sure, however, that the last character of the mapping is consistent. You can read more about block device naming here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html
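To make that concrete, here’s a small sketch of the idea (my own illustration, not code from the grain): AWS may report a device as sdf while the kernel exposes it as xvdf, so only the trailing letter is a reliable match key.

```python
def match_kernel_device(aws_name, kernel_devs):
    """Match an AWS-reported device name (e.g. 'sdf') to the name the
    kernel actually uses (e.g. 'xvdf') by comparing the trailing letter,
    which is the only stable part of the name."""
    for dev in kernel_devs:
        if dev[-1] == aws_name[-1]:
            return dev
    return None

match_kernel_device('sdf', ['xvda', 'xvdf'])  # 'xvdf'
```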

So with this information, wouldn’t it be great to always know what the correct mapping is from the instance’s perspective? The perfect tool for this is SaltStack’s grains.

Grains are facts provided by the instance/host itself. They are small Python programs which return a dictionary to the salt-minion agent. When the master queries the host’s grains, the dictionary is converted to a YAML representation, and we are free to use those keys and values in our states. Since this is all just Python, we can easily create custom grains for our salt master to use.
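As a minimal sketch (the module and grain names here are hypothetical, not from the tutorial repo), a custom grain module is just a Python file whose top-level functions return dictionaries:

```python
# _grains/example.py -- a hypothetical custom grain module
def datacenter():
    # The salt-minion calls each public function in this module when
    # collecting grains; the returned dict's keys become grain names.
    return {'datacenter': 'us-east-1a'}
```

Once synced to a minion, something like `salt 'server01' grains.get datacenter` would return the value.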

To streamline this tutorial, I created a git repo here: https://github.com/ldelossa/salt-grains-tutorial

This repo holds a Terraform file, the grains file, and a sample salt bootstrap file, just in case you were curious about bootstrapping salt from bash (we use curl in our provisioning steps). Since this post’s focus is on SaltStack, I will not explain the Terraform file in detail, but I will provide some instruction.

The Terraform file creates a full test VPC and all the necessary components needed for connectivity EXCEPT for a key pair used for instance connectivity. I expect you to generate your own key pair, download the pem file, and place the path to the pem within this Terraform configuration. You will need to fill in data at the following lines:

```
# lines 10-14
provider "aws" {
    access_key = "" # your aws access key
    secret_key = "" # your aws secret key
    region     = "" # region at which the vpc will be launched
}

# lines 170, 178, 258
private_key = "${file("")}" # Place the path to your downloaded key within the quotes
```

You may notice at the top of the file there’s a variable declaration called local_public_ip. Terraform is going to ask you for your public IP address for security reasons. I’ve locked down connectivity to our instance based on your workstation’s public IP because… you know, security. If you want to get fancy, here’s how I run the terraform apply command:

```
salt-grains-tutorial$ TF_VAR_local_public_ip=$(curl -s ipinfo.io/ip) terraform apply
```

All this is doing is running the terraform apply command with an ad-hoc environment variable which maps to the variable local_public_ip defined in our salt-tutorial.tf file. The curl command hits a useful API endpoint which reports back your public IP address. If this is a tad bit over your head, just run terraform apply and google your public IP address when prompted.

Okay, so now, once Terraform is done doing its thing, it should output two IP addresses for you: a salt master (salt-master01) and a client server (server01). Let’s log into salt-master01 to get things going.

Let’s make sure we see our minion and accept its key:

```
root@ip-10-10-1-168:/home/ubuntu# salt-key
Accepted Keys:
Denied Keys:
Unaccepted Keys:
server01
Rejected Keys:
root@ip-10-10-1-168:/home/ubuntu# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
server01
Proceed? [n/Y] Y
Key for minion server01 accepted.
```

Okay, now with our minion registered with our salt master, let’s take a look at the custom grain that we uploaded to our salt-master. By default, custom grains reside in /srv/salt/_grains. Let’s take a look at our custom grain, block-devices.py:

```python
import httplib
import os

DEVICE_MAPPING_URI = '/latest/meta-data/block-device-mapping/'


def detect_devs():
    dev_list = [x for x in os.listdir('/sys/block')
                if x.startswith('xvd') or x.startswith('sd')]
    return dev_list


def _metadata_call(url):
    try:
        conn = httplib.HTTPConnection("169.254.169.254", 80, timeout=1)
        conn.request('GET', url)
        response = conn.getresponse()
        if response.status != 200:
            return
        return response.read()
    except:
        return


def _get_block_devices():
    block_device_grain = {'ephemeral': [], 'ebs': []}
    detected_devs = detect_devs()
    for mapping in _metadata_call(DEVICE_MAPPING_URI).split('\n'):
        device = _metadata_call(DEVICE_MAPPING_URI + mapping)
        if mapping.startswith('ephemeral'):
            for dev in detected_devs:
                if dev[-1] == device[-1]:
                    block_device_grain['ephemeral'].append(dev)
        elif mapping.startswith('ebs'):
            for dev in detected_devs:
                if dev[-1] == device[-1]:
                    block_device_grain['ebs'].append(dev)
    return block_device_grain


def main():
    grains = {}
    grains['block_devices'] = _get_block_devices()
    return grains
```

So what’s going on here? The first thing we do is set a constant indicating AWS’s meta-data URI that provides the registered block device mappings for the instance. This is how we will find what the AWS API assumes is mounted on the host.

Next we define a function which simply returns a list. We use a list comprehension to pull out devices which begin with either xvd or sd from the directory /sys/block. We wind up with a list of devices from the instance’s perspective. We know that this list always contains the valid block device names!
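The same filter can be exercised off-instance by passing a directory listing in; this is a hypothetical variant of detect_devs() for illustration, not part of the grain:

```python
def detect_devs_from(listing):
    # The same comprehension as detect_devs(), with the /sys/block
    # listing passed in so it can be run without an AWS instance.
    return [x for x in listing if x.startswith('xvd') or x.startswith('sd')]

detect_devs_from(['loop0', 'sda', 'ram0', 'xvdf'])  # ['sda', 'xvdf']
```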

We quickly define a wrapper for making HTTP calls to the AWS metadata service; this should be self-explanatory. (Note that httplib is the Python 2 name; in Python 3 it became http.client.)

We then get to the actual work. The logic behind this grain is as follows:

1. Create a dictionary for holding lists of ephemeral and EBS devices. This dictionary will become the grain data presented to our salt master.

2. Create the list of detected drives using the function defined above.

3. For every block-device mapping that the meta-data service returns, do an additional lookup on that mapping, giving us AWS’s perspective of what the block device name is.

4. If the mapping name starts with ‘ephemeral’, check the last character of the drive name; if it matches a disk within the detected_devs list, append that device to the block_device_grain[‘ephemeral’] list.

5. Apply the same logic for mapping names starting with ‘ebs’.

6. Return the block_device_grain.
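The steps above boil down to the following pure function; this is a hypothetical refactor of _get_block_devices() that takes the metadata as arguments, so the matching logic can be tested without the metadata service:

```python
def build_block_device_grain(mappings, detected_devs):
    """mappings: AWS mapping names -> AWS-reported device names,
    e.g. {'ebs1': 'sdf', 'ephemeral0': 'sdb'}.
    detected_devs: device names the kernel exposes in /sys/block."""
    grain = {'ephemeral': [], 'ebs': []}
    for mapping, device in mappings.items():
        for dev in detected_devs:
            if dev[-1] != device[-1]:
                continue  # trailing letter is the only stable match key
            if mapping.startswith('ephemeral'):
                grain['ephemeral'].append(dev)
            elif mapping.startswith('ebs'):
                grain['ebs'].append(dev)
    return grain
```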

The last function, main(), is the entry point for the grain. This is what the salt-minion agent will run when collecting grain information. We are using the top-level key ‘block_devices’ for our grain, and the dictionary data will populate underneath it. I will show the YAML a little later in the post.

Okay! So hopefully that makes sense. Let’s take a look at how this works. The first thing we want to do is make sure our salt-master01 can talk to server01. Issue the following command:

```
root@ip-10-10-1-168:/home/ubuntu# salt 'server01' test.ping
server01:
    True
```

If you do not receive “True”, then there is a connectivity issue and you will need to troubleshoot further. Next we need to sync the custom grain to our client:

```
root@ip-10-10-1-168:/home/ubuntu# salt 'server01' saltutil.sync_all
server01:
    ----------
    beacons:
    grains:
        - grains.block-devices
    log_handlers:
    modules:
    output:
    proxymodules:
    renderers:
    returners:
    sdb:
    states:
    utils:
```

With our grain synced, we can now retrieve the custom information!

```
root@ip-10-10-1-168:/home/ubuntu# salt 'server01' grains.get block_devices
server01:
    ----------
    ebs:
        - xvdf
    ephemeral:
        - xvdd
        - xvde
        - xvdf
```

Alright! We now have a consistent and reliable way to identify the correct name of a block device from the instance’s/host’s perspective. This makes our future state writing a lot simpler, and equates to less Jinja2 logic.

In part 2, I’m going to go over how we can use these values to create reusable states which mount our block devices in an extensible fashion.