Update: we've released a command line tool that expands upon and automates the pattern described below.

We recently decided to deploy a microservices project using Docker and Amazon ECS, the AWS EC2 Container Service. Deployments like this raise several configuration management considerations: first, a strategy for storing values (such as secret API keys) that our processes need in order to run and communicate; second, a mechanism for retrieving those values; and third, a way of making them available to processes running inside our containers.

Requirements

Security: Because these values are often secret, we need a secure way to store them and to transfer them from our development machines to the processes running in containers on Amazon’s ECS.

Separation of config: In past projects we have had success following the Twelve-Factor App’s best practices for Config, and we elected to do so again with this project. Twelve-Factor calls for “a strict separation of config from code” and advocates storing these values in environment variables (“env vars”) for a variety of good reasons. This GitHub issue on the Docker project discourages the use of env vars, but by modifying our approach slightly, we believe we have mitigated its concerns.

Ease of use: Lastly, we want to minimize the management of AWS API and encryption keys on development machines and avoid it entirely on EC2 instances.

Secret Value Storage

We considered some of the services out there for handling secrets, such as HashiCorp’s Vault, and even rolling our own service, but we were reluctant to introduce a new service dependency. Our solution needed to be lightweight and secure, so we hit upon the idea of storing our values in S3 using client-side encryption via the AWS Key Management Service (KMS).

Specifically, we implemented a suggestion found here in the AWS forums. A .env file (consisting of ENV_VAR=value mappings) is encrypted on the developer’s machine using the AWS S3 Ruby client and then sent to S3, where it can be retrieved and decrypted by clients with access. All of this happens without managing or storing encryption keys locally or on our AWS EC2 instances.
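For example, a .env file is just newline-delimited assignments (these names and values are invented for illustration):

```
DATABASE_URL=postgres://user:pass@db.example.com:5432/app
SECRET_API_KEY=0123456789abcdef
```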

To share our encrypted .env file on S3 we:

1. Create a Customer Master Key (CMK) in the KMS, making sure to select which IAM users and roles can use the CMK to encrypt and decrypt data via the KMS API. The users are the developers who will need to encrypt; the roles are attached to EC2 instances that will need to decrypt the data.

2. Use this Ruby script and the S3 and KMS client libraries to encrypt the file locally and upload it.
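The upload step can be sketched in a few lines of Ruby. This is our condensed illustration, not the linked script itself; the SECRETS_BUCKET and KMS_KEY_ID environment-variable conventions and the object key are placeholders of ours:

```ruby
# Upload sketch: encrypt .env client-side with a KMS-managed CMK, then
# store the ciphertext in S3. Guarded so it only runs when configured;
# SECRETS_BUCKET and KMS_KEY_ID are placeholder conventions, not AWS names.
if ENV["SECRETS_BUCKET"] && ENV["KMS_KEY_ID"]
  require "aws-sdk-s3" # gem install aws-sdk-s3

  s3 = Aws::S3::Encryption::Client.new(
    kms_key_id: ENV["KMS_KEY_ID"] # ID or alias of the CMK from step 1
  )

  # The encryption client asks KMS for a data encryption key, encrypts
  # the body locally, and uploads the ciphertext along with the metadata
  # that authorized clients need to decrypt it later.
  s3.put_object(
    bucket: ENV["SECRETS_BUCKET"],
    key: ".env",
    body: File.read(".env")
  )
end
```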

As you can see in the script, the S3 encryption client takes all the hard work out of client-side encryption with an AWS KMS-managed CMK, encrypting the data before it is passed along to S3 for storage. Just give the encryption client the CMK key ID, and the client will take care of retrieving a data encryption key, encrypting the data, and sending along the metadata needed for future decryption. See the links above for a detailed description of what the process entails.

Once the encrypted file is on S3, it’s available to any client with access.

Retrieving Our Configuration Values

To give an EC2 instance, and any containers running on it, access to our configuration values, we assign the instance a role with the required permissions using an instance profile. With this role assigned, we no longer have to manually distribute keys to each instance to grant it access to AWS resources, including S3 and KMS.
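Concretely, the instance role needs read access to the bucket holding the encrypted file; a hypothetical policy might look like the following (the bucket name is made up). Permission to decrypt is granted separately, via the CMK’s key policy, where we listed the role when creating the key:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::our-secrets-bucket/*"
    }
  ]
}
```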

Once the proper role is assigned, access happens seamlessly. The EC2 instance and containers on it require no special setup to download our configuration values from S3, nor do we have to decrypt the values once we’ve downloaded them. AWS takes care of all of this.

Under the hood the AWS CLI and client libraries fetch the instance’s metadata, including its role, and acquire temporary access keys and a session token which they use when making requests for things like the content of our S3 bucket.

Curious how AWS does this, we did a little reading and found that the instance metadata, including the role, is available from our instances via a request to a simple URI like this: http://169.254.169.254/latest/meta-data/

To assume a role in subsequent requests, the CLI and client libraries get security credentials (a key and token) from the following URI (where “s3access” is the name of the role): http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access
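On an instance, fetching that URI (with curl or Net::HTTP, say) returns a JSON credentials document. Since the metadata service is only reachable from inside EC2, here is a runnable sketch that parses a fabricated sample with Ruby’s standard library:

```ruby
require "json"

# Sample of the JSON returned by the security-credentials URI. All values
# are fabricated; a real response also includes a LastUpdated field.
SAMPLE_RESPONSE = <<~JSON
  {
    "Code": "Success",
    "Type": "AWS-HMAC",
    "AccessKeyId": "ASIAEXAMPLEACCESSKEY",
    "SecretAccessKey": "wJalrEXAMPLESECRET",
    "Token": "AQoDEXAMPLESESSIONTOKEN",
    "Expiration": "2016-03-01T00:00:00Z"
  }
JSON

creds = JSON.parse(SAMPLE_RESPONSE)
# The CLI and client libraries sign requests with the key, secret, and
# session token, refreshing them automatically as Expiration approaches.
puts creds.values_at("AccessKeyId", "Expiration").inspect
# → ["ASIAEXAMPLEACCESSKEY", "2016-03-01T00:00:00Z"]
```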

This left us wondering: What is that magic IP address and how does the AWS service providing metadata know which instance is requesting it? The magic IP address is a link-local address (as it is in the IPv4 block 169.254.0.0/16), which is valid only for communication on the local network. Presumably AWS’s routers do not forward requests to these addresses.

AWS apparently does not advertise exactly how it determines which instance has requested its metadata, but two possibilities are that the service inspects the internal IP address of the requesting instance, or that the AWS-controlled dom0 host intercepts the request before it is sent out on the network.

Setting Environment Variables for Processes Within Containers

There is a lot of discussion in the Docker community about best practices for handling sensitive data in containers. See this GitHub issue for examples.

When starting up a container (and the process running inside it) on our development machines, we can pass options to the docker run command, including individual env vars or the path to a file containing many of them.

$ docker run --env "FOO=bar"
$ docker run --env-file ./path/to/.env

Having used the --env and --env-file options on our development machines, we initially looked for ways to pass our env vars to the docker run command on EC2 as well. But because ECS handles all starting and stopping of containers, we would have to do it through its task definitions, the mechanism ECS uses to configure containers that should run together. Task definitions do include an option to specify env vars, but the definitions are stored as JSON in files committed to our code repository, which is not a place we want our secret values stored.
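To illustrate the problem, env vars in a task definition look roughly like this (names and values are made up); the secret would sit in plaintext in a committed file:

```json
{
  "family": "our-app",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "example/web:latest",
      "memory": 256,
      "environment": [
        { "name": "SECRET_API_KEY", "value": "plaintext-in-the-repo" }
      ]
    }
  ]
}
```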

To work around this, we came up with a solution that wraps our target process (the one running in the Docker container) with a script that downloads and decrypts our variables, exports them into the environment, and then starts the process. It’s worth noting that the env vars live only in that process’s memory; they are never written to the container’s file system or stored on the EC2 instance.
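The wrapper amounts to only a few lines. Here is a minimal Ruby rendering of the idea (the linked gist is the canonical version; the SECRETS_BUCKET/KMS_KEY_ID conventions and the object key are placeholders of ours):

```ruby
# Wrapper sketch: fetch the decrypted .env from S3, export its values,
# then exec the real command so it inherits them. Nothing touches disk.

# Parse KEY=value lines (skipping blanks and comments) into ENV.
def export_env!(text)
  text.each_line do |line|
    line = line.strip
    next if line.empty? || line.start_with?("#")
    key, value = line.split("=", 2)
    ENV[key] = value
  end
end

# Guarded so the sketch is a no-op unless configured on an instance.
if ENV["SECRETS_BUCKET"] && ENV["KMS_KEY_ID"]
  require "aws-sdk-s3"
  s3 = Aws::S3::Encryption::Client.new(kms_key_id: ENV["KMS_KEY_ID"])
  # Decryption is transparent: the client reads the object's metadata,
  # asks KMS to unwrap the data key, and returns plaintext.
  export_env!(s3.get_object(bucket: ENV["SECRETS_BUCKET"], key: ".env").body.read)
  # Replace this process with the target command, e.g.:
  #   wrapper.rb bundle exec puma
  exec(*ARGV)
end
```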

Here is a gist of the solution.

Thoughts? We welcome forks and feedback. Find us on Twitter: @promptworks.