In this post we're going to create an AWS CloudFormation project using the command line interface tool CloudFoundation.

The cut-and-dried definition:

The CLI tool for creating, managing, and deploying large CloudFormation templates and projects.

The down-to-earth explanation of why, in bullet points:

CloudFormation templates are awesome because they're infrastructure as code.

CloudFormation template writing is a pain because everything is in one file that's 100000000s of lines long.

Big solutions like Terraform / Troposphere are cool... but they require learning an entirely new framework / system / syntax.

Because of this most of us wind up in one of two groups:

1) Dealing with the pain of writing in bigger files, and doing all sorts of exporting between templates, and creating weird dependencies between stacks

2) Learning Terraform / Troposphere like frameworks

The problem with #1 is self-evident. For #2, though, aside from the learning curve, you wind up bringing a sort of Swiss-army-knife tank to a gunfight. You wanted to write a CloudFormation template, but instead you're writing in a completely different syntax and hoping it supports the flow and infrastructure you want to build. Oh, and you now have a brand new player in your infrastructure pipeline, as if it wasn't already complicated enough.

So how does CloudFoundation differ?

Though it lets you split, organize, deploy, and update with convenience, ultimately you're still just writing CloudFormation templates. Meaning if you decide to use something else, hey, just run cfdn build-all on your CloudFoundation project and walk away with all of your fully built templates.

Requirements

In order to follow along you'll need to meet the following requirements:

Have Node 8.9.1+ installed

Have npm 5.5.1+ installed

Have an AWS Account and User created with Access Keys

Caveats

The two biggest caveats are that (1) this is NOT a CloudFormation tutorial and (2) this is not an AWS tutorial. Yes, we talk about both, but only in the context of CloudFoundation - if we stopped to explain every nuance along the way, this post would never end.

Code

You can find the resulting codebase for completing this project here:

The Completed Codebase for this Guide

Installation

Assuming you've met the previously mentioned requirements, simply run the following:

npm install -g cloudfoundation

Afterwards, run:

cfdn --version

To ensure it's installed.

To get started, create a new directory to house our project-to-be and then cd into it:

mkdir cloudfoundation-project && cd $_

Once inside, run:

cfdn init

This will begin the inquiry process to scaffold out a new CloudFoundation project. You'll get asked the following questions:

1) What is the name of your new project?

Pretty easy, just pick the name of your project. This is mainly used to fill in defaults and to generate your package.json file for the project. It does need to follow NPM package naming conventions. Though if you're doing something outside of those, your naming might be a bit wild anyway :).

2) Would you like a production VPC template included?

The first foundational template. If you select yes, your project will include a template named vpc that is a multi-AZ, any-region, production-ready Virtual Private Cloud (VPC). It exports all of its useful values (e.g. VPC ID, Subnet IDs) so that you can use this template as a starting point for other templates, saving you the time of building out a VPC.

You can read more about the Production VPC Template here.

I'm going to select yes. Though we won't dive into it, it's a GREAT example of how to organize extremely large templates. Obviously you don't have to use it even if you do include it.

3) Would you like an encrypted, multi-AZ RDS Aurora Database template included?

The second foundational template. Selecting yes results in another template being included, called db. This is a multi-AZ, encrypted RDS Aurora Database template. Similar to the VPC one, it exports all useful values (e.g. an SG to attach to instances that need access, DB links, etc.), so that it can be used with other templates. The intention is to save you the time of building out an RDS Aurora DB cluster.

In addition to the db template creating an entirely NEW RDS Aurora Cluster, it can also be used with RDS Snapshots. Meaning if you have a valid Snapshot that will work with Aurora, you can also just pass that to the template and be good to go.

You can read more about the RDS DB Template here

(Aside: both #2 and #3 point to something further down the roadmap for CloudFoundation - more reusable templates to use in your own projects.)

4) Would you like to set up a Local or Global Profile?

In order to Validate, Deploy, Update, and Describe templates and their deployed stacks, we need AWS Credentials (Access Keys). These are managed in CloudFoundation via "Profiles." Each "Profile" consists of an AWS Access Key ID, AWS Secret Key, and a default AWS Region. If you're familiar with the AWS CLI's concept of Named Profiles, this should seem similar.

Unlike AWS CLI Named Profiles, CloudFoundation profiles can also be local or global. Local profiles are confined to only the current project, whereas global profiles can be used in any project.

For now, select Local. This will confine the profile to only this project.

5) Which type of profile would you like to add?

There are two options here:

Set up a CFDN Profile - manually enter in your AWS Keys and Region

Import an AWS Profile - pull in a profile from the AWS CLI

If you have the AWS CLI installed, and have set up Named Profiles, you can just pull in those for usage here. Note that CloudFoundation only READS from the AWS CLI, meaning you don't have to worry about it messing with any of the AWS CLI's settings.

If you don't have the AWS CLI installed select the first option and input your keys. We'll cover Profiles more in depth at the tail end of the post.

6) Check out the project!

After completing step 5, your project will be scaffolded out and good to go. The project structure is pretty simple:

project
├── README.md
├── package.json
├── .gitignore
├── .cfdnrc
└── src
    ├── db
    └── vpc

.cfdnrc - a special file that keeps all of the settings for the current project and the parameters / options for deployed stacks.

src/ - the main working directory of the project. Every top-level directory within src/ is a "template directory." A template directory is ultimately built down into a CloudFormation template. The vpc template directory looks like this:

vpc
├── conditions/
├── mappings/
├── outputs/
├── parameters/
├── resources/
├── description.json
└── metadata.json

A template directory consists of seven files or directories, each representing a part of the CloudFormation Template Anatomy.

These can be ONE file or ONE directory. The file or directory must be named after one of the Template Sections:

resources/ or resources.json

conditions/ or conditions.json

description.json (must be ONE file with one property "Description")

mappings/ or mappings.json

metadata/ or metadata.json

outputs/ or outputs.json

parameters/ or parameters.json

If the Template Section is a directory, it can have any number of sub-directories and files, but it must have at least ONE .json or .js file, and that file must contain, at bare minimum, an empty object {}.

When we build or deploy the vpc template directory, all of the parts are compiled into one valid JSON file. This structure allows you to organize your templates however you'd like. For example, the resources section of the vpc template looks like:

vpc
└── resources/
    ├── cloudwatch/
    ├── ec2/
    ├── iam/
    └── vpc/
        ├── network-acls
        │   └── ...
        ├── route-tables
        │   └── ...
        ├── security-groups
        │   └── ...
        ├── subnets
        │   └── ...
        ├── vpc.json
        ├── nat-gateway.json
        └── internet-gateway.json

When the vpc template is built down, all of the resources in the resources directory are properly built and included into one resources section of the final template. For example:

The vpc.json file:

{
  "VPC": {
    "Type": "AWS::EC2::VPC",
    "Properties": {
      // ...
    }
  }
}

The internet-gateway.json file:

{
  "InternetGateway": {
    "Type": "AWS::EC2::InternetGateway",
    "Properties": {
      // ...
    }
  },
  "InternetGatewayAttachment": {
    "Type": "AWS::EC2::VPCGatewayAttachment",
    "Properties": {
      // ...
    }
  }
}

When this template is built down, the resources section will include both of these as required by valid CloudFormation Syntax:

{
  "Resources": {
    "VPC": {
      "Type": "AWS::EC2::VPC",
      "Properties": {
        // ...
      }
    },
    "InternetGateway": {
      "Type": "AWS::EC2::InternetGateway",
      "Properties": {
        // ...
      }
    },
    "InternetGatewayAttachment": {
      "Type": "AWS::EC2::VPCGatewayAttachment",
      "Properties": {
        // ...
      }
    }
  }
}
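Conceptually, the build step's merge works like the following (a simplified sketch, not CloudFoundation's actual implementation):

```javascript
// Each resource file contributes one or more top-level keys; building the
// template shallow-merges them all into a single "Resources" object.
const vpc = { VPC: { Type: 'AWS::EC2::VPC', Properties: {} } }
const internetGateway = {
  InternetGateway: { Type: 'AWS::EC2::InternetGateway', Properties: {} },
  InternetGatewayAttachment: { Type: 'AWS::EC2::VPCGatewayAttachment', Properties: {} },
}

// The merged result is what ends up in the built template.
const template = { Resources: Object.assign({}, vpc, internetGateway) }
```

Because the keys are merged into one object, logical resource names must be unique across all of the files in a template directory, just as they must be unique in a single CloudFormation template.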

The organization of the vpc template's resources section should also look somewhat familiar. Although it's entirely in your control, this outlines the CloudFoundation Suggested Template Management Strategy: organizing the resources section to mirror the AWS Console makes it far easier to navigate and to onboard others to help work on the template.

To expand on this, if we were in the AWS Console and wanted to view our Subnets, from the main page we'd click on VPC and then on Subnets. Similarly, in our vpc template, subnets live under resources/vpc/subnets. Reducing the cognitive load of something already complicated, however small, goes a long way.

On to creating our own template. Instead of making one entirely from scratch, we're instead going to split up an official AWS Sample Template for a LAMP stack. The reason for this is to show how much more maintainable a template can be when used with CloudFoundation vs. an extremely small example that would almost be just as maintainable in a single file. Here's the link to the sample template:

Lamp Instance Template

Though it's not too big, it's big enough that you'll suffer some overhead as it grows (let alone when explaining it to someone who's never seen CloudFormation before).

1) In your project directory run:

cfdn create

2) Name your template lamp

The base template is now scaffolded. It looks like this:

lamp/
├── resources/
├── conditions/
├── mappings/
├── metadata/
├── outputs/
├── parameters/
└── description.json

Each of the nested directories has an index.json, but we'll split our template out into even finer-grained files. Now let's begin splitting out the sections of that sample template.

3) In the parameters/ directory, make a new file called ec2.json

In this file we're going to put in all of the parameters relevant to our EC2 resources. Put the following in the file:

{
  "KeyName": {
    "Description": "Name of an existing EC2 KeyPair to enable SSH access to the instance",
    "Type": "AWS::EC2::KeyPair::KeyName",
    "ConstraintDescription": "must be the name of an existing EC2 KeyPair."
  },
  "InstanceType": {
    "Description": "WebServer EC2 instance type",
    "Type": "String",
    "Default": "t2.small",
    "AllowedValues": [
      "t1.micro", "t2.nano", "t2.micro", "t2.small", "t2.medium", "t2.large",
      "m1.small", "m1.medium", "m1.large", "m1.xlarge",
      "m2.xlarge", "m2.2xlarge", "m2.4xlarge",
      "m3.medium", "m3.large", "m3.xlarge", "m3.2xlarge",
      "m4.large", "m4.xlarge", "m4.2xlarge", "m4.4xlarge", "m4.10xlarge",
      "c1.medium", "c1.xlarge",
      "c3.large", "c3.xlarge", "c3.2xlarge", "c3.4xlarge", "c3.8xlarge",
      "c4.large", "c4.xlarge", "c4.2xlarge", "c4.4xlarge", "c4.8xlarge",
      "g2.2xlarge", "g2.8xlarge",
      "r3.large", "r3.xlarge", "r3.2xlarge", "r3.4xlarge", "r3.8xlarge",
      "i2.xlarge", "i2.2xlarge", "i2.4xlarge", "i2.8xlarge",
      "d2.xlarge", "d2.2xlarge", "d2.4xlarge", "d2.8xlarge",
      "hi1.4xlarge", "hs1.8xlarge", "cr1.8xlarge", "cc2.8xlarge"
    ],
    "ConstraintDescription": "must be a valid EC2 instance type."
  },
  "SSHLocation": {
    "Description": "The IP address range that can be used to SSH to the EC2 instances",
    "Type": "String",
    "MinLength": "9",
    "MaxLength": "18",
    "Default": "0.0.0.0/0",
    "AllowedPattern": "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})",
    "ConstraintDescription": "must be a valid IP CIDR range of the form x.x.x.x/x."
  }
}

4) In the parameters/ directory, make a new file called db.json

Similar to #3, we're going to put all of the parameters related to the database in this file. Put the following in the file:

{
  "DBName": {
    "Default": "MyDatabase",
    "Description": "MySQL database name",
    "Type": "String",
    "MinLength": "1",
    "MaxLength": "64",
    "AllowedPattern": "[a-zA-Z][a-zA-Z0-9]*",
    "ConstraintDescription": "must begin with a letter and contain only alphanumeric characters."
  },
  "DBUser": {
    "NoEcho": "true",
    "Description": "Username for MySQL database access",
    "Type": "String",
    "MinLength": "1",
    "MaxLength": "16",
    "AllowedPattern": "[a-zA-Z][a-zA-Z0-9]*",
    "ConstraintDescription": "must begin with a letter and contain only alphanumeric characters."
  },
  "DBPassword": {
    "NoEcho": "true",
    "Description": "Password for MySQL database access",
    "Type": "String",
    "MinLength": "1",
    "MaxLength": "41",
    "AllowedPattern": "[a-zA-Z0-9]*",
    "ConstraintDescription": "must contain only alphanumeric characters."
  },
  "DBRootPassword": {
    "NoEcho": "true",
    "Description": "Root password for MySQL",
    "Type": "String",
    "MinLength": "1",
    "MaxLength": "41",
    "AllowedPattern": "[a-zA-Z0-9]*",
    "ConstraintDescription": "must contain only alphanumeric characters."
  }
}

As an added bonus, CloudFoundation will also respect and enforce the rules for these parameters when you use its deploy and update functionality. When using those commands, CloudFoundation walks you through an inquiry process and lets you fill in the values (vs. messy, long shell declarations). We'll see more on this later.

5) Remove the parameters/index.json file

Though there's absolutely no harm in leaving it, we might as well get rid of it.

Now let's deal with those long-winded mappings in the template...

6) In the mappings/ directory create a new file called instance-type-arch.json and input the following:

{
  "AWSInstanceType2Arch": {
    "t1.micro": { "Arch": "PV64" },
    "t2.nano": { "Arch": "HVM64" }, "t2.micro": { "Arch": "HVM64" }, "t2.small": { "Arch": "HVM64" }, "t2.medium": { "Arch": "HVM64" }, "t2.large": { "Arch": "HVM64" },
    "m1.small": { "Arch": "PV64" }, "m1.medium": { "Arch": "PV64" }, "m1.large": { "Arch": "PV64" }, "m1.xlarge": { "Arch": "PV64" },
    "m2.xlarge": { "Arch": "PV64" }, "m2.2xlarge": { "Arch": "PV64" }, "m2.4xlarge": { "Arch": "PV64" },
    "m3.medium": { "Arch": "HVM64" }, "m3.large": { "Arch": "HVM64" }, "m3.xlarge": { "Arch": "HVM64" }, "m3.2xlarge": { "Arch": "HVM64" },
    "m4.large": { "Arch": "HVM64" }, "m4.xlarge": { "Arch": "HVM64" }, "m4.2xlarge": { "Arch": "HVM64" }, "m4.4xlarge": { "Arch": "HVM64" }, "m4.10xlarge": { "Arch": "HVM64" },
    "c1.medium": { "Arch": "PV64" }, "c1.xlarge": { "Arch": "PV64" },
    "c3.large": { "Arch": "HVM64" }, "c3.xlarge": { "Arch": "HVM64" }, "c3.2xlarge": { "Arch": "HVM64" }, "c3.4xlarge": { "Arch": "HVM64" }, "c3.8xlarge": { "Arch": "HVM64" },
    "c4.large": { "Arch": "HVM64" }, "c4.xlarge": { "Arch": "HVM64" }, "c4.2xlarge": { "Arch": "HVM64" }, "c4.4xlarge": { "Arch": "HVM64" }, "c4.8xlarge": { "Arch": "HVM64" },
    "g2.2xlarge": { "Arch": "HVMG2" }, "g2.8xlarge": { "Arch": "HVMG2" },
    "r3.large": { "Arch": "HVM64" }, "r3.xlarge": { "Arch": "HVM64" }, "r3.2xlarge": { "Arch": "HVM64" }, "r3.4xlarge": { "Arch": "HVM64" }, "r3.8xlarge": { "Arch": "HVM64" },
    "i2.xlarge": { "Arch": "HVM64" }, "i2.2xlarge": { "Arch": "HVM64" }, "i2.4xlarge": { "Arch": "HVM64" }, "i2.8xlarge": { "Arch": "HVM64" },
    "d2.xlarge": { "Arch": "HVM64" }, "d2.2xlarge": { "Arch": "HVM64" }, "d2.4xlarge": { "Arch": "HVM64" }, "d2.8xlarge": { "Arch": "HVM64" },
    "hi1.4xlarge": { "Arch": "HVM64" }, "hs1.8xlarge": { "Arch": "HVM64" }, "cr1.8xlarge": { "Arch": "HVM64" }, "cc2.8xlarge": { "Arch": "HVM64" }
  }
}

7) In the mappings/ directory create a new file called instance-type-nat-arch.json and input the following:

{
  "AWSInstanceType2NATArch": {
    "t1.micro": { "Arch": "NATPV64" },
    "t2.nano": { "Arch": "NATHVM64" }, "t2.micro": { "Arch": "NATHVM64" }, "t2.small": { "Arch": "NATHVM64" }, "t2.medium": { "Arch": "NATHVM64" }, "t2.large": { "Arch": "NATHVM64" },
    "m1.small": { "Arch": "NATPV64" }, "m1.medium": { "Arch": "NATPV64" }, "m1.large": { "Arch": "NATPV64" }, "m1.xlarge": { "Arch": "NATPV64" },
    "m2.xlarge": { "Arch": "NATPV64" }, "m2.2xlarge": { "Arch": "NATPV64" }, "m2.4xlarge": { "Arch": "NATPV64" },
    "m3.medium": { "Arch": "NATHVM64" }, "m3.large": { "Arch": "NATHVM64" }, "m3.xlarge": { "Arch": "NATHVM64" }, "m3.2xlarge": { "Arch": "NATHVM64" },
    "m4.large": { "Arch": "NATHVM64" }, "m4.xlarge": { "Arch": "NATHVM64" }, "m4.2xlarge": { "Arch": "NATHVM64" }, "m4.4xlarge": { "Arch": "NATHVM64" }, "m4.10xlarge": { "Arch": "NATHVM64" },
    "c1.medium": { "Arch": "NATPV64" }, "c1.xlarge": { "Arch": "NATPV64" },
    "c3.large": { "Arch": "NATHVM64" }, "c3.xlarge": { "Arch": "NATHVM64" }, "c3.2xlarge": { "Arch": "NATHVM64" }, "c3.4xlarge": { "Arch": "NATHVM64" }, "c3.8xlarge": { "Arch": "NATHVM64" },
    "c4.large": { "Arch": "NATHVM64" }, "c4.xlarge": { "Arch": "NATHVM64" }, "c4.2xlarge": { "Arch": "NATHVM64" }, "c4.4xlarge": { "Arch": "NATHVM64" }, "c4.8xlarge": { "Arch": "NATHVM64" },
    "g2.2xlarge": { "Arch": "NATHVMG2" }, "g2.8xlarge": { "Arch": "NATHVMG2" },
    "r3.large": { "Arch": "NATHVM64" }, "r3.xlarge": { "Arch": "NATHVM64" }, "r3.2xlarge": { "Arch": "NATHVM64" }, "r3.4xlarge": { "Arch": "NATHVM64" }, "r3.8xlarge": { "Arch": "NATHVM64" },
    "i2.xlarge": { "Arch": "NATHVM64" }, "i2.2xlarge": { "Arch": "NATHVM64" }, "i2.4xlarge": { "Arch": "NATHVM64" }, "i2.8xlarge": { "Arch": "NATHVM64" },
    "d2.xlarge": { "Arch": "NATHVM64" }, "d2.2xlarge": { "Arch": "NATHVM64" }, "d2.4xlarge": { "Arch": "NATHVM64" }, "d2.8xlarge": { "Arch": "NATHVM64" },
    "hi1.4xlarge": { "Arch": "NATHVM64" }, "hs1.8xlarge": { "Arch": "NATHVM64" }, "cr1.8xlarge": { "Arch": "NATHVM64" }, "cc2.8xlarge": { "Arch": "NATHVM64" }
  }
}

8) In the mappings/ directory create a new file called instance-ami.json and input the following:

{
  "AWSRegionArch2AMI": {
    "us-east-1": { "PV64": "ami-2a69aa47", "HVM64": "ami-97785bed", "HVMG2": "ami-0a6e3770" },
    "us-west-2": { "PV64": "ami-7f77b31f", "HVM64": "ami-f2d3638a", "HVMG2": "ami-ee15a196" },
    "us-west-1": { "PV64": "ami-a2490dc2", "HVM64": "ami-824c4ee2", "HVMG2": "ami-0da4a46d" },
    "eu-west-1": { "PV64": "ami-4cdd453f", "HVM64": "ami-d834aba1", "HVMG2": "ami-af8013d6" },
    "eu-west-2": { "PV64": "NOT_SUPPORTED", "HVM64": "ami-403e2524", "HVMG2": "NOT_SUPPORTED" },
    "eu-west-3": { "PV64": "NOT_SUPPORTED", "HVM64": "ami-8ee056f3", "HVMG2": "NOT_SUPPORTED" },
    "eu-central-1": { "PV64": "ami-6527cf0a", "HVM64": "ami-5652ce39", "HVMG2": "ami-1d58ca72" },
    "ap-northeast-1": { "PV64": "ami-3e42b65f", "HVM64": "ami-ceafcba8", "HVMG2": "ami-edfd658b" },
    "ap-northeast-2": { "PV64": "NOT_SUPPORTED", "HVM64": "ami-863090e8", "HVMG2": "NOT_SUPPORTED" },
    "ap-northeast-3": { "PV64": "NOT_SUPPORTED", "HVM64": "ami-83444afe", "HVMG2": "NOT_SUPPORTED" },
    "ap-southeast-1": { "PV64": "ami-df9e4cbc", "HVM64": "ami-68097514", "HVMG2": "ami-c06013bc" },
    "ap-southeast-2": { "PV64": "ami-63351d00", "HVM64": "ami-942dd1f6", "HVMG2": "ami-85ef12e7" },
    "ap-south-1": { "PV64": "NOT_SUPPORTED", "HVM64": "ami-531a4c3c", "HVMG2": "ami-411e492e" },
    "us-east-2": { "PV64": "NOT_SUPPORTED", "HVM64": "ami-f63b1193", "HVMG2": "NOT_SUPPORTED" },
    "ca-central-1": { "PV64": "NOT_SUPPORTED", "HVM64": "ami-a954d1cd", "HVMG2": "NOT_SUPPORTED" },
    "sa-east-1": { "PV64": "ami-1ad34676", "HVM64": "ami-84175ae8", "HVMG2": "NOT_SUPPORTED" },
    "cn-north-1": { "PV64": "ami-77559f1a", "HVM64": "ami-cb19c4a6", "HVMG2": "NOT_SUPPORTED" },
    "cn-northwest-1": { "PV64": "ami-80707be2", "HVM64": "ami-3e60745c", "HVMG2": "NOT_SUPPORTED" }
  }
}

Once again, feel free to remove mappings/index.json since it won't be used.

Creating the resources portion of the template

Though the parameters and mappings of a CloudFormation template can be messy, the bulk of the work generally happens in resources. Looking at our sample template, though there are only two resources (a server and a security group), the WebServerInstance resource outlines a common problem - files and scripts nested inside CloudFormation templates. Because of these nested files and scripts, this resource is huge.

We're going to take the time to split out that messy PHP file to show how CloudFoundation also helps make this area more maintainable. Though we could do it for ALL of the scripts, we're going to just do it for one to keep things moving along.

1) Create a new file at resources/ec2/instance.js

First, note that this is a JS (JavaScript) file. When working with CloudFormation, we often need scripting of some sort that just isn't available in vanilla JSON. CloudFoundation allows you to mix / match / parse in JavaScript as much as you'd like; the only requirement of .js files is that they set module.exports equal to an Object.

In our instance.js file input the following:

const fs = require('fs')

// pull in the PHP file - we'll make this shortly
const index = fs.readFileSync(`${__dirname}/files/index.php`, 'utf-8')

module.exports = {
  "WebServerInstance": {
    "Type": "AWS::EC2::Instance",
    "Metadata": {
      "AWS::CloudFormation::Init": {
        "configSets": { "InstallAndRun": ["Install", "Configure"] },
        "Install": {
          "packages": {
            "yum": { "mysql": [], "mysql-server": [], "mysql-libs": [], "httpd": [], "php": [], "php-mysql": [] }
          },
          "files": {
            "/var/www/html/index.php": {
              "content": { "Fn::Sub": index }, // Fn::Sub to fill in "Ref" values in the file
              "mode": "000600", "owner": "apache", "group": "apache"
            },
            "/tmp/setup.mysql": {
              "content": { "Fn::Join": ["", [
                "CREATE DATABASE ", { "Ref": "DBName" }, ";\n",
                "GRANT ALL ON ", { "Ref": "DBName" }, ".* TO '", { "Ref": "DBUser" }, "'@localhost IDENTIFIED BY '", { "Ref": "DBPassword" }, "';\n"
              ]]},
              "mode": "000400", "owner": "root", "group": "root"
            },
            "/etc/cfn/cfn-hup.conf": {
              "content": { "Fn::Join": ["", [
                "[main]\n",
                "stack=", { "Ref": "AWS::StackId" }, "\n",
                "region=", { "Ref": "AWS::Region" }, "\n"
              ]]},
              "mode": "000400", "owner": "root", "group": "root"
            },
            "/etc/cfn/hooks.d/cfn-auto-reloader.conf": {
              "content": { "Fn::Join": ["", [
                "[cfn-auto-reloader-hook]\n",
                "triggers=post.update\n",
                "path=Resources.WebServerInstance.Metadata.AWS::CloudFormation::Init\n",
                "action=/opt/aws/bin/cfn-init -v ",
                " --stack ", { "Ref": "AWS::StackName" },
                " --resource WebServerInstance ",
                " --configsets InstallAndRun ",
                " --region ", { "Ref": "AWS::Region" }, "\n",
                "runas=root\n"
              ]]},
              "mode": "000400", "owner": "root", "group": "root"
            }
          },
          "services": {
            "sysvinit": {
              "mysqld": { "enabled": "true", "ensureRunning": "true" },
              "httpd": { "enabled": "true", "ensureRunning": "true" },
              "cfn-hup": { "enabled": "true", "ensureRunning": "true", "files": ["/etc/cfn/cfn-hup.conf", "/etc/cfn/hooks.d/cfn-auto-reloader.conf"] }
            }
          }
        },
        "Configure": {
          "commands": {
            "01_set_mysql_root_password": {
              "command": { "Fn::Join": ["", ["mysqladmin -u root password '", { "Ref": "DBRootPassword" }, "'"]] },
              "test": { "Fn::Join": ["", ["$(mysql ", { "Ref": "DBName" }, " -u root --password='", { "Ref": "DBRootPassword" }, "' >/dev/null 2>&1 </dev/null); (( $? != 0 ))"]] }
            },
            "02_create_database": {
              "command": { "Fn::Join": ["", ["mysql -u root --password='", { "Ref": "DBRootPassword" }, "' < /tmp/setup.mysql"]] },
              "test": { "Fn::Join": ["", ["$(mysql ", { "Ref": "DBName" }, " -u root --password='", { "Ref": "DBRootPassword" }, "' >/dev/null 2>&1 </dev/null); (( $? != 0 ))"]] }
            }
          }
        }
      }
    },
    "Properties": {
      "ImageId": { "Fn::FindInMap": ["AWSRegionArch2AMI", { "Ref": "AWS::Region" }, { "Fn::FindInMap": ["AWSInstanceType2Arch", { "Ref": "InstanceType" }, "Arch"] }] },
      "InstanceType": { "Ref": "InstanceType" },
      "SecurityGroups": [{ "Ref": "WebServerSecurityGroup" }],
      "KeyName": { "Ref": "KeyName" },
      "UserData": { "Fn::Base64": { "Fn::Join": ["", [
        "#!/bin/bash -xe\n",
        "yum update -y aws-cfn-bootstrap\n",
        "# Install the files and packages from the metadata\n",
        "/opt/aws/bin/cfn-init -v ",
        " --stack ", { "Ref": "AWS::StackName" },
        " --resource WebServerInstance ",
        " --configsets InstallAndRun ",
        " --region ", { "Ref": "AWS::Region" }, "\n",
        "# Signal the status from cfn-init\n",
        "/opt/aws/bin/cfn-signal -e $? ",
        " --stack ", { "Ref": "AWS::StackName" },
        " --resource WebServerInstance ",
        " --region ", { "Ref": "AWS::Region" }, "\n"
      ]]}},
      "CreationPolicy": { "ResourceSignal": { "Timeout": "PT5M" } }
    }
  }
}

Instead of nesting the long PHP file, we have it in a separate file that we'll make here shortly. We pull it in with the Node.js fs module, and then run it through CloudFormation's Fn::Sub to allow any referenced values to be subbed out.

Note that if you wanted to do this to ALL the scripts and files in this resource you absolutely can. The VPC Template's Bastion instance has examples of all the .conf and User Data scripts being pulled out into their own files and using parameters / resource references.

2) Create a new file at resources/ec2/files/index.php

<html>
  <head>
    <title>AWS CloudFormation PHP Sample</title>
    <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
  </head>
  <body>
    <h1>Welcome to the AWS CloudFormation PHP Sample</h1>
    <br />
    <?php
      // Print out the current date and time
      print "The Current Date and Time is: <br/>";
      print date("g:i A l, F j Y.");
    ?>
    <br />
    <?php
      // Setup a handle for CURL
      $curl_handle=curl_init();
      curl_setopt($curl_handle,CURLOPT_CONNECTTIMEOUT,2);
      curl_setopt($curl_handle,CURLOPT_RETURNTRANSFER,1);

      // Get the hostname of the instance from the instance metadata
      curl_setopt($curl_handle,CURLOPT_URL,'http://169.254.169.254/latest/meta-data/public-hostname');
      $hostname = curl_exec($curl_handle);
      if (empty($hostname)) {
        print "Sorry, for some reason, we got no hostname back <br />";
      } else {
        print "Server = " . $hostname . "<br />";
      }

      // Get the instance-id of the instance from the instance metadata
      curl_setopt($curl_handle,CURLOPT_URL,'http://169.254.169.254/latest/meta-data/instance-id');
      $instanceid = curl_exec($curl_handle);
      if (empty($instanceid)) {
        print "Sorry, for some reason, we got no instance id back <br />";
      } else {
        print "EC2 instance-id = " . $instanceid . "<br />";
      }

      $Database = "localhost";
      $DBUser = "${DBUser}"; // Values that will be subbed via Fn::Sub !!
      $DBPassword = "${DBPassword}"; // Obviously never output the PWD... but this is just to show
                                     // the CloudFormation Ref values getting output into our PHP file

      print "Database = " . $Database . "<br />" . $DBUser . " + " . $DBPassword . "<br />";
      $dbconnection = mysql_connect($Database, $DBUser, $DBPassword) or die("Could not connect: " . mysql_error());
      print ("Connected to $Database successfully");
      mysql_close($dbconnection);
    ?>
    <h2>PHP Information</h2>
    <br />
    <?php phpinfo(); ?>
  </body>
</html>

Right away it should be obvious how much easier it is to write scripts and other files that need to be nested into CloudFormation templates. Additionally, note that you can also Fn::Sub resources or parameters into this file. When this is run, $DBUser and $DBPassword will be subbed out for their respective parameter values.
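As a mental model, Fn::Sub behaves roughly like the following toy function (a simplification - the real intrinsic also resolves pseudo parameters and resource attributes, and runs inside CloudFormation, not Node):

```javascript
// Toy model of Fn::Sub: replace every ${Name} with its matching value,
// leaving anything without a match untouched.
function fnSub(template, values) {
  return template.replace(/\$\{(\w+)\}/g, (match, name) =>
    name in values ? values[name] : match)
}

// e.g. a line like the one in index.php above:
const line = fnSub('$DBUser = "${DBUser}";', { DBUser: 'admin' })
```

Note that plain `$DBUser` (no braces) is left alone, which is why the PHP variables themselves survive the substitution.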

3) Create a new file at resources/ec2/security-groups.json and input the following:

{
  "WebServerSecurityGroup": {
    "Type": "AWS::EC2::SecurityGroup",
    "Properties": {
      "GroupDescription": "Enable HTTP access via port 80",
      "SecurityGroupIngress": [
        { "IpProtocol": "tcp", "FromPort": "80", "ToPort": "80", "CidrIp": "0.0.0.0/0" },
        { "IpProtocol": "tcp", "FromPort": "22", "ToPort": "22", "CidrIp": { "Ref": "SSHLocation" } }
      ]
    }
  }
}

Note that there's nothing stopping you from nesting more and more folders. So if you really wanted to, you could put this in resources/ec2/networking-security/server-security-group.json

Finishing up the template - outputs and description

The final two sections of our template are outputs and description. If you're wondering what happened to AWSTemplateFormatVersion, it's actually set for you. Transform is not yet supported.

1) Change outputs/index.json to the following:

{
  "WebsiteURL": {
    "Description": "URL for newly created LAMP stack",
    "Value": { "Fn::Join": ["", ["http://", { "Fn::GetAtt": ["WebServerInstance", "PublicDnsName"] }]] }
  }
}

2) Leave description.json as it is

This is the only section that cannot be a directory. It must always be description.json with exactly one property, Description, set to a string.
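For example (the description text here is just illustrative):

```json
{
  "Description": "A LAMP stack split out with CloudFoundation"
}
```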

Now that our template is split out, we can validate it for both JSON errors and CloudFormation errors. Before doing so, ensure that you have a profile set up; otherwise only JSON errors will be caught.

1) Run:

cfdn validate

2) Select the template lamp

And that's all there is to it. If you had any errors in your template files, both the file and the exact line number of each error will be output! This is far more useful than the cryptic errors returned when validating with CloudFormation alone.

If you don't want to go through the inquiries every time, the shorthand for validating this template is the following:

cfdn validate lamp -p <name of profile>

One thing to note: not all errors are caught when the template is sent up for CloudFormation validation. This isn't specific to CloudFoundation, though. Many of the major errors that can occur with CloudFormation aren't caught until the deploy process itself.

Though CloudFoundation's deploy and update functions never require you to build down your template, if you ever want the template in its final form, build and build-all will do the trick.

To build an individual template run the following:

cfdn build [name of template]

If you pass the name of a template, two new files will be created in your project directory:

dist/name-of-template.json

dist/name-of-template.min.json

These will be the fully built templates with all of their sections.

If you'd like to build down all of your templates instead of doing it one-by-one, simply run:

cfdn build-all

When developing CloudFormation templates without any frameworks, the process of frequently going to the console, double-checking parameters, and typing out long-winded CLI commands... well, it's not very enjoyable. So let's get into deploying templates with CloudFoundation and see how it can make life in the cloud easier.

Run the deploy command:

cfdn deploy

This will begin the set of inquiries to deploy. We'll be asked to select values for our stack parameters and options.

Selecting the Stack Name, Profile and Region

The first part of the deploy inquiry process is for general information.

1) Select our template lamp

2) Input a name for your stack lamp-stack

3) Select the profile we set up from the cfdn init step

A profile is required to deploy and update since it has all of the credentials needed to communicate with AWS.

When this stack is deployed, it will be bound to this profile, meaning you can't just switch the profile and then update it. For those unfamiliar with CloudFormation, this is no different from the normal workflow: a stack can only live in one AWS Account, and you can't update it to magically start pointing at another one.

The shorthand for doing the previous three steps is:

cfdn deploy lamp -s lamp-stack -p <name of profile>

4) Select the region to deploy the stack to

The profile's default region will be selected but you can choose any region. A profile isn't bound to a region - it only has a default one.

Like profiles, when this stack is deployed, it will be bound to this region.

Selecting the Stack Parameters

After filling in the general information, you're asked for values for the stack's parameters.

Like the AWS Console, CloudFoundation will enforce any rules you've set on your Stack Parameters. For example, given the following parameter:

{ "SSHLocation": { "Description": " The IP address range that can be used to SSH to the EC2 instances", "Type": "String", "MinLength": "9", "MaxLength": "18", "Default": "0.0.0.0/0", "AllowedPattern": "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})", "ConstraintDescription": "must be a valid IP CIDR range of the form x.x.x.x/x." } }

Input is validated against the AllowedPattern, MinLength, MaxLength, and Type. Additionally, if the Type is one of the AWS-Specific Parameter Types, instead of asking you to remember the values or go look them up yourself, CloudFoundation will use the profile to pull the usable values from AWS. Given the following parameter:

{ "KeyName": { "Description": "Name of an existing EC2 KeyPair to enable SSH access to the instance", "Type": "AWS::EC2::KeyPair::KeyName", "ConstraintDescription": "must be the name of an existing EC2 KeyPair." } }

CloudFoundation will use the stack's profile to grab all KeyPair options available and allow you to select one.
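Under the hood, this boils down to turning the response of an EC2 DescribeKeyPairs call into a list of choices. The snippet below is an illustration of that step, not CloudFoundation's source:

```javascript
// Hypothetical illustration: turning an EC2 DescribeKeyPairs response
// (the shape returned by the AWS SDK / CLI) into selectable values.
function keyPairChoices (describeKeyPairsResponse) {
  return describeKeyPairsResponse.KeyPairs.map(function (kp) {
    return kp.KeyName
  })
}

// Trimmed shape of a real DescribeKeyPairs response:
const response = {
  KeyPairs: [
    { KeyName: 'test-key', KeyFingerprint: '1f:51:ae:...' },
    { KeyName: 'prod-key', KeyFingerprint: '2a:6b:cd:...' }
  ]
}

console.log(keyPairChoices(response)) // [ 'test-key', 'prod-key' ]
```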

At this point, go through and fill out all of the values for your parameters.

Selecting the Stack Options

When you get to the following question:

Would you like to add tags to this stack?

You've crossed over into setting options for the stack. Let's walk through each of them.

1) Tags: you can add any number of tags to your deployed stack. Nothing fancy here.

2) IAM role: if you have an IAM role that you would like to deploy the stack with (instead of the profile), you can specify its ARN here.

3) Would you like to configure advanced options for your stack? i.e. notifications

If you answer yes, you'll be able to configure four other settings:

Set up an SNS notification for this stack?

If you select yes, you'll be able to either configure a brand new SNS notification topic and endpoint or use an existing one. Doing so lets you receive notifications about your stack and its status, through email for example.

Enable termination protection?

If enabled, the stack can't be deleted until you turn termination protection back off.

Set timeout in minutes before stack creation fails

How many minutes the stack can attempt creation before failing. Leave it blank if you don't want a time limit.

Select an action to take on stack creation failure

What should happen if stack creation fails? You can select rollback, delete, or nothing. The rollback option is the default.

This leads us to one final question...

4) Allow stack to create IAM resources?

If you have any IAM resources in your template, this option must be set to yes. This is the equivalent of the console checkbox at the end labeled "I acknowledge that AWS CloudFormation might create IAM resources." It's also the AWS CLI equivalent of specifying CAPABILITY_IAM in IAM capabilities.

Since our stack doesn't have any, you can say no, though nothing will be affected if you say yes.
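For reference, here's what answering yes corresponds to with the raw AWS CLI (the stack name and template path below are placeholders, shown only to illustrate the flag):

```shell
aws cloudformation create-stack \
  --stack-name lamp-stack \
  --template-body file://dist/lamp.json \
  --capabilities CAPABILITY_IAM
```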

Review the Stack Options and Parameters

After completing all of the above questions you'll be shown a review. Confirm it to move forward.

Save the stack settings for later

Even though this is more convenient than long shell commands and back and forth in the console, it'd be kind of a hassle if you had to put this in every time. Instead you can save the settings and parameters for later usage in our .cfdnrc file.

Select yes. We'll cover this more later.

Deploy the stack!

After saving the settings the stack will deploy and you'll see the Stack ID printed. If you're curious, you can go to your .cfdnrc file and find your lamp-stack under the lamp template and see all of the parameters and options you selected. We'll cover the file and its structure later. It becomes particularly useful in deploying and updating, especially if you don't want to go through an inquiry process every time.

With our template deployed as a stack, being able to get information on it is obviously important. To do so, run the following:

cfdn describe

Select the template and then the stack you'd like to describe. You'll see all of the information regarding your stack's deploy. The shorthand to describe our stack is:

cfdn describe lamp -s lamp-stack

By default, describe prints all of a stack's information. However, you can limit some of the information returned by passing more options:

-s, --stackname <stack name>

Name of the stack to describe. If no stack name option is included, you'll be prompted to choose an existing stack.

-a, --status

Show status information about the stack

-p, --parameters

Show parameters information about the stack

-i, --info

Show advanced information about the stack.

Show tags attached to the stack.

-o, --outputs

Show output information from the stack.

As an example, if you just want to grab our lamp-stack's status and outputs quickly, you could run:

cfdn describe lamp -s lamp-stack -ao

We can also update deployed stacks by simply changing our template and then running:

cfdn update

This command will ask us for any new parameters, review our selected options, and update the stack with any new changes to the template. Let's do a simple update to our template - add a name tag to our instance.

1) Add another parameter to parameters/ec2.json

Add the following right under SSHLocation in parameters/ec2.json:

{ "SSHLocation": { // .. }, "ParamInstanceName": { "Type": "String", "Description": "Name tag for the instance" } }

2) Add a new property to resources/ec2/instance.js

{ "Properties": { // ... other properties "UserData" : { // ... }, "Tags": [{ "Key": "Name", "Value": { "Ref": "ParamInstanceName" } }] } }

This adds a Name tag to our instance, populated from the new parameter.

3) Run the update command:

cfdn update

or the shorthand without inquiries:

cfdn update lamp -s lamp-stack

This will begin the update process. The CLI will ask for your new parameter options, if any have been added, and allow you to review and update any other stack options. Type in the name tag for your instance, leave all other options as-is, and confirm your update.

After the update request is sent, you'll see a success message. This means all initial validation passed and the update is underway. Run the describe command to see its status:

cfdn describe lamp -s lamp-stack -a

If you run this immediately you'll see the UPDATE_IN_PROGRESS status. Once the update has completed, the EC2 instance should have the Name tag from the new parameter. If you have the AWS Console open, you can navigate to EC2 > Instances and see that the Name Tag actually changes the Name column for the instance.
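If you'd rather poll outside of CloudFoundation, the equivalent raw AWS CLI check (assuming your AWS CLI credentials point at the same account) would be:

```shell
aws cloudformation describe-stacks \
  --stack-name lamp-stack \
  --query 'Stacks[0].StackStatus' \
  --output text
```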

After deploying a stack, the .cfdnrc file in your project is updated with the settings you chose. Its structure is the following:

{ "project": "name-of-project", "profiles": { "nameOfLocalProfile": { "aws_access_key_id": "abcd", "aws_secret_access_key": "efgh", "region": "us-east-1" } }, "templates": { "newTemplate": { "newStack": { // .. stack options and parameters } }, "vpc": { "network": { // .. stack created from the included VPC template } } } }

So in our case, the .cfdnrc should look like:

{ "project": "cloudfoundation-project", "profiles": { "local": { "aws_access_key_id": "abcd", "aws_secret_access_key": "efgh", "region": "us-east-1" } }, "templates": { "lamp": { "lamp-stack": { "profile": "local", "region": "us-east-1", "options": { "tags": [ { "Key": "StackName", "Value": "lamp-stack" } ], "advanced": { "snsTopicArn": "arn:aws:sns:us-east-1:123456789012:lamp-stack-sns", "onFailure": "ROLLBACK", "timeout": 10, "terminationProtection": true }, "capabilityIam": true }, "parameters": { "DBName": "MyDatabase", "DBUser": "user", "DBPassword": "password", "DBRootPassword": "password", "KeyName": "test-key", "InstanceType": "t2.small", "SSHLocation": "0.0.0.0/0", "ParamInstanceName": "Cloudfoundation Test Instance" }, "stackId": "arn:aws:cloudformation:us-east-1:123456789012:stack/lamp-stack/abc-123-efgh" } } } }

(Note: if you didn't add tags or specify some of the advanced options, those won't show up.)

The structure shouldn't be too confusing: every top-level property nested under templates represents a template. Every top-level property under a template, i.e. lamp-stack, represents a stack deployed from that template. All properties under the stack are the different options and parameters for that stack's deploy. If the stack has a stackId property, that signals to CloudFoundation that the stack is deployed. If there is no stackId, CloudFoundation will allow you to deploy it using those options (which we'll do in the next section).
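That deployed-or-not rule can be sketched in a few lines. This is illustrative only, not CloudFoundation's source, and the .cfdnrc shape below is trimmed:

```javascript
// Sketch of the rule described above: a stack entry with a stackId is
// considered deployed; one without is eligible for a fresh deploy
// using its saved options. (Illustrative, not CloudFoundation's code.)
function isDeployed (stackEntry) {
  return Boolean(stackEntry.stackId)
}

// A trimmed, hypothetical .cfdnrc shape:
const rc = {
  templates: {
    lamp: {
      'lamp-stack': {
        profile: 'local',
        stackId: 'arn:aws:cloudformation:us-east-1:123456789012:stack/lamp-stack/abc-123-efgh'
      },
      'lamp-stack-two': {
        profile: 'local' // no stackId yet, so it can be deployed
      }
    }
  }
}

console.log(isDeployed(rc.templates.lamp['lamp-stack'])) // true
console.log(isDeployed(rc.templates.lamp['lamp-stack-two'])) // false
```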

Aside from tracking current values of our deployed stack, the .cfdnrc file can also be used to declare deploys and updates without running through the CLI's inquiry process every time. Because a .cfdnrc file can contain potentially sensitive parameters, this file should never be made publicly available.

Let's see how we can do deploys and updates using the .cfdnrc file.

To keep things short and simple, we'll simply deploy a different version of our lamp template.

1) Open up your project's .cfdnrc file

2) Copy and paste the lamp-stack as another stack called lamp-stack-two. REMOVE the stackId:

{ "project": "cloudfoundation-project", "profiles": { "local": { "aws_access_key_id": "abcd", "aws_secret_access_key": "efgh", "region": "us-east-1" } }, "templates": { "lamp": { "lamp-stack": { // previous stack. }, "lamp-stack-two": { "profile": "local", "region": "us-east-1", "options": { "tags": [ { "Key": "StackName", "Value": "lamp-stack" } ], "advanced": { "snsTopicArn": "arn:aws:sns:us-east-1:123456789012:lamp-stack-sns", "onFailure": "ROLLBACK", "timeout": 10, "terminationProtection": true }, "capabilityIam": true }, "parameters": { "DBName": "MyDatabase", "DBUser": "user", "DBPassword": "password", "DBRootPassword": "password", "KeyName": "test-key", "InstanceType": "t2.small", "SSHLocation": "0.0.0.0/0", "ParamInstanceName": "Cloudfoundation Test Instance" } } } } }

What we've done here is declare that a new stack called lamp-stack-two should exist under the lamp template, using all of the specified options and parameters. This is no different from running...

cfdn deploy

...again and answering the inquiries with all of the same options and parameters as the original lamp-stack. The main difference is that there's no stackId property, because it hasn't been deployed. To deploy a pre-defined stack, ensure that it does NOT have a stackId, and run the shorthand:

cfdn deploy lamp -s lamp-stack-two

CloudFoundation will let us review the settings and ask if we'd like to confirm and save. Deploy the stack.

Using the shorthand is the only way to deploy predefined stacks. If we run the full version, CloudFoundation will assume we want to create a new set of stack options.

This method lets us declare the parameters and options in a file, so we can see everything holistically instead of through piecemeal questions and/or long shell commands. It also gives us a place to quickly reference what's being used for our stack.

We can also update our stacks using the .cfdnrc file.

1) Add a new parameter to parameters/ec2.json :

{ "ParamInstanceName": { // .. previous }, "ParamDeveloperName": { "Type": "String", "Description": "Name of Developer" } }

2) Add another tag to resources/ec2/instance.js :

{ "Properties": { // ... other properties "UserData" : { // ... }, "Tags": [ { "Key": "Name", "Value": { "Ref": "ParamInstanceName" } }, { "Key": "Developer", "Value": { "Ref": "ParamDeveloperName" } }, ] } }

3) Change our lamp-stack-two to use new values for the parameters in the .cfdnrc file

We'll add the new parameter ParamDeveloperName and also change some of the existing ones:

{ "project": "cloudfoundation-project", "profiles": { "local": { "aws_access_key_id": "abcd", "aws_secret_access_key": "efgh", "region": "us-east-1" } }, "templates": { "lamp": { "lamp-stack": { // previous stack. }, "lamp-stack-two": { "profile": "local", "region": "us-east-1", "options": { "tags": [ { "Key": "StackName", "Value": "lamp-stack" } ], "advanced": { "snsTopicArn": "arn:aws:sns:us-east-1:123456789012:lamp-stack-sns", "onFailure": "ROLLBACK", "timeout": 10, "terminationProtection": true }, "capabilityIam": true }, "parameters": { "DBName": "MyDatabase", "DBUser": "user", "DBPassword": "password", "DBRootPassword": "password", "KeyName": "test-key", // different "InstanceType": "t2.medium", "SSHLocation": "0.0.0.0/0", // different "ParamInstanceName": "Cloudfoundation Test Instance BIGGER", // new "ParamDeveloperName": "J Cole Morrison" } } } } }

Note: don't actually put the comments in there.

Run the update command:

cfdn update

Select our lamp template and our second stack lamp-stack-two . Confirm the updates and deploy the updated stack. You'll see the success message that the stack is updating.

After the update is complete, we'll have a new t2.medium instance with a new Name tag and a Developer tag. (Note that updating an instance's size in CloudFormation results in a new instance.)

Note: if you go back to update the original lamp-stack, you'll need to provide the ParamDeveloperName as well, since it's now a part of the stack template.

Deleting stacks is just as simple.

1) Run cfdn delete

2) Select the template lamp

3) Select the stack lamp-stack

Confirm that you'd like to remove the stack and CloudFoundation will begin the delete stack process. Repeat again to remove the second demo stack we created as well.

The final thing to talk about is profiles.

Profiles in CloudFoundation are sets of AWS credentials used to validate, describe, deploy, update, and delete CloudFormation stacks on AWS. A profile consists of an Access Key ID and Secret Access Key for an AWS user, as well as a default region. If you're familiar with the [AWS CLI's named profiles](https://docs.aws.amazon.com/cli/latest/userguide/cli-multiple-profiles.html), these work almost the same way. If you're unfamiliar with profiles in the context of the AWS CLI, you can think of them as AWS user credentials - they're what you need in order to make authenticated / authorized requests to AWS.
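For reference, the AWS CLI keeps those named profiles in the ~/.aws/credentials file in an INI format like the following (the key values here are placeholders):

```ini
[default]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKey

[my-named-profile]
aws_access_key_id = AKIAEXAMPLEKEYID2
aws_secret_access_key = exampleSecretAccessKey2
```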

Let's add a new profile.

1) Run the add-profile command:

cfdn add-profile

This will prompt you to select the type of profile you'd like to add. CloudFoundation has two types of profiles:

Local Profiles - profiles that are available for use ONLY with the current CloudFoundation project

Global Profiles - profiles that are available for use with any CloudFoundation project

Local profiles are kept in the local project's .cfdnrc file, whereas global profiles are kept in the ~/.cfdn/profiles.json file. Because the .cfdnrc file stores these profiles (as well as potentially sensitive parameter values), do not make it publicly available.

To reference a global profile in a .cfdnrc file, you simply specify its name. For example, if you create a global profile named global, the stack's profile entry in the .cfdnrc file would just be:

{ "project": "name-of-project", "profiles": { // no local profiles }, "templates": { "newTemplate": { "newStack": { "profile": "global", // .. stack options and parameters } } } }

Because global was made available globally, even though it's not defined in the .cfdnrc file, CloudFoundation will know to use it.

2) For Would you like to set up a Local or Global Profile? Select Global (all projects)

As mentioned above, this will create a global profile.

3) For Which type of profile would you like to add? ...

There are two ways to create a profile:

Set up a CFDN Profile - manually input access and secret keys

Import an AWS Profile - import profile credentials from the AWS CLI

Profiles can only be imported from the AWS CLI if you have the CLI installed and also have profiles set up. The relationship between CloudFoundation and the AWS CLI is one-way, though: you can import profiles from the AWS CLI, but CloudFoundation will never modify credentials in the AWS CLI.
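Importing essentially means reading the INI-style ~/.aws/credentials file into profile objects. Here's an illustrative sketch of that parsing step (CloudFoundation's real implementation may differ):

```javascript
// Illustrative sketch of parsing an INI-style AWS credentials file
// into profile objects keyed by profile name. Not CloudFoundation's
// actual code.
function parseCredentialsIni (text) {
  const profiles = {}
  let current = null
  text.split('\n').forEach(function (rawLine) {
    const line = rawLine.trim()
    if (!line || line[0] === '#' || line[0] === ';') return // skip blanks/comments
    const section = line.match(/^\[(.+)\]$/)
    if (section) {
      current = section[1]       // start a new profile section
      profiles[current] = {}
    } else if (current && line.indexOf('=') !== -1) {
      const idx = line.indexOf('=')
      profiles[current][line.slice(0, idx).trim()] = line.slice(idx + 1).trim()
    }
  })
  return profiles
}

// Placeholder credentials, shaped like ~/.aws/credentials:
const sample = [
  '[default]',
  'aws_access_key_id = AKIAEXAMPLE',
  'aws_secret_access_key = secretEXAMPLE'
].join('\n')

const parsed = parseCredentialsIni(sample)
console.log(parsed.default.aws_access_key_id) // AKIAEXAMPLE
```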

4) Select either of the options to complete the process

We've set up a CFDN profile before (in the original cfdn init step); however, if you have the AWS CLI available and would like to import a profile, go for it.

More on Profiles

All stacks are associated with a profile when deployed. This is no different than deploying a stack over the AWS CLI or console in the sense that you must also use specific sets of AWS credentials to do so.

To manage profiles, CloudFoundation has the following commands:

add-profile - Add a new profile

update-profile - Update an existing profile. Only local profiles for the current project are made available for update.

list-profiles - List global profiles and local profiles available to the current project.

remove-profiles - Remove a profile.

import-profiles - Import all profiles in the AWS CLI to CloudFoundation.

CloudFoundation is open source and has the following planned for the future:

An add-template function to select and add more pre-made templates to your project (like the VPC and DB ones)

Events and change-sets before deploys and updates

Methods to add CloudFormation resources to files via CLI

Of course the roadmap is completely open to suggestions and other improvements.

CloudFoundation offers a way to manage, build, and deploy large CloudFormation projects. It sits between large framework solutions and writing raw templates. This post hopefully clarifies the day-to-day usage of CloudFoundation so that others writing large templates can do so with greater ease and maintainability.

As always, if you find any typos or mistakes or have feedback, PLEASE drop me a comment!