Here at Stavvy, we take a serverless approach to building our SaaS product, using the Serverless Framework and Terraform to manage our AWS cloud infrastructure. After trudging through the Infrastructure as Code trenches for months, we’ve successfully deployed hundreds of API endpoints, Lambda functions, queues, databases, and much more. Thankfully, we’re here to show you how it’s all done.

This post is for you if:

You are (or may be) building an application with a serverless architecture.

Your application will include an API along with core infrastructure such as a database.

You are looking for real-world serverless experience rather than “Hello World” mockups.

You want to see a serverless application using Infrastructure as Code to the fullest.

You are using Amazon Web Services.

Why use both?

Most applications will require public facing infrastructure (APIs, application servers) and core infrastructure such as databases.

The Serverless Framework (referred to hereafter as “Serverless”) handles the public-facing infrastructure (API Gateway, Lambda + application code), but falls short on the core, shared infrastructure. To create a DynamoDB table or an S3 bucket using Serverless, you have to define the whole thing in raw AWS CloudFormation.

Terraform seamlessly handles creating + managing shared infrastructure, however it will not deploy any of your application code to cloud functions.

Using both, you can utilize the deployment functionality of Serverless along with the core infrastructure management with Terraform. Terraform will be used to define any infrastructure that is shared across Serverless functions, in addition to any other infrastructure that your app may require. Serverless is responsible for deploying your application code and managing the API endpoints in API Gateway.

First: How to structure your cloud environments

We suggest using a multi-account strategy for your application environments using AWS Organizations for the following reasons.

a) AWS account limits could affect production environments

There is a default limit of 200 CloudFormation stacks in a single AWS account, as well as a limit of 3,000 concurrent Lambda invocations. It’s not hard to imagine a scenario where your dev environment spikes invocations due to a bug, and it would be irresponsible for the production environment to go offline as a result.

b) Applications are differentiated by more than naming conventions. Although the Serverless Framework namespaces infrastructure by stage, separating by account adds an additional layer of isolation. With Terraform, using separate AWS accounts lets you avoid prefixing all your infrastructure names and IDs with the environment name to avoid conflicts.

How it comes together

The result is that a unique AWS account holds production infrastructure, another holds qa, and a third holds dev.

Match environments between Serverless and Terraform

Both Serverless and Terraform have ways of specifying environments: Serverless uses stages, and Terraform uses workspaces.

To deploy the qa environment, you will run

terraform workspace select qa && terraform apply

serverless deploy --stage qa

Configure AWS Profiles

In order for Serverless and Terraform to access the correct AWS accounts, you will need to configure AWS credential profiles on your machine that match the environments. Your ~/.aws/credentials file should look similar to the following:

[prod]
aws_access_key_id=xxx
aws_secret_access_key=xxx

[qa]
aws_access_key_id=xxx
aws_secret_access_key=xxx

[master]
aws_access_key_id=xxx
aws_secret_access_key=xxx

The master profile is associated with your AWS root (management) account.

Configure Serverless

In your serverless.yml file, make sure you have the following code in your provider block:

provider:
  name: aws
  stage: ${opt:stage, 'dev'}
  profile: ${self:provider.stage}

This matches the profile value to the stage value, and keeps them in sync.

Configure Terraform

provider "aws" {
  profile = "${terraform.workspace}"
  region  = "${var.region}"
}

Use the above code to match the provider profile to the Terraform workspace.

Additionally, if you want to use S3 as a backend for Terraform, you can specify the master profile in your backend configuration:

terraform {
  backend "s3" {
    profile = "master"
    ...
  }
}
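A fuller backend block might look like the following sketch; the bucket name and state key here are hypothetical, and the master profile only needs access to the state bucket:

```hcl
terraform {
  backend "s3" {
    bucket  = "mycompany-terraform-state" # hypothetical state bucket
    key     = "app/terraform.tfstate"     # hypothetical state key
    region  = "us-east-1"
    profile = "master"
  }
}
```

Keeping state in a single account (under the master profile) means every workspace shares one state bucket regardless of which account its resources land in.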

Sharing infrastructure between Terraform and Serverless

Perfect! We’ve got environments configured, but how can we make Terraform and Serverless talk to each other? Serverless has the ability to import values from multiple sources, however AWS Parameter Store (SSM) is the most convenient.

As an example, let’s say we need to manage the database password in Terraform but use the password in Serverless.

First, we’ll create the secret resource in Terraform

resource "aws_secretsmanager_secret" "database_password" {
  name = "database_password"
}

We’ll enter the password manually through the console, but the secret resource itself is created via Terraform. In addition to the secret, we also need permissions to access it. For that, we’ll create an IAM policy for this secret, also with Terraform.
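If you prefer the command line to the console, the value can also be set with the AWS CLI (the secret ID matches the Terraform resource above; the profile is whichever environment you are targeting):

```
aws secretsmanager put-secret-value \
  --secret-id database_password \
  --secret-string 'your-password-here' \
  --profile qa
```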

resource "aws_iam_policy" "db_pw_policy" {
  name   = "db_pw_policy"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:*",
      "Resource": [
        "${aws_secretsmanager_secret.database_password.arn}"
      ]
    }
  ]
}
EOF
}

Next, let’s create SSM resources to transfer the values that Serverless will need.

resource "aws_ssm_parameter" "db_pw_policy" {
  name  = "/db_pw_policy"
  type  = "String"
  value = "${aws_iam_policy.db_pw_policy.arn}"
}

resource "aws_ssm_parameter" "db_pw_name" {
  name  = "/db_pw_name"
  type  = "String"
  value = "${aws_secretsmanager_secret.database_password.id}"
}

Lastly, in our serverless.yml file we will import these values.

# Pull in external variables into the custom section
# to keep things organized
custom:
  databasePasswordName: ${ssm:/db_pw_name}
  databasePasswordPolicy: ${ssm:/db_pw_policy}

provider:
  environment:
    # pass the secret name to functions via the environment
    DB_PW_NAME: ${self:custom.databasePasswordName}
  # assign the role created below to our functions
  role: !GetAtt DefaultExecutionRole.Arn

# Define the IAM role for this service's functions here,
# and include the password policy under "ManagedPolicyArns"
resources:
  Resources:
    DefaultExecutionRole:
      Type: AWS::IAM::Role
      Properties:
        AssumeRolePolicyDocument: ...
        ManagedPolicyArns:
          - ${self:custom.databasePasswordPolicy}

The only thing left to do is to use the AWS SDK to retrieve the secret’s value during execution of the lambda function.
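That retrieval step can be sketched in Python with boto3; the helper name and the injectable `client` parameter are our own choices, and the secret name arrives through the DB_PW_NAME environment variable set in serverless.yml above:

```python
import os


def get_db_password(secret_name=None, client=None):
    """Fetch the database password string from AWS Secrets Manager.

    `client` is injectable for testing; by default a real boto3
    Secrets Manager client is created.
    """
    if client is None:
        import boto3  # imported lazily so the helper is testable without AWS
        client = boto3.client("secretsmanager")
    # Fall back to the name passed in via the function environment
    secret_name = secret_name or os.environ["DB_PW_NAME"]
    resp = client.get_secret_value(SecretId=secret_name)
    return resp["SecretString"]
```

In a real function you would cache the result outside the handler so warm invocations skip the network call.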

When responsibilities overlap 😬

If you look closely at the code block above, we created the IAM role for this Serverless service with AWS CloudFormation instead of Terraform. Why is that?

It turns out there are a variety of situations where the responsibilities of Terraform and Serverless overlap, which makes things like environment setup more difficult, but we have to deal with it.

Listed below are all the situations we’ve run into where the line between Serverless and Terraform responsibilities blurs:

1. Creating a shared API Gateway REST API

If you have multiple Serverless services for the same API, then you must share the API resource. You can use Terraform to create the shared resource as seen below.

resource "aws_api_gateway_rest_api" "main" {
  name        = "api-name"
  description = "API for our App"
}

Then export the REST API ID and root resource ID using SSM, and import/assign them to the provider.apiGateway.restApiId and provider.apiGateway.restApiRootResourceId fields, respectively.

While you can do this with CloudFormation, we’d highly recommend you stick to Terraform due to the next situation listed below.
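As a sketch, the Terraform side of that export might look like the following (the parameter names are our own choices):

```hcl
resource "aws_ssm_parameter" "rest_api_id" {
  name  = "/rest_api_id"
  type  = "String"
  value = "${aws_api_gateway_rest_api.main.id}"
}

resource "aws_ssm_parameter" "rest_api_root_resource_id" {
  name  = "/rest_api_root_resource_id"
  type  = "String"
  value = "${aws_api_gateway_rest_api.main.root_resource_id}"
}
```

In serverless.yml, these would then be imported under provider.apiGateway as `restApiId: ${ssm:/rest_api_id}` and `restApiRootResourceId: ${ssm:/rest_api_root_resource_id}`.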

2. Mapping an API Gateway custom domain name and base path to your deployed API Gateway stage

We assume that you don’t want your production API to have a URL such as https://8g2d723rbs.execute-api.us-east-2.amazonaws.com/prod/…, so you’ll need to create a custom domain name and then create a base path mapping between the domain name and the API Gateway deployed stage, not to mention Terraforming the domain and certificate information itself.

The catch-22 here is that the Serverless Framework creates the API Gateway stage when you deploy your first API service, so you need to perform a Serverless deployment before you can create the base path mapping below.

resource "aws_api_gateway_base_path_mapping" "main" {
  api_id      = "${aws_api_gateway_rest_api.main.id}"
  stage_name  = "${terraform.workspace}"
  domain_name = "${aws_api_gateway_domain_name.main.domain_name}"
}

This unfortunately creates a bit of back and forth between Serverless and Terraform when you initially set up an environment, but once you’re through, you can easily manage all resources as code.
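The custom domain itself can also live in Terraform. A minimal sketch, assuming a regional endpoint, a hypothetical domain, and an ACM certificate resource named aws_acm_certificate.main defined elsewhere:

```hcl
resource "aws_api_gateway_domain_name" "main" {
  domain_name              = "api.example.com" # hypothetical domain
  regional_certificate_arn = "${aws_acm_certificate.main.arn}"

  endpoint_configuration {
    types = ["REGIONAL"]
  }
}
```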

3. Default API Gateway responses

It’s necessary to put CORS headers on default API Gateway responses so that if you request an incorrect resource, you receive the actual error rather than a CORS error.

You can do that with CloudFormation or with Terraform, as seen below:

resource "aws_api_gateway_gateway_response" "default_4xx" {
  rest_api_id   = "${aws_api_gateway_rest_api.main.id}"
  response_type = "DEFAULT_4XX"

  response_parameters = {
    "gatewayresponse.header.Access-Control-Allow-Origin"  = "'*'"
    "gatewayresponse.header.Access-Control-Allow-Headers" = "'*'"
  }
}

4. API Gateway Authorizers cannot be declared multiple times.

If you have an authorizer function declared in Serverless and need all other functions to use that authorizer, you will quickly run into issues as each function will attempt to create its own “API Gateway Authorizer” resource. This will throw errors, so it’s necessary that you create a shared authorizer.

You can either create a shared authorizer resource with CloudFormation in Serverless, or you can create the authorizer in Terraform as shown below:

data "aws_lambda_function" "authorizer" {
  function_name = "auth-function-name-${terraform.workspace}"
}

resource "aws_api_gateway_authorizer" "main" {
  name           = "main"
  rest_api_id    = "${aws_api_gateway_rest_api.main.id}"
  authorizer_uri = "${data.aws_lambda_function.authorizer.invoke_arn}"
}

In this case, we import the authorizer function (must already be deployed by Serverless) and then we utilize that data object in the gateway authorizer resource. Lastly, we use SSM to export the authorizer ID and use that in all of our Serverless functions that require authorization.
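On the Serverless side, the imported authorizer ID can then be attached to any protected function. A sketch, assuming the ID was exported to an SSM parameter we named /authorizer_id, with a hypothetical function and handler:

```yaml
custom:
  authorizerId: ${ssm:/authorizer_id} # hypothetical SSM parameter name

functions:
  getWidget: # hypothetical function
    handler: handler.get_widget
    events:
      - http:
          path: widgets
          method: get
          # reference the shared authorizer instead of declaring a new one
          authorizer:
            type: CUSTOM
            authorizerId: ${self:custom.authorizerId}
```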

5. Managing Function IAM Roles

We find that function permissions are very unique to the purpose of the function, and it would be a pain-in-the-neck if we defined every function IAM role in Terraform and had to export that to the necessary function in Serverless.

As such, we’ve made an exception to our use of Terraform and used Terraform to create/export all IAM Policies for shared resources, and then we use CloudFormation to construct the actual function IAM Role. An example of this can be seen in the end of the last section.

Where things just fall short 💔

It’s important to remember that neither the Serverless Framework nor Terraform is a tool provided by AWS, so there will be discrepancies. One notable challenge we ran into was using API Gateway v2 with WebSockets in our application. Unfortunately, Terraform simply doesn’t support API Gateway v2 at the time of writing, so we either have to create that manually or rely on Serverless to create some of it for us.

Another thing to remember is that while standing up AWS resources is mostly free, some resources do not delete quickly. For example, S3 bucket names can take about an hour to become available again after deletion, and Secrets Manager secrets are scheduled for deletion with a recovery window of at least 7 days by default. This means some care is necessary when creating resources, because re-doing parts of your infrastructure may not be quick.
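The secret recovery window in particular can bite during environment teardown. If you are certain you no longer need the value, the waiting period can be skipped with the AWS CLI:

```
aws secretsmanager delete-secret \
  --secret-id database_password \
  --force-delete-without-recovery \
  --profile qa
```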

That’s it!

I hope you have smooth sailing with your Serverless experience, and if you believe in Serverless architecture and want to get your hands dirty, we’re always hiring @ Stavvy.

👋