The full stack dog’s spoiling the main idea of this blog 🙁

(Updated 24/07/2019 — Added information about AWS CDK)

Hello, hello again. This time let’s talk about the Serverless framework that I love so much. Full disclaimer: everything I say below is my personal opinion, based on my experience of managing a bunch of Lambda functions with the help of this awesome tool.

First things first, for those of you who don’t know: the Serverless framework is a software suite that dramatically simplifies the composition and deployment of FaaS-based systems. In the AWS world this means that the framework uploads your code and creates a CloudFormation stack for your project, consisting of one or more Lambda functions and other required AWS resources – lambda triggers, databases, IAM roles and so on. You can read more about it here. From now on I’m gonna assume that you’re somewhat familiar with it and have a very clear understanding of what it’s capable of.

So what’s not nice

As I’ve kinda already mentioned above – Serverless basically does two things:

1. Uploads your application code to S3.
2. Creates the AWS resources required to run the code the way you want it.

So whilst the upload bit is all straightforward and easy, the resource creation part can actually be broken down into four categories:

1. Lambda functions – quite obvious.
2. Resources associated with the functions’ triggers, e.g. API endpoints.
3. IAM role(s).
4. Other AWS resources described in the “resources” section of a Serverless config file.

If you’ve ever seen this file, you’ve probably noticed that its resources section is actually nothing but a piece of a CloudFormation template (I’ll be calling it CF for short sometimes). That is, provided you know what a CloudFormation template looks like; otherwise you might not notice this, of course. Anyways, you can create and configure AWS resources using this section. For instance, if you have a look at some examples on the Serverless website, you’ll find a sample application config that creates a function and a DynamoDB table, and sets up the permissions that the function’s role will have on the table. How very handy and convenient! Except it isn’t.
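To make that concrete, here’s a minimal sketch of such a resources section. Everything under Resources is plain CloudFormation; the “users” table name and its schema are made up for illustration:

```yaml
# Fragment of a serverless.yml: the "resources" section is raw CF.
resources:
  Resources:
    UsersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: users
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
```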

You see, the first problem comes from the fact that the Serverless framework deploys your entire application as a single CloudFormation stack. That means that your DynamoDB table’s lifecycle is bound to your application’s lifecycle. Now, if you want to delete and re-create your table, you need to re-deploy your serverless project. Which is less than ideal, because there are certain situations when you want to delete and re-create your table independently from the functions relying on it.

The most obvious scenario is the process of restoring your table from a native backup. In the case of DynamoDB, native backups can only be restored into a new table. Therefore you should either redirect your functions to the restored table, or delete the original corrupted table and create a new one from the backup with the same name. Either way you will end up with a new table, different from the one that was created with your CloudFormation stack, and which may or may not belong to this stack, depending on how you’ve named it. Moreover, this freshly restored table will lack some settings, such as TTL and a stream with all its subscriptions. These things you’ll have to reconfigure separately. We’ll discuss how we can achieve this without manual intervention a bit later.
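For reference, the restore itself is a single CLI call; the table name and the backup ARN below are just placeholders:

```bash
# Restore a native DynamoDB backup into a brand new table.
# "users-restored" and the backup ARN are placeholders.
aws dynamodb restore-table-from-backup \
  --target-table-name users-restored \
  --backup-arn arn:aws:dynamodb:eu-west-1:123456789012:table/users/backup/01234567890123-abcdefgh

# Wait until the new table is ready before reconfiguring it.
aws dynamodb wait table-exists --table-name users-restored
```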

At the end of the day, this means that you’ll be able to successfully maintain your DynamoDB table as part of your Serverless project only in a perfect world where nothing ever goes wrong.

Another issue that’s made us steer away from deploying some resources using CloudFormation in general is the fact that sometimes it takes a while for the CF syntax to catch up with new AWS features. There are certain things released by AWS that you may want to use immediately, but it takes a few months before they become available in CloudFormation. This is not always the case, but when it is, you may decide to have a very good look at alternative methods of managing your AWS resources.

Let’s make things nicer!

Ok then, let’s finally talk about one of the alternatives that we use in my company to address the problem of managing DynamoDB tables. The name of this cure is the AWS CLI.

For those of you who don’t know (who, at this point of the blogpost, I expect to be an absolute minority), the AWS CLI is a command line tool provided by AWS so their users can manage their cloud things from a terminal. It really is a command line version of the AWS console, which is great, since command line means scripting, and scripting means automation, which we all want. Otherwise we would not do infrastructure as code, would we? Here’s the CLI reference for DynamoDB btw: reference
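If you’ve somehow never run it, here’s the general flavour; assuming your credentials are configured, this lists the DynamoDB tables in your account (the region is an example):

```bash
# List the DynamoDB tables in the given region.
aws dynamodb list-tables --region eu-west-1
```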

Table creation

The very first thing you’d probably like to do is to script table creation. This is an easy task (not that the other tasks are anything difficult, though), since all you need is a CLI command and a JSON definition of your table. Please, please have a look at the examples below.

A deployment script and definition file
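A minimal sketch of such a pair, assuming a hypothetical “users” table with a single string hash key; deploy-table.sh and table.json are made-up names:

```bash
#!/usr/bin/env bash
# deploy-table.sh: create a DynamoDB table from a JSON definition.
# Assumes table.json (below) sits next to this script.
set -euo pipefail

aws dynamodb create-table --cli-input-json file://table.json

# Block until the table is ACTIVE so later steps can rely on it.
aws dynamodb wait table-exists --table-name users
```

And the matching table.json definition file:

```json
{
  "TableName": "users",
  "AttributeDefinitions": [
    { "AttributeName": "id", "AttributeType": "S" }
  ],
  "KeySchema": [
    { "AttributeName": "id", "KeyType": "HASH" }
  ],
  "BillingMode": "PAY_PER_REQUEST"
}
```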

Please note that there is also an option to pass your table details as CLI parameters, though this will make your script far less readable. I can’t see any reason to do that over using a separate JSON file. Apart from personal preference, which I don’t judge. Normally I don’t.

Table settings

Now here comes a solution for the second CloudFormation problem, which is the fact that some settings cannot be set using the CF template. Sometimes they can be set in the template, but cannot be restored from a backup, so we need to be able to apply them to an existing table. However, we are in luck here, since we have decided to switch to good ol’ terminal commands.

Take TTL (time to live) as an example. This setting is not saved as part of a native DynamoDB backup, meaning we need to apply it to a table after it has been restored. And since we are already going to have a script that applies this setting, why not use it as the only source of truth?

Here, have a look at a shell script using the AWS CLI to set TTL on the ‘created’ field of the table from the previous example:

This is how we can enable TTL on our existing table
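A sketch of that script, reusing the hypothetical “users” table from above:

```bash
#!/usr/bin/env bash
# enable-ttl.sh: enable TTL on the "created" attribute of the
# "users" table created in the previous example.
set -euo pipefail

aws dynamodb update-time-to-live \
  --table-name users \
  --time-to-live-specification "Enabled=true, AttributeName=created"
```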

Stream subscriptions

Another thing that is lost once you restore your table from a backup is its stream and the stream subscriptions. If you don’t know what a DynamoDB stream is, please stop doing whatever you’re doing and make yourself familiar with this concept.

Once again, we can use the power of the AWS CLI to save the day. The script below 👇 enables a stream on the table we’ve created in one of the previous examples and subscribes a Lambda function to it. Sweet!

Lambda functions + DynamoDB streams = ❤️
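A sketch of that script; the function name my-stream-handler is a placeholder, and the view type and batch size are just sensible defaults:

```bash
#!/usr/bin/env bash
# enable-stream.sh: enable a stream on the "users" table and
# subscribe a Lambda function to it.
set -euo pipefail

TABLE_NAME=users
FUNCTION_NAME=my-stream-handler  # placeholder function name

# Enable the stream, capturing both old and new item images.
aws dynamodb update-table \
  --table-name "$TABLE_NAME" \
  --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES

# Grab the ARN of the freshly enabled stream...
STREAM_ARN=$(aws dynamodb describe-table \
  --table-name "$TABLE_NAME" \
  --query 'Table.LatestStreamArn' --output text)

# ...and point the function at it.
aws lambda create-event-source-mapping \
  --function-name "$FUNCTION_NAME" \
  --event-source-arn "$STREAM_ARN" \
  --starting-position LATEST \
  --batch-size 100
```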

Now, since we have these three scripts, we are free to use them any way we want. For example, one could combine all three and execute them in a deployment pipeline, as sketched below. There’s also always the option to combine scripts two and three and use them as part of a disaster recovery tool. Do whatever you please, as long as it makes sense. The point is: shell scripts make us jolly flexible at automating the resources’ lifecycle.
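For instance, a pipeline step could be as simple as running them in order (using the hypothetical script names from above):

```bash
#!/usr/bin/env bash
# deploy-dynamodb.sh: create the table, then layer on the settings
# that CloudFormation and backups won't carry for us.
set -euo pipefail

./deploy-table.sh    # create the table and wait for ACTIVE
./enable-ttl.sh      # apply TTL to the "created" attribute
./enable-stream.sh   # enable the stream and subscribe the function
```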

For the curious ones, I also provide a serverless.yml which you would use should you decide to ignore my advice:
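A sketch of what that config could look like, with the same hypothetical “users” table managed entirely by the Serverless framework; the service name, runtime and IAM statements are made-up examples:

```yaml
service: my-service

provider:
  name: aws
  runtime: nodejs10.x
  # Permissions the functions' role gets on the table.
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:Query
      Resource:
        Fn::GetAtt: [UsersTable, Arn]

functions:
  hello:
    handler: handler.hello

resources:
  Resources:
    # The same "users" table as in the earlier sketch, now with
    # its lifecycle bound to the application's stack.
    UsersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: users
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
        TimeToLiveSpecification:
          AttributeName: created
          Enabled: true
        StreamSpecification:
          StreamViewType: NEW_AND_OLD_IMAGES
```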

Wow, so cool, it is probably the only right way to deal with things!

Yes and no, but mostly no. Definitely No with a big fat “N”.

A good (or bad) thing about IT is that there’s always more than one way to achieve something. Same here. Apart from my excellent shell scripts, I would like to mention a few other ways to overcome the limitations of the Serverless framework and CloudFormation. These are not paths I’ve walked, but they might be the better ones.

Terraform

Terraform is a cloud-agnostic infrastructure-as-code management tool. I have never used it, but I’ve heard it is good. It has good modularity, or so I was told, and it is community driven. It also has a subjectively better syntax than CF. Feature-wise it also gets ahead of Amazon’s offering. For example, you can configure a Kibana instance to use Cognito for authentication using only Terraform, something that I did some time ago using nothing but two CF templates and three shell scripts.

Unlike CloudFormation, Terraform is a tool, not a native AWS service, meaning you’ll have to set it up yourself. For example you’ll need to decide where to store your state file.

CloudFormation macros

There’s this functionality in CF called macros that allows developers to perform custom template transformations when deploying a stack. The cool bit here is that you can run your own lambda function to perform the transformations, or do whatever you want it to do. I mean, it is just a lambda function. Here’s an article on how this can be achieved using the Serverless framework.

I’ve never tried this myself, but I feel like I absolutely should give it a go.

AWS CDK

AWS CDK is an Amazon SDK that helps you create and manage your AWS infrastructure with ease, using the programming language of your choice. So unlike, say, CloudFormation, which relies on declarative templates, AWS CDK allows you to manage your infrastructure as code. Literally.

You can read more about it here. Sounds appealing to me.

Is that all?

For today – yes.

Infrastructure as code is cool and so on and so forth. However, it is also very easy to screw up, and if you do so, it will ruin your life. Or a small portion of your life, like maybe one day. Anyways, you won’t be pleased. So code responsibly and never stop looking for better solutions. I mean, better than what you have at the moment, and better than the one I’ve presented to you in this article.

Peace.