Last November, alongside the announcement of Bring Your Own Runtime (or BYOR for short), the AWS Lambda team announced another new feature called “layers.” This feature was somewhat overshadowed in the BYOR fanfare, as it wasn’t as exciting as being able to use your own runtime in your lambda functions, but it is one of the most powerful features now available in AWS Lambda. So powerful, in fact, that BYOR itself is built upon it.

At IOpipe, we immediately understood the potential of AWS Lambda Layers, and were among the first AWS partners to publish layers. But since then we’ve gotten numerous questions about them from both developers and engineering leadership. More specifically:

What are they? Why are they useful? How do I use them?

If you’ve found yourself asking any of those questions about AWS Lambda Layers, then you’ve come to the right place. In this article, I will explain what AWS Lambda Layers are, why they’re useful, and how you can start using them with only a couple of additional lines of config.

By the end, hopefully you’ll agree that this feature is just as powerful as a custom runtime.

Before we proceed, this post assumes that you’re familiar with AWS Lambda and have deployed at least one lambda function. If you’re just getting your feet wet, check out my introduction, The Right Way™ to do Serverless in Python. If Python isn’t your thing, there’s also an AWS Lambda getting started guide.

Peeling back the layers

When I think about AWS Lambda Layers, I picture a cake. Not any old cake, but one of those multi-layered kinds you typically see at weddings. Then, I picture someone coming along and smashing that multi-layered cake into a single unified layer.

As messy as that sounds, it’s also (kind of) how AWS Lambda Layers work. Only in this case, your Lambda function is the top layer of the cake and the runtime is the bottom layer. Then, there’s all those layers in between with the cream filling.

Ok, now that I’ve got your stomach grumbling, let’s drop this sweet analogy and start peeling this onion. A layer, as it applies to AWS Lambda, is a zip archive — much like how a Lambda function itself is a zip archive containing all the files necessary to handle an invocation.

In fact, a Lambda function is already a layer; it’s just that, up until now, you’ve been deploying single-layer Lambda functions. Even that isn’t quite the whole story, though, because the runtime itself is a layer, too.

It’s just that prior to BYOR, the runtime was a layer managed by AWS. But the point here is this: it’s layers all the way down.

In the beginning, there was a runtime

Now that we understand that everything is a layer in AWS Lambda, let’s talk about how they work together. If we go back to visualizing a cake smash, with AWS Lambda Layers, the smash is a bit more orderly. By orderly, I only mean there’s an order to it — it can still get messy. But there are some basic rules that should help you stay on top of things.

First things first, there’s the runtime. This is always the first layer, and every layer that comes after starts with it as the base. A runtime is the software that runs your software: if you’re using a runtime like Python, this would be the Python interpreter and all of its dependencies. This layer also bootstraps your lambda function. That means different things for different runtimes, but, in short, it means connecting an invocation to a handler. An invocation is an external trigger, such as an HTTP request, and the handler is the entry point of your lambda function.

The runtime accepts this trigger as an event and passes it on to your lambda function, then collects the response. The bootstrap is the glue between your lambda function and the outside world.
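For the Python runtime, for instance, that glue ends up calling a plain function with the event and a context object. Here’s a minimal sketch (the file and function names are arbitrary, not an AWS convention):

```python
# handler.py -- a minimal handler the Python runtime would invoke.
# The runtime deserializes the trigger into `event` and supplies `context`.
import json

def handler(event, context):
    # Echo part of the event back, in the shape of a typical HTTP response.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The runtime takes whatever this function returns and hands it back to the invoker; your code never talks to the trigger directly.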

After the runtime layer, when you deploy your lambda function, the layers work like this:

1. The first layer after the runtime, which is itself a zip archive, gets extracted. Any system files are expected to be extracted into the /opt directory. There are runtime-specific subdirectories under /opt; for example, in the python3.6 runtime, there’s a /opt/python/lib/python3.6/site-packages directory for Python libraries. Anything else should go into /var/task.

2. Every subsequent layer is extracted on top of the layer that preceded it. The same rules apply regarding /opt and /var/task. If any file paths conflict, the subsequent layer overwrites the preceding one.

3. The final layer, which is your lambda function, is extracted into /var/task. As with the other layers, any conflicting file paths are overwritten.
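The overwrite rules can be sketched as a simple merge, treating each layer as a mapping from file path to contents. This is a toy model for illustration, not how Lambda is actually implemented:

```python
# Toy model of layer extraction order. Each "layer" is a dict mapping a
# file path (under /opt or /var/task) to its contents.
def merge_layers(layers):
    filesystem = {}
    for layer in layers:           # runtime first, function code last
        filesystem.update(layer)   # later layers overwrite conflicting paths
    return filesystem

runtime_layer = {"/opt/python/lib/python3.6/site-packages/numpy/version.py": "1.15"}
scipy_layer = {"/opt/python/lib/python3.6/site-packages/numpy/version.py": "1.16"}
function_layer = {"/var/task/handler.py": "def handler(event, context): ..."}

fs = merge_layers([runtime_layer, scipy_layer, function_layer])
# The later layer wins the conflict on the numpy path.
```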

As you can see, the order is important. Every layer can overwrite files in the layer that preceded it. It also means layers can build on top of other layers to extend functionality.

Layers on their own should be more or less self-sufficient. The only dependency should be the runtime itself (and perhaps not even that, in the case of compiled binaries). But as long as each layer puts its files in sensible places for that runtime, you can do some pretty powerful things.
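As a concrete example of “sensible places”: for the Python runtimes, a layer zip just needs a top-level python/ directory, because Lambda extracts the archive into /opt and the runtime puts /opt/python on sys.path. Here’s a small sketch that builds such an archive in memory with the standard library (the module name is made up):

```python
import io
import zipfile

# Build a layer archive in memory. In a real layer you'd typically place
# installed packages under python/lib/python3.6/site-packages/ instead.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("python/mymodule.py", "VALUE = 42\n")

# Lambda would extract this zip into /opt, so mymodule.py would land at
# /opt/python/mymodule.py and be importable by the function.
buf.seek(0)
names = zipfile.ZipFile(buf).namelist()
```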

Layers as composition

One of the areas where AWS Lambda Layers really shines is composition. By composition, I mean combining two or more layers for added functionality. This can be especially useful when we’re talking about dependencies that need to be precompiled and result in large binaries.

Two classic examples of this are NumPy and SciPy, both mainstays when it comes to data processing and machine learning in Python. Prior to AWS Lambda Layers, you would need to package the compiled versions of both of these libraries with your lambda function, which meant that over 100MB of precompiled code came along for the ride on every deploy. In addition to deployment package bloat, that also meant more storage on S3 and longer cold starts.

But with AWS Lambda Layers, you can reduce this dependency overhead to a single ARN. For example, if you’re using the Serverless framework, your serverless.yml would only need the following two lines:

layers:
  - arn:aws:lambda:us-east-1:668099181075:layer:AWSLambda-Python36-SciPy1x:2

Or if you’re using SAM, you would add these lines to your template.yml:

Layers:
  - arn:aws:lambda:us-east-1:668099181075:layer:AWSLambda-Python36-SciPy1x:2

And just like that, on your next deploy, this layer will be included, and you’ll have NumPy and SciPy precompiled and ready to go.

Keep in mind that the above ARNs assume you’re in us-east-1 and that you’re using the python3.6 runtime. You’ll need to swap in your own AWS region, and if you’re using python2.7, replace Python36 with Python27 in the layer name. Be sure to read over the relevant sections of the Serverless and SAM docs as well, and check here for more information about the official AWS NumPy & SciPy layer used in the example above.
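For context, here’s roughly where that Layers property sits in a minimal SAM template. The resource name, handler, and CodeUri below are illustrative, not prescribed:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  MyFunction:                      # illustrative resource name
    Type: AWS::Serverless::Function
    Properties:
      Handler: handler.handler     # illustrative module.function
      Runtime: python3.6
      CodeUri: .
      Layers:
        - arn:aws:lambda:us-east-1:668099181075:layer:AWSLambda-Python36-SciPy1x:2
```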

For a curated list of layers available, the AWSome Layers repo is a great resource.

Deploying Monitoring and Observability as a layer

To automatically get real-time visibility into the most granular behaviors of your application via IOpipe Lambda Layers, add the appropriate ARN below to your YAML file for your function’s language.

[Note: If you want the step-by-step guide to adding IOpipe to your functions without code changes, read our help documentation on using Lambda Layers.]

IOpipe Node
ARN: arn:aws:lambda:<region>:146318645305:layer:IOpipeNodeJS810:<version>
Runtimes: nodejs6.10, nodejs8.10
Link: IOpipe Node Layer

IOpipe Python
ARN: arn:aws:lambda:<region>:146318645305:layer:IOpipePython:<version>
Runtimes: python2.7, python3.6, python3.7
Link: IOpipe Python Layer

IOpipe Java
ARN: arn:aws:lambda:<region>:146318645305:layer:IOpipeJava8:<version>
Runtimes: java8
Link: IOpipe Java Layer

If you have any questions about AWS Lambda Layers or serverless observability for your application, email hello@iopipe.com or try IOpipe for free here.