We’re super excited to dig into our newest feature: any tool that gives you a better look inside your serverless functions is sure to make our users’ lives easier!

The ease that serverless brings to deployment and operations comes with a trade-off: you lose some transparency, and it can be hard to get that visibility back. This is why our new profiling feature is so important: seeing CPU profiling data from your Lambda functions over time and across invocations is key to diagnosing problems and identifying solutions quickly.

Knowing your AWS Lambda functions’ CPU usage isn’t just critical for diagnosing problems; scaling and cost also factor in. Our profiling data can show you if you need to push up (or pull down!) the CPU allocation for your function, which can help save you money or show you where your AWS Lambda application needs to scale.

Our new feature lets you see CPU profiling of your entire AWS Lambda code base by adding a few lines of code to your functions. On your IOpipe dashboard, you’ll see a download link for a file that, when loaded into Chrome DevTools, presents a flame graph representing the CPU statistics for each label you have set. At a glance, you’ll be able to determine which parts of your AWS Lambda functions take the most time and CPU.

A look at profiling data from IOpipe

Let’s take a look at implementing and using the profiler the best way we know how: by diving in with some code!

Including and configuring the IOpipe profiling plugin

To include the IOpipe profiling plugin in your Node.js AWS Lambda function, `require('@iopipe/profiler')` to load the module, then tell the main IOpipe library in its configuration that you wish to use the plugin. All in all, it’ll look a lot like this:
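A minimal sketch of that wiring, following the usual IOpipe plugin pattern (the token is a placeholder, and your handler will differ; check the plugin README for your version of the library):

```javascript
// Sketch: load the IOpipe agent with the profiler plugin registered.
// The token value below is a placeholder for your project token.
const iopipe = require('@iopipe/iopipe')({
  token: 'YOUR_IOPIPE_PROJECT_TOKEN',
  plugins: [
    // Load the profiler plugin and turn it on (it's off by default).
    require('@iopipe/profiler')({ enabled: true }),
  ],
});

// Wrap your handler so IOpipe (and the profiler) can instrument it.
exports.handler = iopipe((event, context, callback) => {
  callback(null, { statusCode: 200 });
});
```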

There are a few configuration options for the profiling plugin:

enabled — just including the profiling plugin does not activate it. You need to set this to `true` in order to receive profiling data from your AWS Lambda applications. You can also configure an environment variable: $IOPIPE_ENABLE_PROFILING=true

sampleRate — you can fine-tune how often samples are taken for your application; this defaults to a fairly granular 1000 µs.

debug — this option gives you the ability to see the debug logs, in case you want more detail about the profiler’s run. It’s set to `false` by default.
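Put together, an options object covering all three settings might look like this (a sketch: the option names follow the list above, but defaults can vary by plugin version):

```javascript
// The profiler plugin's options, per the list above. Values here are
// illustrative; check the plugin README for your version's defaults.
const profilerOptions = {
  enabled: true,    // off by default; or set $IOPIPE_ENABLE_PROFILING=true
  sampleRate: 1000, // sampling interval in microseconds (default: 1000)
  debug: false,     // print the profiler's debug logs (default: false)
};

console.log(JSON.stringify(profilerOptions));
```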

Once you have included and configured the profiling plugin to run, you should start to see IOpipe profiling data right away! Well, kind of — you’ll start to see download links with your invocations. Let’s walk through opening and deciphering some data!

Viewing the IOpipe profiling data

You should see download links for profiling data after setting up the plugin. To view this data, open Chrome DevTools to the JavaScript Profiler tab (if you don’t see it, click the three vertical dots in the upper right corner — it’s under ‘More Tools’) and import the file by clicking ‘Load’ and finding the .cpuprofile file you just downloaded.

Once you’ve done this, you should see a flame chart breakdown of everything your Lambda function did via CPU stack traces! This can be a lot to process, so let’s break down the basics of the data you get.

What IOpipe profiling data tells you about your AWS Lambda functions

CPU usage over the invocation: A handy chart shows you any spikes in CPU usage in your AWS Lambda function — giving you an overview of your function’s resource utilization over the invocation time.

Flame Chart: This shows you what was executing in your AWS Lambda function, and when. This is great for everything from diagnosing errors to pinning down race conditions!

Call Stacks: You can even drill down into call stacks to see exactly what called what, when! (Sorry, but profiling technology still can’t tell you why.) Need to see exactly what is going on at any instant? This is the place for you! You can even choose between a tree view, which is top-down, and Heavy, which takes a bottom-up approach.

Things to Keep in Mind

IOpipe profiling, just like most CPU profiling, has effects on performance. We highly recommend that you use this in your application strictly as a development-cycle tool, and forgo running it on production-side AWS Lambda functions.
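One lightweight way to follow that advice is to gate the plugin’s `enabled` flag on your deployment stage, so profiling switches itself off in production. A sketch (`STAGE` is a hypothetical environment variable here; substitute whatever stage convention your deployment already uses):

```javascript
// Gate profiling on a deployment-stage variable so it never runs in
// production. STAGE is a hypothetical variable name for illustration.
const stage = process.env.STAGE || 'development';

const profilerOptions = {
  enabled: stage !== 'production', // profile only in dev/staging
};

console.log(`stage=${stage} profiling=${profilerOptions.enabled}`);
```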

Right now, profiling is supported in Node.js only. We’ll make sure to keep you updated as we add this functionality to more languages!

For the moment, this is in beta to customers on paid IOpipe plans. We’ll be announcing more information on rollout as we move forward.

If you’d like to try IOpipe, we’ve got a free trial with no commitment, and you can sign up here. If you want to connect with the team, participate in feedback sessions, or have general questions — find us in our community Slack channel.

---

We’re super excited to be adding this feature — debugging AWS Lambda functions can get tricky, and we’re happy to bring you a tool that can bring clarity and perspective to your development process! Keep an eye out for future developments!