AWS Lambda Go vs. Node.js performance benchmark: updated 🔥


Horse Racing Neck & Neck. http://www.publicdomainpictures.net/

Just this week, AWS announced Go support for their Lambda service. This is pretty exciting, as Go straddles a great niche between Java and Node.js with regard to type safety, programming model and performance.

The community around Lambda and serverless/FaaS computing in general had already created libraries and frameworks to “shim” Go applications via Node.js, but now support is officially there.

Test Code: Fibonacci sequence

We prepped two really simple Lambda functions. Both calculate a Fibonacci sequence of 30 numbers. That’s it. Yes, this is a very minimal test but that is kind of the point. I explicitly do not want to incorporate any web frameworks or database interactivity through some third party libraries. Those types of benchmarks are great and very relevant, but this one is bare bones.

NOTE: As was pointed out in the comments, the original Node.js code used recursion to calculate the Fibonacci sequence whereas the Go code does not.

This impacts the call stacks and is probably not a fair comparison. With this excellent bit of crowdsourced knowledge, I’ve updated this write-up where applicable. Find the old Node.js code in this Gist.

The Node code:

The Go code:
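The embedded gists aren’t reproduced here, so purely as an illustration: an iterative Go version that prints 30 numbers and then “done” could look something like the sketch below. All names are my own, and this is a sketch of the idea, not the original benchmark code.

```go
package main

import "fmt"

// fib returns the first n Fibonacci numbers, computed iteratively,
// which avoids the deep call stacks of a naive recursive version.
func fib(n int) []int {
	nums := make([]int, n)
	a, b := 0, 1
	for i := 0; i < n; i++ {
		nums[i] = a
		a, b = b, a+b
	}
	return nums
}

func main() {
	// Log the sequence to stdout, then signal completion,
	// mirroring what the benchmark functions do.
	for _, v := range fib(30) {
		fmt.Println(v)
	}
	fmt.Println("done")
}
```

The iterative loop is the key difference from the original recursive Node.js version: each number is produced in constant stack space.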

Both functions take no input and just log the Fibonacci numbers to stdout. After printing 30 numbers, they output the string “done”, wrapped in a format that AWS API Gateway understands.

Test Setup

I deployed both functions with completely standard resource profiles of 128 MB memory. Each function had an API Gateway attached with no authentication, so as not to add any overhead to that part of the request/response cycle. A quick smoke-test HTTP call to both endpoints showed everything was working, and a ~340 ms difference in response time was already noticeable.

However, these requests don’t take into account any “warming up” of the Lambda containers. To account for this, we ran a longer test using loadtest, sending a total of 1,000 requests at 10 requests/second using five concurrent workers, with keepalive turned on.

$ loadtest -c 5 -k -n 1000 --rps 10 https://<api-endpoint>

Test Results: Fibonacci

There is almost zero difference between Node.js and Go in this compute-intensive test. What is interesting is that the recursive version shows how much of an impact recursive calls have on Node’s performance: the execution duration of the recursive Node.js function is almost 10x that of the Go function.

Text updated after adjusting recursive Node.js code

Max requests: 1000

Concurrency level: 5

Agent: keepalive

Requests per second: 10

              Node.js    Node.js (rec.)   Go
Mean latency  76.9 ms    407.8 ms         75.3 ms
50%           73 ms      392 ms           67 ms
90%           95 ms      492 ms           91 ms
95%           101 ms     526 ms           109 ms
99%           201 ms     709 ms           226 ms
100%          630 ms     814 ms           562 ms  (longest request)

For the recursive version (marked rec.), AWS CloudWatch metrics painted a similar picture, with the Node.js code almost 10x slower. But, as mentioned, this is not the case for the non-recursive version.

Node.js execution duration

Go execution duration

Test Code: S3 and Dynamo interaction

As mentioned in the comments, the Fibonacci sequence is nice as a starter but doesn’t really represent a real-world scenario, so I whipped up an extra example. This Lambda function:

1. grabs a ~50 KB image from S3;
2. writes its LastModified timestamp to a DynamoDB table.

This mimics a typical scenario for upload sites or general file processing. The test setup is exactly the same as the Fibonacci test, with just an added S3 bucket and Dynamo table. Both versions use the standard AWS SDK for each language respectively. The Dynamo table has its write capacity pumped up to 1000 units to allow for enough throughput.

The Node.js code:

The Go code:

Test Results: S3 & Dynamo interaction

This is a much clearer result than the former test. It is only at the 99th percentile that the two come closest, and even there they remain far apart; every value below that threshold is in favour of Go. This is where users with high-volume AWS functions could really save money by switching to Go, as their bill could effectively be cut by up to ~40%.

Max requests: 1000

Concurrency level: 5

Agent: keepalive

Requests per second: 10

              Node.js    Go
Mean latency  252.2 ms   109.7 ms
50%           203 ms     91 ms
90%           384 ms     151 ms
95%           478 ms     197 ms
99%           894 ms     435 ms
100%          8103 ms    1133 ms  (longest request)

NOTE: Updated after adding a reader for the io.Reader body in the Go code, as per a remark in the comment section. This had virtually no impact on the results, probably due to excessive S3 caching. Not expected, but hey, what is?

Dynamic vs. Compiled

What’s not really apparent in any of the AWS marketing blurb is that you actually provide AWS Lambda with a precompiled Go binary. AWS does not compile the Go source files for you and this has a couple of consequences.

Firstly, AWS Lambda is not “really” running Go code. Instead, it runs a binary that listens on a specific port and is passed messages in a specific wire format. This is actually pretty good, as it opens up the possibility of AWS adding other compiled languages like Rust or C++ down the road, building on the current Go engine.
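The post doesn’t document that wire format, but conceptually the setup resembles a plain RPC server. Here is a toy stdlib sketch of the idea — this is NOT the actual Lambda protocol, just an analogy using Go’s net/rpc.

```go
package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

// Handler loosely mimics how a compiled Lambda binary behaves:
// it registers a method, listens on a port, and answers messages
// in an agreed-upon format.
type Handler struct{}

func (Handler) Invoke(payload string, reply *string) error {
	*reply = "handled: " + payload
	return nil
}

func main() {
	if err := rpc.Register(Handler{}); err != nil {
		log.Fatal(err)
	}
	// In the analogy, the runtime would dictate which port to use.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	go rpc.Accept(ln)

	// A client plays the role of the Lambda service invoking the handler.
	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	var reply string
	if err := client.Call("Handler.Invoke", "ping", &reply); err != nil {
		log.Fatal(err)
	}
	fmt.Println(reply) // handled: ping
}
```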

Secondly, on a less positive note, having pre-compiled binaries means you cannot use the rather excellent built-in code editor. This mini IDE, courtesy of AWS’s recent Cloud9 acquisition, is top of the class and really makes the Lambda service page feel a bit like JSFiddle or CodePen, but for backend code that could run in production at the touch of a button. I’m a big IntelliJ / WebStorm user, but Cloud9’s stuff is really, really good. 👍

AWS Lambda built in code editor

Conclusion

Go support for AWS Lambda opens up a pretty significant cost saving and performance benefit for those running workloads on Lambda. Exciting stuff will be happening!

If you liked this article, please show your appreciation by clapping 👏 below!

Tim is a product advocate for https://vamp.io, the smart & stress free application releasing for modern cloud platforms.
