You are viewing the second round of web application framework benchmarks. We have since conducted follow-up rounds that include more community contributions. Check out the new stand-alone framework benchmarks site if you are interested in the latest and most accurate data.

Last week, we posted the results of benchmarking several web application development platforms and frameworks. The response was tremendous. We received comments, recommendations, advice, criticism, questions, and most importantly pull requests from dozens of readers and developers.

On Tuesday of this week, we kicked off a pair of EC2 instances and a pair of our i7 workstations to produce updated data. That is what we're sharing here today. We dive right in with the EC2 JSON test results, but please read to the end where we include important notes about what has changed since last week.

JSON serialization test

In this test, each HTTP response is a JSON serialization of a freshly-instantiated object, resulting in {"message" : "Hello, World!"}. First up is data from the EC2 m1.large instances.
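Conceptually, each framework's JSON test is equivalent to the following sketch (Python is used here purely for illustration; the exact serializer and whitespace formatting vary by framework):

```python
import json

def json_handler():
    # Instantiate a fresh object on every request, then serialize it.
    message = {"message": "Hello, World!"}
    return json.dumps(message)

print(json_handler())  # → {"message": "Hello, World!"}
```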

Dedicated hardware

Here is the same test on our Sandy Bridge i7 hardware.

Database access test (single query)

How many requests can be handled per second if each request is fetching a random record from a data store? Starting again with EC2.
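In rough pseudocode-made-runnable form, each request does something like the sketch below. The in-memory SQLite database, table name, and row count here are illustrative stand-ins, not the actual test infrastructure:

```python
import random
import sqlite3

# In-memory table standing in for the benchmark's data store; the name
# "world" and the 10,000-row size are assumptions for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE world (id INTEGER PRIMARY KEY, randomNumber INTEGER)")
conn.executemany(
    "INSERT INTO world VALUES (?, ?)",
    [(i, random.randint(1, 10000)) for i in range(1, 10001)],
)

def single_query_handler():
    # Fetch one row by a randomly chosen primary key, as each request does.
    row_id = random.randint(1, 10000)
    row = conn.execute(
        "SELECT id, randomNumber FROM world WHERE id = ?", (row_id,)
    ).fetchone()
    return {"id": row[0], "randomNumber": row[1]}
```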

Dedicated hardware

Database access test (multiple queries)

The following tests are all run at 256 concurrency and vary the number of database queries per request. The tests are 1, 5, 10, 15, and 20 queries per request.
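The multiple-query handler is simply the single-query fetch repeated N times per request, as in this illustrative sketch (again using an in-memory SQLite table as a stand-in for the real data store):

```python
import random
import sqlite3

# Stand-in data store; table name and row count are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE world (id INTEGER PRIMARY KEY, randomNumber INTEGER)")
conn.executemany(
    "INSERT INTO world VALUES (?, ?)",
    [(i, random.randint(1, 10000)) for i in range(1, 10001)],
)

def multiple_queries_handler(query_count):
    # One independent random-key fetch per query; the benchmark varies
    # query_count across 1, 5, 10, 15, and 20.
    results = []
    for _ in range(query_count):
        row_id = random.randint(1, 10000)
        row = conn.execute(
            "SELECT id, randomNumber FROM world WHERE id = ?", (row_id,)
        ).fetchone()
        results.append({"id": row[0], "randomNumber": row[1]})
    return results
```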

Dedicated hardware

New & Improved: Now with latency!

At the advice of readers, this round of data was collected using wrk (https://github.com/wg/wrk). In the first round last week, we used weighttp (https://github.com/lighttpd/weighttp). This change accounts for the very slight increase in requests per second (rps) seen in several frameworks, including those that saw no change to their benchmark or library code. Our conjecture is that wrk is just slightly quicker at processing requests.

We didn't switch tools to improve the rps numbers, though. Some readers wanted to see data points that weighttp wasn't providing us. wrk reports latency statistics, including the average, standard deviation, and maximum. For example:

    Making 100000 requests to http://10.253.42.235:8080/
      8 threads and 256 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    10.07ms    7.80ms   73.59ms   77.37%
        Req/Sec     2.99k     1.07k     8.00k    88.42%
      100002 requests in 3.68s, 59.89MB read
    Requests/sec:  27202.70
    Transfer/sec:     16.29MB

The latency information is now available in the results panels above (the rightmost tab in each panel).

The raw wrk output from the latest run is in the GitHub repository.

Additional “stripped” tests

We received community contributions for Rails and Django that removed unused "middleware" components to fine-tune the configuration of these two frameworks to the particular use-case of these benchmarks. We've accepted these contributions but identified them as Django Stripped and Rails Stripped.

We have also retained the original Django and Rails tests (with some other tweaks).

To reiterate the intent of this benchmark exercise: we want to identify the high-water mark of performance one can expect from each framework in real-world applications. Real-world applications will do much more than serialize "Hello, World!" messages and fetch random rows from a simple database table, but we use these simple tests as stand-ins for an application. For that reason, we intentionally did not turn off features that are enabled by default (such as support for HTTP sessions) in our first-round tests.

Still, there is value in demonstrating the degree of increased performance that can be realized by fine-tuning a framework to your application's specific needs. Don't need sessions? What kind of savings can you expect if you turn session support off?
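As an illustration only (this is not the actual contributed configuration), a "stripped" Django settings fragment might comment out the middleware the benchmark never exercises; the setting name below is from the Django versions current at the time:

```python
# Illustrative "stripped" Django settings fragment. A stripped
# configuration removes middleware the benchmark never uses,
# such as session, CSRF, auth, and messages support.
MIDDLEWARE_CLASSES = (
    "django.middleware.common.CommonMiddleware",
    # Removed for the stripped test:
    # "django.contrib.sessions.middleware.SessionMiddleware",
    # "django.middleware.csrf.CsrfViewMiddleware",
    # "django.contrib.auth.middleware.AuthenticationMiddleware",
    # "django.contrib.messages.middleware.MessageMiddleware",
)
```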

We are not yet certain how best to differentiate tests that exercise the framework mostly as provided versus those that fine-tune the configuration for the particular use-case of these benchmarks. For now, we use the "stripped" name suffix.

Revised Environment Details

Images for sharing

Contributions

We are grateful to have received GitHub pull requests and comments from dozens of users: Licenser, th0br0, davidmoreno, Skamander, jasonhinkle, pk11, vsg, knappador, RaphaelJ, chrisvest, dominikgrygiel, jpiasetz, mliberty, nraychaudhuri, bjornstar, shenfeng, bitemyapp, jmgao, larkin, ryantenney, normanmaurer, hlship, burtbeckwith, sashahart, abevoelker, tarndt, skelterjohn, myfreeweb, gleber, sidorares, philsturgeon, patoi, dcousineau, asadkn, BeCreative-Germany, rrevi, goshakkk, tarekziade, julienrf, mitsuhiko, jerem, huntc, alexbilbie, AlReece45, jameswyse, CHH, hassankhan, Nazariy, and onigoetz. A big thank you to all of you!

We have indicated any frameworks that received community review or for which the tests were wholly contributed by the community with a ✓ flag after their name in the results tables. For example: play-scala✓.

About TechEmpower

We provide web and mobile application development services and are passionate about application performance. Read more about what we do.