If you have any comments about this round, please post at the Framework Benchmarks Google Group.

Frameworks flagged with an icon are part of the TechEmpower Performance Rating (TPR) measurement for hardware environments. The TPR rating for a hardware environment is visible on the Composite scores tab, if available.

Each framework's peak performance in each test type (shown in the colored columns below) is multiplied by the weights shown above. The results are then summed to yield a weighted score. Only frameworks that implement all test types are included.
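To make that arithmetic concrete, here is a minimal sketch of the weighted-sum computation in Python. The weight values and test-type names below are illustrative placeholders, not the published weights, which accompany each official round.

```python
# Illustrative sketch of the composite scoring described above.
# These weights are hypothetical placeholders, not the published values.
WEIGHTS = {
    "json": 1.0,
    "single-query": 1.0,
    "multiple-queries": 1.0,
    "fortunes": 1.0,
    "updates": 1.0,
    "plaintext": 1.0,
}

def composite_score(peak_by_test: dict) -> float:
    """Weighted sum of a framework's peak result in each test type.

    Raises KeyError if a test type is missing, mirroring the rule that
    only frameworks implementing all test types receive a score.
    """
    return sum(weight * peak_by_test[test] for test, weight in WEIGHTS.items())
```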

Environment scores are only available for rounds or ad-hoc runs that include the full suite of TPR-tagged frameworks. If any TPR-tagged frameworks are missing from a run, we aren't able to compute a fair environment score. This run is missing the following TPR-tagged frameworks:

TPR-3 is a composite score for a hardware environment, derived from all test types for TPR-tagged frameworks.

The results rendered here were generated using commit ID . Although this commit does not correspond to an official round, the composite weights from the most recent official round are used below as a placeholder.

Frameworks and test implementations change over time, and our composite score weights are computed only for official rounds (e.g., " "). For the composite scoring shown below to be meaningful, results should be gathered using implementation versions that correspond to the most recent official round.

For a more detailed description of the requirements, see the Source Code and Requirements section.

In this test, the framework responds with the simplest of responses: a "Hello, World" message rendered as plain text. The size of the response is kept small so that gigabit Ethernet is not the limiting factor for all implementations. HTTP pipelining is enabled and higher client-side concurrency levels are used for this test (see the "Data table" view).
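For illustration, the sketch below is a conforming plaintext endpoint built on nothing but Python's standard library. It is a minimal stand-in rather than a competitive implementation; real entries use their own framework's routing and server stack.

```python
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class PlaintextHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # persistent connections, needed for pipelining

    def do_GET(self):
        body = b"Hello, World!"
        self.send_response(200)  # also emits Server and Date headers
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("", 8080), PlaintextHandler).serve_forever()
```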

For a more detailed description of the requirements, see the Source Code and Requirements section.

In this test, each response is a JSON serialization of a freshly-instantiated object that maps the key "message" to the value "Hello, World!".
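A sketch of the essential handler logic in Python; the routing and response plumbing of a real framework are assumed and omitted here.

```python
import json

def json_handler() -> tuple[str, bytes]:
    # The object must be instantiated per request; returning a cached,
    # pre-serialized response would not satisfy the test's intent.
    payload = {"message": "Hello, World!"}
    return "application/json", json.dumps(payload).encode("utf-8")
```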

For a more detailed description of the requirements, see the Source Code and Requirements section.

Whitespace is optional and may follow the framework's best practices.

In this test, the framework's ORM is used to fetch all rows from a database table containing an unknown number of Unix fortune cookie messages (the table has 12 rows, but the code cannot have foreknowledge of the table's size). An additional fortune cookie message is inserted into the list at runtime and the list is then sorted by message text. Finally, the list is delivered to the client using a server-side HTML template. The message text must be considered untrusted and properly escaped, and the UTF-8 fortune messages must be rendered properly.
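The sketch below walks through those steps in plain Python, with the ORM fetch and template engine replaced by simple code for brevity; the extra fortune's id and text follow the project's conventions but should be checked against the requirements.

```python
import html

def fortunes_page(rows):
    # rows: (id, message) tuples fetched via the ORM; their count must
    # not be assumed. Add one fortune at runtime, then sort by message.
    fortunes = list(rows) + [(0, "Additional fortune added at request time.")]
    fortunes.sort(key=lambda f: f[1])
    # Escape every message as untrusted input; bodies are UTF-8.
    cells = "".join(
        f"<tr><td>{fid}</td><td>{html.escape(msg)}</td></tr>"
        for fid, msg in fortunes
    )
    return ("<!DOCTYPE html><html><head><title>Fortunes</title></head>"
            f"<body><table>{cells}</table></body></html>")
```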

For a more detailed description of the requirements, see the Source Code and Requirements section.

In this test, each request is processed by fetching multiple cached objects from an in-memory database (the cache having been populated from a database table either as needed or prior to testing) and serializing these objects as a JSON response. The test is run multiple times: testing 1, 5, 10, 15, and 20 cached object fetches per request. All tests are run at 512 concurrency. Conceptually, this is similar to the multiple-queries test except that it uses a caching layer.
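A sketch of the per-request logic, with a plain dictionary standing in for the in-memory cache; the table size and column names here mirror the query tests' reference schema and are assumptions.

```python
import json
import random

# Process-local stand-in for the cache, assumed to be populated from the
# database table before the test begins.
CACHE = {i: {"id": i, "randomNumber": random.randint(1, 10000)}
         for i in range(1, 10001)}

def cached_queries(count: int) -> bytes:
    # Fetch `count` cached objects by random id and serialize the list.
    rows = [CACHE[random.randint(1, 10000)] for _ in range(count)]
    return json.dumps(rows).encode("utf-8")
```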

For a more detailed description of the requirements, see the Source Code and Requirements section.

In this test, each request is processed by fetching multiple rows from a simple database table and serializing these rows as a JSON response. The test is run multiple times: testing 1, 5, 10, 15, and 20 queries per request. All tests are run at 512 concurrency.
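A sketch of the per-request logic, using sqlite3 as a stand-in for the real database driver; the World table name, id range, and randomNumber column follow the project's reference schema.

```python
import json
import random
import sqlite3

def multiple_queries(conn: sqlite3.Connection, count: int) -> bytes:
    # One round trip per object: the N fetches may not be collapsed
    # into a single IN (...) query.
    rows = []
    for _ in range(count):
        rid = random.randint(1, 10000)
        (number,) = conn.execute(
            "SELECT randomNumber FROM World WHERE id = ?", (rid,)
        ).fetchone()
        rows.append({"id": rid, "randomNumber": number})
    return json.dumps(rows).encode("utf-8")
```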

For a more detailed description of the requirements, see the Source Code and Requirements section.

In this test, each request is processed by fetching a single row from a simple database table. That row is then serialized as a JSON response.
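The essence of that handler, again with sqlite3 standing in for the real driver and the reference World schema assumed:

```python
import json
import random
import sqlite3

def single_query(conn: sqlite3.Connection) -> bytes:
    # Pick a random row id (ids 1..10000 in the reference schema),
    # fetch that row, and serialize it as JSON.
    rid = random.randint(1, 10000)
    (number,) = conn.execute(
        "SELECT randomNumber FROM World WHERE id = ?", (rid,)
    ).fetchone()
    return json.dumps({"id": rid, "randomNumber": number}).encode("utf-8")
```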

You are viewing a partial set of results. This run is incomplete.

In the following tests, we have measured the performance of several web application platforms, full-stack frameworks, and micro-frameworks (collectively, "frameworks"). For more information, read the introduction, motivation, and latest environment details.

If you are hosting a results.json file somewhere public and would like to share a visualization of it, paste its URL below to generate a shareable visualization URL.

Feedback has been continuous and we plan to keep updating the project in several ways, such as:

We expect that every framework's tests could be improved with community input. For that reason, we are extremely happy to receive pull requests from fans of any framework. We would like our tests for every framework to perform optimally, so please join in.

View the latest results from Round 19. Or check out the previous rounds.

Ready to see the results of the latest round?

In a March 2013 blog entry, we published the results of comparing the performance of several web application frameworks executing simple but representative tasks: serializing JSON objects and querying databases. Since then, community input has been tremendous. We—speaking now for all contributors to the project—have been regularly updating the test implementations, expanding coverage, and capturing results in semi-regular updates that we call "rounds."

Note: We use the word "framework" loosely to refer to platforms, micro-frameworks, and full-stack frameworks.

This is a performance comparison of many web application frameworks executing fundamental tasks such as JSON serialization, database access, and server-side template composition. Each framework is operating in a realistic production configuration. Results are captured on cloud instances and on physical hardware. The test implementations are largely community-contributed and all source is available at the GitHub repository.

Current and Previous Rounds

Round 19 — Representing the result of processing over 4,600 pull requests at GitHub, Round 19 also introduces composite scores (scores reflecting the results from all test types) and hardware environment performance ratings.

Round 18 — This round included several requirements clarifications, such as specifying how often implementations must recompute the response Date header, as well as stricter validation.

Round 17 — Another Continuous Benchmarking run promoted to an official round, Round 17 now includes 179 frameworks. In this round, we permitted Postgres query pipelining, which has created a stratification of database tests. We are optimistic that over time, more test implementations will be able to leverage this capability.

Round 16 — Now Dockerized and running in a new 10-gigabit hardware environment, Round 16 of the Framework Benchmarks project brings new performance highs and increased stability.

Round 15 — The project exceeded 3,000 stars on GitHub and has processed nearly 2,500 pull requests. Continuous benchmarking results are now available on the Results dashboard.

Round 14 — Adoption of the mention-bot from Facebook has proven useful in notifying project participants of changes to their contributions. Continuous benchmarking provided a means for several community previews in this round, and we expect that to continue going forward. Note that this round was conducted only on physical hardware within the ServerCentral environment; tests on the cloud environment will return for Round 15.

Round 13 — Microsoft's ASP.NET team delivers the most impressive improvement we've seen in this project—an 85,000% increase in plaintext results for ASP.NET Core—making it a top-performing framework at the fundamentals of HTTP request routing. Round 13 also sees new hardware and cloud environments from ServerCentral and Microsoft Azure.

Round 12 — Marking the last round on the Peak environment, Round 12 sees some especially high Plaintext scores.

Round 11 — 26 more frameworks, three more languages, and the volume cranked to 11.

Round 10 — Significant restructuring of the project's infrastructure, including a re-organized directory structure, integration with Travis CI for rapid review of pull requests, and the addition of numerous frameworks.

Round 9 — Thanks to the contribution of a 10-gigabit testing environment by Peak Hosting, the network barrier that frustrated top-performing frameworks in previous rounds has been removed. The Dell R720xd servers in this new environment feature dual Xeon E5-2660 v2 processors and illustrate how the spectrum of frameworks scale to forty processor cores.

Round 8 — Six more frameworks contributed by the community bring the total count to 90 frameworks and 230 permutations (variations of configuration). Meanwhile, several implementations have been updated, and the highest-performance platforms jockey for the top spot on each test's charts.

Round 7 — After a several-month hiatus, another large batch of frameworks has been added by the community. Even after consolidating a few, Round 7 counts 84 frameworks and over 200 test permutations! This round was also the first to use a community-review process. Future rounds will see roughly one week of preview and review by the community prior to public release here.

Round 6 — Still more tests were contributed by the developer community, bringing the number of frameworks to 74! Round 6 also introduces a "plaintext" test type that exercises HTTP pipelining and higher client-side concurrency levels.

Round 5 — The developer community comes through with the addition of ASP.NET tests ready to run on Windows. This round is the first with Windows tests, and we seek assistance from Windows experts to apply additional tuning to bring the results to parity with the Linux tests. Round 5 also introduces an "update" test type to exercise ORM and database writes.

Round 4 — With 57 frameworks in the benchmark suite, we've added a filter control allowing you to narrow your view to only the frameworks you want to see. Round 4 also introduces the "Fortune" test to exercise server-side templates and collections.

Round 3 — We created this stand-alone site for comparing the results data captured across many web application frameworks. Even more frameworks have been contributed by the community, and the testing methodology was changed slightly thanks to enhancements to the testing tool, wrk.

Round 2 — In April, we published a follow-up blog entry named "Frameworks Round 2" where we incorporated changes suggested and contributed by the community.

Round 1 — In a March 2013 blog entry, we published the results of comparing the performance of several web application frameworks executing simple but representative tasks: serializing JSON objects and querying databases. The community reaction was terrific. We are flattered by the volume of feedback. We received dozens of comments, suggestions, questions, criticisms, and most importantly, GitHub pull requests at the repository we set up for this project.

Unofficial Results

We operate a continuously running benchmarking environment. You can see unofficial results as they are collected at the TFB Results Dashboard.

Join the conversation