A couple of months ago, in response to someone else's speed claims, I posted a comment that CherryPy's built-in WSGI server could serve 1200 simple requests per second. That demo used Apache's "ab" tool ("-k -n 3000 -c %s"). In the last few days before the release of CherryPy 3.0 final, I've done some further optimization of cherrypy.wsgiserver, and now get 2000+ req/sec on my modest laptop.

threads | Completed | Failed | req/sec | msec/req | KB/sec
     10 |      3000 |      0 | 2170.79 |    0.461 | 358.18
     20 |      3000 |      0 | 2080.34 |    0.481 | 343.26
     30 |      3000 |      0 | 1920.31 |    0.521 | 316.85
     40 |      3000 |      0 | 2051.84 |    0.487 | 338.55
     50 |      3000 |      0 | 2051.84 |    0.487 | 338.55

The improvements are due to a variety of optimizations, including:

Replacing mimetools/rfc822.Message with custom code for reading headers.

Using socket.sendall instead of a socket fileobject for writes.

Generic hand-tuning of code loops.
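
The second item is easy to illustrate. Here's a minimal sketch of the two write patterns, using a local socketpair to stand in for a client/server connection; this is just an illustration of the technique, not CherryPy's actual code:

```python
import socket

# A connected pair of sockets standing in for server and client.
server_sock, client_sock = socket.socketpair()

response = b"HTTP/1.1 200 OK\r\nContent-Length: 19\r\n\r\nMy Own Hello World!"

# Buffered file-object pattern: wrap the socket, write, then flush.
# wfile = server_sock.makefile('wb')
# wfile.write(response)
# wfile.flush()

# sendall() pattern: loops internally until every byte is written,
# skipping the file object's intermediate buffering layer.
server_sock.sendall(response)

received = client_sock.recv(1024)
server_sock.close()
client_sock.close()
```

The saving comes from avoiding the extra buffer copy and flush bookkeeping that the file-object layer adds on every response.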

I want to make it clear that the benchmark does not exercise any part of CherryPy other than the WSGI server. I used a very simple WSGI application (not the full CherryPy stack):

def simple_app(environ, start_response):
    """Simplest possible application object"""
    status = '200 OK'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', '19')]
    start_response(status, response_headers)
    return ['My Own Hello World!']
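
If you want to smoke-test the same app yourself without installing anything, you can run it under the stdlib's wsgiref server (not the cherrypy.wsgiserver used for the numbers above; also note the body becomes bytes under Python 3):

```python
import threading
import urllib.request
from wsgiref.simple_server import make_server

def simple_app(environ, start_response):
    """Simplest possible application object (bytes body for Python 3)."""
    status = '200 OK'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', '19')]
    start_response(status, response_headers)
    return [b'My Own Hello World!']

# Bind to an ephemeral port so the example never collides with a real server.
server = make_server('127.0.0.1', 0, simple_app)
port = server.server_address[1]

# Serve exactly one request on a background thread, then fetch it.
t = threading.Thread(target=server.handle_request)
t.start()
body = urllib.request.urlopen('http://127.0.0.1:%d/' % port).read()
t.join()
server.server_close()

print(body)  # b'My Own Hello World!'
```

wsgiref is far slower than cherrypy.wsgiserver, so don't benchmark with it; it's only handy for verifying the application behaves correctly.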

The full CherryPy stack includes the WSGI application side as well, and consequently takes more time per request. But its throughput has risen from about 380 requests per second in October to:

Client Thread Report (1000 requests, 14 byte response body, 10 server threads):

threads | Completed | Failed | req/sec | msec/req | KB/sec
     10 |      1000 |      0 |  536.86 |    1.863 |  85.36
     20 |      1000 |      0 |  509.47 |    1.963 |  81.01
     30 |      1000 |      0 |  499.28 |    2.003 |  79.39
     40 |      1000 |      0 |  491.90 |    2.033 |  78.21
     50 |      1000 |      0 |  504.32 |    1.983 |  80.19
Average |    1000.0 |    0.0 | 508.366 |    1.969 | 80.832

If you want to benchmark the full CherryPy stack yourself, just install CherryPy and run the script at cherrypy/test/benchmark.py.

Here's the other script for the "bare server" benchmarks: