As a hosting provider, we run hundreds of web servers with varying configurations. Some are tuned for large systems, some for lots of domains and some to be highly resource efficient. The “one size fits all” approach doesn’t work with web technology, simply because the tools and the tasks vary so greatly.

We’re setting up a new production web server for our own site and, as it’s a chance to start fresh, the thought of course turned to “what’s the best web server for our site?”. After looking around at various benchmarks and reviews of the more common web servers, I found that none seemed to have been run in the last few years, and most focussed on thousands of connections to static content. This wasn’t the scenario I wanted to see data on.

So, I set about running a few benchmarks on what I consider to be the top 3 Linux based web servers for a moderately busy site. This is why I’ve labelled the article “Part 1”: I want to cover a variety of scenarios in a few follow-up articles. For this test we’ll be using WordPress; I’ll be testing other platforms in the follow-ups as well.

The Environment and Test Limitations

For these tests, we’ll be using a CentOS 7 based Virtual Private Server (VPS) with 2GB of RAM and 2 CPUs allocated. This is a fairly vanilla entry level system which many of our clients use, making it a great starting point for relevance to our clients (and hopefully others as well). Being somewhat resource limited means that we also need to consider the web server’s resource usage as well as its performance.

WordPress-wise, we’re simply running a vanilla install of WordPress 4.1. We’ve benchmarked WordPress twice in the past (WordPress 4.0 Performance Benchmarking and WordPress 4.1 vs WordPress 4.0 Performance Comparison), so we know the rough performance baseline as well as the performance tweaks required to make WordPress sing. This time around we haven’t installed any plugins, nor made any changes to the base install. We’re also running the MySQL database on the same VPS, which is common for smaller sites. This isn’t what we recommend for large sites with high traffic loads, but again we’re targeting the smaller, more typical deployment scenarios.

We’re also only calling the main page, so while this gives us some general data, it’s not perfectly indicative of actual user performance. Only real world testing can give you that, and it’ll vary significantly depending on the exact site configuration.

The testing is being run from a separate VPS on the same compute node in order to eliminate any network issues. The basic test we’re running uses 5 concurrent connections making 5000 calls to the homepage. Our Apache Benchmark call looks like this:

ab -c 5 -n 5000 http://benchmarks/wordpress/

The 5 concurrent connections are to simulate a moderately busy site. It should also be noted that 5 concurrent connections doesn’t translate to 5 concurrent users. Similarly, requests per second don’t translate directly to users per second. The actual figures, and the translation between them, will vary from site to site and across different platforms. The 5 concurrent connections will however give a reasonable approximation of what I consider to be moderate to high usage for a business website.

Apache 2.4 + mod_php

Versions

PHP 5.4.16

MySQL 5.5 (MariaDB)

Apache/2.4.6 (CentOS)

Even with the latest Apache 2.4, the default PHP configuration is through mod_php. This means that PHP is loaded as a module within Apache and runs as an embedded process. There are of course many pros and cons to running this way. Without going into a full comparison, essentially every Apache process needs to load PHP, whether it needs it or not (e.g. when serving static files).
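For reference, the stock mod_php setup amounts to something like the following (the module path and handler directives are the usual CentOS defaults; treat this as a sketch rather than a drop-in config):

```apache
# PHP embedded in Apache: every httpd worker carries the interpreter,
# whether it's serving PHP or a static file.
LoadModule php5_module modules/libphp5.so

<FilesMatch \.php$>
    SetHandler application/x-httpd-php
</FilesMatch>

DirectoryIndex index.php index.html
```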

Let’s look at our results:

Requests per second:    12.75 [#/sec] (mean)
Time per request:       392.196 [ms] (mean)
Time per request:       78.439 [ms] (mean, across all concurrent requests)

The important figures here are the requests per second and the mean time per request across all concurrent requests. So in our testing config, the vanilla WordPress site is capable of just under 13 Requests Per Second (RPS).

I also use a basic little one-line script (from here) to show the number of Apache processes and the average memory per process. Here’s the script:

ps -ylC httpd --sort:rss | awk '{sum+=$8; ++n} END {print "Tot="sum"("n")";print "Avg="sum"/"n"="sum/n/1024"MB"}'

And the output:

Tot=187024(9)
Avg=187024/9=20.2934MB

In this test scenario, Apache had 9 running processes with an average of 20MB each. Obviously if we were also serving out static content then we’d see more processes and the memory usage would be higher. This is also when running a very basic WordPress instance; if we were running a much more intensive software suite like Magento, we could expect this resource usage to be much higher still. It’s something we intend to cover in one of the future comparisons.

Apache + mod_php + disable Apache modules

One of the major complaints I always hear about Apache is that it’s “bloated”. The downside to having a large amount of functionality built in is that you need to load all of these features when running Apache. Thankfully, Apache is quite modular and we can simply turn off a lot of the unused features.

Note: here be dragons. Turning off features without understanding what they do is a disaster waiting to happen. Do not do this unless you understand the implications.

I’m not going to list all of the modules I disabled, simply because I don’t want this to be a how-to guide or to have others blindly disable features without understanding the implications. I disabled about 20 modules in total, including some of the mod_authz* modules, mod_dbd and similar, because I knew they weren’t being used for this basic site. Here are the results:
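Purely as an illustration (these module names are from a stock CentOS 7 httpd install; which ones are safe to disable depends entirely on your site), disabling a module is just a matter of commenting out its LoadModule line under /etc/httpd/conf.modules.d/ and restarting. Running `httpd -M` lists what’s currently loaded:

```apache
# Commented out after verifying this site doesn't use them:
#LoadModule dbd_module modules/mod_dbd.so
#LoadModule authz_dbd_module modules/mod_authz_dbd.so
#LoadModule lua_module modules/mod_lua.so

# Still needed — WordPress permalinks rely on rewriting:
LoadModule rewrite_module modules/mod_rewrite.so
```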

Requests per second:    12.95 [#/sec] (mean)
Time per request:       385.967 [ms] (mean)
Time per request:       77.193 [ms] (mean, across all concurrent requests)

The performance results are virtually indistinguishable, which is what we expected. Where we may see a difference is in memory usage, since modules otherwise have to be loaded even when they’re not being used. Here’s the result of our little test script:

Tot=168388(11)
Avg=168388/11=14.9492MB

There’s certainly a drop in memory, but since the number of processes also varied, we can only directly compare the totals. The difference is 182MB vs 164MB with some of the modules disabled. A saving of roughly 10% isn’t a big gain, so it would only be worth doing if you have a very large Apache installation or if memory usage is absolutely critical.

Apache + mod_fcgid

Next up was switching Apache to use mod_fcgid to implement a FastCGI call to a separate PHP instance. The tests were re-run and, as the PHP processing was now performed separately, the memory used per Apache process dropped to 2.7MB. Of course, we now have the PHP instances as well, and virtually the same system level memory usage (as measured by running "free" on the server). However, it means that having Apache serve static content is more efficient, as it won’t have to load PHP with each process. Since serving static content is much of what a typical web server does, running PHP separately from Apache makes great sense.
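A minimal mod_fcgid setup looks something like the following sketch (the wrapper path is an assumption — it’s typically a small script that execs php-cgi with any per-site settings):

```apache
LoadModule fcgid_module modules/mod_fcgid.so

# Hand .php requests to external PHP processes instead of mod_php
AddHandler fcgid-script .php
FcgidWrapper /usr/local/bin/php-wrapper .php

<Directory /var/www/html/wordpress>
    # The fcgid-script handler requires ExecCGI to be enabled
    Options +ExecCGI
</Directory>
```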

But what about performance? We're now achieving 13.35 RPS, which is roughly the same as running mod_php.

Apache + PHP-FPM

One of the newer methods of the FastCGI implementation is via PHP-FPM. This has become the “go-to” method for implementing more efficient PHP based systems.

Our result? 13.33 RPS. Memory-wise, it’s roughly the same as both mod_php and mod_fcgid. There are of course other advantages to PHP-FPM which aren’t measured by performance (such as adaptive process spawning), so even if there’s no performance gain for a WordPress site, it’s worth considering.

Nginx + PHP-FPM

This is the most common scenario (nicknamed the LEMP stack) which many recommend as the best fit for high performance PHP based sites. Nginx is certainly a powerful system and was designed to beat the C10k problem. This means it’s designed to handle tens of thousands of concurrent connections without degrading performance. It does this by using events instead of threads, which is a more efficient system at high usage levels. While this isn’t the limitation we’re hitting here, we’ve benchmarked it anyway to see if it provides any performance gains.
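For context, the relevant part of an Nginx server block for this kind of setup looks roughly like the following (the document root and PHP-FPM socket path are assumptions for this test box):

```nginx
server {
    listen 80;
    root /var/www/html/wordpress;
    index index.php;

    location / {
        # Fall back to WordPress's front controller for pretty permalinks
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        # Hand PHP requests off to the separate PHP-FPM pool
        fastcgi_pass unix:/var/run/php-fpm/www.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```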

How does it perform for our scenario? 12.89 RPS. While it's slightly lower than Apache, it's close enough that it's not statistically significant.

Nginx + PHP-FPM + Opcache

As we’ve found from previous WordPress benchmarking, using Opcache to eliminate the need to continually recompile the PHP code made a significant difference to performance. The tests for this scenario also hold true: with Opcache enabled, we can now achieve 35.36 RPS. As we found with our previous benchmarking, the difference is significant. Essentially, we’ve nearly tripled the performance of the system.
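Enabling it is a matter of loading the Zend OPcache extension in PHP’s configuration; a minimal sketch follows (the sizes here are assumptions to tune for your own site):

```ini
; e.g. /etc/php.d/opcache.ini
zend_extension=opcache.so
opcache.enable=1
opcache.memory_consumption=128      ; MB of shared memory for compiled scripts
opcache.max_accelerated_files=4000
opcache.validate_timestamps=1       ; re-check files for changes
```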

If you have your own dedicated VPS, this is clearly a very quick and easy performance gain at no additional cost.

Apache + PHP-FPM + Opcache

To confirm that the above Opcache boost is just as applicable to Apache, we reran the tests with Apache and Opcache enabled using PHP-FPM. The result is 37.13 RPS, so roughly the same performance as the Nginx configuration. Again, regardless of your webserver of choice, this is a great increase.

OpenLiteSpeed

The LiteSpeed Webserver is something I’ve heard plenty about, but never actually used myself. It’s heralded as a lightweight and high performance webserver, which like Nginx is event driven. To test, I used the open source variant called OpenLiteSpeed. According to the LiteSpeed Tech website, the performance features should be the same as the commercial version.

Installation was quite easy and, as an added bonus, there’s a neat web interface to manage everything from. OpenLiteSpeed uses its own PHP build, which needs to be installed separately. We went with PHP 5.4 like the rest of the systems so that it’s directly comparable.

And the performance? We managed 13.35 RPS. This is without opcache running, which means the results are perfectly in line with the Nginx and Apache based systems. Like Nginx, I expect its performance advantages to show in higher-traffic environments, and it’s something we also intend to test further.

HHVM + Nginx

If you haven’t heard of HHVM, it’s essentially a virtual machine which uses Just In Time (JIT) compilation to optimise the executed code. The main team behind it (and who originally released it) is Facebook, who run the world’s largest PHP based application. Optimising a system which handles millions of concurrent users is no small feat, so they should know a thing or two about getting PHP to perform!

One key point to remember is that HHVM isn’t 100% compatible with all PHP code (but it’s improving all the time). Most of the common PHP platforms will certainly work (like WordPress), but you may run into some small problems with custom code or complex systems. Basically, you need to test, and you need to test properly.

Performance wise? Well the numbers certainly tell the story. We can now serve out 93.95 RPS! This is a massive difference and betters the opcache improvements by a factor of nearly 3. We’re now talking an improvement of over 7 times the stock Apache configuration, all without additional hardware.

If your site and code is compatible with HHVM and you want to extract every last bit of performance from your system, this is certainly the way to do it. It’s something we’re going to be watching closely as the project matures!

Conclusion

Right, if you’ve made it this far then well done! There are a few things which need to be reiterated here so that the results are used in the right context.

First is our test scenario: a limited test using WordPress on a small VPS. The results are based on this and only reflect this scenario. You can extrapolate slightly for your environment, but the clear message is: conduct your own testing for your environment.

Under a moderate load with a very basic WordPress install, it’s clear that the underlying web server doesn’t significantly contribute to the performance or the number of users the server can handle. With HHVM, we were able to sustain over 90 requests a second, which translates to over 7 million hits a day if sustained for 24 hours. Even if the site is only busy for 6 hours a day, that’s still nearly 2 million hits. That’s a very busy site! What would be the result if you did nothing? Even at our starting figure of 12 requests a second, you’d be able to sustain a million hits a day (if evenly spread over 24 hours). Here’s a quick table to summarise:

Requests per second   Requests in 6 hours   Requests in 24 hours
10                    216,000               864,000
50                    1,080,000             4,320,000
100                   2,160,000             8,640,000
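These totals are simple arithmetic — sustained requests per second multiplied by the seconds in the window — which a quick shell loop shows:

```shell
# Sustained requests/second -> total requests over 6- and 24-hour windows
for rps in 10 50 100; do
  echo "${rps} rps: $((rps * 3600 * 6)) in 6 hours, $((rps * 3600 * 24)) in 24 hours"
done
```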

When looking at these figures, you'll want to ensure your server can handle the peak load and still have plenty of burstable headroom.

This brings us to our second point: the age old “premature optimisation is the root of all evil” rings true yet again. Work out where your true choke points are and attack those first. If 95% of the load on your server is PHP code compilation, then ensure you’re running opcache; if it’s a mostly static site, then go for full page caching. If your code is compatible, go the whole hog and use HHVM.

Thirdly, we’re regularly told “don’t use Apache” or “Apache is bloated”, with the insinuation that using platforms which are decades old somehow translates into poor performance. This isn’t the case. Conversely, it doesn’t mean that using other web servers such as Nginx or OpenLiteSpeed is a bad thing. Both are highly capable platforms and, being event driven, will certainly scale well. If your site is going to be receiving more than a few hundred thousand hits a day, then you’ll be well served by any of the web servers tested.

Lastly, let us know what we should cover next. Do you want to see more concurrent connections? A larger server? Magento? Just leave a comment below to let us know.