I am, at best, a fly-by-night sysadmin. I grew to adult nerdhood doing tech support and later admin work in a Windows shop with a smattering of *nix, most of which was attended to by bearded elders locked away in cold, white rooms. It wasn't until I started managing enterprise storage gear that I came to appreciate the power of the bash shell, and my cobbled-together home network gradually changed from a Windows 2003 domain supporting some PCs to a mixture of GNU/Linux servers and OS X desktops and laptops.

Like so many others, I eventually decided to put my own website up on the Internets, and I used the Apache HTTP server to host it. Why? I had an Ubuntu server box sitting in front of me, and Apache was the Web server I'd heard about the most. If Apache was good enough for big sites, it should be good enough for my little static personal site. Right?

But it wasn't quite right for me. Here's why—and what I learned when I spent a weekend ripping out my Apache install and replacing it with a lightweight speed demon of a Web server called Nginx.

Old and busted

Apache was easy to set up. I almost typed "trivially" easy, but going into an Apache setup with nothing more than a plucky attitude and the knowledge that "Apache is some software that hosts websites" means you're going to face a learning curve. Still, after no more than an hour or two of searching Google for help and poking through Apache's conf files, I had a website, and it was on the Internet! A few months later, Ars ran a piece on getting free SSL/TLS certificates. I immediately wanted to try it—not because I had any real need for it, but just to see how certs worked. Less than a day after the piece ran, I had a class 2 wildcard SSL/TLS certificate for my domain, and my Web server was rocking the https.

Things ran well this way for a couple of years, but as I started doing more with the Web server, it became apparent that my setup, while perfectly workable, could be better. In particular, adding Tectonicus (a Minecraft map renderer which generates millions of tiny tiles and stitches them together with a Google Maps-style interface) to the Web server showed me that things were less than optimal. Even over my local network, Apache struggled to serve the map at a suitably snappy pace. The Web server is a dual-core AMD E-350 with 2GB of RAM and a Vertex 2 solid state drive (SSD), and it would serve the site's static images instantly. But the htop tool showed that the Apache processes went CPU-crazy any time the Tectonicus map was being served; both cores shot to 100 percent usage as the screen slowly filled with tiles.

Additionally, I began running a small wiki on the same box. This used Dokuwiki, a wiki server which can be skinned to closely resemble MediaWiki but which stores its data in flat files rather than requiring a database. Dokuwiki requires PHP, a widely used scripting language that runs on a huge number of Web servers around the world, so this meant I needed to install some manner of PHP package into my current setup.

There were many paths to take. Since I had installed Apache on Ubuntu the easy way, by typing "sudo aptitude install apache2," I got what is known as the Apache MPM Prefork version. This is the most commonly installed version of Apache, and it works by launching a number of separate Apache processes to handle Web requests. It does not use multiple threads, but instead parcels work out to child Apache processes (for a good refresher on the difference between a thread and a process, check out this Ask Ars feature on the topic). Prefork is the default Apache installation because Apache is an extensible Web server that can be customized to do all sorts of useful things by adding modules, and some of the modules that people might want to install don't work well when run in a multithreaded fashion.

The drawback to doing everything with processes is that Apache prefork can be a bit of a memory hog, especially under load. Another precompiled flavor of Apache can be installed as an alternative: Apache MPM worker. "Worker" differs from "prefork" in that worker's processes are multithreaded, giving them the ability to service more requests with fewer system resources. This can translate into faster pages served with less RAM and CPU. However, because some Apache modules don't necessarily work well when run under multithreaded Apache, you have to specifically select this version to install on Ubuntu and on other GNU/Linux distros with package management.
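On Ubuntu of that era (the Apache 2.2 packaging), checking which MPM you had and swapping in the worker flavor looked roughly like this. The commands assume the stock apache2 packages; exact package names varied between releases:

```shell
# Show which MPM the installed Apache binary was built with
apache2 -V | grep 'Server MPM'

# Replace the prefork binary with the multithreaded worker flavor
# (the two packages conflict, so installing one removes the other)
sudo aptitude install apache2-mpm-worker
```
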

A bit of searching showed that Apache worker could go a long way toward making Tectonicus serve its tons of tiles faster, but switching would cause some issues with PHP. The built-in Apache PHP module, "mod_php," is one of those modules that can have issues running multithreaded. I was faced with quite a bit of software ripping and replacing to switch from mod_php to a standalone PHP.

A post by Ars forum member Blacken00100, however, pushed me in a new direction entirely. Apache with standalone PHP might prove far less optimal than a lightweight event-driven Web server like Nginx with standalone PHP. My mental wheels began turning. I figured that, so long as I was going to be doing some work, I might as well go all the way and see if I could set up what is widely regarded as the fastest Web server around.

The new hotness

Nginx (pronounced "engine-ex") is a lightweight Web server with a reputation for speed, speed, speed. It differs from Apache in a fundamental way—Apache is a process- and thread-driven application, but Nginx is event-driven. The practical effect of this design difference is that a small number of Nginx "worker" processes can plow through enormous stacks of requests without waiting on each other and without synchronizing; they just "close their eyes" and eat the proverbial elephant as fast as they can, one bite at a time.

Apache, by contrast, approaches large numbers of requests by spinning off more processes to handle them, typically consuming a lot of RAM as it does so. Apache looks at the elephant and thinks about how big it is as it tucks into its meal, and sometimes Apache gets a little anxious about the size of its repast. Nginx, on the other hand, just starts chomping.

The difference is summed up succinctly in a quote by Chris Lea on the Why Use Nginx? page: "Apache is like Microsoft Word, it has a million options but you only need six. Nginx does those six things, and it does five of them 50 times faster than Apache."

Nginx particularly excels at serving static files—like the Tectonicus map tile images. For larger websites, it's often employed as a front-end Web server to quickly dish up unchanging page content, while passing on requests for dynamic stuff to more complex Apache Web servers running elsewhere. However, I was interested in it purely as a fast single Web server.
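Serving a directory of static tiles takes very little Nginx configuration. A minimal sketch of a server block, assuming the Tectonicus map was rendered into /var/www/map (the paths and hostname here are hypothetical, not from my actual config):

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www;

    location /map/ {
        # Rendered tiles never change, so let browsers cache them
        expires 7d;
    }
}
```
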

Like everything else mentioned in this article, Nginx is available from the Ubuntu package repositories with a quick "sudo aptitude install nginx." After stopping Apache, I had Nginx installed in moments. Building further on Blacken's advice, I also installed php5-fpm, a heavily modified PHP package with built-in FastCGI capabilities. Blacken recommended php5-fpm over the older and better-known php5-cgi bundle because fpm can spin PHP processes up or down as server load dictates, which makes it a much smarter and more powerful package; it consumes fewer resources while scaling transparently under load and maintaining speed.
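The swap itself, assuming Ubuntu's stock packages and init scripts, was just a few commands:

```shell
sudo service apache2 stop
sudo aptitude install nginx php5-fpm
sudo service nginx start
```
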

If your needs are simple, like mine, then getting an operational PHP installation with php5-fpm is an easy affair. The main configuration file (/etc/php5/fpm/php-fpm.conf under Ubuntu 11.10) didn't need to be altered at all, while the pool configuration file (/etc/php5/fpm/pool.d/www.conf) only needed some slight adjustment. The pool conf file defines how php5-fpm will accept CGI requests from the Web server; by default, php5-fpm listens on TCP port 9000 for requests from the Web server, but I changed this to use a Unix socket file instead, since running CGI requests through a local TCP port introduces some tiny amount of latency. It likely won't matter unless your website will be churning out lots of pages, but I wanted to do things the "correct" way. Additionally, the pool conf file lets you specify the user and group that the pool processes will run as—it's a good idea to set this to the same user and group that your Web server uses.
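The socket change amounts to a couple of lines in the pool file. A sketch of the relevant entries, assuming Ubuntu 11.10's default "www" pool; the socket path is my choice, not a package default:

```ini
; /etc/php5/fpm/pool.d/www.conf -- relevant lines only

[www]
; Run pool processes as the same account the Web server uses
user = www-data
group = www-data

; Listen on a Unix socket instead of the default TCP port:
; listen = 127.0.0.1:9000
listen = /var/run/php5-fpm.sock
```

On the Nginx side, the matching directive in the location block that handles .php requests is `fastcgi_pass unix:/var/run/php5-fpm.sock;` (pointing at the same socket path).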

Most importantly, the pool conf file lets you define the minimum and maximum number of PHP processes that will be spawned if php-fpm is configured in "dynamic" mode. This lets you start with only one or two active processes to serve PHP requests, but you can tell php-fpm that it's allowed to spawn more processes as needed. The only real limit is the amount of RAM and CPU you have to spare. For my tiny website, I set php-fpm to start with a single process, with the option of spawning up to 10. Finally, the pool conf file lets you specify traditional PHP configuration values, like maximum memory usage, maximum upload size, the location of your sendmail binary, and so on.
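For my tiny site, that translated into pool entries along these lines. The start-at-one and cap-at-ten values are the ones I settled on; the spare-server counts and the per-pool PHP overrides are illustrative values you'd tune to taste:

```ini
; Process management in /etc/php5/fpm/pool.d/www.conf
pm = dynamic
pm.start_servers = 1      ; begin with a single PHP process
pm.max_children = 10      ; never spawn more than ten
pm.min_spare_servers = 1  ; illustrative -- tune for your load
pm.max_spare_servers = 3

; Per-pool overrides of ordinary PHP settings (example values)
php_admin_value[memory_limit] = 64M
php_admin_value[upload_max_filesize] = 8M
php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i
```
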