Thanks to Konstantin Pavlov from nginx for his contributions to the config used in this article.

nginx (pronounced 'Engine X') has excellent official documentation but putting all the logic together can take a while. An average web app in 2017 might want:

HTTP/2 support in all browsers

For speed! One of the pages on our blog loads in 1.9s on HTTP 1.1. The same page loads in 600ms over HTTP/2.

IPv6 support

If you're working on IoT devices, which often require IPv6.

Load balancing between multiple app servers with automatic failover.

So you can upgrade your app without taking it offline.

A branded 'sorry' page

Just in case you break both the app servers at the same time.

A separate server that handles blogs and marketing content

So you can keep your blog independent of the main app and update it on its own schedule.

Correct proxy headers for working GeoIP and logging.

So your app servers can see the proper origin of browser requests, despite the proxy. Because asking customers for their country when you already know is a waste of their time.

Support for HTML5 Server Sent Events

For realtime streaming.

An A+ on the SSL Labs test

So the users can connect privately to your site.

The various www vs non-www, HTTP vs HTTPS combinations redirected to a single HTTPS site.

This ensures there's only one, secure copy of every resource, for both clarity and SEO purposes.





We encourage you to check out the official nginx docs. However...

Since combining all that logic can take a while, we've put everything together into this guide and a matching GitHub project.

Not just the config, but why we chose the options we did, so you can make sensible choices yourself, and a diagram explaining how the config fits together. You can also go straight to the git repo to get the files - but keep reading, you need more than just nginx to get HTTP/2 working!

But why not

.... PaaS load balancers?

PaaS load balancers are very easy to set up. On the other hand, PaaS load balancers are often much slower than the alternatives, for one or both of these reasons:

They sometimes don't support ECC HTTPS certificates, which increases initial connection time.

They sometimes don't support HTTP/2, which means no multiplexing, no header compression, and a larger text-based format. I.e., HTTP 1.1 slows down pulling down the page.

... HAProxy?

We really like HAProxy and have used it in production for years. But as of writing, HAProxy 1.7 still doesn't fully support HTTP/2. HAProxy can pass traffic on to another server that supports HTTP/2 - causing a lot of confusion on the internet - but HAProxy can't terminate an HTTP/2 connection itself. The most recent update from HAProxy themselves was from November 2016:

Will HTTP/2 be on the roadmap of 1.8 or 2.0?

Yes definitely. But I know too well that a community-driven project cannot have a roadmap, just hints for contributors. Also I am a bottleneck because while I review and integrate code I cannot develop. I thought I would have H2 in 1.6 if you remember :-) So let's say that we'll put a lot of efforts into getting H2 in 1.8. I also count a lot on SPOP to help move some contribs outside of the core and to release some of my time :-)

The person writing is Willy Tarreau, CTO and lead software developer of HAProxy.

So until HAProxy fully supports HTTP/2, your simplest solution is nginx.

... a more complex solution that solves problems larger businesses have?

Once you have regular income from your app and a couple of million users you'll have many other considerations. You'll definitely have more than a couple of app servers and you'll want autoscaling, more complex phased deploys, and all kinds of other fun. But to get there, you'll need your app running on a fast, properly configured web server that doesn't go down for upgrades.





Getting an OS for your load balancer

Oh, and by the way:

The only LTS Linux distro that can serve HTTP/2 right now is Ubuntu 16.04.

RHEL 8, CentOS 8, and Debian 9 will in the future.

So HTTP/2 works on every current browser. For servers, though, you're incredibly limited:

There are two ways browsers and servers negotiate to use HTTP/2: NPN (older) and ALPN (newer). The current version of Chrome needs ALPN.

You will need a server OS that includes OpenSSL 1.0.2 to have ALPN.

Right now, the only LTS server OS that includes OpenSSL 1.0.2 is Ubuntu 16.04. CentOS 8, RHEL 8 and Debian 9 will also have the required version of OpenSSL, but they haven't been announced yet. You could build and maintain OpenSSL 1.0.2 on an older OS, but you really don't have the time to do that.

The nice thing: distros new enough to have OpenSSL 1.0.2 are likely to have a recent nginx (anything newer than 1.9.5 supports HTTP/2). Ubuntu 16.04 has 1.10.0:

apt-get install nginx
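If you want to double-check that the packaged nginx is new enough, you can compare version strings with `sort -V`. A quick sketch - the `1.10.0` here is hardcoded as an example (it's what `nginx -v` reports on Ubuntu 16.04; substitute your own output):

```shell
# HTTP/2 needs nginx newer than 1.9.5. `nginx -v` prints the installed
# version; we hardcode 1.10.0 (the Ubuntu 16.04 package) as an example.
installed='1.10.0'
required='1.9.5'

# sort -V orders version strings numerically; if the required version sorts
# first (or is equal), the installed one is at least as new.
if [ "$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)" = "$required" ]; then
  echo "nginx $installed supports HTTP/2"
fi
```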

Configuring nginx

We've made CertSimple's nginx config with some specific goals:

Keep related configs as includes - this stops us from repeating ourselves and neatly bundles related functionality in a single file: the non-www and www servers need to share HTTPS config, and the app server and the static server need to share the proxy headers. You can also disable features by removing a single include. Don't use HTML5 SSE in your app? Comment it out.

We also use multiple server directives rather than rewrite rules because it's clearer and nginx prefers it.

Here's a diagram of how it all fits together:

Set up our redirect servers

A server, in nginx-speak, is a virtual server that handles connections for a specific server name or IP. These are the pink boxes in the diagram above.

Users might visit our site with or without the www, and with https:// or just http://. So modern sites have four nginx servers: each variant of non-www or www, plaintext or HTTPS. One of those servers does most of the work - we like HTTPS/non-www, since HTTPS is needed for current browsers and non-www is short. The other three servers redirect permanently to the main server.

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name www.example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name www.example.com;
    include conf.d/https.conf;
    return 301 https://example.com$request_uri;
}

server {
    listen 80;
    listen [::]:80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}
```

PS. If you haven't already done it, tell Google Search Console which server you prefer too!

Set up our main HTTP/2 server

Our main server terminates HTTPS and passes a series of locations on to different backend servers via plain HTTP 1.1. A location is just a match on a URL path.

We have a static webserver for blogs and marketing-type content. We use a regex (the ~ character) to send a specific set of paths to that webserver. If you're not super familiar with regexes: we check whether the URL starts with ( ^ ) one of the paths in (/help|/blog|/images/blog) - | is regex for 'or'.

Anything that doesn't match the above goes to our main app server. The http://app refers to the named upstream app you'll see elsewhere - these servers run our main web app.

We also enable an error_page location for 'gateway' errors, i.e. we have a nice-looking page for errors where nginx itself can't connect to the servers. Regular errors - 500s, 404s, 403s, etc. - are handled by the app servers. But gateway errors (502, 503, 504) typically occur when we can't get to the app servers at all - all the app servers have gone down. Users should never see this page, but if they do, we still want it to look like we're in control while we panic.
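You can try that regex outside nginx to see which paths it catches. A sketch using grep -E (the sample paths are made up):

```shell
# The same pattern nginx uses to route to the static webserver: match URLs
# that start with /help, /blog, or /images/blog.
pattern='^(/help|/blog|/images/blog)'

echo '/blog/how-we-scaled'      | grep -E "$pattern"    # matches
echo '/images/blog/diagram.png' | grep -E "$pattern"    # matches
echo '/checkout' | grep -E "$pattern" || echo 'no match: goes to the app server'
```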

Here's our main server:

```nginx
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com;
    include conf.d/https.conf;

    location ~ ^(/help|/blog|/images/blog) {
        proxy_pass http://localhost:8000;
        include conf.d/proxy.conf;
    }

    location / {
        proxy_pass http://app;
        include conf.d/html5-sse.conf;
        include conf.d/proxy.conf;
    }

    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }

    include conf.d/error-page.conf;
}
```

Set up HTTPS!

Both our HTTPS www-redirector and our main site need HTTPS, which we include via conf.d/https.conf :

```nginx
ssl_certificate /etc/https/cert-and-intermediate.pem;
ssl_certificate_key /etc/https/private-key.pem;

ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;

ssl_dhparam /etc/https/dhparam.pem;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
ssl_prefer_server_ciphers on;

add_header Strict-Transport-Security max-age=15768000;

ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/https/root-and-intermediate.pem;
resolver <IP of your DNS server>;
```

As the file itself notes, you should immediately visit the Mozilla TLS Generator to get the latest cipher suites and TLS versions - these change over time as weaknesses are discovered in older crypto mechanisms.

Updated the file yet? Good.

You'll also combine the files you get from your certificate provider to make the various PEM files. Just concatenate them (using the Unix cat command - yes, that's why it's called 'cat') or a text editor. For the chain file nginx serves, order matters: your server certificate comes first, then the intermediate. The -----BEGIN CERTIFICATE----- headers (RFC 7468) mark where each one starts.
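As a sketch, with hypothetical filenames (the stand-in contents below are just so the example runs; your real files are PEM certificates from your CA):

```shell
# Stand-ins for the files your CA sent you:
echo 'SERVER CERT'       > example.com.crt
echo 'INTERMEDIATE CERT' > intermediate.crt

# Build the chain file nginx serves: server certificate first, then
# the intermediate.
cat example.com.crt intermediate.crt > cert-and-intermediate.pem
```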

BLATANT PLUG

Speaking of HTTPS: we're CertSimple, a startup focused on EV HTTPS. If it's important to prove a real business controls your website, we're really good at that.

END BLATANT PLUG

Add necessary proxy headers

For the proxied locations we also include conf.d/proxy.conf, which adds the necessary headers for GeoIP and proper logging. Without these, it would look like all the users were connecting from the load balancer! Here's proxy.conf:

```nginx
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $remote_addr;
```
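One hedged note: if requests might reach this load balancer through another proxy or CDN in front of it, nginx's built-in $proxy_add_x_forwarded_for variable appends $remote_addr to any existing X-Forwarded-For header instead of overwriting it. A possible variant of that last line:

```nginx
# Append to, rather than replace, an X-Forwarded-For header set by an
# upstream proxy or CDN
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
```

If nothing sits in front of your load balancer, the $remote_addr version above is fine - and harder for clients to spoof.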

Handle HTML5 SSE

Pretty much everything we make these days needs realtime, and we find HTML5 SSE simpler than websockets. However, you'll need to tell nginx to handle them, which we do via html5-sse.conf:

```nginx
proxy_set_header Connection '';
proxy_http_version 1.1;
chunked_transfer_encoding off;
proxy_buffering off;
```
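To see why buffering has to be off, it helps to look at what an SSE response actually is: a long-lived response made of 'data:' lines, each event terminated by a blank line, which the browser should receive the moment the app sends it. A sketch of a single event on the wire (the payload is made up):

```shell
# Print one SSE event as it appears on the wire: a "data:" line followed by
# a blank line. If nginx buffered this, the browser wouldn't see the event
# until the buffer filled or the response ended.
printf 'data: %s\n\n' '{"deploys": "done"}'
```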

Before you deploy your new load balancer

Get a metric for how long your current pages take to load!

Test in Chrome, since you want to make sure ALPN works. Start an incognito window and open Developer Tools. Since HTTP/2's multiplexing affects nearly every part of the connection, check the Network tab and the Load stat on the bottom right. Run a few reloads and record the average.

Review your config

Don't just blindly copy lines without understanding them. Don't need a feature on a particular server or location? Remove it!

Test your config without restarting nginx

Just run:

nginx -t

This runs nginx's own config tester.

Migrating from an old load balancer to a new load balancer on a new IP?

If you're switching from an old load balancer to a new one, you'll need to plan things.

First, test it out on a local machine: modify the hosts file on a testing box to point to the new server, and test from that machine before you do anything else. Check the different backends, HTML5 SSE and websocket performance, and anything else that could break.
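For example, an entry like this in /etc/hosts on the testing box (203.0.113.20 stands in for your new load balancer's IP) makes that one machine resolve your site to the new server while the rest of the world still sees the old one:

```
# /etc/hosts on the testing box - IP is a placeholder
203.0.113.20    example.com www.example.com
```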

Look good? Revert your hosts file and let's plan a proper cutover.

Your migration will be performed by changing DNS to point to the new server. Of course DNS records are cached, so go check out the current TTL (cache time) on the DNS record for your load balancer.

Take note of the current TTL value. Say it's currently 24 hours: this means you'll have to wait at least 24 hours for old cached DNS to expire.

Modify the TTL to something low - like 15 seconds - so you can easily swap back if you need to.

Then, once the original TTL - e.g., 24 hours in this case - has expired and all DNS clients are using the new, low-TTL record, update the DNS entry to point to the new server.
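You can check the TTL with dig: the second column of an answer line is the remaining TTL in seconds. The record below is a made-up example of what `dig +noall +answer example.com A` might print, with the TTL extracted by awk:

```shell
# A sample `dig +noall +answer` line (hypothetical record and IP):
answer='example.com.  86400  IN  A  203.0.113.10'

# Field 2 is the remaining TTL in seconds; 86400 s is 24 hours.
ttl=$(echo "$answer" | awk '{print $2}')
echo "TTL: ${ttl}s ($((ttl / 3600)) hours)"
```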

Enjoy the period of heightened awareness. If anything goes wrong, you can flip DNS back and have clients pointing at your old load balancer in 15 seconds (or whatever the new TTL is).

Now you're deployed

Scan your site with the SSL Labs scan. An A+ should be easily achievable. Fix anything that needs fixing.

Open an Incognito window, start DevTools, and add the 'Protocol' column:

Yes, h2 is HTTP/2. No, we don't know why.

Now the fun part: check the load times of those URLs you measured earlier. We've had 3x speedups across a bunch of pages since moving to HTTP/2.