Welcome to a supplemental edition of our "Web Served" series, a DIY guide on tackling the challenges of setting up and running a Web server for fun. It’s been a while since we last published an entry—so long, in fact, that at some point very soon, I’ll be going back through the series and bringing everything up to date with current versions and commands. But after spending the last weekend tinkering with shifting my personal site over to all-HTTPS, it was just too much fun not to share.

Note that if you’re not the kind of person who thinks screwing around with the command line is fun, this probably isn’t a guide you’re going to be interested in.

Encrypt all the things

The unencrypted Web is on the way out, and that’s a good thing. We’re still making the switch here at Ars—subscribers can use HTTPS today, but we’re still working out the mixed-content kinks for everyone else (the main holdup is handling the ad networks; since subscribers don’t see ads, there’s no holdup there!). But if you’ve followed along with the previous Web Served pieces, you’ve probably got a shiny Nginx instance happily serving up pages, along with an SSL/TLS certificate so that privacy-minded visitors have the option of using HTTPS on your site.

In this guide, we’re going to take things a step further and make everything HTTPS for everyone. At the same time we’re going to start participating in HSTS—that’s "HTTP Strict Transport Security," a way to ensure that your site communicates to your visitors that not only do you support HTTPS, but that you insist on it.
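Under the hood, HSTS is nothing more than a single response header. As a quick sketch (Python here purely for illustration—the header itself is what matters), the value the HAProxy configuration later in this guide attaches to every response breaks down like this:

```python
# Illustrative only: assemble the Strict-Transport-Security header value
# used later in this guide's HAProxy config.
max_age = 365 * 24 * 60 * 60  # one year, expressed in seconds

header_value = "max-age={}; includeSubDomains; preload".format(max_age)
print("Strict-Transport-Security: " + header_value)
# → Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```

The `max-age` directive tells browsers how long (in seconds) to refuse plain-HTTP connections to your site, `includeSubDomains` extends that promise to every subdomain, and `preload` signals that you'd like to be eligible for browsers' built-in HSTS preload lists.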

How will we accomplish this? There are a lot of potential ways, but the way I did it with my personal site (and the way I’m going to describe herein) is by employing "HTTPS termination." In other words, we’re going to stick a reverse-proxy application in front of the Web server to handle the HTTPS part. This winds up being a lot simpler and more flexible than trying to do all-HTTPS with just your Web server’s redirection abilities. So while it may seem a little counterintuitive that adding another app to the stack is the simpler way, trust us: it really, really is.

Dat stack

To start, we’re going to make the same assumptions about software that we’ve made in all the previous Web Served pieces: this is targeted at admins running a Linux-based system with Nginx as the Web server application. You might also have any number of components or applications in line behind Nginx, like php-fpm or WordPress or wilder things. That’s OK; they’ll all benefit.

At home, I’ve also got Varnish Cache sitting in front of my Nginx instance. Varnish is a fast Web caching application that has saved my poor little personal site from some crazy reddit- and Ars-driven traffic storms in the past, but most caching software won’t work with HTTPS traffic. Because the HTTPS negotiation happens between the end-user and Nginx—which sits below Varnish in the stack—all Varnish sees of HTTPS traffic is the encrypted side. You can't cache what looks like an unending string of unique, encrypted nonsense.

The thing we’re bolting on in front of every other application in our Web stack is a little application called HAProxy. HAProxy is best known as a powerful load balancer—you stick it in front of your website and use it to parcel out requests to a bunch of physical Web servers. A bunch of enormous sites on the Internet make heavy use of HAProxy’s ability to spread out and manage traffic (like reddit, for example), but as of version 1.5, HAProxy has also gained the ability to do SSL termination: it can now negotiate and establish HTTPS connections with remote clients on behalf of the actual Web server.

That’s the key: we’re going to install HAProxy, feed it our SSL/TLS certificates, tell it to redirect all HTTP requests to HTTPS, and then point it at our actual Web server as its back-end. Keeping Varnish in the mix is what motivated me to do this on my personal site, since this sidesteps almost all of the problems with caching encrypted content. However, even if you don’t have a cache layer (or if you’re using Nginx’s built-in static asset caching abilities, which are less flexible but also easier to deal with than Varnish’s config language), this guide will still work for you without any problems.
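If it helps to visualize, here's a rough sketch of the stack described above. Varnish is the optional layer (it's in my home setup but may not be in yours); everything below HAProxy speaks plain, unencrypted HTTP:

```
client --HTTP/HTTPS (ports 80/443)--> HAProxy  (terminates SSL/TLS, redirects HTTP to HTTPS)
                                         |  plain HTTP
                                         v
                                      Varnish  (optional cache layer)
                                         |  plain HTTP
                                         v
                                        Nginx  (your actual Web server, with php-fpm,
                                                WordPress, etc. behind it)
```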

So—let’s begin.

Installing HAProxy

I elected to put HAProxy on its own physical server for simplicity's sake, but there's no reason you couldn't do this all on a single box if that’s all you’ve got to work with. Ports 80 and 443 on the HAProxy server will be exposed to the Internet and get both HTTP and HTTPS traffic. All HTTP requests will be given a 301 redirect response to the same URL but with an HTTPS scheme, and then requests will be forwarded to the back-end Web server (your Nginx instance) as plain HTTP. The HAProxy instance handles all of the SSL/TLS connections and Nginx sees everything as plain ol' HTTP.
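Spelled out as a simplified exchange (the hostname, path, and client address here are placeholders), the dance looks like this:

```
# 1. Client asks for a page over plain HTTP (port 80):
GET /some-page HTTP/1.1
Host: www.example.com

# 2. HAProxy answers the redirect directly; Nginx never sees this request:
HTTP/1.1 301 Moved Permanently
Location: https://www.example.com/some-page

# 3. Client retries over HTTPS (port 443). HAProxy terminates the SSL/TLS
#    session and hands the request to Nginx as plain HTTP, tagging on the
#    real client address via X-Forwarded-For:
GET /some-page HTTP/1.1
Host: www.example.com
X-Forwarded-For: 203.0.113.7
```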

The HAProxy packages included with Ubuntu 14.04 LTS aren't anywhere remotely near current, and so we need to add the following PPA before we get going:

sudo add-apt-repository ppa:vbernat/haproxy-1.5

This PPA is provided by the Debian HAProxy team, so it's OK to trust (the PPA here will get you the latest version, but there's also an option for adding a backported stable repo if you'd prefer). After adding the PPA, update your sources and install haproxy:

sudo aptitude update
sudo aptitude install haproxy

The main HAProxy configuration file lives at /etc/haproxy/haproxy.cfg, so pop that open for editing. Here's the configuration I'm using:

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    ssl-default-bind-ciphers EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4
    ssl-default-bind-options no-sslv3 no-tlsv10
    tune.ssl.default-dh-param 4096

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option forwardfor
    option http-server-close
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend yourservername
    bind *:80
    bind *:443 ssl crt /etc/ssl/private/cert1.pem crt /etc/ssl/private/cert2.pem
    acl secure dst_port eq 443
    # "code 301" makes this a permanent redirect, matching the 301 behavior
    # described above (without it, HAProxy defaults to a 302)
    redirect scheme https code 301 if !{ ssl_fc }
    rspadd Strict-Transport-Security:\ max-age=31536000;\ includeSubDomains;\ preload
    rsprep ^Set-Cookie:\ (.*) Set-Cookie:\ \1;\ Secure if secure
    default_backend webservername

backend webservername
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    server webservername 192.168.1.50:80

listen stats *:9999
    stats enable
    stats uri /

Nothing like a gigantic code block to get the blood pumping! Let's break this down. If you need a vim syntax highlight config for HAProxy, you can grab a good one right here.