
If you’ve got a slew of different applications running on your home network, it might be time to add a reverse proxy. What is a reverse proxy? It lets you access your services at a nice, easy-to-remember URL rather than an IP address and port. For example, instead of accessing Home Assistant at http://192.168.1.2:8123 I can type https://homeassistant.example.com. In today’s article, on top of creating a reverse proxy, we’ll also add HTTPS support via Let’s Encrypt. This gives us a secure connection on our LAN, so when we connect to an application we know no one is listening on our network. Maybe a bit overkill, but it does give you the nice green badge in your browser too.

If you’re like me, you’re a bit wary about forwarding ports on your router to your local network. I run lots of different services on my network and don’t want them exposed via the reverse proxy to the internet. I’m okay using VPN (or WireGuard) to connect to my network to use my application. Therefore, I wanted to get HTTPS working without having to open any ports on my router. The solution: DNS validation!

To summarize, my requirements when I started this project were:

- Access my services at subdomains like plex.example.com and homeassistant.example.com

- Keep them available only on my local network

- Open no ports on my router for validation or usage; everything done without port forwarding

- Send everything over HTTPS

- Have every machine on the network know where to find example.com automatically

To reiterate, this does not allow you to access your services from outside your network. Check out Nabu Casa ($5/month) to access Home Assistant remotely, or look into setting up WireGuard/VPN (coming in a later article). The HTTPS support, in this case, is just to secure data being transferred on your local network.

The Prerequisites

So to start off, we need a few things. You need a domain name that you own that you can use for your network. If you’re not aware, there is a .network TLD, so a great suggestion would be yourname.network or yourlastname.network. For the purposes of this article, I’ll be using example.com, so when you see that, replace it with the domain name you own. I have always used NameCheap for my domains, but use whatever provider you like.

The next thing you need is an account on Digital Ocean. This can be a free account; we won’t actually be running any VPS services. We’ll just be using Digital Ocean’s DNS services to perform the HTTPS challenge. Digital Ocean has a fully featured API, so it’s easy to automate the entire process.

Next, you need something that is running dnsmasq on your local network. This is to modify your LAN’s DNS settings so that anyone on your network trying to access https://example.com is routed to the server on your network instead of an external site. If you’re running Pi-hole on your local network, it uses dnsmasq underneath so you’ll be good to go. If not, do yourself a favor and go check out that project.

Finally, you’ll need a machine that can run Docker containers. We’ll be using a nicely done prebaked image that makes setup easy. I’m going to be using docker-compose as well but that’s optional.

Account Setup

Whatever domain name registration company you decided on, you need to modify the settings so that they point to Digital Ocean’s domain name services. Don’t worry, this won’t affect any other domains you have with the company; it can be done on a domain-by-domain basis. Digital Ocean has a great guide on how to do this for popular registrars like NameCheap, GoDaddy, HostGator and others. It essentially boils down to changing the name servers to ns1.digitalocean.com, ns2.digitalocean.com, and ns3.digitalocean.com.

Back in Digital Ocean, add your domain by logging in, clicking “Create” in the top right, and choosing “Domains/DNS”. Enter your domain and click “Add Domain”.

Your domain will be added to Digital Ocean’s DNS services now and all the records can be handled through Digital Ocean. Next, we need our API token for accessing Digital Ocean programmatically. Click the “API” tab on the left side of the screen. Click “Generate New Token” and give it a name.

Your token will be shown; make sure to copy it and set it aside for the moment. We’ll need it soon, and for security purposes Digital Ocean only displays the token when you first create it.

Setting Local DNS Records

We don’t have the reverse proxy running yet, but when we do we’ll want to access it by typing in something like https://example.com in your browser. So how can we tell all our machines on our network to use the local reverse proxy for example.com instead of going out to the internet and trying to resolve it? You may have heard of editing your hosts file to tell your computer the domain goes to a specific IP address. This would work fine, but is a hassle to do on all the machines on your network. And if the IP address changes, a real pain to go and update everything again.

This is where the dnsmasq instance that comes with Pi-hole comes in handy. First, SSH into the device that’s running Pi-hole.

Create a new file by running the following:

```shell
sudo nano /etc/dnsmasq.d/04-pihole-dns-reverse-proxy.conf
```

Next, add a single line that tells dnsmasq to resolve the domain to the IP address of the machine that will be running the reverse proxy Docker container. You don’t want the IP address of Pi-hole (unless they are on the same machine); you want the IP address of where you plan on running the reverse proxy container. Your file should look something like this:

```
address=/example.com/192.168.1.2
```

This tells dnsmasq and Pi-hole to direct all lookups for that domain to your local server instead of trying to find it on the internet. Save and exit the file, then run pihole restartdns to apply the change in Pi-hole.
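The override line above can be generated from variables so it’s easy to adapt; a minimal sketch, assuming the example domain and IP used in this article (the commented commands show how you would install and verify it on the Pi-hole host):

```shell
# Build the dnsmasq override line (example values -- substitute your own).
DOMAIN="example.com"
PROXY_IP="192.168.1.2"   # IP of the machine that will run the reverse proxy
LINE="address=/${DOMAIN}/${PROXY_IP}"
echo "$LINE"

# On the Pi-hole host you would then write it out and reload:
#   echo "$LINE" | sudo tee /etc/dnsmasq.d/04-pihole-dns-reverse-proxy.conf
#   pihole restartdns
# Verify from any machine on the network (should print 192.168.1.2):
#   dig +short example.com @<pihole-ip>
```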

Running the Container

Now it’s time to actually start the reverse proxy server. We’re going to use a Docker image maintained by the LinuxServer.io folks called letsencrypt. You can see it on Docker Hub. This image uses Nginx for the reverse proxy. While there are probably simpler reverse proxy applications, I like Nginx because you’re never going to outgrow it. There are a ton of people using Nginx in production environments, and a ton of documentation and example snippets available online for loads of different services.

First, I made a new directory called docker-reverse-proxy for the configuration files needed for the container. In this new folder, create a docker-compose.yml file with the following contents.

```yaml
---
version: '3'
services:
  letsencrypt:
    image: linuxserver/letsencrypt
    cap_add:
      - NET_ADMIN
    volumes:
      - ./config/letsencrypt:/config
    environment:
      - TZ=America/Chicago
      - PUID=1000
      - PGID=1000
      - EMAIL=youremail@example.com
      - URL=example.com
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN=digitalocean
    ports:
      - 80:80
      - 443:443
    restart: unless-stopped
```

Breaking down the interesting parts:

We’re going to base the container off of the linuxserver/letsencrypt image. This image runs the reverse proxy server (using Nginx) and does the HTTPS validation (using Let’s Encrypt). There is a cron job in the container to keep the certificate up to date.

We’re going to mount a config directory on our host into the container. More to come about that in a second.

A few environment variables to set:

- TZ: your local timezone; there is a list of standard timezone entries on Wikipedia.

- PUID and PGID: the user ID and group ID of the user running the container. You can find these by running the id command.

- EMAIL: needed for the certificate generation.

- URL: the domain you control.

- SUBDOMAINS: set to wildcard. This generates a wildcard certificate so all our subdomains are covered by the same certificate. That way, you don’t need separate certificates for homeassistant.example.com and plex.example.com; you manage it all in one place.

- VALIDATION: set to dns, with DNSPLUGIN set to digitalocean, to tell the container how to perform the validation.

Expose ports 80 and 443 from the container. HTTPS traffic is done over port 443 and HTTP traffic is over port 80.

Set the restart policy to unless-stopped. This will make the container start whenever your Docker daemon starts, unless you explicitly stop the container.
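The PUID/PGID values mentioned above can be pulled straight from the id command; a quick sketch you could run on the Docker host before filling in the compose file:

```shell
# Look up the numeric user and group IDs of the current user,
# to use for the PUID and PGID environment variables.
PUID=$(id -u)
PGID=$(id -g)
echo "PUID=${PUID} PGID=${PGID}"
```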

API Key Configuration

So in the last section, we talked about a configuration directory getting mounted into the container. Before starting the container, let’s make that directory and add the Digital Ocean credentials file. We need a directory structure like this for the above docker-compose file to work.

```
.
├── config
│   └── letsencrypt
│       └── dns-conf
│           └── digitalocean.ini
└── docker-compose.yml
```

So essentially, at the same level as the docker-compose.yml file, make a config/letsencrypt/dns-conf directory. Inside that directory there should be a single file named digitalocean.ini. In that file, paste the API token you generated earlier.

```
dns_digitalocean_token = 1234567890987654321abcdef12345fedcba
```
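The layout above can be created in one go. A sketch, using the placeholder token from this article (substitute your real token; the chmod is my own addition, since the token grants full API access):

```shell
# Create the expected config layout next to docker-compose.yml.
mkdir -p config/letsencrypt/dns-conf

# Write the Digital Ocean API token (placeholder value shown).
cat > config/letsencrypt/dns-conf/digitalocean.ini <<'EOF'
dns_digitalocean_token = 1234567890987654321abcdef12345fedcba
EOF

# The token grants full API access, so keep the file private.
chmod 600 config/letsencrypt/dns-conf/digitalocean.ini
```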

Now we can start the container by running docker-compose up letsencrypt. You should see some logging from the container as the certificate is generated and the DNS challenges are performed, using the Digital Ocean API, to prove that you own the domain. Once that’s done, navigate to https://example.com in your browser; you should see the container’s default welcome page.

Congrats! You now have a working nginx reverse proxy server. You should see a green check box in your browser indicating that the page was served over HTTPS and is encrypted. Now we need to get some working subdomains.

Subdomain Configuration

One reason I like this Docker image is that it comes with a ton of sample subdomain configurations for popular applications like Home Assistant, Plex, Sonarr, Radarr, Deluge and more. They all follow the same general approach to getting them configured and working. When you started the docker container, you might have noticed a whole bunch of new files got populated in that configuration directory. If you look at the config/letsencrypt/nginx/proxy-confs directory you’ll see various sample reverse proxy configuration files.

The team at LinuxServer.io has really done a great job on documenting each subdomain configuration. The general flow is:

1. Rename the conf file to remove the .sample at the end

2. Open the file, read the instructions at the top, and make the necessary config changes

3. Restart the Docker container
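Those steps boil down to a couple of shell commands. A sketch using Grafana as the example (the touch line just stands in for the sample file the image ships, so the sketch is self-contained):

```shell
# Enable a sample subdomain config; grafana is used as the example here.
CONF_DIR="config/letsencrypt/nginx/proxy-confs"
mkdir -p "$CONF_DIR"   # normally populated on the container's first start

# Stand-in for the sample file the image provides.
touch "$CONF_DIR/grafana.subdomain.conf.sample"

# Step 1: copy the sample, dropping the .sample suffix.
cp "$CONF_DIR/grafana.subdomain.conf.sample" "$CONF_DIR/grafana.subdomain.conf"

# Step 2: edit the new file per the instructions at its top, then
# Step 3: restart the container:
#   docker-compose restart letsencrypt
```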

These subdomain configuration files need to know the IP address and port where the service is running, so that it can route traffic correctly. There are a couple of ways for nginx to resolve the IP address for the service.

You can set the IP address manually in the configuration file (this is what I end up doing most of the time)

If the service is in the same docker-compose file as the reverse proxy, they will share the same docker network so you can use the hostname of the other service

Lastly, you can bridge different docker networks so the services can see each other’s hostnames
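The second option can be sketched in the compose file itself. This is a hypothetical fragment, assuming the official grafana/grafana image; because both services share the compose file’s default network, Nginx can reach Grafana by its service name:

```yaml
services:
  letsencrypt:
    # ... same service definition as above ...
    image: linuxserver/letsencrypt
  grafana:
    image: grafana/grafana
    restart: unless-stopped
# The letsencrypt container can now resolve the "grafana" hostname,
# so the sample proxy conf works without setting an IP manually.
```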

Let’s take a look at the Grafana config file. At the top of it reads:

# make sure that your dns has a cname set for grafana and that your grafana container is not using a base url

This is warning us that this configuration file needs to be able to resolve the grafana hostname to the IP address running the service. If Grafana is on a different computer on your network, or in a different docker-compose file, then the grafana hostname won’t resolve. To set the IP address manually, you can set proxy_pass to the IP address and port of the service. In my case, Grafana is running on 192.168.1.2:3000, so the relevant block in my configuration file looks like:

```nginx
location / {
    # enable the next two lines for http auth
    #auth_basic "Restricted";
    #auth_basic_user_file /config/nginx/.htpasswd;

    # enable the next two lines for ldap auth
    #auth_request /auth;
    #error_page 401 =200 /login;

    include /config/nginx/proxy.conf;
    resolver 127.0.0.11 valid=30s;
    proxy_pass http://192.168.1.2:3000;
}
```

After restarting the container you’ll be able to access Grafana at https://grafana.example.com . Now go and add all your services!

Conclusion

While this article has a lot of steps, it really is quite easy to get a reverse proxy set up on your local network thanks to the excellent letsencrypt image. You also stay secure by not opening any ports on your router and using HTTPS for all your local traffic. How are you managing all the URLs to services on your network? Bookmarks? Other reverse proxy applications? Let me know in the comments!