As a frequent browser of /r/selfhosted on Reddit I am always pleased to see people getting into hosting their own stuff. Posts appear from time to time asking how exactly to get into hosting and what you need for it, so although there are already guides out there, I decided to write this very beginner-friendly guide for anyone who wants to get into self hosting some stuff but doesn’t know where to begin.

In this guide I will cover setting up a Linux machine as well as some basic self hosted software solutions for some of the popular tasks that you may want to get into, and explain what everything means along the way. This guide is written for beginners, so if you are already familiar with terminology, hosting techniques, Linux commands and so on, you may not find it very helpful.

As there is a lot to cover I will split this guide up into sections that you can skip or refer to as and when needed.

This guide obviously is meant as a basis to get you started and by no means will cover all the possibilities but you’re not just limited to the software listed in this post.

If you want to know more about self hosting and where to find more information there are some Reddit links at the end.

Enough jibber-jabber, let’s get on with it…

Guide Contents

Hardware & Operating System

– Hardware Choices

– Operating System

Setting Up SSH

– Getting the IP Address

– Connecting via SSH

Becoming Powerful – Installing Sudo

A Little Port Security

– Changing the SSH Port

– Enabling UFW

Installing Nginx Web Server

– Nginx Configurations

Domain Names

– Dynamic IP Addresses & DNS

– Pointing the Domain To Your Server

What the Dock is Docker?

– Installing Docker

– Setting up Nextcloud

– Persistent Data

– Port Mapping

– Let’s Install Nextcloud!

Nginx Reverse Proxy Setup

Using LetsEncrypt to Secure Sites

Hardware & Operating System

Right off the bat I would also like to point out that this guide is written for Linux systems only, as that is what I am most familiar with.

Hardware Choices

To run a server you have a few choices when it comes to what to run it on. You could use an old laptop, a Raspberry Pi, an old computer or do it virtually with a VPS (Virtual Private Server).

If you’re going the VPS route, I recommend Hetzner. DigitalOcean is another popular choice.

You may be wondering on the specs of the system but to be honest, that really depends on what you want to be running. As hardware varies so much all I can really say is give it a go and see if it works for your uses and needs.

Whichever route you decide to take with the hardware (or VPS) you’ll need to set up an operating system to run your stuff on. As it’s a server we’ll be running headless, meaning that there is no desktop with icons and such, just a command line that you can type commands into. If you’ve never done this before, don’t fret – it’s not as confusing as it looks nor as difficult as you may think. We’ll talk about that in a bit once we get onto setting up the server.

Operating System

If you’re using a Raspberry Pi as pictured above then for server usage I highly recommend using Armbian or even DietPi. These are lightweight operating systems that have a minimal installation without unnecessary extra software taking up space and/or resources.

If you’re using a laptop, desktop PC or VPS you have more options. For me I usually go with the latest Debian. You may also choose Ubuntu Server, CentOS and many more but for this guide we will use Debian 9. You usually have the added bonus of not having to download the ISO if you are using a VPS as the providers usually have a selection that you can just choose to install automatically. If this is the case you might want to skip over to the next section.

Anyway, the exact Debian ISO that I will be using in this guide is this one:

https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-9.6.0-amd64-netinst.iso

Note: That ISO is for a 64 bit system using an AMD or Intel CPU, see the Debian website for other versions if you require a different one.

You can write it to a USB stick using Etcher or burn it to a CD.

Note: You will need to have the computer connected to a network to use this exact installation method.

Once we’ve got the operating system you can boot from the USB or CD and you’ll be presented with this screen:



Choose “Install” and follow the prompts to set your language, country, locales and keyboard mapping. The setup should then configure your network and eventually you’ll get to a screen asking for your “hostname”. This is basically the name of the computer on the network; it can be pretty much anything, but choose something simple enough to remember.

We are going to leave the “domain name” screen blank and just press Enter to continue which brings us to the “Root password” screen. Here you need to set a strong password for the “root” user. This is a user that is basically the admin of all admins, the number one boss. What root says happens, happens.

We won’t be using the “root” account very much at all but make it something you can at least remember.

Once done and you’ve typed it in twice you’ll be asked to create a user for yourself. This is the one we will be using 99% of the time so make sure to remember the username and create a decent password that you also won’t forget. The “Full name” doesn’t matter too much; I just choose the same as the username. In this example I will use “fuzzy” for both.



Once you’ve entered your password and set up the user you will be asked to partition your hard disk. Please note that following this method will wipe the entire disk.

If you wish to create partitions manually or anything other than use the whole disk then please look elsewhere as I am not covering that in this guide.

Disclaimer over, we can proceed. Choose “Guided – use entire disk”, then “All files in one partition”, followed by “Finish partitioning and write changes to the disk”, then choose “Yes” to format your hard disk.

Once the formatting is complete the system will begin installing. When it has finished installing the base system you may get to a screen asking if you want to scan additional media for packages; you can choose “No”, then pick the mirror closest to the server’s location. One should already be highlighted based on the location you selected at the start of the installation process, the same with the next screen. The default mirror for the server’s location is usually just fine.

Once the mirror is selected you can leave the “HTTP Proxy” screen blank and just press enter and the system will continue setting up some more things for a moment or two.

When you get asked about the “Package usage survey” it’s totally up to you how you answer. I usually say no.

Once it’s done a little more of its thing we end up at the following screen asking which software to install. Use the arrow keys and space bar to select and deselect items; this is my default go-to for server setups. Note the lack of the “Debian Desktop environment”. In my opinion, it’s really an unnecessary waste of resources to have a desktop in a server environment.

Using “Tab” to get to “<Continue>” you can press Enter with the above settings and let the installation continue once more.

The next screen you get to is “Install the GRUB boot loader on a hard disk”. You don’t really need to know what a boot loader is or even what GRUB is at this stage, but in most cases, especially with just one hard drive and one OS, you’ll want to say “Yes” here and then choose your hard drive.

If all goes well you should see the below screen meaning it’s time to remove the USB stick or CD and press Enter to reboot the system.

Once you restart the computer and let it boot up you should see this screen below. If you do then happy days, we’re ready to get to the proper fun stuff!

Setting Up SSH

So now we’re all done with getting the OS installed it’s time to connect to our server remotely using a protocol called SSH (Secure Shell/Secure Socket Shell). This basically enables us to use the computer’s command line as if we had a screen and keyboard plugged into it.

Getting the IP Address

There are two ways to get the local network IP of the freshly installed server. You can either login to your router and check it or login at the server itself and type a command to view it.

Note that if you are using a VPS then this step is most likely irrelevant as you can just get the IP from your VPS provider account/server pages.

If you wish to get the IP by logging in to the system then simply type your username, in my case “fuzzy” and press enter, then type the password that you set during the OS setup and press enter again. Note that there will be no visual representation of your password typing.

Once you’re in you can type

ip a

The one you want is the “inet” that doesn’t say “127.0.0.1” or “link/loopback” – if you’re using ethernet it’s most likely labelled as “eth0“, “link/ether” or something similar. In the case of the screenshot above, my IP would be “10.0.2.15“.

Now that we have the IP address, it’s time to connect to the system and get to installing some stuff!

If you had a screen and keyboard plugged in to an old PC or whatever then now is the time when you can unplug it and go truly headless.

Connecting via SSH

If you are using Windows you will need to install PuTTY to connect to the server and issue commands.

If you are using Linux or a Mac as your main operating system you can use the terminal software to make a connection to your server like so:

ssh fuzzy@192.168.1.116

Obviously replacing the username and IP with your own.

I am using a virtual machine installation to write this guide so I will be using VMWare screenshots when necessary but the commands and results will be exactly the same with PuTTY and terminal connections.

Becoming Powerful – Installing Sudo

Once you’ve made a connection it’s time to allow your user to do admin commands when needed. This is done with “sudo”, or “superuser do”, which basically means “run the following command as an administrator”. By default, if you followed the steps above when installing the OS, “sudo” isn’t installed.

To install stuff you must be an administrator but how can we become one if we can’t use “sudo“? We login as the “root” user temporarily that’s how!

Simply issue the command: su and enter the root password that you set during the OS installation process. You are now temporarily logged in as the root user.

With this new found power we can update the list of packages (basically a library of software that we can install) and install “sudo” itself.

To do this run apt update to update the package list and then, once it’s done, run apt install sudo to install “sudo”.

Now sudo is installed but we can’t go using it just yet because our main user isn’t a member of the “sudo” group. This is so that super user commands can’t just be run by anybody chucking “sudo” in front of it, they have to be allowed to use “sudo” in the first place.

To add our user to the group we do:

usermod -aG sudo fuzzy

(Obviously replacing “fuzzy” with your own username.)

So sudo is installed, our main user is added into the sudo group, now what?

We logout. In order for the sudo group addition to become active we need to login again so to do this we can issue exit twice. Once to exit from the su account and again to exit from our own account. Then just as above we login via SSH again. Now we’re really ready to install some cool stuff!

A Little Port Security

Now we’re freshly logged in, all “sudoed up” let’s install a little bit of software.

I’ll begin with UFW (Uncomplicated Firewall) for some protection. This stops people accessing ports on our system that they’re not really allowed to be accessing.

If you are unfamiliar with ports, think of them like rooms in a house. The IP address is the address of the house itself and the ports go to specific things in the house so for example SSH runs by default on port 22. Web servers run on ports 80 and 443. I usually change my SSH from port 22 to a random unused port for added security. We’ll talk about that in a second, let’s get UFW installed first.

It’s as simple as it was to install “sudo” earlier. We simply issue:

sudo apt install ufw

Upon the first time using “sudo” you will be greeted by a charming text about power and responsibility and you will also need to type in your password.

Once you’ve typed your password in, UFW will install.

It may now be installed but it’s not active or enabled. I usually like to change my SSH port first, allow it with UFW and only then enable the firewall. In case something happens to my connection, I don’t want to risk being locked out because UFW was enabled before I’d allowed any ports, so let’s set up a different SSH port now.

Changing the SSH Port

As I mentioned earlier, SSH is on port 22 by default. For added security I like to change this to something else. You can pretty much pick any port number, but to be on the safe side and avoid a common one that might be in use by something you may want to use in the future, pick a random, easy to remember 5 digit number that is less than 65536. For the sake of this guide I will use port 13666 for no particular reason.

To change this port to something else a file needs to be edited. You may or may not have seen memes about “vi” or “vim” around. If you haven’t nevermind but if you have, it’s OK, we’re using “nano“.

To edit the ssh file first open it with nano. We need to use “sudo” here as it’s not a file that we own ourselves.

sudo nano /etc/ssh/sshd_config

You are then presented with the file opened in “nano”. You can move around the text with the keyboard arrow keys.

Go to the line that says “#Port 22” and change it so it says “Port 13666” (Without the “#”). Once done press CTRL+O then Enter to save the file and CTRL+X to exit the editor.
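If you prefer a one-liner, the same edit can be done non-interactively with sed. On the real server you would run it against /etc/ssh/sshd_config with sudo; the sketch below works on a local sample copy just to show the substitution:

```shell
# Make a sample copy containing the line we want to change
printf '#Port 22\n' > sshd_config.sample
# Uncomment the Port line and change 22 to 13666
sed -i 's/^#Port 22$/Port 13666/' sshd_config.sample
cat sshd_config.sample   # prints "Port 13666"
```

On the server itself that would be sudo sed -i 's/^#Port 22$/Port 13666/' /etc/ssh/sshd_config, though editing in nano lets you double-check the rest of the file at the same time.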

Now the SSH service needs to be restarted with sudo service sshd restart



Once restarted we can check it has worked by logging out of SSH and back in again with the new port number. exit will log out.

To change the port to the new one on PuTTY simply change the “port” number from 22 to 13666 in the connection settings.

To change the connection port on a terminal on a Linux or Mac machine, simply add -p 13666 to the end of the ssh connection command. For example:

ssh fuzzy@192.168.1.116 -p 13666

Enabling UFW

With UFW installed it’s time to enable it but first we need to let through the SSH traffic otherwise we won’t be able to connect to it. There are two common protocols used to connect to ports on machines and these are TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). You don’t really need to know how they work (although feel free to search around if you want to know more about TCP vs UDP) but you do need to know which ones you are using for certain ports. In most cases it’s TCP – as it is with our SSH connection.

To enable the TCP port of 13666 we simply run:

sudo ufw allow 13666/tcp

This will update the allowed ports on the system for both IPv4 and IPv6 connections.

Note that you can also use UFW to enable UDP ports with sudo ufw allow XXXX/udp and both TCP and UDP with sudo ufw allow XXXX, although we only need TCP at this time.

We can now enable UFW by running sudo ufw enable. You should then see “Firewall is active and enabled on system startup”.
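At any point you can check which rules are in place. A handy command for that (built into UFW itself) is:

```shell
sudo ufw status verbose
```

It lists the allowed ports, so you can confirm the SSH rule from earlier is actually there before you come to rely on it.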

Now we have some basics down, let’s install a web server and serve a website.

Installing Nginx Web Server

Depending on your uses you may not need a web server but it’s quite likely that you will unless you just want to run some game servers or something.

I have chosen Nginx for a few reasons – It’s quite light, is a great reverse proxy (more on this later) and I am the most comfortable in using it.

To begin installing Nginx we will use “apt” just like before with UFW and sudo.

sudo apt install nginx

Then answer Y (or just press Enter) to the prompt asking if you want to install the packages required for Nginx and let it do its thing.

Once it’s done you’ll be back at a prompt.

Nginx doesn’t start itself after installation so to start it we can run sudo service nginx start to get it going.

Note that you can also issue sudo service SERVICENAME stop and sudo service SERVICENAME restart to stop or restart a service respectively.

Once it is started, you’ll be once again back at a prompt. So now what? Well, Nginx has a default page which will enable you to see if it’s working by visiting your IP address in a web browser.

If we fire up a browser (Firefox in my case for example) and type the IP address of our server we get, oh, this…

Side note: You may have noticed my IP address has changed from earlier in the guide. Ignore this and pretend it has always been “192.168.1.116”.

Nginx is running, there were no errors displayed so why isn’t it working? Those paying attention will remember that we have UFW blocking access to our ports and also remember that I said web traffic uses ports 80 (for http) and 443 (for https). So let’s let the traffic through with UFW.

UFW has some built in presets so instead of typing in sudo ufw allow 80/tcp and sudo ufw allow 443/tcp we can just do:

sudo ufw allow http && sudo ufw allow https

The “&&” between commands means “and then run this one” so it’s the same as manually running the first command and then manually running the second command afterwards.
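A quick illustration of how “&&” behaves (the directory name here is just an example):

```shell
# The right-hand command only runs if the left-hand one succeeds:
mkdir -p demo && echo "demo created"              # prints "demo created"
# If the first command fails, the second one is skipped:
ls /nonexistent 2>/dev/null && echo "not printed"
echo "done"                                       # prints "done"
```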

Now when we try to visit our IP Address we get:

Woohoo!

Nginx Configurations

There are many ways to configure Nginx to suit your needs. You can install PHP and use it to run a WordPress site or whatever but this guide isn’t strictly about web servers and I can’t possibly cover all the uses. There are plenty of WordPress and PHP guides out there if that’s your goal. I will however cover using Nginx as a reverse proxy for other services such as Docker containers and other running services on various ports.

You can also use multiple Nginx site config files for each service but I like to keep it all in one, the default file which can be edited with:

sudo nano /etc/nginx/sites-enabled/default

I will cover some reverse proxy configurations in the Docker section after domains for when we set up a Nextcloud instance.

Domain Names

I’m guessing that you don’t want to be typing in your external IP address each time you want to connect to your server? That would be silly, right?

This is where domain names come in. You can pick them up for pretty cheap (I highly recommend Gandi as it comes with free email for every domain!) or even get a free one from Freenom!

Either way once you have your domain you’re going to want to point it to your server. This is done through DNS (Domain Name System) which is basically like a phonebook for IPs – You have the name, what’s the actual address?

Personally I use CloudFlare to manage my DNS (and point the domain’s “nameservers” to CloudFlare) but you can use the domain registrars own DNS servers just fine also.

Dynamic IP Addresses & DNS

For those of you using a system at home you may have what’s called a “dynamic IP address“. This means that your IP changes from time to time, during reconnects or when your ISP decides to change it for whatever other reason they may have. This would cause an issue with pointing a domain to your IP as it may only be your IP for a little while.

Some ISPs offer static IP addresses and if this is something you’re interested in then go for it but if not, or it costs too much money then there’s a solution! This is called “DDNS” or “DynDNS” and stands for “Dynamic Domain Name System“. This gives you a domain that you can use in place of your IP. The domain will always be up to date with whatever your current IP is so it will always be able to connect to your server.

There are a few ways to get a DynDNS domain; one of them is to register a subdomain, something like “server.example.com”, on a dynamic DNS service such as freedns.afraid.org.

I’ll cover setting up a subdomain on afraid.org using one of their already available options and getting it updating on your server so the IP is always correct.

Once you have signed up you can choose from a big list of domains registered by the site’s users at http://freedns.afraid.org/domain/registry/

For this example I will use the crabdance.com domain from their list and use “fuzzytek” as my subdomain choice. I can then choose my subdomain, fill in the captcha and then click “Save”.

Notice how the IP is already filled in. It doesn’t matter if it’s incorrect though as we will update it soon anyway.

Once that’s saved we can get a “cron” job going on the server to update the IP automatically.

You can find the line for your cron file at http://freedns.afraid.org/dynamic/ and then click on “Quick cron example“.

Login via SSH and run

crontab -e

All the instructions you need are on the “Quick cron example” page.
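For reference, the line you end up putting in your crontab will look something like the one below. The exact command freedns gives you may differ, and the long token in the URL is unique to your account (a placeholder is shown here):

```shell
*/5 * * * * curl -s "https://freedns.afraid.org/dynamic/update.php?YOUR_TOKEN" >/dev/null 2>&1
```

This runs every five minutes and quietly tells freedns your current IP.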

Pointing the Domain To Your Server

To get the domain name to go to the right place you’ll want to edit its DNS settings. How you do this depends on where you registered the domain (or where you point its nameservers) so you’ll need to refer to the site’s guides on changing the domain’s DNS.

There are a few different types of DNS settings and the ones I will be covering here are called “CNAME”, “A record” for IPv4 and “AAAA record” for IPv6 connections.

For example we will use the domain “example.com” as that’s exactly what it’s designed for!

So you’ve got example.com and you want to point it to your server. First off you need an “A record” to point it to an IP. (Your server’s static external IP, not the internal one!)

If you’re wanting to use a subdomain for something (subdomain.example.com) you’ll want to use “CNAME” to set this up.

You will also need to use a CNAME if you have set up DDNS in the previous steps. Point a CNAME for your domain (or subdomain, whichever you are pointing to the server) to your DDNS subdomain. So for example, mine would point to “fuzzytek.crabdance.com”. (Note that some DNS providers don’t allow a CNAME at the root of a domain, in which case use a subdomain.)

I can’t really give any specifics here as each case will be different but refer to your registrar or name server company instructions and examples to get this going.

What the Dock is Docker?

So you’ve browsed around /r/selfhosted and all the other related subreddits and seen this thing called “Docker” everywhere. Yep, me too. I was a latecomer to the Docker party but boy am I glad I joined. In short it makes things soooo much easier to manage. You don’t have to use Docker of course but I am including it as it has changed my personal approach to self hosting services completely.

As an example I used to run a server with a WoW game server, Factorio game server, Nginx serving WordPress and an instance of Nextcloud all as their own things running individually on my system. It felt cluttered and awkward. Docker let me run these same things but each one had its own independent space from each other and the main system. All that was installed on the system itself was Nginx and of course Docker. EASY to maintain and not feel so cluttered.

In this guide I am going to cover installing Nextcloud in a docker container and using our Nginx install to point to it from a domain name. (If you haven’t got a domain name, I can’t recommend Gandi highly enough – and no there is no affiliate link or such here, just a very good service in my opinion!)

I have chosen Nextcloud as it is a popular install for those wanting to self host as it does a few things. Namely it enables you to host your own files (think Google Drive, Dropbox etc…) as well as store all your phone contacts and calendars. It also has notes and all sorts of other plugins and extensions.

Installing Docker

Before we can get to Nextcloud we need to install Docker itself. There is a guide on their website but I will cover it here anyway.

Make sure that the list of packages is up to date with sudo apt update

Then we need to install some requirements before we can install Docker itself. Some of these may already be installed.

sudo apt install \
     apt-transport-https \
     ca-certificates \
     curl \
     gnupg2 \
     software-properties-common

Note that the “\” tells the terminal that the command continues on the next line, so it still sees the whole thing as one long command. The line breaks are just there to make it easier for us to read.

Now we add what’s called a “GPG key”. It’s basically some security so we know that the Docker we are trying to install is legit.

curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

If you wish you can then verify it with sudo apt-key fingerprint 0EBFCD88

Now we need to add the Docker repository to our system so that it knows where to look when we ask for it to install Docker.

sudo add-apt-repository \
     "deb [arch=amd64] https://download.docker.com/linux/debian \
     $(lsb_release -cs) \
     stable"

Once that’s done, update the packages cache again so it includes Docker.

sudo apt update

Then install it:

sudo apt install docker-ce

If all goes well you should be able to run

docker run hello-world

and receive an error about permissions. This is because Docker currently will only work if you use “sudo” before any of its commands.

We can remedy this by adding your user account to the docker group similar to how we did with “sudo” earlier.

sudo usermod -aG docker $USER

“$USER” is a variable that equals the name of the current logged in user so for me it would be “fuzzy”.
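You can see this for yourself; both of these print the name of the user you are logged in as:

```shell
# In a login shell, $USER holds the current username
echo "$USER"
# id -un asks the system directly and reports the same name
id -un
```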

Time to log out and back in again. Then you should be able to run the command below without any issues. It will get the “image” from the Docker repo, run it and then exit.

docker run hello-world

Setting up Nextcloud

As stated earlier, Nextcloud is a popular self hosted file storage, contacts and calendar service.

We will cover installing it with Docker and using Nginx as a reverse proxy to get it running.

We need the image from the Docker Hub. The Docker Hub is basically a catalogue of software that we can install using Docker. We don’t need to pull the image manually though; it will be automatically retrieved for us when we attempt to run Nextcloud for the first time.

Persistent Data

Before we attempt to run Nextcloud and let it pull the image and set it up for us let’s briefly talk about persistent data.

By default, if you run a Docker container the data inside the container will be deleted if you delete the container. To avoid this we can mount a folder on our host system as a “volume” in the container. Everything inside the folder in the container will then also be inside the folder on our system, letting us access the files from the Docker container. You can read more about volumes here.

Port Mapping

Docker containers themselves can use pretty much any port they like for services and we can map that to a different port on our host machine. This means we could have 3 different containers that run some sort of web server on port 80 but we can map those to say 8000, 8001, 8002 on the host as a port can only be occupied by one thing at a time.
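As a sketch of that idea (the container names, host ports and image here are just examples, not something we need for this guide):

```shell
# Three containers each listen on port 80 internally,
# but are mapped to different ports on the host:
docker run -d -p 8000:80 --name web1 nginx
docker run -d -p 8001:80 --name web2 nginx
docker run -d -p 8002:80 --name web3 nginx
# "docker ps" shows the mappings, e.g. 0.0.0.0:8000->80/tcp
docker ps
```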

Let’s Install Nextcloud!

OK, so now we know about volumes and ports, let’s use that knowledge to install Nextcloud in a Docker container and then set up a reverse proxy to point to it with a subdomain using Nginx.

It should all become clearer as we do it if you’re still a little confused.

To install Nextcloud, map its container port “80” to our own port “5000” and store all user data inside a “nextcloud” folder in our home directory, we run:

docker run -d \
     -v /home/$USER/nextcloud:/var/www/html \
     -p 5000:80 \
     --name nextcloud nextcloud

Docker will then pull all the needed images and extract them and create a container named “nextcloud” which will be running the Nextcloud instance inside it.

Now you can try to access it by typing your internal IP address into a browser followed by “:” then the port number, which we chose as “5000“, so in my case it would be: “http://192.168.1.116:5000“.

You wouldn’t normally be opening any more ports but it all depends on how you’ve set up your server in the first place. If it’s on a VPS and you’re using an external IP to connect you may need to temporarily allow the port to check everything works and then disable it again after.

So to test it I run:

sudo ufw allow 5000/tcp

Then visit the URL and see:



This tells me everything works, and if I previously allowed port 5000 I can close it off again with:

sudo ufw deny 5000/tcp

(Or remove the rule entirely with sudo ufw delete allow 5000/tcp.)

Now we will set up an Nginx reverse proxy to access our Nextcloud instance via a subdomain that we would have set up earlier in the DNS section with a “CNAME” record.

For the sake of example I will use “nextcloud.example.com” as our address for this instance.

Note that if you restart the system the container will no longer be running. You can start it again, with the same settings as when you first ran it, by using the name we gave it:

docker container start nextcloud
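If you would rather have the container come back up by itself after a reboot, Docker has restart policies for this. The command below updates our existing container so it starts automatically unless you explicitly stop it:

```shell
docker update --restart unless-stopped nextcloud
```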

Nginx Reverse Proxy Setup

So currently in its default state Nginx is set up to serve web pages from a directory (“/var/www/html“) but in order to make it serve our Nextcloud Docker container we need to change a few things around.

First off, edit the Nginx default site config file with nano.

sudo nano /etc/nginx/sites-enabled/default



Once there you can have a read through and see what’s what if you like or you can just continue on with the guide.

We’re going to only be serving Docker containers and using Nginx as a reverse proxy during this guide. If you want it to be used in the traditional serving files way still, you will need to create another config in the “sites-enabled” folder.

Anyway, to make it serve our Nextcloud Docker container we need a fresh start so holding CTRL+K will remove all the lines.

We can then copy and paste this into its place. (Obviously change nextcloud.example.com to whichever domain you’re pointing at your server.)

(PuTTY users can right-click to paste things; terminal users vary depending on which terminal you are using, but CTRL+SHIFT+V works for my terminal software.)

server {
    listen 80 default_server;
    return 444;
}

server {
    listen 80;
    listen [::]:80;

    server_name nextcloud.example.com;

    # Proxy to the Nextcloud server
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header Host $http_host;
        proxy_max_temp_file_size 0;
        proxy_pass http://localhost:5000;
        proxy_redirect http:// https://;
    }
}

You can save with CTRL+O then Enter and exit with CTRL+X.

I’ll explain a little what each part means.

server {
    listen 80 default_server;
    return 444;
}

This top section means that if someone types your external IP into their browser, or visits a domain that points at the server but that you haven’t set up in Nginx, they will get a “444” error. Like how a “404” error means “Not Found”, a “444” means “Connection Closed Without Response”. It tells Nginx to close the connection and give no response, indicating to the visitor that they have tried to connect to something that doesn’t exist.

The “listen 80” means to listen on port 80, the default port for “http” traffic.

Next we have the second “server” section. Each new site will have its own “server” section. Port 80 is the same as before, only there is also an added listen line for IPv6 connections.

Next up is “server_name”, which is pretty self explanatory. It will pick up that specific domain name (or subdomain) which is pointed at the server’s external IP address and apply everything within the “server” section to it.

The “location /” means the root of said domain or subdomain so in this case it’s “nextcloud.example.com/” if it was “location /something” it would refer to “nextcloud.example.com/something“.

You don’t need to worry too much about the other settings right now but take note of the “proxy_pass” line. This forwards the request to this specific IP and port. In the case of our Nextcloud Docker container it’s running on localhost (or 127.0.0.1) on port 5000 so it’s set to “http://localhost:5000“.

Once you’ve done this, save and exit from nano as we have done so previously and then you need to restart Nginx so that the new config is loaded with:

sudo service nginx restart

If all goes well then you should be able to visit http://nextcloud.example.com and view your site just as you could when visiting using the IP address and port method.
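When you add more services later, each one gets its own “server” block in the same config file, pointing at its own port. A second block might look like this (the subdomain and port are hypothetical, for illustration only):

```nginx
server {
    listen 80;
    listen [::]:80;

    server_name wiki.example.com;  # hypothetical second subdomain

    location / {
        # forward requests to another container mapped to host port 5001
        proxy_pass http://localhost:5001;
    }
}
```

Remember to restart Nginx after any config change, just as above.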



Using LetsEncrypt to Secure Sites

So we’ve covered the realm of http but what about those nice secure green locked https sites that run on port 443?

For those we need a certificate. Not like a “Congratulations, your site is secure!” sort of certificate, but one issued from a legit place. Fortunately we can get free certs from LetsEncrypt!

There is absolutely no reason in this day and age to be running a site on just plain old http. Unless it’s entirely internal of course but even then, better to be that bit safer right?

To get a certificate from LE we will need to install a little bit of software from Certbot first. This will enable us to not only get the cert but apply it to our Nginx config file automatically!

Before we can do that we need to enable some more repos. To do this we edit a file and add a line to it like in the following steps:

sudo nano /etc/apt/sources.list

Add the following line to the bottom and then save and exit.

deb http://ftp.debian.org/debian stretch-backports main

You can now reload the packages cache and install the certificate software.

sudo apt update && sudo apt install python-certbot-nginx -t stretch-backports

After the install is complete, you can request a certificate and have it put in the Nginx config with:

sudo certbot --nginx

Follow the prompts and enter the requested information.



Once you have entered the information and chosen your domain from the list you can let the software add it to the config file and automatically redirect all traffic to https.

If you open the Nginx config file, you will notice that it now has certificate information inside it as well as a redirect section to send all traffic through https instead of http.

sudo nano /etc/nginx/sites-enabled/default



Make sure Nginx picks up the changes with a sudo service nginx restart
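One thing worth knowing: LetsEncrypt certificates are only valid for 90 days. The Certbot package sets up automatic renewal for you (via a cron job or systemd timer), and you can check that renewal will work, without actually renewing anything, using:

```shell
sudo certbot renew --dry-run
```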

If all goes well then you should be able to visit

http://nextcloud.example.com or https://nextcloud.example.com

(Obviously replacing the subdomain and domains with your own site!)

For a more in depth look at LetsEncrypt and CertBot and what they can do for you visit their site at letsencrypt.org and certbot.eff.org.

Hopefully this guide has been somewhat helpful in getting into self hosting things! If there is more you would like to see in this post, something you think I have forgotten to include, or something you don’t understand, please don’t hesitate to leave a comment and I’ll get back to you and update the post if I’ve overlooked something or made a mistake.

More Information

If you want more information on self hosting and the kind of things you could host check out the following subreddits over on Reddit.

/r/selfhosted | /r/homeserver | /r/webapps | /r/datahoarder