If you find this useful, please consider sharing it on social media to help spread the word!

The intent of this series is to get new home labbers up and running with a beginner-friendly, versatile, and maintainable home lab, and to equip them with the knowledge to confidently manage and utilize their systems, eventually working all the way up to advanced home networking, high availability (HA) systems, storage area networks (SAN), email servers, and more!

Upcoming segments for the “How to Home Lab” series

Proxmox Intermediate Features 1: Pools, VM Protection Settings, Storage Configurations, Samba Shares

Proxmox Intermediate Features 2: Managing VM Updates Efficiently, Offline Backup Copies, Proxmox Configuration Backups

Service monitoring with Nagios

Docker/Portainer

GitLab CI/CD

Docker Swarm

Kubernetes

This list is subject to change at any time; I may add, remove, or reorder items in order to keep the series cohesive and useful.

Don’t miss out on the next segment - subscribe to my mailing list so you can stay up to date! And if you get stuck somewhere along the way, feel free to reach out to me for assistance; I’ll be happy to help out any way that I can.

Important note

I will soon be covering the process of migrating the home lab pfSense instance to serve the entire home network, so we can make better use of pfSense and make accessing VMs a lot easier. This is best done with some extra hardware for pfSense; it can be done while keeping pfSense as a VM, but I don’t recommend that unless you have a cluster of Proxmox hosts so you can enable high availability (HA) for the pfSense VM.

If you don’t have any extra hardware but are up for doing a little shopping, I’ll have some recommendations for you at the end of this article. Alternatively, you should be able to leave everything as is and continue on with future segments with little to no issue, so don’t worry!

Preface

If you learn one thing from me, please let it be log management; in my opinion, it is the single most important thing you can do for security and uptime.

I’m going to give two options for this. My preferred option is Graylog, which is very resource intensive (the Graylog team recommends 4 cores and 8GiB of RAM, although I’ve got it running on 6GiB on one system and it seems to run OK - it’s using some swap, and there is an obvious performance difference).

Centralized logging puts all of your logs in one place and keeps them cohesive; for example, if you restore a host from a snapshot or backup, you’ll still have all of its logs to reference later if needed. It also adds some tamper-proofing to your systems: in the event of a compromise, it’s much harder for a bad actor to hide their tracks if they can’t access the remote log server.

But far more important than setting up centralized logging is actually looking at it. Make a point to at least skim all of your logs regularly, and look into anything that seems unfamiliar to you (which may be a lot when you first start, but it gets much easier with time and practice).

Early detection of problems is hands down one of the most impactful things you can do for the security and maintenance of your systems. With a well-trained eye and proper attention, you can remedy most common concerns before they become a real problem, and I can’t stress enough how valuable that is.

Base Configuration

We’re going to make use of rsyslog in both examples; the difference is in how you view logs and the extra features you get from a system like Graylog.

VM

As always, we’ll start by provisioning a new VM; I’ll call it log-server.

Note: If you choose to install Graylog, you may need to add CPU cores and extra RAM. I also had to add some disk space; 32GiB should be plenty.

SSH into the new VM (via the ap server on the IP address of the pfSense server if you’ve followed along from part 1), install any software updates, and run our boilerplate configuration.

sudo apt update
sudo apt -y dist-upgrade
sudo bash -c "bash <(wget -qO- https://raw.githubusercontent.com/dlford/ubuntu-vm-boilerplate/master/run.sh)"

Note 1: If you get a “failure to acquire lock” error when running updates, it’s probably due to unattended upgrades running in the background. Wait a few minutes and try again, or run the command watch "ps aux | grep -v grep | grep unattend" to watch for the processes to end (the unattended-upgrade-shutdown process will stay running, but you can continue once the others have stopped).

Note 2: Always check the source code before running a remote script; the source code for my boilerplate configuration tool can be viewed here.

Network

Log in to your pfSense VM ( https://IPADDRESS:8080 ) and head to Services > DHCP Server.

Click the Add button at the bottom of the page to add a new IP address reservation (Static Mapping). Note the other IP addresses in use; in my case I’ll use 172.16.44.104 because it’s the next available address. You’ll need to grab the MAC address of the log-server VM from Proxmox under Hardware > Network Device.

Fill in the first five form fields, then click Save at the bottom, and Apply Changes.

I got the order backwards this time (I usually run the boilerplate script after setting up networking so there is no need to restart the VM a second time), but we can skip the extra restart by running the following commands on the log-server VM’s command line to force-renew the IP address.

ip route
sudo ip addr flush ens18 && sudo dhclient ens18

Snapshot

If you intend to try out both methods just for the sake of experimenting, take a snapshot here so you can roll back to this point instead of provisioning the VM all over again.

Server Configuration

If you’re using Ubuntu 18.04, you should already have rsyslog installed by default (if the directory /etc/rsyslog.d exists, you’re all set); otherwise you’ll need to install it with your distribution’s package manager (e.g. for CentOS, sudo yum update && sudo yum install rsyslog ).

It should also be configured to start on system boot by default. You can check this with the command systemctl status rsyslog , making sure the Active: line reads active (running). If it’s not running, use sudo systemctl start rsyslog to start it, and sudo systemctl enable rsyslog to start it at boot time.

Option One - The Simple Approach

We need to configure rsyslog to accept incoming log messages from other hosts. We’ll need to edit the file /etc/rsyslog.conf to achieve this. We’ll uncomment the four lines below to enable both TCP and UDP.

module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")

TCP connections are more reliable because of error checking and guaranteed delivery, while UDP is faster and uses fewer resources, but dropped packets won’t be recovered. The overhead of TCP is pretty small, but it can add up when every host on your network is transmitting to the log-server almost constantly, and UDP does just fine on the local network, so I generally only enable UDP unless I have a situation that requires TCP, like an offsite host connected through a VPN tunnel.
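To demystify what’s actually traveling over that UDP port: a syslog message on the wire is just a line of text with a numeric priority prefix. Here’s a sketch that builds one by hand (the hostname and target address are placeholders; the /dev/udp trick requires bash):

```shell
# PRI = facility * 8 + severity. The "user" facility is 1 and the
# "info" severity is 6, so a user.info message carries <14>.
facility=1
severity=6
pri=$(( facility * 8 + severity ))

# Assemble a classic BSD-style syslog line: <PRI>timestamp host tag: message
msg="<${pri}>$(date '+%b %e %H:%M:%S') ${HOSTNAME:-myhost} demo: hello log-server"
echo "$msg"

# To actually send it, bash's /dev/udp pseudo-device works in a pinch
# (substitute your log server's real address):
# echo "$msg" > /dev/udp/172.16.44.104/514
```

This is also handy for testing: if a hand-crafted message shows up on the log-server, you know the network path and the rsyslog input are both working.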

Below those lines, I also add the following to save remote logs into their own file for each remote host; otherwise they’d all pile into the main /var/log/syslog file along with the log-server’s own logs.

$template remote-incoming-logs,"/var/log/%HOSTNAME%-rsyslog.log"
*.* ?remote-incoming-logs
& ~

Restart rsyslog to load the changes (you can validate the configuration first with sudo rsyslogd -N1 ):

sudo systemctl restart rsyslog

You can do all kinds of neat tricks with how and where logs are stored; for more information, check the rsyslog.conf man page.

You may also want to configure logrotate to archive, compress, and eventually delete old logs from remote hosts - that’s why I added -rsyslog.log to the target filenames in the rsyslog.conf file. We could create a new configuration with different parameters, but I’ll just use the existing syslog configuration so remote logs are treated the same as the log-server’s local system logs, by adding the following line to the top of the file /etc/logrotate.d/syslog , above the line /var/log/syslog :

/var/log/*-rsyslog.log
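After that edit, the top of the file would look something like this (the directives inside the braces vary by distribution, so treat this as a sketch rather than your exact file - only the first line is new):

```
/var/log/*-rsyslog.log
/var/log/syslog
{
        rotate 7
        daily
        # ...existing directives unchanged...
}
```

Because both paths share one brace block, logrotate applies the same rotation schedule and retention to the remote host logs as it does to the local syslog.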

Of course, there’s a man page for logrotate too if you want to customize its behavior.

Lastly, we’ll install the lnav tool to view and search logs.

sudo apt install -y lnav

You can now use the command lnav to open up the local syslog , or lnav /var/log/logfile.log to open any other log file. Here are some handy tricks for navigating with lnav :

/ - Search
n / N - Jump to the next/previous search hit
g / G - Go to the top/bottom of the file
e / E - Jump to the next/previous error
w / W - Jump to the next/previous warning
ctrl + r - Reload (useful for clearing filters quickly)
: - Type a command
? - Open help

From the command entry, I often use filter-in some text I want to see and filter-out some text I want to hide . There are lots of other goodies in the help menu to check out; these are just the ones I use most often.
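For example, to focus on SSH activity while hiding cron noise, you could type the following at lnav’s : prompt (the filter terms here are just examples - substitute whatever you’re hunting for):

```
:filter-in sshd
:filter-out CRON
```

Filters stack, and ctrl + r clears them all at once if you want to get back to the full log quickly.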

Option Two - Graylog

As I mentioned, Graylog recommends 4 cores and 8GiB of RAM; if you’ve got the hardware, I highly recommend it! These instructions are based on (or mostly directly copied from) the documentation at docs.graylog.org.

First we’ll need to install some dependencies, some of which may be from the Ubuntu “universe” repository which isn’t always enabled by default, so we enable that first just to be sure.

sudo add-apt-repository universe
sudo apt install -y apt-transport-https openjdk-8-jre-headless uuid-runtime pwgen

We’ll need MongoDB; these commands will install the latest version from the official MongoDB repository.

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4
echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
sudo apt update
sudo apt install -y mongodb-org
sudo systemctl daemon-reload
sudo systemctl enable mongod.service
sudo systemctl restart mongod.service

We’ll also need Elasticsearch, which is what gives Graylog its full-text search powers.

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/oss-6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
sudo apt update
sudo apt install -y elasticsearch-oss

We’ll need to make a few changes to the file /etc/elasticsearch/elasticsearch.yml : first, un-comment cluster.name and change its value to graylog , then add the line action.auto_create_index: false just under it.

cluster.name: graylog
action.auto_create_index: false
...

Then start it up and configure it to start at boot.

sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl restart elasticsearch.service

And finally, the Graylog package.

wget https://packages.graylog2.org/repo/packages/graylog-3.1-repository_latest.deb
sudo dpkg -i graylog-3.1-repository_latest.deb
sudo apt update
sudo apt install -y graylog-server graylog-integrations-plugins

We need to generate a password hash for the root user; the Graylog docs provide this command to do so:

echo -n "Enter Password: " && head -1 < /dev/stdin | tr -d '\n' | sha256sum | cut -d " " -f1
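If you’d rather see what the pipeline produces without the interactive prompt, the same hash can be generated with printf. Use a throwaway password when doing it this way, since command arguments end up in your shell history - 'changeme' below is just a placeholder:

```shell
# SHA-256 of the password, printed as a 64-character hex string;
# this is the value that goes on the root_password_sha2 line.
hash=$(printf '%s' 'changeme' | sha256sum | cut -d ' ' -f1)
echo "$hash"
```

Either way, the result is a plain hex digest - Graylog compares it against the SHA-256 of whatever you type at the login screen.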

Copy the output from that command, open the file /etc/graylog/server/server.conf , and paste it on the root_password_sha2 = line, right after the = .

We also need a password_secret value; the comment in the file suggests using the command pwgen -N 1 -s 96 to generate it. Go ahead and do that, and paste the result into the file as well.

You’ll also need to uncomment the line http_bind_address = ... and change its value to 0.0.0.0:9000

0.0.0.0:9000 means port 9000 on all network interfaces, as opposed to something like 172.16.44.104:9000 , which would only serve Graylog on the network interface with that IP address.

Scan the rest of the file for any additional configuration changes you wish to make; the comments are very well written, so you should be able to understand most of the options with little to no hunting around (thank you, Graylog team!). Then start up Graylog and configure it to auto-start.

sudo systemctl daemon-reload
sudo systemctl enable graylog-server.service
sudo systemctl start graylog-server.service

Next, you’ll need to configure your NGINX server to proxy traffic to the log-server VM on port 9000 as we did in part 4 of this series.
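As a rough sketch, the proxy configuration could look like the following. The server_name and upstream address here are assumptions - use your own hostname and the log-server VM’s actual IP from your setup in part 4. The X-Graylog-Server-URL header tells Graylog’s web interface what external URL it’s being served from:

```nginx
server {
    listen 80;
    server_name graylog.example.lan;  # hypothetical hostname

    location / {
        # forward everything to the Graylog web interface
        proxy_pass http://172.16.44.104:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Graylog-Server-URL http://$server_name/;
    }
}
```

Reload NGINX after adding the file, and you should be able to reach the Graylog login page through the proxy.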

Once you get to the login screen, log in with the username admin and the root password you created earlier.

Head over to System > Inputs.

Click on Select Input, and choose Syslog UDP, then click Launch new input.

Check the box for Global - we’re only using a single node for this setup, but you’d want this enabled for all nodes in a cluster. Fill in a title, and change the port to 5514 (using privileged ports requires some extra configuration, so anything above 1024 will work just fine); the defaults are fine for the rest of the options. Then click Save.

Note: You may repeat this for Syslog TCP if desired - see the earlier discussion of the trade-offs between TCP and UDP.

Lastly, click on Show received messages, then click the “Not updating” dropdown and choose “1 second”. We should see messages start to come in after we configure our first client in the next step.

Once you get some log messages coming in, you’ll want to create a dashboard in the “Dashboards” tab. You can use the “Search” tab to look for pretty much anything you want (check the docs for the syntax - it’s pretty easy to pick up), and create dashboard widgets from there with interesting data points. You should spend some time browsing the documentation linked from the main page after you log in; Graylog is a very powerful tool with tons of useful features, so don’t be afraid to experiment with it!

Client Configuration

Now we can configure rsyslog on one of your other VMs; as before, make sure it’s installed on that system. All we need to do is put a new file in the /etc/rsyslog.d directory and restart rsyslog .

Important: Change the port from 514 to 5514 if you are using Graylog!

/etc/rsyslog.d/100-log-server.conf

*.* @log-server:514

Make sure you can reach log-server by hostname from this VM ( ping log-server ); you could also use an IP address instead:

*.* @172.16.44.104:514

The file name isn’t important; as long as it’s in the /etc/rsyslog.d/ directory and ends in .conf , it will be read by rsyslog and applied.

The *.* tells rsyslog to transmit all logs, but you can get really specific if you want to; refer to the man page linked above for more information.
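For instance, the selector before the @ is a facility.severity pair, so any of these hypothetical rules could stand in for the *.* line (port and address as in the examples above - remember 5514 for Graylog):

```
auth,authpriv.*     @log-server:514    # only authentication messages
*.warn              @log-server:514    # warning severity and above, all facilities
*.*;mail.none       @log-server:514    # everything except the mail facility
```

A single @ means UDP; a double @@ would send over TCP instead, matching whichever input you enabled on the server.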

Then just restart rsyslog :

sudo systemctl restart rsyslog

If you are using the simple method, you should now see a new file on the log-server host named after the sending system’s hostname - in my case /var/log/nginx-rsyslog.log , which can be viewed by running lnav /var/log/nginx-rsyslog.log .

If you are using Graylog, you should see some output now (if not, try reloading the page and you should get something).

That’s pretty much all there is to it, don’t forget to configure the rest of your hosts to utilize your new centralized logging server!

Server Hardware Recommendations

These are my recommendations for the upcoming migration of pfSense to a new machine.

If you’re already running Proxmox on solid hardware, the Qotom mini PC is a great choice. If you want to upgrade your Proxmox hardware and use your existing hardware for pfSense, you could go with dedicated server hardware (I avoid these because they’re loud, but they offer some great extra features you won’t find on other hardware), pretty much any modern Desktop PC tower, or a custom built machine like I did.

Dedicated pfSense Hardware

Consumer and business customers will quickly appreciate that the Netgate SG-1100 packs a serious punch with the factory edition of pfSense® software, world-class price-performance, elegant packaging, and an unbeatable low price. Available on Amazon.

If you don't have one already, I highly recommend a Qotom Q355G4 for pfSense, mine has been fantastic, and it's pretty light on power usage (about 10 watts on average). Available on Amazon.

Dedicated Server Hardware

HP ProLiant DL360 G7 combines performance, intelligent power and cooling management with IT management tools and essential fault tolerance, all optimized for space constrained installations. Available on Amazon.

The HPE ProLiant MicroServer Gen10 delivers an affordable compact entry level server specifically designed for small offices, home offices, or small business environments. Available on Amazon.

My Custom Rig

The AMD Ryzen Threadripper processor is designed to provide indisputable multi-processing supremacy on the X399 ultimate platform for desktop. Available on Amazon.