A few months ago I wrote a post covering how I was building my home lab https://hackernoon.com/the-open-home-lab-stack-5e5858722fee

My home setup has simplified substantially since then, so I'll share here what I'm doing now.

What's written here is my opinion, and while I'm open to suggestions and proper discussion, I'll shut down anyone looking to troll. There's no point; no one cares.

Also, I'm not an English major; as long as most of the words are in the right order and the right place, let's be happy.

Endgame

The main purpose of the equipment is to feed my home media experience: run an internal and external facing Plex server, keep it supplied with media, and make that accessible from anywhere.

The secondary purpose of the setup is to provide myself with a local Git server and a Jenkins deployment for automating tasks on the four boxes as much as possible.

Hardware

So what's my home experience made up of? Well, I'm not rocking a chassis with loads of rack-mounted servers; I'm running lower-end hardware which over the years has run everything from Proxmox to ESXi via XenServer. It's not rocket fast because I don't need it to be; I need storage and RAM.

HP Proliant Gen 8

The core of this setup is an HP MicroServer Gen 8 with 16GB of RAM and 12TB of storage.

I love these boxes: they're cheap and work well as low-impact home servers.

Gigabrix

These were used at the last company I worked for as a cheaper alternative to the NUC, and I bought two of them. Sold as barebones devices, both of mine have a 128GB SSD and 8GB of RAM. They are usually silent, but if you run Windows on them they have a tendency to run hot and the fans kick up.

Lenovo 10115

I bought this many, many years ago to run XBMC on, and it has served me well; with 4GB of RAM and a 500GB hard disk it has played many parts over the years on my home network. It runs silent and has never failed me. OSes run first time on it without mucking about with drivers.

OS

With this new setup I've got a mix of two OSes: Ubuntu and CentOS. I personally prefer CentOS if I'm building a system from scratch; however, the Ubuntu-based distros here are both running for a purpose.

Zentyal — Development Edition

I'm running Zentyal on the 4GB Lenovo device as the core controller for the network. It's a great out-of-the-box small business distro, based on Ubuntu 16.04 (at the time of writing), and it provides an easy-to-use web interface for setting things up.

Within my setup I've got Zentyal running as the DHCP server for the network and, by extension, the DNS server for the internal network, forwarding traffic to a public DNS provider. I've also got the servers picking up their IPs via MAC reservations on this box, so I don't have to worry about assigning static IPs to them.

I've also got Zentyal acting as a Windows Active Directory server, with a Mac and two Windows PCs attached to the domain. You can use the RSAT tools for Windows to manage the server for the more "advanced" areas of AD; however, for creating users and groups with home shares the web interface is fine.

This works far better for me than trying to wade through the hell that is raw LDAP; life's too short for that.

CentOS

Running on one of the Gigabrix and on the HP Gen 8, CentOS is one of the more bulletproof distros I've used. Installed with the minimal settings it has a low footprint, and as the free enterprise wing of Red Hat it has a wealth of online support for setting it up and running installs. It's a well-supported OS that fully supports both hardware types.

Lubuntu

On the final Gigabrix I've got Lubuntu installed, a VERY lightweight Ubuntu 18.04-based desktop built on LXDE. I need something lightweight, snappy and well supported to act as a jumpbox and GUI-based system, as none of the other servers have a GUI on them.

As an example of just how lightweight Lubuntu is, I run it in a 512MB, 1-CPU VM on my MacBook and it's lightning fast, even running Firefox and several terminal windows. It also has a tiny footprint if you install it with the new Minimal Installation option in the 18.04 installer.

Config

That's the hardware and the OS on each box; how do I have them configured?

Puppet

I have a Puppet 5 server installed on the CentOS Gigabrix, which provides some basic configuration, installing net-tools, bind-utils, nmap, traceroute, wget and curl on the boxes.

I then install Webmin and Cockpit on the boxes for update management.

Finally, I make sure that SELinux is disabled on CentOS (same thoughts at home as with LDAP here).
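For reference, the SELinux step boils down to a couple of commands. This is a minimal sketch, not the exact script I use: the config path is a variable here (normally /etc/selinux/config on CentOS) so the edit can be rehearsed on a copy of the file first.

```shell
#!/bin/sh
# Sketch: turn SELinux off on CentOS.
# SELINUX_CONF is normally /etc/selinux/config; it is parameterised
# here so the change can be tried on a copy of the file first.
SELINUX_CONF="${SELINUX_CONF:-/etc/selinux/config}"

# Persist the change (takes effect on next boot):
if [ -f "$SELINUX_CONF" ]; then
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$SELINUX_CONF"
fi

# Drop out of enforcing mode immediately as well (ignored where unavailable):
setenforce 0 2>/dev/null || true
```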

I also have a set of scripts for doing daily disk and system checks and mailing the results to myself; these are deployed and set up as cron jobs too.

This puts each box in a standard state where I know what I'm dealing with.

Webmin

Turns out Webmin is like Marmite, you either love it or hate it.

I use Webmin mainly for two core things: the Package Updates area gives the OS a simple, easy way of checking for updates every week and deploying them, and once complete this information is mailed to me along with an updated package list.

Yes, I could use Puppet for this or do it manually; I just find it easier in Webmin, and I like the simple web management interface.

Cockpit is a totally usable alternative to Webmin

It also has its uses for managing Docker images, which is covered really well here: https://www.linux.com/learn/intro-to-linux/2017/3/make-container-management-easy-cockpit

With the underlying configuration set up with Puppet, Webmin and/or Cockpit, it's time for the meat of the system.

Docker

Docker requires little to no introduction; this container system is taking over the world, and quite rightly so. Because of it I was able to stop running ESXi 6 on the HP Gen 8 and get the systems running natively on the CentOS installation, using containers rather than VMs.

Installing Docker on CentOS is easy:

curl -fsSL https://get.docker.com/ | sh

sudo systemctl start docker
sudo systemctl enable docker
sudo systemctl status docker

By default, running the docker command requires root privileges; that is, you have to prefix the command with sudo. It can also be run by a user in the docker group, which is automatically created during the installation of Docker. If you attempt to run the docker command without prefixing it with sudo, or without being in the docker group, you'll get output like this:

docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?

See 'docker run --help'.

If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:

sudo usermod -aG docker $(whoami)

You will need to log out of the server and back in as the same user to enable this change.

If you need to add a user to the docker group that you're not logged in as, declare that username explicitly using:

sudo usermod -aG docker username

The rest of this article assumes you are running the docker command as a user in the docker group. If you choose not to, please prepend the commands with sudo.

Local Storage

On my setup I've set /media/docker/ as the local storage location for the Docker containers' data:

mkdir -p /media/docker/

The journey with Docker starts with the knowledge that while I can manage it from the command line, home is a lazy place, so the first thing I install is something to manage the Docker images with from a GUI.

Portainer: https://portainer.io/

Simple to set up, supports Docker Swarm (I'm not using it here, though I do in other locations) and provides a clean interface for seeing what is happening on the HP.

The setup within CentOS:

docker create --name=portainer \
  --restart=always \
  -v /media/docker/portainer/data:/data \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e PGID=1001 -e PUID=1001 \
  -e TZ=Europe/London \
  -p 9000:9000 \
  portainer/portainer

docker start portainer

firewall-cmd --add-port=9000/tcp --permanent
firewall-cmd --reload

You can then open http://&lt;docker server ip&gt;:9000 and run through the setup by adding a password and setting up local management.

Now we have this set up, we can set up some more Docker containers.

I've tried to keep services off ports 80/443 as using them causes issues later, so you'll notice I've mapped the external-facing ports to other numbers.

GitLab

docker run --detach \
  --name gitlab \
  --hostname hp.paedave.local \
  --publish 8443:443 --publish 8880:80 --publish 8022:22 \
  --restart always \
  --volume /media/docker/gitlab/config:/etc/gitlab \
  --volume /media/docker/gitlab/logs:/var/log/gitlab \
  --volume /media/docker/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest



firewall-cmd --add-port=8443/tcp --permanent

firewall-cmd --add-port=8880/tcp --permanent

firewall-cmd --reload

firewall-cmd --list-all

Jenkins

docker run -d --name jenkins \
  --restart=always \
  -p 8180:8080 -p 50000:50000 \
  -e PGID=1001 -e PUID=1001 \
  -v /media/docker/jenkins:/var/jenkins_home \
  jenkins/jenkins:lts

firewall-cmd --add-port=8180/tcp --permanent

firewall-cmd --reload

firewall-cmd --list-all

Plex

With Plex, I'm installing it with a Plex Pass. You'll notice the PLEX_CLAIM variable in the docker command line; you need to go to https://plex.tv/claim to get this token.

docker run \
  -d \
  --name plex \
  --restart=always \
  --network=host \
  -e TZ=Europe/London \
  -e PLEX_CLAIM=<Enter your token here> \
  -v /media/docker/plex/database:/config \
  -v /media/docker/transcode/temp:/transcode \
  -v /media/shared/plex:/data \
  plexinc/pms-docker:latest

firewall-cmd --add-port=32400/tcp --permanent

firewall-cmd --add-port=3005/tcp --permanent

firewall-cmd --add-port=8324/tcp --permanent

firewall-cmd --add-port=32469/tcp --permanent

firewall-cmd --add-port=1900/udp --permanent

firewall-cmd --add-port=32410/udp --permanent

firewall-cmd --add-port=32412/udp --permanent

firewall-cmd --add-port=32413/udp --permanent

firewall-cmd --add-port=32414/udp --permanent

firewall-cmd --reload

firewall-cmd --list-all

Poste.io

If you've never heard of this, neither had I. It's a great self-hosted mail server; find out more at https://poste.io/

docker run \
  -p 25:25 \
  -p 8081:80 \
  -p 110:110 \
  -p 143:143 \
  -p 443:443 \
  -p 587:587 \
  -p 993:993 \
  -p 995:995 \
  -v /etc/localtime:/etc/localtime:ro \
  -v /media/dockerdata:/data \
  --name "mailserver" \
  -h "mail.nov1972.com" \
  -t analogic/poste.io



firewall-cmd --add-port=25/tcp --permanent

firewall-cmd --add-port=8081/tcp --permanent

firewall-cmd --add-port=110/tcp --permanent

firewall-cmd --add-port=143/tcp --permanent

firewall-cmd --add-port=443/tcp --permanent

firewall-cmd --add-port=587/tcp --permanent

firewall-cmd --add-port=993/tcp --permanent

firewall-cmd --add-port=995/tcp --permanent

firewall-cmd --reload

firewall-cmd --list-all

Apple Time Machine Server

Now this one was a huge shock. I spent about an hour trying to get this to work natively (stupid me), but this Docker image works perfectly.

Kudos: https://github.com/odarriba/docker-timemachine

Install the core Docker image:

docker run -h timemachine --name timemachine --restart=unless-stopped -d \
  -v /mnt/exthdd/timemachine:/timemachine \
  -it -p 548:548 -p 636:636 \
  odarriba/timemachine

firewall-cmd --add-port=548/tcp --permanent

firewall-cmd --zone=public --permanent --add-port=548/udp

firewall-cmd --zone=public --permanent --add-port=5353/tcp

firewall-cmd --zone=public --permanent --add-port=5353/udp

firewall-cmd --zone=public --permanent --add-port=49152/tcp

firewall-cmd --zone=public --permanent --add-port=49152/udp

firewall-cmd --zone=public --permanent --add-port=52883/tcp

firewall-cmd --zone=public --permanent --add-port=52883/udp

firewall-cmd --add-port=636/tcp --permanent

firewall-cmd --reload

firewall-cmd --list-all

We now need to add a username and password (enter your own below), choose a name for the share (I've used MacbookPro), and finally give a folder under the /timemachine volume we used above to save the backup to.

docker exec timemachine add-account <your username> <your password> MacbookPro /timemachine/macbookpro

cat >> /etc/avahi/services/afpd.service << EOF
<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_afpovertcp._tcp</type>
    <port>548</port>
  </service>
  <service>
    <type>_device-info._tcp</type>
    <port>0</port>
    <txt-record>model=Xserve</txt-record>
  </service>
</service-group>
EOF

Change /etc/nsswitch.conf so the hosts line reads:

hosts: files mdns4_minimal dns mdns mdns4

Then enable and restart Avahi:

systemctl enable avahi-daemon
systemctl restart avahi-daemon

To start using this:

If you use Avahi, open Finder, go to Shared and connect to your server with your new username and password.

Alternatively (or if you don’t use Avahi) from Finder press CMD-K and type afp://your-server where your-server can be your server’s name or IP address (e.g., afp://my-server or afp://192.168.0.5).

Go to System Preferences, and open Time Machine settings.

Open Add or Remove Backup Disk…

Select your new volume.

Done

I’ve got these and a few more docker images I don’t want to publish here which feed Plex running on this box. I run a simple script to reinstall them should I need to.
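That reinstall script is roughly this shape. A hedged sketch, not my exact script: the container names and options are the ones from this post, and DRY_RUN is an illustrative safety switch (defaulting to printing the commands rather than running them).

```shell
#!/bin/sh
# Sketch of a "rebuild my containers" script. With DRY_RUN=1 (the default
# here) it only prints the docker commands it would run; set DRY_RUN=0 to
# actually execute them.
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

# Remove any old copy of a container, then recreate it from its image.
recreate() {
    name=$1; shift
    run docker rm -f "$name"
    run docker run -d --restart=always --name "$name" "$@"
}

recreate portainer -p 9000:9000 \
    -v /media/docker/portainer/data:/data \
    -v /var/run/docker.sock:/var/run/docker.sock \
    portainer/portainer

recreate jenkins -p 8180:8080 -p 50000:50000 \
    -v /media/docker/jenkins:/var/jenkins_home \
    jenkins/jenkins:lts
```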

Nomachine — https://www.nomachine.com/

On the Lubuntu jumpbox I have SSH access; however, for remote access to the desktop I've installed the NoMachine server on it, and the NoMachine client on my MacBook and Android tablet.

It's quick, adapts to the speed of the connection (via VPN from outside), and is easy to install.

Scripts

I use a set of scripts on each server for some basic monitoring. As an example, the Gigabrix running CentOS has a 4TB USB 3 external disk attached to it, which I use for backup purposes.

With /media/internal/backup/ mapped to the external disk, I can run the following as a cron job:

#!/bin/sh
RESULTS=/tmp/rsyncoutput.txt
MYDATE=$(date)

echo $MYDATE > $RESULTS

rsync -rvap --include '*/' --include '*.mkv' --exclude '*' /media/shared/plex/tv/ /media/internal/backup/hp/tv/ >> $RESULTS
rsync -rvap --include '*/' --include '*.mkv' --exclude '*' /media/shared/plex/watching/ /media/internal/backup/hp/watching/ >> $RESULTS
rsync -rvap --include '*/' --include '*.mkv' --exclude '*' /media/shared/plex/movies/ /media/internal/backup/hp/movies/ >> $RESULTS

cat $RESULTS | mail -s "Backup - Media Report" your@mailserver.com
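Wiring that script into cron is one line in the crontab; here's an example entry (the script path is just an illustration, pick your own):

```shell
# Example crontab entry (edit with `crontab -e`):
# run the media backup every night at 02:30.
30 2 * * * /usr/local/bin/media-backup.sh >/dev/null 2>&1
```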

Notification of Low Disk

#!/bin/sh
#---------- VARIABLES -------------
REPORT=/tmp/backupdisk.txt
MYDATE=$(date)
MYNAME=$(hostname)

#---------- SCRIPT ----------------
## Create new report
echo $MYDATE > $REPORT
echo $MYNAME >> $REPORT

## Find the percentage of disk space used
DISKPERCENT=$(df --output=pcent /dev/sdb1 | tr -dc '0-9')
DFOUTPUT=$(df -h | grep sdb1)

if [ "$DISKPERCENT" -gt "95" ]; then
    echo "-------- WARNING!!!!! -----------" >> $REPORT
    echo "USB DISK FOR BACKUP ALMOST FULL" >> $REPORT
    echo "$DFOUTPUT" >> $REPORT
else
    echo "USB BACKUP DISK SPACE AT $DISKPERCENT%" >> $REPORT
fi

cat $REPORT | mail -s "Backup - Media Report" your@mailserver.com

What I'm currently working on is using Jenkins to run these scripts (and others) instead of cron on each machine, as that way I'll get a log file and an audit trail of when things run.

Finally, I'm looking for a nice method of monitoring the servers and Docker containers using open source software, so I can see the RAM, disk and CPU usage of each container I run, both now and historically.
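In the meantime, a crude point-in-time snapshot of per-container CPU and RAM is available straight from the Docker CLI. A small sketch (no history, and the log path is just an example) that degrades gracefully on a box without Docker:

```shell
#!/bin/sh
# One-shot per-container CPU/RAM snapshot using the Docker CLI.
# Prints one line per running container: name, CPU %, memory usage.
snapshot() {
    docker stats --no-stream \
        --format '{{.Name}} {{.CPUPerc}} {{.MemUsage}}' 2>/dev/null \
        || echo "docker not available"
}

# Append to a log file for a very rough history, e.g. from cron:
# snapshot >> /var/log/container-stats.log
snapshot
```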