To get Argo Tunnel working in Kubernetes, we first need to install Helm on the computer we run kubectl from. I use a Debian-based system for this, so these commands are appropriate for a recent Debian that has snap.
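On such a system the install is a couple of commands. A sketch; Helm’s snap uses classic confinement, so the `--classic` flag is required:

```shell
# Install Helm via snap; --classic is needed because Helm reaches
# outside the snap sandbox (e.g. to read your kubeconfig)
sudo snap install helm --classic

# Confirm kubectl can reach the cluster Helm will deploy into
kubectl cluster-info
```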

Part [1 2 3] of a series of more. I don’t know how much more yet, as this is primarily written to document my setup so I can refer to it later when I wonder why/how I did something.

After restoring this from the backup I made of my old UniFi Video controller, I logged in, selected the camera, and under Manage changed its reporting IP to the new container IP.

I’m using the /mnt/data directory, which is on the second SATA SSD, for these containers to store their data.

On the old UniFi controller, I went to Settings -> Site and clicked the Export Site button. You can download a backup file here, but the more important step is to migrate the devices to the new controller by providing the new IP for them to inform. Once this step is done, the devices should show up on the new controller shortly.

This can take a few minutes to come up, as it has to play with ARP responses and such. Once the new UniFi controller was up, I restored the backup and logged in. None of the devices will show up... yet!

First, log in to the existing controller and download a full backup. Then set up the new controller; my unifi-controller.yaml looks like this:
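(My actual manifest isn’t reproduced here; the following is a minimal sketch of what such a deployment can look like, assuming the linuxserver/unifi-controller image and the /mnt/data host path mentioned above. Names, ports, and the structure are illustrative, not my exact file.)

```yaml
# Illustrative sketch only: a single-replica UniFi controller with its
# data on the host's /mnt/data SSD and a MetalLB-assigned LAN IP.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: unifi-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: unifi-controller
  template:
    metadata:
      labels:
        app: unifi-controller
    spec:
      containers:
        - name: unifi-controller
          image: linuxserver/unifi-controller
          ports:
            - containerPort: 8443   # controller UI
            - containerPort: 8080   # device inform port
          volumeMounts:
            - name: data
              mountPath: /config
      volumes:
        - name: data
          hostPath:
            path: /mnt/data/unifi
---
apiVersion: v1
kind: Service
metadata:
  name: unifi-controller
spec:
  type: LoadBalancer          # MetalLB hands out the LAN IP
  selector:
    app: unifi-controller
  ports:
    - name: https-ui
      port: 8443
    - name: inform
      port: 8080
```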

I’m changing the IP of where the existing UniFi controller runs, which comes with an interesting drawback: all the existing UniFi devices need to start reporting to the new controller.

To allocate LAN IP addresses for containers in Kubernetes, I use MetalLB. I installed it using kubectl (Rancher makes it easy to download the kubeconfig file).

When running containers that I want available on my LAN, it’s handy to expose them under their own LAN IP. To do this I set my DHCP server to stop allocating addresses past .189, reserving the remaining IP addresses for container use.
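With DHCP capped at .189, the remainder of the subnet can be handed to MetalLB. A sketch of the layer2 address-pool ConfigMap, assuming a 192.168.1.0/24 LAN (the subnet here is illustrative; adjust to your own):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2
        # Everything past the DHCP server's .189 cutoff is for containers
        addresses:
          - 192.168.1.190-192.168.1.254
```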


That’s it for a first day of configuring things. Next up I’ll need to set up MetalLB so that the Kubernetes containers I start with Rancher get LAN IPs rather than shuttling everything through the default nginx ingress.

Mapping ports 80/443 to different local ports avoids interference from the ingress proxy, which will be running on this same node.

With RancherOS running happily, it’s time to install Rancher on the VM. This is relatively easy; from the RancherOS VM shell, just run:
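The upstream single-node install is a docker run of the rancher/rancher image; with the ingress-avoiding port remapping described above, it looks something like this (the 8080/8443 host ports are my choice, not required values):

```shell
# Single-node Rancher in Docker; 8080/8443 on the host avoid clashing
# with the nginx ingress that will bind 80/443 on this node
sudo docker run -d --restart=unless-stopped \
  -p 8080:80 -p 8443:443 \
  rancher/rancher
```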

I followed these directions to install RancherOS under Proxmox VE. Reproduced here with a fix to the cloud-config.yml, as the example didn’t validate.

Finish setup in the AWS Console for the Storage Gateway; your computer will need to be able to talk directly to the VM running the appliance. You will be asked to set a cache drive; select the additional 150 GB drive.

Look at the console in Proxmox to determine the IP, and change it as desired for a static IP.

Determine the location of the LVM disk used by the new VM (something like /dev/pve/vm-100-disk-0 ).
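The conversion itself comes down to two commands. A sketch; the image filename and VM ID here are illustrative (the actual VMDK name comes from the AWS download):

```shell
# Convert the VMWare disk image to a raw image
qemu-img convert -f vmdk -O raw storage-gateway.vmdk storage-gateway.raw

# Write the raw image onto the VM's LVM volume, skipping empty blocks
dd if=storage-gateway.raw of=/dev/pve/vm-100-disk-0 bs=1M conv=sparse
```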

Provision a Proxmox VM with the given disk size, using an IDE disk emulation target. I gave my VM 16 GB of memory, as I wasn’t sure how much it would want.

The first thing I wanted to try was utilizing hybrid storage in AWS with their Storage Gateway appliance. Unfortunately, AWS only provides a VMWare image. I found a few articles online indicating this was rather easy to convert to a raw disk image for use in Proxmox, and got it working rather quickly.

I have set all the VM hard drives under the default lvm-thin pool which is on the NVMe SSD for performance. The SATA SSD is to be used for persistent data volumes for containers.

I grabbed the ISO for installing Proxmox 6.0, which is based on Debian Buster. Using the ISO directly from my computer was rather easy with the Supermicro IPMI Java interface. While the iKVM HTML5 interface is more convenient, the Java-based console makes it a breeze to attach local .iso files to the server as a CD/DVD drive.

I’ve been reading a lot of Serve The Home, and their glowing review of Proxmox VE convinced me to give it a try. I’ve been enjoying it so far: it’s easy to get started with and makes running KVM a breeze.

If I really go nuts with VMs and containers, I still have 2 DIMM slots free for another 128 GB of memory. I’ve found that for personal use, containers usually run into RAM pressure much earlier than CPU pressure.

In the event an SSD fails, I’m only a 10-minute drive from the Best Buy where I got both SSDs. In my experience this is a more likely failure scenario than the PSU, CPU, memory, or motherboard failing (I already ran a memtest suite against the memory before installing Proxmox, whose live ISO includes memtest).

This is a bit more than the last build, but it is 100% SSD storage without any redundancy. I’m relying on cloud backups and accepting that I will have some downtime if a part fails. I consider this an acceptable trade-off to keep costs lower, with the hope that once an SSD has proven itself for a few months it should outlast a spinning-platter drive, as I don’t anticipate heavy read/write loads that would wear out the drives.

It took a while before Epyc embedded processors and vendors’ 1U systems started to hit the market, but the 8-core/16-thread Epyc 3251 in a nice half-length 1U is now widely available. Serve The Home did a great review of this system, which helped convince me to purchase one.

Given my Internet connectivity and increased focus in my spare time on containerized applications using Kubernetes, I figured it might be a good time to ‘downsize’ in power consumption while increasing my virtualization capabilities. The new server configuration I ended up with is a fraction of the size (half-length 1U vs full-length 2U) and power (40 W idle vs 190 W idle), while having 4x the cores and over 8x the memory capacity. Seven years has definitely helped on what you get for the same price!

A few years ago I built a home NAS and virtualization server. While I moved from SmartOS to FreeNAS after a year or two, the hard drives are aging and the build isn’t very efficient compared to what’s available now. Since my prior post, the hard drives were replaced with six 4 TB drives in a RAIDZ2. This gave me vastly more space than I had any need for, and as I now have a 1 Gbps fiber line I don’t feel the need to store 16 TB locally.


After building my new server capable of running SmartOS, it was time to give it a spin!

If you’ve only built desktop machines, it’s hard to express how awesome IPMI KVM is. No longer do you need to grab another keyboard / video monitor / mouse (the KVM); you just plug the IPMI Ethernet port on the motherboard into your switch and hit the web server it’s running. It then lets you remotely access the machine as if you had it hooked up directly. You can get into the BIOS, boot from ISOs on your local machine, hard reset, power down, power up, etc. It’s very slick and means I can stick the computer in the rack without needing to go near it to do everything that used to require a portable set of additional physical hardware.

Note: This post assumes some basic knowledge of OS virtualization, in this case QEMU, KVM (which was ported by Joyent to run on SmartOS), and Zones. I generally refer to them as VMs and will differentiate when I add a Zone vs. a KVM instance.

First Go at SmartOS

Installation is ridiculously easy: there is none. You download SmartOS, put it on a USB stick or CD-ROM, and boot the computer from it. I was feeling especially lazy and used the motherboard’s IPMI KVM interface to remotely mount the ISO image directly from my Mac. Once SmartOS booted, it asked me to set up the main ZFS pool, and it was done. SmartOS runs a lot like a VMWare ESXi hypervisor, with the assumption that the machine will only be booting VMs. So the entire ZFS pool is just for your VMs, which I appreciate greatly.

After playing with it a little bit, it almost felt... too easy. I had allocated at least a week or two of my spare time to fiddle around with the OS before I wanted it to just work, and having it running so quickly was almost disappointing.

The only bit that was slightly annoying was that retaining settings in the GZ (Global Zone) is kind of a pain. You have to drop in a service file (which is XML, joy!) on a path which SmartOS will then load and run on startup. This was mildly annoying, and some folks on the IRC channel suggested I give OpenIndiana a spin, which is aimed more at a home server / desktop scenario. There was also a suggestion that I give Sophos UTM a spin instead of pfSense for the firewall / router VM.
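For reference, persisting a GZ customization means dropping an SMF manifest into /opt/custom/smf/, which SmartOS imports on boot. A minimal sketch; the service name and script path here are made up for illustration:

```xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<!-- Hypothetical example: run a custom script once at boot in the Global Zone -->
<service_bundle type="manifest" name="mysettings">
  <service name="site/mysettings" type="service" version="1">
    <create_default_instance enabled="true"/>
    <single_instance/>
    <exec_method type="method" name="start"
                 exec="/opt/custom/bin/apply-settings.sh" timeout_seconds="60"/>
    <exec_method type="method" name="stop" exec=":true" timeout_seconds="60"/>
    <property_group name="startd" type="framework">
      <!-- transient: the start method runs once rather than staying resident -->
      <propval name="duration" type="astring" value="transient"/>
    </property_group>
  </service>
</service_bundle>
```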

OpenIndiana

Since OpenIndiana has SmartOS’s QEMU/KVM functionality (needed to run other OSes like Linux/BSD/Windows under an illumos-based distro), it seemed worth giving a go. Unlike SmartOS, it actually installs itself on the system, so I figured it’d take a little more space. No big deal. Until I installed it. Then I saw that the ZFS boot pool can’t have disks in it larger than 2TB (well, it can, but it only lets you use 2TB of the space). Doh. After chatting with some IRC folks again, it’s common to use two small disks in a mirror as a ZFS boot pool and then have the much larger storage pool. Luckily I had a 250GB drive around so I could give this a spin, though I was bummed to have to use one of my drive bays just for a boot disk.

Installation went smoothly, but upon trying to fire up a KVM instance I was struck by how clunky it is in comparison to SmartOS. Again, this difference comes down to SmartOS optimizing the heck out of its major use-case: virtualizing in the data center. In SmartOS there’s a handy imgadm tool to manage available images, and vmadm to manage VMs. These don’t seem to exist for OpenIndiana (maybe as an add-on package?), so you have to use the less friendly QEMU/KVM tools directly.

Then the KVM failed to start. Apparently the QEMU/KVM support in OpenIndiana (at least for my Sandy Bridge based motherboard) has been broken in the latest 3 OpenIndiana releases for the past 5 months. There’s a work-around to install a specific set of packages, but to claim QEMU/KVM support with such a glaring bug on a fairly prominent motherboard chipset isn’t a good first impression. My first try at installing the specific packages failed, as my server kernel-panicked halfway through the QEMU/KVM package installation. Upon restarting, the package index was apparently corrupted. The only way to fix it is to reinstall OpenIndiana... or roll back the boot environment (a feature utilizing ZFS, thus including snapshots).

Boot environments and the beadm tool to manage them are a bit beyond the scope of this entry, but the short version is that it let me roll back the boot file-system, including the package index, to a non-mangled state (very cool!). With QEMU/KVM finally installed and working, I installed and configured Sophos UTM in a KVM and was off and running. Except it seemed to run abysmally slow... oh well, I was about to go on vacation anyways. I set the KVM to load at boot time and restarted. Upon loading the KVM at boot, the machine halted. This issue is apparently related to the broken QEMU/KVM packages. It was about time for my vacation, and I had now played with an OS with some rather rough edges in my spare time for a week. So I powered it off, took out the boot drive, and went on my vacation.
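The rollback itself amounts to something like the following (the boot environment name here is illustrative):

```shell
# Show available boot environments and which one is active
beadm list

# Mark the earlier, non-mangled BE as the one to boot next
beadm activate openindiana-1

# Reboot into it
init 6
```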

Back to SmartOS

When I got back from my vacation, I was no longer in the mood to deal with failures in the OS distribution. I rather like the OpenIndiana community, but now I just wanted my server to work. SmartOS fit the bill, and didn’t require boot drives, which was greatly appreciated. It also has a working QEMU/KVM, since it’s rather important to Joyent. :)

In just a day, I went from a blank slate to a smoothly running SmartOS machine. As before, installation was dead simple, and my main ZFS pool zones (named as such by SmartOS) was ready for VMs. Before I added a VM I figured I should have an easy way to access the ZFS file-system. I turned on NFS for the file-systems I wanted to access and gave my computer’s IP write privilege and the rest of the LAN read-only. This is insanely easy in ZFS:

zfs set sharenfs=rw=MYIP,ro=192.168.2.0 zones/media/Audio

To say the least, I love ZFS. Every other file-system / volume manager feels like a relic of the past in comparison. Mounting NFS file-systems on OS X used to suck, but now it’s a breeze. They work fast and reliably (thus far at least).

Setting Up the Router KVM

First, I needed my router / firewall KVM. I have a DSL connection, so I figured I’d wire that into one NIC and have the other NIC on the motherboard go to the LAN. SmartOS virtualizes these so that each VM gets its own Virtual NIC (VNIC); this is part of the Solaris feature set called Crossbow. Setting up the new KVM instance for Sophos UTM was simple: I gave it a VNIC on the physical interface connected to the DSL modem and another on the physical interface connected to my switch. Aside from the fact that the VM was working without any of the issues I had in OpenIndiana, I noticed it was much faster as well.

Unfortunately, for some reason it wasn’t actually routing my traffic. It took me about an hour (and clearing my head while walking the dog) to see that I was missing several important VNIC config options, such as dhcp_server, allow_ip_spoofing, allow_dhcp_spoofing, and allow_restricted_traffic. These settings are needed for a VM that intends to act as a router, so that it can move packets and NAT them as appropriate across the VNICs. Once I set those, everything ran smoothly.

So far, this had only taken me about 3 hours and was rather simple, so I decided to keep going and get a nice network backup for the two OS X machines in the house.
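As a sketch, the LAN-side NIC in the KVM’s vmadm manifest ends up with those options set. The nic_tag, IP, and netmask below are illustrative placeholders, not my actual values:

```json
{
  "nics": [
    {
      "nic_tag": "admin",
      "ip": "192.168.2.1",
      "netmask": "255.255.255.0",
      "model": "virtio",
      "dhcp_server": true,
      "allow_ip_spoofing": true,
      "allow_dhcp_spoofing": true,
      "allow_restricted_traffic": true
    }
  ]
}
```

dhcp_server lets the VM answer DHCP requests on the VNIC, while the allow_* flags permit it to emit packets with source addresses other than its own, which a NATing router must do.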

Setting Up Network Backups

After some research I found out the latest version of netatalk would work quite nicely for network Time Machine backups. I created a zones/tmbackups ZFS file-system, and two nested file-systems under that for my wife’s MacBook and my own Mac Mini. Then I told ZFS that zones/tmbackups should have compression enabled (Time Machine doesn’t actually compress its backups; transparent ZFS file compression FTW!) and I set quotas on each nested file-system to prevent Time Machine from expanding forever.

Next I created a Zone with a SmartOS Standard dataset. Technically, the KVM instances run in a Zone for additional resource constraints and security, while I wanted to use just a plain Zone for the network backups. This was mainly because I wanted to make the zones/tmbackups file-system directly available to it without having to NFS mount it into a KVM.

If you’ve ever compiled anything from source in Solaris, you’re probably thinking about how many days I spent getting netatalk running in a Zone. Thankfully Joyent has done an awesome job bringing a lot of the common GNU compiler toolchain to SmartOS. It only took me about an hour to get netatalk running and recognized by both Macs as a valid network Time Machine backup volume. Unfortunately I can’t remember exactly how I set it up, but here are the pages that gave me the guidance I needed:

http://www.trollop.org/2011/07/23/os-x-10-7-lion-time-machine-netatalk-2-2/

http://wiki.openindiana.org/oi/Netatalk

http://marcoschuh.de/wp/?p=839

I’ve heard that netatalk 3.x is faster, and will likely upgrade one of these days.
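The dataset layout described above can be sketched as follows (the dataset names match the post; the child dataset names and quota sizes are illustrative):

```shell
# Parent dataset with transparent compression for all Time Machine data
zfs create zones/tmbackups
zfs set compression=on zones/tmbackups

# One child per machine, each capped so Time Machine can't grow forever
zfs create zones/tmbackups/macbook
zfs set quota=500G zones/tmbackups/macbook

zfs create zones/tmbackups/macmini
zfs set quota=500G zones/tmbackups/macmini
```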

Setting Up the Media Server KVM

One of the physical machines I wanted to get rid of was the home theater PC I had built a few years back. It was rarely used, not very energy efficient, and XBMC was nowhere near spouse-friendly enough for my wife. We have an AppleTV and Roku, and I figured I’d give Plex a try on the Roku since the UI was so simple.

I set up a KVM instance and installed Ubuntu 12.04 Server on it. Then I added the Plex repos and installed their Media Server packages. I fired it up, pointed Plex at my video folders, and it was ready to go. The Roku interface is slick and makes it a breeze to navigate. Being based on XBMC means that it can play all the same media, and it transcodes as necessary for the other network devices that want to play it.

At first Plex ran into CPU problems in the KVM... which I quickly realized was because I hadn’t changed the default resource constraints. The poor thing only had a single virtual CPU; after giving it a few more it easily had enough CPU allocated to do the video transcoding. While KVM runs CPU-bound tasks at bare-metal speed, disk I/O is virtualized. To reduce this problem I have Plex writing its transcoded files to the ZFS file-system directly via an NFS mount. The media folders are also NFS mounted into the Media Server KVM. I threw some other useful apps onto this KVM that I was running on the home theater PC and left it alone.
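Bumping a KVM’s CPU allocation is a quick vmadm operation. A sketch; the UUID placeholder and vCPU count are illustrative, and the VM needs a restart to pick up the change:

```shell
# List VMs to find the Plex KVM's UUID
vmadm list

# Give the KVM four virtual CPUs instead of the default
vmadm update <uuid> vcpus=4

# Restart the VM so the new vCPU count takes effect
vmadm reboot <uuid>
```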