Software is Complicated

It’s Sunday morning. After another horrific sunrise, I realize I can’t just lie in bed anymore. It’s too bright for me to enjoy myself. So, casually rolling off the bed and onto the floor, I spring into action. Time to set some goals.

Goals

I've heard of this thing called Xen Hypervisor, which can be used to run more than one Operating System on the same PC, but without slowing everything down to a crawl. I've been curious about it for some time, and I know Amazon Web Services uses it, so maybe I'll install that. It's not like I'm doing anything better. It does need to be installed from within a compatible operating system, so I'll install it on my PC.

1. Install Xen Hypervisor on my PC.

The only problem is, my PC needs to be able to download the software, and it can't do that until I set up an internet connection. Darn it, I can’t finish #1 without configuring an internet connection on my PC.

2. Configure an internet connection on my PC.

I lied, there's another problem! Before I can configure an internet connection on my PC, I need to set up a firewall, so that I'm not open for the whole damn internet to try and hack me. My life is complicated enough just dealing with the sun in the morning, I don't need computer viruses keeping me up at night too. Darn it double time, I can’t do step 2 without configuring the firewall on my PC.

3. Configure the firewall on my PC.

Shoot, I still haven't made a firewall on my laptop yet, and since I can apply an existing firewall to both my laptop and my PC once I have it set up, I guess I'll start with the laptop, since it's a little easier to develop on. Heckin darn it, I haven’t configured the firewall on my laptop yet, so I don’t have a firewall configuration to push to my PC anyway.

4. Configure the firewall on my laptop.

Time to get to work

Guess I’ll work my way up the list. Fortunately, I took some notes in January about setting up an ideal baseline firewall configuration for my workstations, that can then be tweaked to match changing requirements. It allows all traffic from localhost, allows established connections, allows all outbound traffic, and allows ping requests. In these notes, the first thing I find is a big heading, “View Rules”, so I start with that. Unexpectedly, I find a whole bunch of Docker rules littered throughout my IPTables rules. Sad!

This isn’t bad, in and of itself, but it throws a wrench in the monkey works. If I want to be able to use Docker normally, I have to make sure I don’t blast its rules to kingdom come when I apply my own IPTables firewall using Ansible (which I’ve been applying all of my settings through so I don’t have to go through this brain damage every time I decide to rebuild one of my computers). A little research on the world wide web shows a rather contentious history around the Docker rules. It appears that in an older version of Docker, binding to a container port would, by default, open up that container to the entire world, not just the network available through an existing IPTables configuration! That’s not desirable at all. My Docker containers should only be exposed if I choose to expose them!

Trying not to shake with too much rage, I decide to research how to prevent Docker from doing anything with IPTables. I can come back and punch deliberate holes in my firewall at a later time of my choosing, but security is more important to me than containerization for now. Time to add another task to complete:

5. Disable Docker’s control over IPTables.

The Docker documentation shows that if I add a handy “--iptables=false” flag to my Docker daemon, it won’t do anything with the firewall. Neato! But where can I find the daemon? How do I add the setting? I decide to do some digging.
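For reference, the baseline described above (allow all traffic from localhost, allow established connections, allow all outbound traffic, and allow ping) could be written as an iptables-restore rules file something like this. This is a sketch reconstructed from the description, not the actual notes:

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Allow all traffic from localhost
-A INPUT -i lo -j ACCEPT
# Allow replies to connections we initiated
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow ping requests
-A INPUT -p icmp --icmp-type echo-request -j ACCEPT
COMMIT
```

The ACCEPT policy on OUTPUT covers “all outbound traffic,” and the default DROP on INPUT is what makes the other three rules necessary.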

I know what I'm doing!

It looks like in systems using systemd, the daemon configuration lives in “/etc/systemd/system/docker.service.d/”, but that path doesn’t exist for me. I’m running a system using upstart, so my Docker configuration seems to live in “/etc/default/docker”. Neat! Looks like I get to cross item 5 off my list. After writing some Ansible code to add the “--iptables=false” flag to my daemon options, I go ahead and rerun my configuration. I review the file, just to be sure the changes I’d hoped for are correctly implemented, and sure enough, they are! Now, since IPTables rules don’t persist after a reboot, all I have to do is restart my computer and I should be good to go!
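On an upstart system, the change amounts to appending the flag to the DOCKER_OPTS variable in “/etc/default/docker”, which the upstart job sources at daemon startup. A sketch of the relevant line after the Ansible run:

```
# /etc/default/docker
# Options passed to the Docker daemon by the upstart job
DOCKER_OPTS="--iptables=false"
```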

Err... Maybe not

Unfortunately, life is never that easy. After rebooting my computer, I find that all of the IPTables rules for Docker are still there. I must have bungled something up. Spending some time with my dear friend Google, I find a few possibilities:

1. The iptables=false flag has been misunderstood by a number of loud developers, so maybe its behavior was changed.
2. I may have entered the flag wrong, though I find a number of other developers using exactly the same configuration as I am, character for character, so that doesn’t seem likely.
3. Maybe my operating system implemented some breaking changes. It appears that on the Debian version just before the one I’m using, Docker was installed under systemd, though I confirm that my system is definitely using Upstart.

I have no idea what I’m doing.

Crying and tearing my hair out

So, I go totally bonkers. Reading through the documentation in depth, I apply any and every setting that looks like it might be remotely relevant. IP Forwarding? Disabled. IP masquerading? Disabled. IPv6? Disabled. I’m going all-in, baby. My desperation is climbing to daring new heights. Naturally, my efforts are futile.

Tearing my hair out, I decide to try something I normally avoid at all costs. Checking the logs. Reading the daemon script, “/etc/init.d/docker”, I find that the log files should be stored in “/var/log/docker”, or “/etc/default/daemon.log”. Feeling fresh hope, I check those files. They don’t exist. Screaming incoherently and crying seems like my next best option. Naturally, I go back to the documentation. Day of days! It seems that the command “journalctl -u docker.service” should give me the answers I seek. I run that command, and am face to face with the holy grail. My beautiful logs. All 14 lines or so. But there’s nothing remotely useful, other than an “auplink executable not found” warning. My sources online tell me it’s a harmless warning that won’t affect usability, and since the warning takes place when Docker is trying to unmount something, I assume this may be a memory leak at worst. Certainly a concern, but not critical.

Going nuclear

Time to take the nuclear option. I drop my flags directly into the daemon startup script. Forget Ansible. Forget the default configuration location. I’m going to figure this out, no matter the cost. Unfortunately, the result is the same. I’m beginning to lose hope that this is an issue I can overcome on my own. Maybe it’s a bug. Finding one post after another where people did exactly what I’m doing and had my desired result doesn’t make me feel any better. By all accounts, I seem to be doing everything exactly right.

But then I take a step back. Maybe the issue isn’t whether I’ve picked the correct options, but whether the options are being included at all. Working from a hunch, I decide to check whether I can find anyone having trouble getting Docker to accept options at all. Sure enough, some scrubs running Ubuntu started having issues around May 2015 with their flags being accepted. Apparently, versions 17 and up expect the daemon flags to be included in “/etc/docker/daemon.json”, regardless of whether they’re using upstart or systemd. Which version am I using? 17.06.
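In the daemon.json format, the same setting is expressed as a JSON key rather than a command-line flag; note that the leading dashes are dropped:

```
{
  "iptables": false
}
```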

I did it!

That was it, I’m afraid. Once I used Ansible to apply a settings file in the correct location, with my flags formatted in JSON, everything worked exactly the way I wanted. Now, I can get on with my life and start working on my own firewall, since Docker’s rules are no longer in my way. Too bad the day is almost over. Granted, this didn’t take all day – I worked out, got lunch with my girlfriend, did some grocery shopping – but even so, I’m not expecting to make too much more progress tonight.
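The Ansible side of that fix could be as small as a single task. This is a hypothetical sketch, not the actual playbook; the handler name is an assumption:

```
# Hypothetical sketch of the fix: write /etc/docker/daemon.json,
# then restart the daemon so it picks up the new settings.
# The "restart docker" handler name is assumed, not from the post.
- name: Disable Docker's control over IPTables
  copy:
    dest: /etc/docker/daemon.json
    content: '{"iptables": false}'
    owner: root
    group: root
    mode: "0644"
  notify: restart docker
```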