Oh man. A second Illumos blog post in a week. Illumos is really a wonderful kernel and an excellent server operating system. Joyent’s SmartOS and the venerable OpenIndiana are top-notch kits for managing distributed systems or really anything that you want to containerize/virtualize/otherwise run for a long time. It’s a fun environment to hack in during my free time.

I won’t get into the philosophical reasons for why folks should be more interested in Illumos because if you want that you can read or listen to any of a number of blog posts and podcasts by Bryan Cantrill (who is much better spoken on the subject than I).

Instead, let’s look at another problem I’ve run into with my OpenIndiana workstation recently. This post will not be deep on a technical level, but could be useful for some simple troubleshooting of ZFS/zones interaction.

I’ve spent some time this week hacking on compiling a more recent Rust package on Illumos, which has resulted in varying degrees of success (I can’t seem to compile Rust 1.18.0 even after getting a working cargo bootstrap and Rust 1.17.0). A majority of that work was actually done in the global zone. For anyone who doesn’t know: in Illumos or Solaris, the global zone is the default operating system. It effectively has control over all the system’s processes, and it always exists even if you never create any other zones. There’s no similar concept in Linux distros (that I’m aware of), but you could imagine it as if every Linux OS ran all of its system processes out of a chroot or an LXC container by default, with the expectation that at some point you’re likely going to use other containers too.

Now onto the problem set: Realistically, we have this lovely container tech baked into Illumos that can give us nice, clean (and isolated) build environments rather than muddying our functioning OS/packages/etc. Yes, this is something containers are used for like all the damn time in Linux, and in fact the Habitat studio will by default create a build environment for you in a flavor native to your OS or within a Docker container. If it’s unclear why you might want a clean-slate build environment for compiling software, then you’ve got some more googling to do after you read this blog post.

When I realized how dumb it was to be doing what I’ve been doing, I decided I ought to get real and clean up after myself. As I mentioned in a previous post, it had been some months since I’d opened my OpenIndiana VM, so I started by checking a couple of specific things: the state of my network devices, whether or not I have any ZFS datasets associated with any zones, and whether or not I have any zones carved out already.

λ › dladm show-link
LINK        CLASS     MTU    STATE    BRIDGE   OVER
e1000g0     phys      1500   up       --       --
λ › zfs list | grep zones
λ › zoneadm list -cv
  ID NAME             STATUS     PATH                         BRAND    IP
   0 global           running    /                            ipkg     shared

Perfect (or so I think). Let’s go ahead and carve out a new zone.

λ › zonecfg -z dummyzone
dummyzone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:dummyzone> create
zonecfg:dummyzone> set zonepath=/zones/dummyzone
zonecfg:dummyzone> set autoboot=false
zonecfg:dummyzone> set bootargs="-m verbose"
zonecfg:dummyzone> set ip-type=shared
zonecfg:dummyzone> add net
zonecfg:dummyzone:net> set address=192.168.1.181/24
zonecfg:dummyzone:net> set physical=e1000g0
zonecfg:dummyzone:net> set defrouter=192.168.1.1
zonecfg:dummyzone:net> end
zonecfg:dummyzone> verify
zonecfg:dummyzone> commit
zonecfg:dummyzone> exit
λ ›

So let’s take a quick look at the configuration we just made, which at its heart includes a bunch of assumptions (never a good idea). We named the zone dummyzone and decided that /zones/dummyzone is where we want to install it. We’ve also decided we don’t want this container to boot itself when the global zone boots (because it’s for development and I can start it when I want it), and that the boot process should happen in a verbose way.

The section after add net is important as well. We’ve configured the zone with a static IP address, and it’s going to share our host’s primary interface and IP stack. We could have gone down the path of creating a VNIC, etc., but this should suffice for the purposes of this zone. At this point we can check to make sure we’ve added the container to our list of available zones.
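As an aside, zonecfg can also read its subcommands from a file with the -f flag, which is handy for making zone configuration repeatable. Here’s a minimal sketch of the same configuration as a command file (the file path /tmp/dummyzone.cfg is just an example):

```shell
# Write the zonecfg subcommands we typed interactively into a command file.
cat > /tmp/dummyzone.cfg <<'EOF'
create
set zonepath=/zones/dummyzone
set autoboot=false
set bootargs="-m verbose"
set ip-type=shared
add net
set address=192.168.1.181/24
set physical=e1000g0
set defrouter=192.168.1.1
end
verify
commit
EOF
# Then, on the Illumos host, feed the file to zonecfg non-interactively:
#   zonecfg -z dummyzone -f /tmp/dummyzone.cfg
```

The same approach makes it easy to stamp out several near-identical zones by templating the file.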

λ › zoneadm list -cv
  ID NAME             STATUS     PATH                         BRAND    IP
   - dummyzone        configured /zones/dummyzone             ipkg     shared

Excellent. Now let’s install the zone.

λ › zoneadm -z dummyzone install
ERROR: the zonepath must be a ZFS dataset.
The parent directory of the zonepath must be a ZFS dataset so that the
zonepath ZFS dataset can be created properly.

Whoops! What’s going on here? If we did a vanilla install on this workstation with no special configuration, we’ve likely ended up with a ZFS root filesystem. We can determine that pretty easily, so let’s find out.

λ › mount -p
rpool/ROOT/openindiana-2 - / zfs - no
/devices - /devices devfs - no
/dev - /dev dev - no
ctfs - /system/contract ctfs - no
proc - /proc proc - no
mnttab - /etc/mnttab mntfs - no
swap - /etc/svc/volatile tmpfs - no xattr
objfs - /system/object objfs - no
bootfs - /system/boot bootfs - no
sharefs - /etc/dfs/sharetab sharefs - no
/usr/lib/libc/libc_hwcap1.so.1 - /lib/libc.so.1 lofs - no
fd - /dev/fd fd - no rw
swap - /tmp tmpfs - no xattr
swap - /var/run tmpfs - no xattr
rpool/export - /export zfs - no rw,devices,setuid,nonbmand,exec,xattr,atime
rpool/export/home - /export/home zfs - no rw,devices,setuid,nonbmand,exec,xattr,atime
rpool/export/home/eeyun - /export/home/eeyun zfs - no rw,devices,setuid,nonbmand,exec,xattr,atime
rpool - /rpool zfs - no rw,devices,setuid,nonbmand,exec,xattr,atime
/dev/dsk/c3t1d0s2 - /media/VBOXADDITIONS_5.1.28_117968 hsfs - no ro,nosuid,noglobal,maplcase,rr,traildot
/export/home/eeyun - /home/eeyun lofs - no

The top line there shows me what I was wondering: / is in fact ZFS. So let’s see whether or not /zones has a ZFS dataset associated with it.
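If you’d rather not eyeball the whole table, the same check can be scripted: in mount -p output the third field is the mount point and the fourth is the filesystem type. A small sketch, using the line we captured above as sample input (on a live system you’d pipe mount -p in directly):

```shell
# Sample line from 'mount -p'; fields: special, fsckdev, mountpoint, fstype, ...
mount_line='rpool/ROOT/openindiana-2 - / zfs - no'

# Print the filesystem type of the root mount point.
echo "$mount_line" | awk '$3 == "/" { print $4 }'
# prints: zfs
```

On the workstation itself this would be `mount -p | awk '$3 == "/" { print $4 }'`.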

λ › zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     37.2G  11.0G    32K  /rpool
rpool/ROOT                22.7G  11.0G    23K  legacy
rpool/ROOT/openindiana     161M  11.0G  13.0G  /
rpool/ROOT/openindiana-1  20.5M  11.0G  14.5G  /
rpool/ROOT/openindiana-2  22.5G  11.0G  16.1G  /
rpool/dump                2.00G  11.0G  2.00G  -
rpool/export              10.3G  11.0G    23K  /export
rpool/export/home         10.3G  11.0G    23K  /export/home
rpool/export/home/eeyun   10.3G  11.0G  6.70G  /export/home/eeyun
rpool/swap                2.13G  12.7G   479M  -

And of course it definitely doesn’t, which explains our previous error. So we’re going to need to create a ZFS dataset for grouping our zone roots. On the system I’m running here, it does look like I carved out a separate dataset under /export, so let’s use that for aggregating our zone roots. First, though, we’ll need to do a couple of things to clean up our partially “installed” zone.

λ › zoneadm -z dummyzone uninstall
λ › zfs create rpool/export/zones
λ › zonecfg -z dummyzone
zonecfg:dummyzone> set zonepath=/export/zones/dummyzone
zonecfg:dummyzone> verify
zonecfg:dummyzone> commit
zonecfg:dummyzone> exit

There. Now we’ve “uninstalled” the zone (effectively removing any leftovers in the zone’s directory), created our new ZFS dataset, and re-configured our zone to point to the appropriate directory location. Now let’s attempt another installation.

λ › zoneadm -z dummyzone install
A ZFS file system has been created for this zone.
Sanity Check: Looking for 'entire' incorporation.
       Image: Preparing at /export/zones/dummyzone/root.
   Publisher: Using openindiana.org (http://pkg.openindiana.org/hipster/).
       Cache: Using /var/pkg/publisher.
  Installing: Packages (output follows)
           Packages to install: 153
           Mediators to change:   5
            Services to change:  12

DOWNLOAD                          PKGS         FILES    XFER (MB)   SPEED
Completed                      153/153   37636/37636  277.0/277.0  463k/s

PHASE                                    ITEMS
Installing new actions             56291/56291
Updating package state database           Done
Updating package cache                     0/0
Updating image state                      Done
Creating fast lookup database             Done

Note: Man pages can be obtained by installing pkg:/system/manual
 Postinstall: Copying SMF seed repository ... done.
        Done: Installation completed in 221.916 seconds.
  Next Steps: Boot the zone, then log into the zone console (zlogin -C)
              to complete the configuration process.

Fantastically dope! We have our first functioning zone! Now, we could have gotten more granular with our ZFS datasets and created a separate one for each zone we might want to run. If these zones were running on an actual server, with each zone serving network traffic, that might be a good idea. For the purposes of workstation development, we’re OK with what we’ve got here. So now what?
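For the server scenario, a per-zone dataset scheme might look something like the sketch below. The zone names and the 8g quota are made up for illustration; the quota just keeps one zone from filling the whole pool. The loop only echoes the commands so you can review them first, which is a habit I’d recommend for anything that touches storage:

```shell
#!/bin/sh
# Hypothetical layout: one dataset per zone under rpool/export/zones,
# each with its own quota. Drop the 'echo' to actually run the commands
# on an Illumos host.
ZONES_DS="rpool/export/zones"
for z in buildzone testzone; do
    echo zfs create -o quota=8g "${ZONES_DS}/${z}"
done
```

With that layout you could also snapshot or destroy each zone’s storage independently.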

Well, first, let’s start the container and then verify that it’s running.

λ › zoneadm -z dummyzone boot
λ › zoneadm list -cv
  ID NAME             STATUS     PATH                         BRAND    IP
   0 global           running    /                            ipkg     shared
   1 dummyzone        installed  /export/zones/dummyzone      ipkg     shared
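If you want to check zone state from a script rather than by eyeballing the table, zoneadm also has machine-parsable output: zoneadm list -p emits colon-delimited fields (id, name, state, path, uuid, brand, ip-type). A small sketch, using a captured sample line so it runs anywhere (on a live system you’d pipe `zoneadm list -cp` in directly):

```shell
# Sample colon-delimited line as 'zoneadm list -p' would print it:
# id:name:state:path:uuid:brand:ip-type
sample='1:dummyzone:running:/export/zones/dummyzone::ipkg:shared'

# Print the names of zones in the 'running' state.
echo "$sample" | awk -F: '$3 == "running" { print $2 }'
# prints: dummyzone
```

That makes it easy to wire zone checks into monitoring or shell scripts.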

What’s next is sort of up to how you want to use the space. For me, having an isolated, network-connected environment with its own ZFS dataset is enough. I can log into the container and compile code without muddying my global zone! But first, let’s log in and set a root password. Doing so will allow us to use the zone’s console connection.

λ › zlogin dummyzone
[Connected to zone 'dummyzone' pts/2]
The Illumos Project     SunOS 5.11      illumos-4dfc19d703      October 2017
root@dummyzone:~# passwd
passwd: Changing password for root
New Password:
Re-enter new Password:
passwd: password successfully changed for root

With that, we’re ready to treat the zone like a development environment. We can start using IPS or pull down pkgsrc and get the tools we need to start hacking!

It is possible that I will continue to write about Illumos and the BSDs in the future. I love Unix operating systems, and because projects like OpenIndiana and SmartOS have such wonderful documentation, there’s little need for off-the-cuff bloggery on the subject. That being said, I guess we’ve got to be the change we want to see in the world, eh?