Setting up Kerberized NFS on a client can be a bit challenging, especially if you’re trying to do it across multiple hosts. So, I decided to take on the challenge of creating an easy-to-deploy Docker container, using NetApp’s Trident plugin to make life even easier.

Why do I want Kerberos?

With Kerberos on NFS mounts, you get strong authentication (krb5), and you can add integrity checking (krb5i) or end-to-end packet encryption (krb5p). I covered the benefits of using krb5p in a previous blog. I also covered how to use NFS with the new FlexGroup driver in Docker + NFS + FlexGroup volumes = Magic!

I also cover krb5p in Encrypt your NFS packets end to end with krb5p and ONTAP 9.2!

But this blog covers FlexVols only, since FlexGroup volumes can’t use NFSv4.x – yet.

Why do I want NFSv4.x?

NFSv3 is a great protocol, but it has some disadvantages when it comes to locking and security. For starters, v3 is stateless. NFSv4.x is stateful and manages locks much better, since it’s done on a lease basis and is integrated in the protocol itself, while v3 has ancillary services that manage locks.

Those ancillary services (like NLM, mountd, portmap) are also what makes NFSv3 less secure than NFSv4.x. More services = more ports to open on a firewall (your network guy hates you, btw). Additionally, standard in-flight encryption for NAS protocols, such as Kerberos, doesn’t cover the ancillary services – it only encrypts the NFS packets. NFSv4.x also has additional layers of security via NFSv4.x ACLs, as well as ID domain name mapping to ensure the client and server agree on who is who when accessing NFS mounts.
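To make the port sprawl concrete, here’s a rough sketch of what a firewall would need to allow for NFSv3 versus NFSv4.x. This is illustrative config only; the mountd port shown is an example, and mountd/statd/lockd ports float unless you pin them (e.g., in /etc/sysconfig/nfs on RHEL/CentOS 7):

```
# NFSv3: NFS itself plus the ancillary services
firewall-cmd --permanent --add-port=2049/tcp                        # nfs
firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp      # rpcbind/portmap
firewall-cmd --permanent --add-port=20048/tcp --add-port=20048/udp  # mountd (if pinned here)
# ...plus whatever ports you pinned statd and lockd (NLM) to

# NFSv4.x: a single well-known port
firewall-cmd --permanent --add-port=2049/tcp
```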

The main downside of NFSv4.x is performance. It currently lags behind NFSv3 for a variety of reasons, but mostly because it has to do more in each packet than NFSv3 does, and being a stateful protocol can be costly to performance. When you lump in encryption, it adds more overhead. Things are getting better, however, and soon, there won’t be any excuse for not using NFSv4.x as your standard NFS version.

What you need before you start

In this example, I’m going to configure Kerberos, NFSv4.1 and LDAP on a single container. This allows me to have all the moving parts I’d need to get it working out of the gate. I’m using CentOS 7.x/RHEL 7.x as the Docker host and container base, as well as Microsoft Active Directory 2012R2 for LDAP UNIX identities and Kerberos KDC functionality. You can use Linux-based LDAP and KDCs, but that’s outside the scope of what this blog is about.

Before you get started, you need the following.

Active Directory configured to use LDAP for UNIX identity mapping

A server/VM running the latest CentOS/RHEL version (our Docker host)

A NetApp ONTAP cluster with a SVM running NFS on it

Configuring the ONTAP SVM

Before you can get started with NFS Kerberos on the client, you’ll need to configure NFS Kerberos in ONTAP. Doing this essentially comes down to the following steps:

Create a Kerberos realm

Configure DNS

Allow AES encryption types on the NFS server

Create a Kerberos interface (this creates a machine object in AD that has your SPN for NFS server ticket functionality and adds the keytab to the cluster/SVM)

Create a local UNIX user named “nfs” (the NFS service principal maps to this user when Kerberos mounts are attempted)

Create a generic name mapping rule for all machine accounts (when you join a container to the Kerberos realm, it creates a new machine account with the format of [imagehexname]$@REALM.COM. Having a generic name mapping rule will eliminate headaches trying to manage that)

Create an export policy and rule that allows Kerberos authentication/v4.x for NFS

(optional, but recommended) LDAP server configuration that matches the client (this makes life much easier with NFSv4.x)

Configure the NFSv4 ID domain option and enable NFSv4.0/4.1

These steps are all covered in pretty good detail in TR-4073 (the unabridged version) and TR-4616 (the more streamlined version), so I won’t cover them here. Instead, I’ll show you how my cluster is configured.
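For reference, the steps above map roughly to the ONTAP CLI commands below. This is a hedged sketch, not a copy/paste recipe: exact command paths and flag names vary a bit between ONTAP releases, so treat TR-4073/TR-4616 as the authority. The values shown match my lab config from the outputs that follow:

```
::*> vserver nfs kerberos realm create -vserver DEMO -realm NTAP.LOCAL -kdc-vendor Microsoft -kdc-ip 10.x.x.x
::*> vserver services name-service dns create -vserver DEMO -domains NTAP.LOCAL -name-servers 10.x.x.x
::*> vserver nfs modify -vserver DEMO -permitted-enc-types aes-128,aes-256
::*> vserver nfs kerberos interface enable -vserver DEMO -lif data -spn nfs/demo.ntap.local@NTAP.LOCAL
::*> vserver services name-service unix-user create -vserver DEMO -user nfs -id 500 -primary-gid 500
::*> vserver name-mapping create -vserver DEMO -direction krb-unix -position 1 -pattern (.+)\$@NTAP.LOCAL -replacement root
::*> vserver export-policy create -vserver DEMO -policyname kerberos
::*> vserver export-policy rule create -vserver DEMO -policyname kerberos -clientmatch 0/0 -protocol nfs4 -rorule krb5,krb5i,krb5p -rwrule krb5,krb5i,krb5p
::*> vserver nfs modify -vserver DEMO -v4.0 enabled -v4.1 enabled -v4-id-domain NTAP.LOCAL
```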

Kerberos realm

::*> kerberos realm show -vserver DEMO -instance
  (vserver nfs kerberos realm show)

                               Vserver: DEMO
                        Kerberos Realm: NTAP.LOCAL
                            KDC Vendor: Microsoft
                        KDC IP Address: 10.x.x.x
                              KDC Port: 88
                            Clock Skew: 5
          Active Directory Server Name: oneway.ntap.local
    Active Directory Server IP Address: 10.x.x.x
                               Comment: -
               Admin Server IP Address: 10.x.x.x
                     Admin Server Port: 749
            Password Server IP Address: 10.x.x.x
                  Password Server Port: 464
            Permitted Encryption Types: aes-128, aes-256

DNS

::*> dns show -vserver DEMO -instance

                              Vserver: DEMO
                              Domains: NTAP.LOCAL
                         Name Servers: 10.x.x.x
                       Timeout (secs): 5
                     Maximum Attempts: 1
                Is TLD Query Enabled?: true
Require Source and Reply IPs to Match: true
      Require Packet Queries to Match: true

AES encryption types allowed on the NFS server

::*> nfs server show -vserver DEMO -fields permitted-enc-types
vserver permitted-enc-types
------- -------------------
DEMO    aes-128,aes-256

Kerberos interface

::*> kerberos interface show -vserver DEMO -instance
  (vserver nfs kerberos interface show)

                   Vserver: DEMO
         Logical Interface: data
                IP Address: 10.x.x.x
          Kerberos Enabled: enabled
    Service Principal Name: nfs/demo.ntap.local@NTAP.LOCAL
Permitted Encryption Types: aes-128, aes-256
      Machine Account Name: -

                   Vserver: DEMO
         Logical Interface: data2
                IP Address: 10.x.x.x
          Kerberos Enabled: enabled
    Service Principal Name: nfs/demo.ntap.local@NTAP.LOCAL
Permitted Encryption Types: aes-128, aes-256
      Machine Account Name: -
2 entries were displayed.

UNIX user named nfs

::*> unix-user show -vserver DEMO -user nfs -instance

          Vserver: DEMO
        User Name: nfs
          User ID: 500
 Primary Group ID: 500
 User's Full Name:

Generic name mapping rule for Kerberos SPNs

::*> vserver name-mapping show -vserver DEMO -direction krb-unix -instance

                    Vserver: DEMO
                  Direction: krb-unix
                   Position: 1
                    Pattern: (.+)\$@NTAP.LOCAL
                Replacement: root
IP Address with Subnet Mask: -
                   Hostname: -
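To see why this generic rule saves headaches, here’s a small sketch you can run anywhere that shows which principals the pattern (.+)\$@NTAP.LOCAL actually matches. The container machine SPN below is a made-up example of the [imagehexname]$@REALM.COM format; note that a normal service SPN (no trailing $ on the primary) does not match, so only machine accounts get mapped to root:

```shell
#!/bin/sh
# The krb-unix name-mapping pattern from the SVM, as an ERE.
# \$ is a literal dollar sign (AD machine accounts end in $).
pattern='^(.+)\$@NTAP.LOCAL$'

for spn in '330e10f7db1d$@NTAP.LOCAL' 'nfs/demo.ntap.local@NTAP.LOCAL'; do
  if echo "$spn" | grep -Eq "$pattern"; then
    echo "$spn -> maps to root"
  else
    echo "$spn -> no match"
  fi
done
```

Running it prints that the machine SPN maps to root while the nfs service SPN falls through, which is exactly the behavior you want from a catch-all rule for container machine accounts.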

Export policy rule that allows Kerberos/NFSv4.x

::*> export-policy rule show -vserver DEMO -policyname kerberos -instance

                                    Vserver: DEMO
                                Policy Name: kerberos
                                 Rule Index: 1
                            Access Protocol: nfs4
List of Client Match Hostnames, IP Addresses, Netgroups, or Domains: 0/0
                             RO Access Rule: krb5, krb5i, krb5p
                             RW Access Rule: krb5, krb5i, krb5p
User ID To Which Anonymous Users Are Mapped: 65534
                   Superuser Security Types: any
               Honor SetUID Bits in SETATTR: true
                  Allow Creation of Devices: true
                 NTFS Unix Security Options: fail
         Vserver NTFS Unix Security Options: use_export_policy
                      Change Ownership Mode: restricted
              Vserver Change Ownership Mode: use_export_policy
                                  Policy ID: 42949672971

LDAP client config (optional, but recommended if you plan to use NFSv4.x)

::*> ldap client show -client-config DEMO -instance

                                  Vserver: DEMO
                Client Configuration Name: DEMO
                         LDAP Server List: -
            (DEPRECATED)-LDAP Server List: -
                  Active Directory Domain: NTAP.LOCAL
       Preferred Active Directory Servers: -
Bind Using the Vserver's CIFS Credentials: true
                          Schema Template: MS-AD-BIS
                         LDAP Server Port: 389
                      Query Timeout (sec): 3
        Minimum Bind Authentication Level: sasl
                           Bind DN (User): mtuser
                                  Base DN: DC=NTAP,DC=local
                        Base Search Scope: subtree
                                  User DN: -
                        User Search Scope: subtree
                                 Group DN: -
                       Group Search Scope: subtree
                              Netgroup DN: -
                    Netgroup Search Scope: subtree
               Vserver Owns Configuration: true
      Use start-tls Over LDAP Connections: false
           Enable Netgroup-By-Host Lookup: false
                      Netgroup-By-Host DN: -
                   Netgroup-By-Host Scope: subtree
                  Client Session Security: none
                    LDAP Referral Chasing: false
                  Group Membership Filter: -

To test LDAP functionality on the cluster, use the following command in advanced privilege to look up a user. If you get a UID/GID, you’re good to go.

::*> getxxbyyy getpwbyname -node ontap9-tme-8040-01 -vserver DEMO -username prof1
  (vserver services name-service getxxbyyy getpwbyname)
pw_name: prof1
pw_passwd:
pw_uid: 1100
pw_gid: 1101
pw_gecos:
pw_dir:
pw_shell:

Configure/enable NFSv4.x

::*> nfs server show -vserver DEMO -fields v4-id-domain,v4.1,v4.0
vserver v4.0    v4-id-domain v4.1
------- ------- ------------ -------
DEMO    enabled NTAP.LOCAL   enabled

Once the cluster SVM is set up, there shouldn’t be much else, if anything, that needs to be done for Kerberos on the cluster. However, in AD, you’ll want to allow only AES for the NFS server machine account with this simple PowerShell command:

PS C:\> Set-ADComputer NFS-KRB-NAME$ -KerberosEncryptionType AES256,AES128

Configuring the Docker host

To get NFSv4.x to work properly in a container, you’ll need to make a decision about your Docker host. What I found in my testing is that containers running NFSv4.x want to use the Docker host’s ID mappings/users when doing NFSv4.x functions, rather than the container’s. So while the container may be able to pull users from LDAP and write files as those users, any NFSv4.x owners and groups that the Docker *host* cannot resolve will show up as “nobody”:

sh-4.2$ ls -la
total 8
drwxrwxrwx.  2 root   root   4096 Aug 14 21:08 .
drwxr-xr-x. 18 root   root   4096 Aug 15 15:06 ..
-rw-r--r--.  1 nobody nobody    0 Aug 13 21:38 newfile

So, if you want NFSv4.x to resolve names properly (and I suspect you do), then you need to do one of the following on the Docker host:

a) Add users and groups locally to the passwd/group files

b) Configure SSSD to query LDAP

Naturally, I like things to be consistent, so I chose option b.

Since we already have an LDAP server in AD, we can just install/configure sssd to use that. Here’s what you’d do…

Install necessary packages

I like using realm and sssd. It’s fun. It’s easy. This is what you need to do that.

yum -y install realmd sssd oddjob oddjob-mkhomedir adcli samba-common krb5-workstation ntp

Configure DNS (/etc/resolv.conf)

This needs to point to the name servers in your AD domain.
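In this environment, that means something like the following. The nameserver address is the AD DNS server from my lab (shown as 10.x.x.x throughout this post); substitute your own:

```
search ntap.local
nameserver 10.x.x.x
```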

Create a generic machine account in AD for LDAP authentication

This will be how our LDAP clients bind. You can use it for more than one client if you like. This can be done via PowerShell.

PS C:\> import-module activedirectory
PS C:\> New-ADComputer -Name [computername] -SAMAccountName [computername] -DNSHostName [computername.dns.domain.com] -OtherAttributes @{'userAccountControl'= 2097152;'msDS-SupportedEncryptionTypes'=27}

Create a keytab file to copy to the Docker host to use for LDAP binds

This is done on the AD domain controller in a CLI window. Use ktpass and this syntax:

ktpass -princ primary/instance@REALM -mapuser [DOMAIN]\machine$ -crypto AES256-SHA1 +rndpass -ptype KRB5_NT_PRINCIPAL +Answer -out [file:\location]
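Filled in with the identities used later in this post (the krb-container machine account and the SASL authid root/krb-container.ntap.local@NTAP.LOCAL both appear in my sssd.conf), the command might look like the example below. The principal name, account name, and output path here are illustrations from my lab, not requirements:

```
ktpass -princ root/krb-container.ntap.local@NTAP.LOCAL -mapuser NTAP\krb-container$ -crypto AES256-SHA1 +rndpass -ptype KRB5_NT_PRINCIPAL +Answer -out C:\temp\krb5.keytab
```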

Copy the keytab file to your Docker host

WinSCP is a good tool to do this. The keytab should live at /etc/krb5.keytab on the Docker host.

Configure /etc/krb5.conf

Here’s an example (changes in bold):

# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 30d
 renew_lifetime = 30d
 forwardable = true
 rdns = false
# default_realm = EXAMPLE.COM
 default_ccache_name = KEYRING:persistent:%{uid}
 default_realm = NTAP.LOCAL

[realms]
# EXAMPLE.COM = {
#  kdc = kerberos.example.com
#  admin_server = kerberos.example.com
# }
 NTAP.LOCAL = {
 }

[domain_realm]
# .example.com = EXAMPLE.COM
# example.com = EXAMPLE.COM
 ntap.local = NTAP.LOCAL
 .ntap.local = NTAP.LOCAL

Configure the sssd.conf file to point to LDAP

This is what mine looks like. Note the LDAP URI and SASL authid. I also set “use_fully_qualified_names” to false.

[domain/default]
cache_credentials = False
case_sensitive = False
enumerate = True

[sssd]
config_file_version = 2
services = nss, pam, autofs
domains = NTAP.local
debug_level = 7

[nss]
filter_users = root,ldap,named,avahi,haldaemon,dbus,radiusd,news,nscd
filter_groups = root

[pam]

[domain/DOMAIN]
auth_provider = krb5
chpass_provider = krb5
id_provider = ldap
ldap_search_base = dc=ntap,dc=local
ldap_schema = rfc2307bis
ldap_sasl_mech = GSSAPI
ldap_user_object_class = user
ldap_group_object_class = group
ldap_user_home_directory = unixHomeDirectory
ldap_user_principal = userPrincipalName
ldap_account_expire_policy = ad
ldap_force_upper_case_realm = true
ldap_user_search_base = cn=Users,dc=ntap,dc=local
ldap_group_search_base = cn=Users,dc=ntap,dc=local
ldap_sasl_authid = root/krb-container.ntap.local@NTAP.LOCAL
krb5_server = ntap.local
krb5_realm = NTAP.LOCAL
krb5_kpasswd = ntap.local
use_fully_qualified_names = false

Enable authconfig and start SSSD

authconfig --enablesssd --enablesssdauth --updateall
systemctl start sssd

Test LDAP functionality/name lookup

You can use “getent” or “id” to look names up.

# id prof1
uid=1100(prof1) gid=1101(ProfGroup) groups=1101(ProfGroup),1203(group3),1202(group2),1201(group1),1220(sharedgroup)

Configure /etc/idmapd.conf with the NFSv4.x domain

Only this single line needs to be added, and it must match the v4-id-domain configured on the ONTAP SVM.

Domain = [NTAP.LOCAL]
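A mismatched ID domain between client and SVM is the classic cause of files showing up as “nobody,” so it’s worth a quick sanity check. This sketch parses the Domain line out of an idmapd.conf-style file and compares it against the SVM’s v4-id-domain (NTAP.LOCAL in this setup); the heredoc stands in for /etc/idmapd.conf so the example is self-contained, and on a real host you’d point awk at the actual file:

```shell
#!/bin/sh
# Expected value: the SVM's v4-id-domain from "nfs server show".
expected="NTAP.LOCAL"

# Stand-in for /etc/idmapd.conf; replace $conf with the real path.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[General]
Domain = NTAP.LOCAL
EOF

# Pull the value after "Domain =", tolerating spaces around "=".
actual=$(awk -F' *= *' '$1 == "Domain" {print $2}' "$conf")

if [ "$actual" = "$expected" ]; then
  echo "idmap domain OK: $actual"
else
  echo "MISMATCH: client has '$actual', SVM expects '$expected'" >&2
fi
rm -f "$conf"
```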

Creating your container

I used centos/httpd:latest as the base image and am running with systemd. I’m also copying a few config files into the container to ensure it functions properly, then running a script afterwards.

Here’s the dockerfile I used to create a container that could do NFSv4.x, Kerberos and LDAP. You can also find it on GitHub here:

https://github.com/whyistheinternetbroken/centos-kerberos-nfsv4-sssd

FROM centos/httpd:latest
ENV container docker

# Copy the dbus.service file from systemd to location with Dockerfile
ADD dbus.service /usr/lib/systemd/system/dbus.service

VOLUME ["/sys/fs/cgroup"]
VOLUME ["/run"]

CMD ["/usr/lib/systemd/systemd"]

RUN yum -y install centos-release-scl-rh && \
    yum -y install --setopt=tsflags=nodocs mod_ssl
RUN yum -y update; yum clean all
RUN yum -y install --setopt=tsflags=nodocs sssd sssd-dbus adcli krb5-workstation ntp realmd oddjob oddjob-mkhomedir samba-common samba-common-tools nfs-utils; yum clean all

## Systemd cleanup base image
RUN (cd /lib/systemd/system/sysinit.target.wants && for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -vf $i; done) && \
    rm -vf /lib/systemd/system/multi-user.target.wants/* && \
    rm -vf /etc/systemd/system/*.wants/* && \
    rm -vf /lib/systemd/system/local-fs.target.wants/* && \
    rm -vf /lib/systemd/system/sockets.target.wants/*udev* && \
    rm -vf /lib/systemd/system/sockets.target.wants/*initctl* && \
    rm -vf /lib/systemd/system/basic.target.wants/* && \
    rm -vf /lib/systemd/system/anaconda.target.wants/*

# Copy the local SSSD conf file
RUN mkdir -p /etc/sssd
COPY sssd.conf /etc/sssd/sssd.conf

# Copy the local krb files
COPY krb5.keytab /etc/krb5.keytab
COPY krb5.conf /etc/krb5.conf

# Copy the NFSv4 IDmap file
COPY idmapd.conf /etc/idmapd.conf

# Copy the DNS config
COPY resolv.conf /etc/resolv.conf

# Copy rc.local
COPY rc.local /etc/rc.d/rc.local

# start services
ADD configure-nfs.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/configure-nfs.sh
RUN chmod +x /etc/rc.d/rc.local

You’ll notice that I have several COPY commands in the file. Those are config files you’d need to modify to reflect your own environment. You’ll want to store these in the same folder as your dockerfile. I’ll break down each file here.

dbus.service

This file is the same file as the one on the Docker host. It allows the container to run with systemd. Simply copy it from /usr/lib/systemd/system/dbus.service into your dockerfile folder location.

sssd.conf

This is our LDAP file and specifies how LDAP does its queries. While NFSv4.x will use the Docker host’s users for NFSv4.x mapping, the container will still need to know who a user is to allow us to su, kinit, etc. For this, you can essentially use the same config file you used for your Docker host.

krb5.keytab and krb5.conf

The krb5.keytab file is used to authenticate/bind to LDAP only in this case. So, use the same keytab file you created earlier. Same for the krb5.conf file, unless your containers are going to leverage a different KDC/domain than the Docker host. In that case, it gets a little more complicated. Just copy the Docker host’s krb5.keytab and krb5.conf files from /etc.

idmapd.conf

Again, same file as the Docker host. This defines our idmap domain for NFSv4.x.

resolv.conf

DNS information; should match what’s on the Docker host.

rc.local

This file is useful for running our configuration script. We need the script to run because the container won’t let you start services before it’s running. When you try, you get this error (or something similar):

Failed to get D-Bus connection: No connection to service manager.

This is the line I added to my rc.local:

/usr/local/bin/configure-nfs.sh

That leads us to the script…

configure-nfs.sh

This script starts services. It also joins the container to the Kerberos realm. While I’m using AD KDCs, you can also use realm join to join Linux-based KDCs. Maybe one day I’ll set one up and write up a guide, but for now, read the Linux KDC docs. 🙂

For the realm join, I’m passing the password with the command. It’s in plaintext, so I’d recommend not using a domain admin here. Realm join uses administrator by default, but you can specify a different user with the -U option. So, you can either create a user that *only* has access to create/delete objects in a specific OU, or leave the password portion out and have users enter the password when the container starts.

I’d also highly recommend creating a new OU in AD to house all your container machine objects. Otherwise, your default OU gets flooded with a new machine account for every container you join to the realm.

So, configure an OU or CN in AD and then point realm join to use that.

Here’s my shell script:

#!/bin/sh
systemctl start dbus
systemctl start rpcgssd
systemctl start rpcidmapd
systemctl restart sssd
echo PASSWORD | realm join -U username --computer-ou OU=Docker NTAP.LOCAL

Realm join caveat

In my config, I’ve done something “clever.” When you join a realm on a Linux client, it will also configure SSSD to pull UNIX IDs from AD. It doesn’t use the uid field by default. Instead, it creates a UID based on the AD SID. Thus, user student1 might look like this from LDAP (as expected):

# id student1
uid=1301(student1) gid=1201(group1) groups=1201(group1),1220(sharedgroup),1203(group3)

But would look like this from SSSD’s algorithm:

# id student1@NTAP.LOCAL
uid=1587401108(student1@NTAP.local) gid=1587400513(domainusers@NTAP.local) groups=1587400513(domainusers@NTAP.local),1587401107(group3@NTAP.local),1587401105(group1@NTAP.local),1587401122(sharedgroup@NTAP.local)

ONTAP doesn’t really know how to query UIDs in the way SSSD does, so we’d need SSSD to be able to look up our UNIX users, but also be able to query AD users that may not have UNIX attributes populated. To control that, I set my sssd.conf file to do the following:

When a username is specified without the FQDN, SSSD looks it up in normal LDAP

When a username is specified with the FQDN, SSSD uses the SID-based algorithm

I controlled this with the SSSD option use_fully_qualified_names. I set it to false for my UNIX users. When realm join is run, it appends to the sssd.conf file and uses the default value of use_fully_qualified_names, which is “true.”

Here’s what realmd adds to the file:

[domain/NTAP.local]
ad_domain = NTAP.local
krb5_realm = NTAP.LOCAL
realmd_tags = manages-system joined-with-samba
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = True
fallback_homedir = /home/%u@%d
access_provider = ad

Build your container!

That’s pretty much it. Once you have your Docker host and ONTAP cluster configured, Kerberizing NFS in containers is a breeze. Simply build your Docker container using the dockerfile:

docker build -f /dockerfiles/dockerfile.kerb -t parisi/centos-krb-client .

And then run it in privileged mode. The following also shows a volume being specified that was created using NetApp Trident (see below for my Trident config.json file).

docker run --rm -it --privileged -d -v kerberos:/kerberos parisi/centos-krb-client

And then you can exec the container and start using Kerberos!

# docker exec -ti 330e10f7db1d bash
# su student1
sh-4.2$ klist
klist: Credentials cache keyring 'persistent:1301:1301' not found
sh-4.2$ kinit
Password for student1@NTAP.LOCAL:
sh-4.2$ klist -e
Ticket cache: KEYRING:persistent:1301:1301
Default principal: student1@NTAP.LOCAL

Valid starting     Expires            Service principal
08/16/18 14:52:58  08/17/18 00:52:58  krbtgt/NTAP.LOCAL@NTAP.LOCAL
	renew until 08/23/18 14:52:55, Etype (skey, tkt): aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96
sh-4.2$ cd /kerberos
sh-4.2$ klist -e
Ticket cache: KEYRING:persistent:1301:1301
Default principal: student1@NTAP.LOCAL

Valid starting     Expires            Service principal
08/16/18 14:53:09  08/17/18 00:52:58  nfs/demo.ntap.local@NTAP.LOCAL
	renew until 08/23/18 14:52:55, Etype (skey, tkt): aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96
08/16/18 14:52:58  08/17/18 00:52:58  krbtgt/NTAP.LOCAL@NTAP.LOCAL
	renew until 08/23/18 14:52:55, Etype (skey, tkt): aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96
sh-4.2$ ls -la
total 8
drwxrwxrwx.  2 root     root      4096 Aug 15 15:46 .
drwxr-xr-x. 18 root     root      4096 Aug 15 19:02 ..
-rw-r--r--.  1 prof1    ProfGroup    0 Aug 15 15:41 newfile
-rw-r--r--.  1 student1 group1       0 Aug 14 21:08 newfile2
-rw-r--r--.  1 root     daemon       0 Aug 15 15:41 newfile3
-rw-r--r--.  1 student1 group1       0 Aug 15 15:46 newfile4
-rw-r--r--.  1 prof1    ProfGroup    0 Aug 13 20:57 prof1file
-rw-r--r--.  1 student1 group1       0 Aug 13 20:58 student1
-rw-r--r--.  1 student2 group2       0 Aug 13 21:12 student2

You can also push the docker image up to your repository and pull it down on any Docker host you like, provided that Docker host is configured as we mentioned.

BONUS ROUND: Trident config.json

Did you know you could control mount options and export policy rules with Trident?

Get Trident here!

Just use the config.json file to do that. In my file, every volume mounts as NFSv4.1 with Kerberos security and a Kerberos export policy.

{
    "version": 1,
    "storageDriverName": "ontap-nas",
    "managementLIF": "10.x.x.x",
    "dataLIF": "10.x.x.x",
    "svm": "DEMO",
    "username": "admin",
    "password": "PASSWORD",
    "aggregate": "aggr1_node1",
    "exportPolicy": "kerberos",
    "nfsMountOptions": "-o vers=4.1,sec=krb5",
    "defaults": {
        "exportPolicy": "kerberos"
    }
}

Happy Kerberizing!