This post focuses on setting up an NFS server for NextCloudPi, but it also serves as a general introduction to NFS, as most of it applies to any setup.

The Network File System, or NFS, is an ancient remote file system originally developed by Sun Microsystems in 1984. It is a simple way to share a folder across a local network.

The NFS server exports folders on one machine that can then be mounted on another machine with the mount command.

Features

Here is a highlight of its features:

The NFS server runs in the Linux kernel.

It is mostly recommended for Linux clients.

It is not secure. Communications are not encrypted and there is no authentication.

It is lightweight.

It traditionally runs over UDP, although TCP operation is also possible.

The fact that it uses UDP means that it has less overhead.

Also, given that UDP is a stateless protocol, any operation interrupted by a network or server failure will simply resume from where it left off as soon as the service is back up. More on that later.

TCP, on the other hand, needs to re-establish the connection, so the mount has to start over, and processes using the filesystem will have to be force-killed.
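On the client side, the transport can be chosen at mount time. This is a hedged sketch, reusing the server IP and paths that appear later in this post; see nfs(5) for the full semantics of these options:

```
# Force UDP transport (interrupted operations resume when the server comes back)
sudo mount -o proto=udp 192.168.0.130:/media/USBdrive /mnt/mycloud

# Or TCP, optionally with 'soft' so that I/O eventually errors out
# instead of blocking forever if the server disappears
sudo mount -o proto=tcp,soft 192.168.0.130:/media/USBdrive /mnt/mycloud
```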

The fact that it runs within the Linux kernel has advantages and problems. The main benefit is that it is way more efficient: anything that runs in userspace needs an extra copy operation to reach userspace memory buffers, so we save that operation by running in the kernel. It is the same concept behind kHTTPd.

The main implementation of the client runs on Linux. Even though there are ways to mount NFS from Windows and Mac, a Samba server is probably a better fit for a network with mixed systems.

Finally, the lack of encryption means it consumes less CPU, but it also makes it unfit for anything other than a trusted LAN.

For these reasons, it is an interesting option for low-end systems such as ARM devices (CHIP, RPi…) on a local LAN.

Installation

Just install the appropriate package for your distribution, normally nfs-kernel-server.
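On Debian and derivatives, for instance, this boils down to something like the following (a sketch; the exact package name can vary between distributions):

```
sudo apt-get update
sudo apt-get install --no-install-recommends nfs-kernel-server
```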

In the case of NextCloudPi, just update to the latest version with

sudo ncp-update

As usual, the generic installer can be used on any running Debian-based server to install through SSH, or to install and configure it on a Raspbian image through QEMU:

git clone https://github.com/nachoparker/nextcloud-raspbian-generator.git
./installer.sh NFS.sh 192.168.0.130

Default configuration (NextCloudPi only)

In the specific case of NextCloudPi, we usually want to share the data folder on the local network, so select NFS in

sudo nextcloudpi-config

DIR is the directory to share. The default is /var/www/nextcloud/data/admin/files for user admin on a fresh installation. If you have moved the data folder to an external drive, then it will be something like /media/USBdrive/ncdata/admin/files.

SUBNET is the allowed subnetwork. If your local IP address starts with 192.168.0.X, you do not need to change this; otherwise, adjust it to your network.

USER is explained in the next section. You probably don't need to change this.

GROUP is explained in the next section. You probably don't need to change this.

If you would like a different setup, read the next section.

Manual configuration

The configuration is quite simple. It is located in the file /etc/exports. This is the default configuration for sharing your NextCloud files in your local network

/media/USBdrive/ncdata/admin/files 192.168.1.0/24(rw,sync,all_squash,anonuid=33,anongid=33,no_subtree_check)

/media/USBdrive/ncdata/admin/files is the folder on the server to be exported

192.168.1.0/24 indicates that only computers from that local LAN subnetwork can access the share. This can be made more restrictive by, for example, only allowing a specific IP.

rw means read-write permissions; you might also be interested in ro for read-only.
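For illustration, here are two stricter variants of the same export (the client IP is just a placeholder): a read-only export to the whole subnet, and a read-write export restricted to a single machine:

```
# /etc/exports -- illustrative variants (adjust paths and addresses)

# Read-only for the whole subnet
/media/USBdrive/ncdata/admin/files 192.168.1.0/24(ro,sync,all_squash,anonuid=33,anongid=33,no_subtree_check)

# Read-write, but only for one specific client
/media/USBdrive/ncdata/admin/files 192.168.1.55(rw,sync,all_squash,anonuid=33,anongid=33,no_subtree_check)
```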

33 is the uid/gid of www-data, the user the HTTP server typically runs as. See the explanation that follows.

The all_squash option is related to user mapping between the two computers.

What permissions do we have on the remote filesystem? NFS deals with this by mapping users between server and client. This means that if our user ownyourbits has id 1005, by default we will be identified as, and have the permissions of, user 1005 on the remote machine. That user might not even exist on that system.

Typically, the first non-root user on a Linux system receives the id 1000. If many different users and computers had to access a common NFS share with their own identities, they would need to change their ids on their own machines to be distinct from each other, and then replicate them on the NFS server. Not very nice.

Also, there is the issue of security: if we mapped root (id 0) from any computer to the root user on the NFS server, we would have an obvious security issue, as everyone would be root.

The way NFS deals with the root problem is by squashing the root to the anonymous user. This is typically a user with no permissions for anything. nobody is typically another such user.

root squashing means that the root user (id 0) will be mapped not to the root user on the NFS server, but to the unprivileged anonymous user with the id set by the anonuid and anongid parameters. This is the default.

The options for squashing are

root_squash: this is the default and maps root (id 0) to anonuid

no_root_squash: this disables squashing. root will still be root on the NFS server.

all_squash: this squashes all users to the anonymous user with id anonuid.

So, in the example configuration above, we use all_squash to map all users to the id of the HTTP server. This allows us to have the same restricted permissions as the HTTP server and files that we create there will be modifiable by NextCloud.
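As a quick sanity check, assuming the share is already mounted on a client at /mnt/mycloud (the mountpoint used in the examples below), any file created through the mount should show up on the server owned by the squashed user:

```
# On the client: create a file through the NFS mount
touch /mnt/mycloud/test-squash

# On the server: the file belongs to the anonymous user (www-data, uid/gid 33)
ls -ln /media/USBdrive/ncdata/admin/files/test-squash
```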

We are doing this because we are sharing the data folder for NextCloud. If we wanted to share a whole hard drive, it would probably be more interesting to do the following

/media/USBdrive/ 192.168.1.0/24(rw,sync,all_squash,anonuid=1000,anongid=1000,no_subtree_check)

, which maps any user to the main unprivileged user of the NFS server (typically 1000).

When you are playing around, you can reload the configuration with

sudo exportfs -ra
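To double-check what is actually being exported, the server tools include exportfs and showmount (run these on the server):

```
sudo exportfs -v        # list the active exports with their effective options
showmount -e localhost  # query the export list through rpcbind
```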

Usage

Manual mount

After installing the appropriate packages for your distribution, mount the remote folder with

sudo mount 192.168.0.130:/media/USBdrive/ncdata/admin/files /mnt/mycloud

Where

192.168.0.130 is the IP of your NFS server

/media/USBdrive/ncdata/admin/files is the remote folder to mount

/mnt/mycloud/ is the mountpoint in the local computer.

After this command, your files will be readily accessible as if they were any other local folder.
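On the client side, on Debian and derivatives the NFS client utilities typically come from the nfs-common package, and the mount can be inspected once it is in place:

```
sudo apt-get install nfs-common  # NFS client utilities (Debian/Ubuntu/Raspbian)
sudo mkdir -p /mnt/mycloud       # create the mountpoint if needed
findmnt /mnt/mycloud             # shows source, fstype and options once mounted
```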

Through fstab

You can automount on boot from fstab with a line such as

192.168.0.130:/media/USBdrive /mnt/mycloud nfs rw,user,exec 0 0
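If the server may not always be up at boot, a few extra options can keep the client from hanging. This is a hedged sketch; the timeouts are illustrative, see nfs(5) and systemd.mount(5) for details:

```
# /etc/fstab -- avoid blocking boot if the NFS server is down:
# mount on first access, and let I/O error out instead of hanging forever
192.168.0.130:/media/USBdrive /mnt/mycloud nfs rw,user,noauto,x-systemd.automount,soft,timeo=100,retrans=3 0 0
```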

Through autofs

Alternatively, we can mount on demand with autofs.

This requires maintaining yet another service, the autofs daemon, which will only mount the NFS share the first time we try to access it.

The added benefit is that it will not delay or impede our boot by trying to access the NFS server, even if it is not up.
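A minimal autofs setup could look like this (a sketch: the map file name and timeout are just examples, and the package is normally called autofs):

```
# /etc/auto.master -- delegate /mnt to an indirect map, unmount after 60s idle
/mnt /etc/auto.nfs --timeout=60

# /etc/auto.nfs -- mount 'mycloud' under /mnt on first access
mycloud -fstype=nfs,rw 192.168.0.130:/media/USBdrive
```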

Problems

My server went down, and my system is frozen

As mentioned before, if there is a connectivity problem your filesystem will be stuck. Very badly so: any program accessing a file on that filesystem will hang, and it will appear with the dreaded D status in ps.

D means uninterruptible sleep: the process is stuck in I/O and there is no way to kill it, not even with SIGKILL. If that happens, the most graceful way to recover is to bring the server or the connectivity back up, and the I/O will resume, as long as we are using UDP.
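To spot the stuck processes, we can filter the ps output for the D state (a small sketch; the exact columns can vary slightly between ps versions):

```shell
# List processes in uninterruptible sleep (state D): pid and command name
ps axo pid,stat,comm | awk 'NR > 1 && $2 ~ /^D/ {print $1, $3}'
```

On a healthy system this usually prints nothing; after an NFS outage, the hung readers and writers show up here.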

The other way is to lazy-unmount the NFS filesystem with

sudo umount -l /mnt/mycloud

This will not be graceful: memory mappings will suddenly disappear and processes will segfault.

NFS is not starting

The NFS server relies on RPCbind (also known as portmapper). RPCbind is a service that runs on TCP and UDP port 111 and provides a mapping from services to ports.

In the case of NFS on Linux, whenever the mount command is issued, the client asks the RPCbind server on port 111 which port the NFS server is listening on, and then connects to NFS through that port.

This allows servers to be listening on any port and be correctly discovered by clients.
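We can ask rpcbind directly which services it knows about with rpcinfo, which ships with the rpcbind package:

```
# Lists registered RPC programs (portmapper, nfs, mountd, ...) with their ports
rpcinfo -p localhost
```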

The downside of this approach is the added complexity. We have to enable the RPCbind service and make sure it starts before the NFS server on boot.

See the Systemd configuration in the following code.

Also, NFS will not start if /etc/exports does not exist.

Code

#!/bin/bash

# NFS server for Raspbian
# Tested with 2017-03-02-raspbian-jessie-lite.img
#
# Copyleft 2017 by Ignacio Nunez Hernanz <nacho _a_t_ ownyourbits _d_o_t_ com>
# GPL licensed (see end of file) * Use at your own risk!
#
# Usage:
#
#   ./installer.sh NFS.sh <IP> (<img>)
#
# See installer.sh instructions for details
# More at: https://ownyourbits.com
#

DIR_=/media/USBdrive/ncdata/admin/files
SUBNET_=192.168.1.0/24
USER_=www-data
GROUP_=www-data
DESCRIPTION="NFS network file system server (for Linux LAN)"

install()
{
  apt-get update
  apt-get install --no-install-recommends -y nfs-kernel-server
  systemctl disable nfs-kernel-server
}

configure()
{
  # INFO ################################
  whiptail --msgbox \
    --backtitle "NextCloudPi configuration" \
    --title "Instructions for external synchronization" \
"If we intend to modify the data folder through NFS, then we have to synchronize NextCloud to make it aware of the changes.

This can be done manually or automatically using 'nc-scan' and 'nc-scan-auto' from 'nextcloudpi-config'" \
    20 90

  # CHECKS ################################
  [ -d "$DIR_" ] || { echo -e "INFO: directory $DIR_ does not exist. Creating"; mkdir -p "$DIR_"; }
  [[ $( stat -fc%d / ) == $( stat -fc%d $DIR_ ) ]] && \
    echo -e "INFO: mounting a directory in the SD card

If you want to use an external mount, make sure it is properly set up"

  # CONFIG ################################
  cat > /etc/exports <<EOF
$DIR_ $SUBNET_(rw,sync,all_squash,anonuid=$(id -u $USER_),anongid=$(id -g $GROUP_),no_subtree_check)
EOF

  cat > /etc/systemd/system/nfs-common.service <<EOF
[Unit]
Description=NFS Common daemons
Wants=remote-fs-pre.target
DefaultDependencies=no

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/etc/init.d/nfs-common start
ExecStop=/etc/init.d/nfs-common stop

[Install]
WantedBy=sysinit.target
EOF

  cat > /etc/systemd/system/rpcbind.service <<EOF
[Unit]
Description=RPC bind portmap service
After=systemd-tmpfiles-setup.service
Wants=remote-fs-pre.target
Before=remote-fs-pre.target
DefaultDependencies=no

[Service]
ExecStart=/sbin/rpcbind -f -w
KillMode=process
Restart=on-failure

[Install]
WantedBy=sysinit.target
Alias=portmap
EOF

  systemctl enable rpcbind
  systemctl enable nfs-kernel-server
  service nfs-kernel-server start
}

cleanup()
{
  apt-get autoremove -y
  apt-get clean
  rm /var/lib/apt/lists/* -r
  rm -f /home/pi/.bash_history
  systemctl disable ssh
}

# License
#
# This script is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This script is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this script; if not, write to the
# Free Software Foundation, Inc., 59 Temple Place, Suite 330,
# Boston, MA 02111-1307 USA