Enabling local persistent volumes

We will use Kubespray v2.5.0 to launch and manage our Kubernetes cluster. To enable local persistent volumes, set the following two properties to true in inventory/<folder>/group_vars/k8s-cluster.yml:

persistent_volumes_enabled: true

local_volume_provisioner_enabled: true

This enables the Kubernetes PersistentLocalVolumes feature gate, along with VolumeScheduling and MountPropagation.
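Under the hood, Kubespray passes these as feature gates to the Kubernetes components. As a rough sketch (the exact flags Kubespray renders may differ), the resulting kubelet/apiserver flag would look something like:

--feature-gates=PersistentLocalVolumes=true,VolumeScheduling=true,MountPropagation=true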

Also note the following two variables; the first is (roughly) the host directory that the provisioner scans for mounted disks, and the second is where that directory appears inside the provisioner pod:

local_volume_provisioner_base_dir: /mnt/disks

local_volume_provisioner_mount_dir: /mnt/disks

Now go ahead and launch the cluster. For this post, I launched a cluster with 3 nodes for the config servers (i3 or other instance types work) and 6 i3.2xlarge instances for the shards.
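For reference, launching with Kubespray amounts to running its cluster playbook against your inventory. A minimal invocation, assuming a hosts.ini inventory and working SSH access to the nodes, looks like:

ansible-playbook -i inventory/<folder>/hosts.ini -b cluster.yml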

Once you launch your cluster with Kubespray, you should be able to see your nodes with:

kubectl get nodes

Next, label your nodes to distinguish the config-server nodes from the shard nodes. There are different ways to do this, but it basically amounts to issuing something like:

kubectl label nodes <node name> component=mongo-config

kubectl label nodes <node name> component=mongo-shard

We will later use these labels to specify nodeAffinity for our pods so that we bind them to the right set of instances.
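As a preview, a pod (or StatefulSet pod template) that should be scheduled only on shard nodes would carry an affinity stanza like the following sketch (the surrounding pod definition is omitted):

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: component
          operator: In
          values:
          - mongo-shard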

Provisioning local persistent volumes

Now we will see how to provision the local PVs. All we need to do is mount the local disks under /mnt/disks; they will be auto-detected and made available as local PVs automatically.

There are a couple of ways to do this. You could use a DaemonSet, or you could use a simple script that does this on each of the nodes labeled mongo-shard, as sketched below.
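The original script is not reproduced here; the following is a minimal sketch of what it could look like, assuming the i3 instance-store disks appear as /dev/nvme*n1 and the root volume is EBS (so every NVMe device is safe to format):

#!/bin/bash
# Format each NVMe instance-store disk with xfs and mount it under
# /mnt/disks so the local volume provisioner can discover it.
for dev in /dev/nvme*n1; do
  name=$(basename "$dev")
  mnt="/mnt/disks/$name"
  sudo mkfs.xfs "$dev"            # xfs is the recommended filesystem for MongoDB
  sudo mkdir -p "$mnt"
  sudo mount "$dev" "$mnt"
  # Persist the mount across reboots via /etc/fstab, keyed by UUID.
  uuid=$(sudo blkid -s UUID -o value "$dev")
  echo "UUID=$uuid $mnt xfs defaults,noatime 0 2" | sudo tee -a /etc/fstab
done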

The above script discovers each instance-store disk in the i3 instance, formats it with xfs (which is the recommended filesystem for MongoDB), and mounts it under /mnt/disks. It also adds an entry to /etc/fstab so that the mounts survive node reboots.

Now the Kubernetes magic happens and these volumes are made available automatically! The following snippet shows them (after they have been bound to pods).
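A listing like that can be produced by querying the cluster for its persistent volumes:

kubectl get pv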