This is a two-part series. Part One: the driver. Part Two: the provisioner.

Kubernetes provides a rich list of built-in persistent volume types, with which you can easily hook up your physical or cloud-based persistent volumes. However, there are times when your persistent volume (PV) type is not built into Kubernetes, or the lifecycle management of your PV, such as provisioning, is not standard. For example, you may have to perform an API call to a third-party vendor to provision the volume, and that API call returns mounting specs which cannot be handled by any of Kubernetes' built-in PV drivers.

Since Kubernetes 1.8, the project has stopped accepting new "in-tree" volume plugins. The two recommended solutions for exposing a custom storage system are:

Container Storage Interface (CSI)[1], introduced as alpha in Kubernetes 1.9 and beta in 1.10.

FlexVolume[2], available since Kubernetes 1.2.

In this article, I will take you through the steps it takes to write your own custom persistent volume plugin using FlexVolume.

The basic element for a FlexVolume to work is a driver (an executable) that can attach/detach and mount/un-mount the persistent volume on the host.

HOST
+------------------+
|                  |
|                  |
|     Kubelet      |
|        +         |
|        |         |
|        |         |
+--------|---------+           +-----------------------+
|        |         |           |                       |
|        +-------------------> |      Third party      |
|        v         |           |      persistent       |
|  Custom driver   | <---------+       volume          |
|                  |           |                       |
+------------------+           +-----------------------+
                     Attach/Detach
                     Mount/Un-mount

Once you have the driver installed on your host, you can statically provision your PVs.

What happens if you want to dynamically provision PVs? To answer that question, I will also step you through how to create a dynamic provisioner using the external-storage[4] library later on, in Part 2.

The differences between static and dynamic provisioner can be found here.

The Driver

The FlexVolume driver can be written in any language, but it must be an executable and has to comply with the call-out API specs below[2][5]:

----------- Mandatory ------------
<driver executable> init
    // Performs driver initialisation

----------- Implementation option 1 -----------
<driver executable> attach <json options> <node name>
    // Attaches the persistent volume to the host
<driver executable> waitforattach <mount device> <json options>
    // Waits for the volume to be attached (10 minutes timeout)
<driver executable> isattached <json options> <node name>
    // Checks if the volume is attached
<driver executable> detach <mount device> <node name>
    // Detaches the persistent volume from the host
<driver executable> mountdevice <mount dir> <mount device> <json options>
    // Mounts the device to a global path so the pod can be bound to it by kubelet
<driver executable> unmountdevice <mount device>
    // Un-mounts the device from the global path

----------- Implementation option 2 -----------
<driver executable> mount <mount dir> <json options>
    // This call-out implements both the attach and mountdevice functions
<driver executable> unmount <mount dir>
    // This call-out implements both the unmountdevice and detach functions

The parameters such as <mount dir> and <json options> are passed in by kubelet. You use that information to perform your mount/un-mount logic. The json options format is detailed in "The PV and PVC manifest" section below.

Each call-out must return a message in the below format back to kubelet via stdout:

{
    "status": "<Success/Failure/Not supported>",
    "message": "<Reason for success/failure>",
    "device": "<Path to the device attached. This field is valid only for attach & waitforattach call-outs>",
    "volumeName": "<Cluster-wide unique name of the volume. Valid only for getvolumename call-out>",
    "attached": <True/False (Return true if volume is attached on the node. Valid only for isattached call-out)>,
    "capabilities": <Only included as part of the init response>
    {
        "attach": <True/False (Return true if the driver implements attach and detach)>
    }
}

The "init" call-out is mandatory. Its return message sets the "attach" capability to true or false to tell kubelet whether to use implementation option 1 (with separate attach/detach calls) or option 2.
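To make this concrete, here is a minimal sketch of an option 2 driver written in Python (a real driver is just as commonly a shell script or a Go binary). The fooServer/fooVolumeName option keys and the commented-out mount command are illustrative placeholders matching the PV manifest later in this article, not a real vendor API:

```python
#!/usr/bin/env python3
# Minimal FlexVolume driver sketch (implementation option 2).
# Vendor-specific names (fooServer, fooVolumeName) are illustrative only.
import json
import sys

def init():
    # "attach": False tells kubelet to use option 2 (mount/unmount only).
    return {"status": "Success", "capabilities": {"attach": False}}

def mount(mount_dir, options):
    # "options" is the <json options> blob from kubelet: the PV's "options"
    # keys plus the kubernetes.io/* entries described later in this article.
    server = options.get("fooServer")
    volume = options.get("fooVolumeName")
    if not server or not volume:
        return {"status": "Failure",
                "message": "fooServer/fooVolumeName missing in options"}
    # A real driver would call the vendor's mount tooling here, e.g.
    # subprocess.run(["mount", "-t", "foofs", server + ":" + volume, mount_dir])
    return {"status": "Success"}

def unmount(mount_dir):
    # A real driver would run umount against mount_dir here.
    return {"status": "Success"}

def main(argv):
    op = argv[1] if len(argv) > 1 else ""
    if op == "init":
        result = init()
    elif op == "mount":
        result = mount(argv[2], json.loads(argv[3]))
    elif op == "unmount":
        result = unmount(argv[2])
    else:
        # Unimplemented call-outs must report "Not supported".
        result = {"status": "Not supported"}
    print(json.dumps(result))  # kubelet parses this message from stdout
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv))
```

Because init returns "attach": false, kubelet will only ever invoke the mount and unmount call-outs on this driver.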

In Kubernetes' volume plugin implementation there are many types of volume plugins, such as RecyclableVolumePlugin and ProvisionableVolumePlugin (view here for more details). FlexVolume implements two of them, namely AttachableVolumePlugin and PersistentVolumePlugin. That is why there are two implementation options for the driver, matching the plugin interface requirements.

Please note that only the "mount" call-out in implementation option 2 will receive the secret in the json parameters. More details can be found here.

If you are using a secret, the secret type must be "vendorname/drivername".

Once you have the driver ready, simply drop the executable into the folder <kubernetes volume plugin-dir>/<vendorname~drivername>/<driver name> on each host. The default Kubernetes volume plugin directory is /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ . It can be changed by configuring kubelet's volume-plugin-dir parameter.

There are many ways to drop your driver onto the hosts, such as using configuration tools like Ansible or Puppet. However, due to the dynamic nature of a cluster whose nodes can scale up and down, the overhead of using those configuration tools to maintain a stable cluster cannot be ignored.

A better solution is to deploy the driver using a DaemonSet[3], which can:

Facilitate driver updates

Auto-deployment to new nodes

No continuous overhead
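As a sketch of that pattern (the image name and driver file name are placeholders you would replace with your own), a DaemonSet can mount the host's plugin directory via hostPath and copy the driver into it:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: flexvolume-driver-installer
spec:
  selector:
    matchLabels:
      app: flexvolume-driver-installer
  template:
    metadata:
      labels:
        app: flexvolume-driver-installer
    spec:
      containers:
      - name: installer
        # Placeholder image that bundles your driver executable
        image: yourrepo/flexvolume-driver:latest
        command: ["/bin/sh", "-c"]
        args:
        - mkdir -p /plugin-dir/vendorname~drivername &&
          cp /opt/driver/drivername /plugin-dir/vendorname~drivername/drivername &&
          sleep infinity
        volumeMounts:
        - name: plugin-dir
          mountPath: /plugin-dir
      volumes:
      - name: plugin-dir
        hostPath:
          # Kubelet's default volume plugin directory
          path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
```

Because a DaemonSet schedules one pod per node, any node joining the cluster automatically receives the driver.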

The PV and PVC manifest

A FlexVolume PV has the below format:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: custom-class
  flexVolume:
    driver: vendorname/drivername
    fsType: "ext4"
    secretRef:
      name: foo-secret
    readOnly: true
    options:
      fooServer: 192.168.0.1:1234
      fooVolumeName: bar

The "options" key allows you to pass any custom values into your driver. The above PV will be converted to json and passed to your driver via the json options in the following format:

{
  "kubernetes.io/fsType": "ext4",
  "kubernetes.io/readwrite": "rwo",
  "kubernetes.io/secret/key1": "<secret1>",
  ...
  "kubernetes.io/secret/keyN": "<secretN>",
  "fooServer": "192.168.0.1:1234",
  "fooVolumeName": "bar"
}

Note that you must have manually provisioned the physical persistent volume and passed all the necessary information in "options" for your driver to understand how to mount and un-mount it.

Now that you have the PV created with the class name "custom-class", you can use it in any of your PVCs by specifying "storageClassName" or using label selectors. For example:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  storageClassName: custom-class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
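Once the claim is bound, a pod consumes it like any other PVC. For example (the container image here is just a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: app
    image: nginx        # placeholder image
    volumeMounts:
    - name: data
      mountPath: /data  # where the FlexVolume-backed PV appears in the pod
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: myclaim
```

When this pod is scheduled, kubelet invokes your driver's mount call-out (or attach/mountdevice in option 1) on the target node before starting the container.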

References:

[1] Kubernetes Container Storage Interface (CSI)

[2] Flexvolume spec

[3] Dynamic Flexvolume plugin discovery

[4] [GitHub]external-storage

[5] Openshift — Persistent volume using Flexvolume plugin