The first article of the series explored the sample-controller.

In this second article of the series, we will explore kubebuilder.

In the third article of the series, we will explore the operator-sdk.

You can also look at my presentation at Velocity Conf London 2018 on this subject: https://youtu.be/Fp0QUf0Bwm0

kubebuilder

Let’s now explore the kubebuilder suite and create the same CRD and operator. Remember that we want to write an operator that will deploy a daemon on nodes of our cluster. It will use the DaemonSet object to deploy this daemon and we would like to be able to specify a label, to deploy the daemon only on nodes tagged with this label. We also want to be able to specify the Docker image to deploy.
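To make the goal concrete, here is the kind of custom resource we want to end up with (this mirrors the sample that kubebuilder will generate for us later in this article):

```yaml
apiVersion: mygroup.mydomain.com/v1beta1
kind: GenericDaemon
metadata:
  name: genericdaemon-sample
spec:
  image: httpd    # the Docker image to run on each selected node
  label: http     # daemon=http nodes will run the daemon
```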

Start a project

We first need to install some tools: kubebuilder itself, dep and kustomize. Instructions for installing these tools can be found at http://book.kubebuilder.io/quick-start.html

Let’s create the operator. We need to create our project under GOPATH:

mkdir -p $GOPATH/src/mydomain.com/mygroup && cd $_

then initiate the project:

kubebuilder init --domain mydomain.com

and reply y when prompted to run dep ensure.

Finally create the CRD:

kubebuilder create api --group mygroup --version v1beta1 --kind GenericDaemon

and reply y when prompted to create Resource and Controller.

kubebuilder created the sources for the API to access our CRD under pkg/apis/mygroup/v1beta1. You can see that the generated files are similar to the ones we edited for the sample-controller earlier.
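As a rough orientation, the generated files we will touch in the rest of this article are (exact layout depends on the kubebuilder version):

```text
pkg/apis/mygroup/v1beta1/genericdaemon_types.go           # CRD types (Spec, Status)
pkg/controller/genericdaemon/genericdaemon_controller.go  # reconcile logic
pkg/controller/genericdaemon/genericdaemon_controller_test.go  # generated tests
config/samples/mygroup_v1beta1_genericdaemon.yaml         # sample custom resource
```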

Write some code

We need to modify the GenericDaemon structure to add the fields our object needs. Don’t forget to document the fields, so the doc generator can produce good documentation:

// pkg/apis/mygroup/v1beta1/genericdaemon_types.go

[...]

// GenericDaemonSpec defines the desired state of GenericDaemon
type GenericDaemonSpec struct {
	// Label is the value of the 'daemon=' label to set on a node that should run the daemon
	Label string `json:"label"`
	// Image is the Docker image to run for the daemon
	Image string `json:"image"`
}

// GenericDaemonStatus defines the observed state of GenericDaemon
type GenericDaemonStatus struct {
	// Count is the number of nodes the daemon is deployed to
	Count int32 `json:"count"`
}

[...]

Then let’s follow the TODO instructions in the genericdaemon_controller.go file. First, in the add function, let’s watch DaemonSet objects instead of Deployment objects:

// pkg/controller/genericdaemon/genericdaemon_controller.go

func add(mgr manager.Manager, r reconcile.Reconciler) error {
	[...]
	// Watch DaemonSets created by GenericDaemon
	err = c.Watch(&source.Kind{Type: &appsv1.DaemonSet{}}, &handler.EnqueueRequestForOwner{
		IsController: true,
		OwnerType:    &mygroupv1beta1.GenericDaemon{},
	})
	[...]
}

Second, let’s write the code of the Reconcile function. Note the parts specific to our CRD: the DaemonSet definition and the Count status update:

// pkg/controller/genericdaemon/genericdaemon_controller.go

// Automatically generate RBAC rules to allow the Controller to read and write DaemonSets
// +kubebuilder:rbac:groups=apps,resources=daemonsets,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=mygroup.mydomain.com,resources=genericdaemons,verbs=get;list;watch;create;update;patch;delete
func (r *ReconcileGenericDaemon) Reconcile(request reconcile.Request) (reconcile.Result, error) {
	// Fetch the GenericDaemon instance
	instance := &mygroupv1beta1.GenericDaemon{}
	err := r.Get(context.TODO(), request.NamespacedName, instance)
	if err != nil {
		if errors.IsNotFound(err) {
			// Object not found, return.
			// Created objects are automatically garbage collected.
			// For additional cleanup logic use finalizers.
			return reconcile.Result{}, nil
		}
		// Error reading the object - requeue the request.
		return reconcile.Result{}, err
	}

	// Define the desired DaemonSet object
	daemonset := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{
			Name:      instance.Name + "-daemonset",
			Namespace: instance.Namespace,
		},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"daemonset": instance.Name + "-daemonset"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"daemonset": instance.Name + "-daemonset"},
				},
				Spec: corev1.PodSpec{
					NodeSelector: map[string]string{"daemon": instance.Spec.Label},
					Containers: []corev1.Container{
						{
							Name:  "genericdaemon",
							Image: instance.Spec.Image,
						},
					},
				},
			},
		},
	}

	if err := controllerutil.SetControllerReference(instance, daemonset, r.scheme); err != nil {
		return reconcile.Result{}, err
	}

	// Check if the DaemonSet already exists
	found := &appsv1.DaemonSet{}
	err = r.Get(context.TODO(), types.NamespacedName{Name: daemonset.Name, Namespace: daemonset.Namespace}, found)
	if err != nil && errors.IsNotFound(err) {
		log.Printf("Creating Daemonset %s/%s\n", daemonset.Namespace, daemonset.Name)
		err = r.Create(context.TODO(), daemonset)
		if err != nil {
			return reconcile.Result{}, err
		}
	} else if err != nil {
		return reconcile.Result{}, err
	}

	// Get the number of Ready daemon pods and set the Count status
	if found.Status.NumberReady != instance.Status.Count {
		log.Printf("Updating Status %s/%s\n", instance.Namespace, instance.Name)
		instance.Status.Count = found.Status.NumberReady
		err = r.Update(context.TODO(), instance)
		if err != nil {
			return reconcile.Result{}, err
		}
	}

	// Update the found object and write the result back if there are any changes
	if !reflect.DeepEqual(daemonset.Spec, found.Spec) {
		found.Spec = daemonset.Spec
		log.Printf("Updating Daemonset %s/%s\n", daemonset.Namespace, daemonset.Name)
		err = r.Update(context.TODO(), found)
		if err != nil {
			return reconcile.Result{}, err
		}
	}

	return reconcile.Result{}, nil
}
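The control flow above is the classic create-or-update reconcile pattern: create the object if it is absent, overwrite it if it drifted from the desired state, otherwise do nothing. A minimal, cluster-free sketch of that idea, using toy types rather than the real Kubernetes API:

```go
package main

import "fmt"

// Spec is a toy stand-in for a DaemonSet spec.
type Spec struct {
	Image string
	Label string
}

// Store is a toy stand-in for the cluster state, keyed by object name.
type Store map[string]Spec

// Reconcile mimics the controller's flow: create the object if it is
// not found, update it if its spec drifted, and do nothing when the
// observed state already matches the desired state.
func Reconcile(s Store, name string, desired Spec) string {
	found, ok := s[name]
	if !ok {
		s[name] = desired // not found: create
		return "created"
	}
	if found != desired {
		s[name] = desired // spec drifted: update
		return "updated"
	}
	return "unchanged"
}

func main() {
	s := Store{}
	d := Spec{Image: "httpd", Label: "http"}
	fmt.Println(Reconcile(s, "genericdaemon-sample", d)) // created
	fmt.Println(Reconcile(s, "genericdaemon-sample", d)) // unchanged
	d.Image = "httpd:2.4"
	fmt.Println(Reconcile(s, "genericdaemon-sample", d)) // updated
}
```

Because each branch converges toward the desired state, the function is idempotent: calling it repeatedly with the same input settles on "unchanged", which is exactly what lets the real Reconcile be retried safely on every event.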

We also have to adapt the generated tests to our GenericDaemon. The important part is creating a GenericDaemon instance correctly:

// pkg/controller/genericdaemon/genericdaemon_controller_test.go

func TestReconcile(t *testing.T) {
	g := gomega.NewGomegaWithT(t)
	instance := &mygroupv1beta1.GenericDaemon{
		ObjectMeta: metav1.ObjectMeta{Name: "foo", Namespace: "default"},
		Spec: mygroupv1beta1.GenericDaemonSpec{
			Label: "http",
			Image: "mydockerid/myimage",
		},
	}
	[...]

Deploy and play

That’s all! We can now regenerate the API and controller code, build and push the Docker image, and deploy our project:

make
make docker-build IMG=mydockerid/genericdaemon
make docker-push IMG=mydockerid/genericdaemon
make deploy

If you examine the output of the make deploy command, you can see that it deployed the CRD and an RBAC role and role binding giving the operator access to the necessary objects, and created a namespace, a service and a StatefulSet for the operator.

At this point, the operator should be running:

$ kubectl get pods --namespace=mygroup-system
NAME                           READY   STATUS    RESTARTS   AGE
mygroup-controller-manager-0   1/1     Running   0          7s
$ kubectl logs mygroup-controller-manager-0 --namespace=mygroup-system
2018/07/22 09:06:32 Registering Components.
2018/07/22 09:06:32 Starting the Cmd.

We can now personalize the generated GenericDaemon sample:

# config/samples/mygroup_v1beta1_genericdaemon.yaml
apiVersion: mygroup.mydomain.com/v1beta1
kind: GenericDaemon
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: genericdaemon-sample
spec:
  image: httpd
  label: http

and create it:

$ kubectl apply -f config/samples/mygroup_v1beta1_genericdaemon.yaml
$ kubectl get genericdaemon
NAME                   AGE
genericdaemon-sample   5s
$ kubectl get daemonset
NAME                             READY [...] NODE SELECTOR
genericdaemon-sample-daemonset   0     [...] daemon=http
$ kubectl describe genericdaemons genericdaemon-sample
[...]
Spec:
  Image:  httpd
  Label:  http
Status:
  Count:  0
$ kubectl label nodes mynode1 daemon=http
$ kubectl get daemonset
NAME                             READY [...] NODE SELECTOR
genericdaemon-sample-daemonset   1     [...] daemon=http
$ kubectl describe genericdaemons genericdaemon-sample
[...]
Spec:
  Image:  httpd
  Label:  http
Status:
  Count:  1

Conclusion

Compared with the sample-controller explored in the previous article, kubebuilder gives us the following advantages:

- kubebuilder still uses the base Kubernetes API: if you have already worked with the sample-controller or your own implementation of a controller or operator, you will recognize the same patterns;
- it is possible to create several APIs at different versions;
- some tests are generated for us to extend;
- the role and role binding are written for us (you will have to decorate the sources with some +kubebuilder:rbac tags);
- the Docker image build and deployment are handled for us.

Next, we will explore the operator-sdk.