In this series of articles, we will explore some tools to create an operator for Kubernetes.

This first article will explore the sample-controller.

The second article of the series will explore kubebuilder.

The third article of the series will explore the operator-sdk.

The sample-controller

The first tool we will use to experiment with creating an operator is the sample-controller, which you can find here: https://github.com/kubernetes/sample-controller.

This project implements a simple operator for the Foo type: when we create a custom Foo object, the controller creates a deployment running a given public Docker image with a specified number of replicas.
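For reference, the example-foo.yaml manifest used below looked like this at the time of writing (check artifacts/examples/example-foo.yaml in your checkout, as the field names may change between versions):

```yaml
apiVersion: samplecontroller.k8s.io/v1alpha1
kind: Foo
metadata:
  name: example-foo
spec:
  deploymentName: example-foo
  replicas: 1
```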

To install and build it, be sure to define your GOPATH, then run:

```shell
go get github.com/kubernetes/sample-controller
cd $GOPATH/src/k8s.io/sample-controller
go build -o ctrl .
```

We can then create the custom resource definition for the Foo type, with the files available in the artifacts/examples sub-directory:

```shell
kubectl apply -f artifacts/examples/crd-validation.yaml
```

And finally run the controller:

```shell
./ctrl -kubeconfig ~/.kube/config -logtostderr=true
```

Now from another terminal, we can manipulate Foo objects and see what happens from the controller:

```shell
$ kubectl apply -f artifacts/examples/example-foo.yaml

$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
example-foo-6cbc69bf5d-j8lhx   1/1     Running   0          18s

$ kubectl delete -f artifacts/examples/example-foo.yaml

$ kubectl get pods
NAME                           READY   STATUS        RESTARTS   AGE
example-foo-6cbc69bf5d-j8lhx   0/1     Terminating   0          38s
```

At the time of writing, using Kubernetes 1.11.0, the controller goes into an infinite loop when it updates the status of the Foo object after creating a deployment: in the updateFooStatus function, you’ll have to replace the call to Update(fooCopy) with a call to UpdateStatus(fooCopy).

So far so good, the controller does its job: it creates a deployment when we create a Foo object and deletes the deployment when we delete the object.

We can now go further and adapt the CRD and controller to use our own custom resource definition.

Adapting the sample-controller

Let’s say our goal is to write an operator that deploys a daemon on the nodes of our cluster. It will use a DaemonSet object to deploy this daemon, and we would like to be able to specify a label, to deploy the daemon only on nodes tagged with this label. We also want to be able to specify the Docker image to deploy, instead of the static one used by the sample-controller.

Let’s first create the custom resource definition for our GenericDaemon type:

```yaml
# artifacts/generic-daemon/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: genericdaemons.mydomain.com
spec:
  group: mydomain.com
  version: v1beta1
  names:
    kind: Genericdaemon
    plural: genericdaemons
  scope: Namespaced
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            label:
              type: string
            image:
              type: string
          required:
          - image
```

And a first example of daemon to deploy:

```yaml
# artifacts/generic-daemon/syslog.yaml
apiVersion: mydomain.com/v1beta1
kind: Genericdaemon
metadata:
  name: syslog
spec:
  label: logs
  image: mbessler/syslogdocker
```

We now have to create the Go files defining the API for this new custom resource, so our operator can access it. For this, let’s create a new directory pkg/apis/genericdaemon, into which we copy the files found in pkg/apis/samplecontroller (except the zz_generated.deepcopy.go one):

```shell
$ tree pkg/apis/genericdaemon/
pkg/apis/genericdaemon/
├── register.go
└── v1beta1
    ├── doc.go
    ├── register.go
    └── types.go
```

And adapt their contents (mainly the group name, the version, and the type names):

```go
// register.go
package genericdaemon

const (
	GroupName = "mydomain.com"
)
```

```go
// v1beta1/doc.go

// +k8s:deepcopy-gen=package

// Package v1beta1 is the v1beta1 version of the API.
// +groupName=mydomain.com
package v1beta1
```

```go
// v1beta1/register.go
package v1beta1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"

	genericdaemon "k8s.io/sample-controller/pkg/apis/genericdaemon"
)

// SchemeGroupVersion is group version used to register these objects
var SchemeGroupVersion = schema.GroupVersion{Group: genericdaemon.GroupName, Version: "v1beta1"}

// Kind takes an unqualified kind and returns back a Group qualified GroupKind
func Kind(kind string) schema.GroupKind {
	return SchemeGroupVersion.WithKind(kind).GroupKind()
}

// Resource takes an unqualified resource and returns a Group qualified GroupResource
func Resource(resource string) schema.GroupResource {
	return SchemeGroupVersion.WithResource(resource).GroupResource()
}

var (
	SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
	AddToScheme   = SchemeBuilder.AddToScheme
)

// addKnownTypes adds the list of known types to Scheme.
func addKnownTypes(scheme *runtime.Scheme) error {
	scheme.AddKnownTypes(SchemeGroupVersion,
		&Genericdaemon{},
		&GenericdaemonList{},
	)
	metav1.AddToGroupVersion(scheme, SchemeGroupVersion)
	return nil
}
```

```go
// v1beta1/types.go
package v1beta1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// Genericdaemon is a specification for a Genericdaemon resource
type Genericdaemon struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   GenericdaemonSpec   `json:"spec"`
	Status GenericdaemonStatus `json:"status"`
}

// GenericdaemonSpec is the spec for a Genericdaemon resource
type GenericdaemonSpec struct {
	Label string `json:"label"`
	Image string `json:"image"`
}

// GenericdaemonStatus is the status for a Genericdaemon resource
type GenericdaemonStatus struct {
	Installed int32 `json:"installed"`
}

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// GenericdaemonList is a list of Genericdaemon resources
type GenericdaemonList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata"`

	Items []Genericdaemon `json:"items"`
}
```

The script hack/update-codegen.sh generates the deepcopy functions and the client code (clientset, informers, listers) for the custom resource we defined in the previous files. We have to adapt this script to generate code for our new CRD:

```shell
#!/usr/bin/env bash
# hack/update-codegen.sh

set -o errexit
set -o nounset
set -o pipefail

SCRIPT_ROOT=$(dirname ${BASH_SOURCE})/..
CODEGEN_PKG=${CODEGEN_PKG:-$(cd ${SCRIPT_ROOT}; ls -d -1 ./vendor/k8s.io/code-generator 2>/dev/null || echo ../code-generator)}

# generate the code with:
# --output-base because this script should also be able to run inside the vendor dir of
# k8s.io/kubernetes. The output-base is needed for the generators to output into the vendor dir
# instead of the $GOPATH directly. For normal projects this can be dropped.
${CODEGEN_PKG}/generate-groups.sh "deepcopy,client,informer,lister" \
  k8s.io/sample-controller/pkg/client k8s.io/sample-controller/pkg/apis \
  genericdaemon:v1beta1 \
  --output-base "$(dirname ${BASH_SOURCE})/../../.." \
  --go-header-file ${SCRIPT_ROOT}/hack/boilerplate.go.txt
```

And then execute it:

```shell
$ ./hack/update-codegen.sh
Generating deepcopy funcs
Generating clientset for genericdaemon:v1beta1 at k8s.io/sample-controller/pkg/client/clientset
Generating listers for genericdaemon:v1beta1 at k8s.io/sample-controller/pkg/client/listers
Generating informers for genericdaemon:v1beta1 at k8s.io/sample-controller/pkg/client/informers
```

We can now adapt our operator. First, we have to replace all references to the previous Foo type with the Genericdaemon type. Second, we have to create a DaemonSet instead of a Deployment when a new generic daemon is created (not shown here).
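To give an idea of the target, the DaemonSet our adapted controller would create for the syslog example could look roughly like this. This is a sketch only: the metadata name, the label key and the node-labelling convention are our own choices, not something prescribed by the sample-controller:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: syslog-daemon          # derived from the Genericdaemon name (our convention)
spec:
  selector:
    matchLabels:
      app: syslog-daemon
  template:
    metadata:
      labels:
        app: syslog-daemon
    spec:
      nodeSelector:
        daemon: logs           # assumes nodes are tagged daemon=<spec.label>
      containers:
      - name: daemon
        image: mbessler/syslogdocker   # from spec.image
```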

Deploying the operator to the Kubernetes cluster

When we are done modifying the sample-controller to our needs, we need to deploy it to the Kubernetes cluster. Until now, we have only tested it by running it from our development machine, using our own credentials.

Here is a simple Dockerfile to build a Docker image embedding the operator (you’ll have to remove the original sample-controller code for the image to build):

```dockerfile
FROM golang

RUN mkdir -p /go/src/k8s.io/sample-controller
ADD . /go/src/k8s.io/sample-controller
WORKDIR /go

RUN go get ./...
RUN go install -v ./...

CMD ["/go/bin/sample-controller"]
```

We can now build and push the image to the Docker Hub:

```shell
docker build . -t mydockerid/genericdaemon
docker push mydockerid/genericdaemon
```

And finally start a deployment with this new image:

```yaml
# deploy.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: sample-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
      - name: sample
        image: "mydockerid/genericdaemon:latest"
```

And apply it:

```shell
kubectl apply -f deploy.yaml
```

The operator is now running, but if we examine the logs of the pod, we can see there are authorization problems: the pod does not have access rights to the different resources:

```shell
$ kubectl logs sample-controller-66b79c7d5f-2qnft
E0721 14:34:50.499584  1 reflector.go:134] k8s.io/sample-controller/pkg/client/informers/externalversions/factory.go:117: Failed to list *v1beta1.Genericdaemon: genericdaemons.mydomain.com is forbidden: User "system:serviceaccount:default:default" cannot list genericdaemons.mydomain.com at the cluster scope
E0721 14:34:50.500385  1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.DaemonSet: daemonsets.apps is forbidden: User "system:serviceaccount:default:default" cannot list daemonsets.apps at the cluster scope
[...]
```

We need to create a ClusterRole and a ClusterRoleBinding to give the operator the necessary privileges:

```yaml
# rbac_role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: operator-role
rules:
- apiGroups:
  - apps
  resources:
  - daemonsets
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
- apiGroups:
  - mydomain.com
  resources:
  - genericdaemons
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
```

```yaml
# rbac_role_binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: operator-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: operator-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
```

And deploy it:

```shell
kubectl apply -f rbac_role.yaml
kubectl apply -f rbac_role_binding.yaml
kubectl delete -f deploy.yaml
kubectl apply -f deploy.yaml
```

Now, your operator should be deployed to your Kubernetes cluster and be active.

What’s next

In the next article, we will explore kubebuilder, which provides tools to automate most of these steps.