Cluster Setup

If you don’t have a kops-managed AWS Kubernetes cluster yet, go ahead and create one. The only cluster-creation parameters that matter at this point are the Kubernetes version (1.14.0 or later) and the networking plugin (flannel). Bring up your cluster and make sure that everything is healthy.

If your cluster isn’t coming up on 1.14, you most likely need to set admissionControl under kubeAPIServer in your cluster spec. kops doesn’t yet fully support 1.14, so this has to be set manually.

```yaml
kubeAPIServer:
  admissionControl:
  - NamespaceLifecycle
  - LimitRanger
  - ServiceAccount
  - PersistentVolumeLabel
  - DefaultStorageClass
  - ResourceQuota
  - DefaultTolerationSeconds
```
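Assuming a default kops setup, the admissionControl block goes into your cluster spec. A minimal sketch of that edit-and-apply cycle (the cluster name is a placeholder):

```shell
# Open the cluster spec in your editor and add the kubeAPIServer block
kops edit cluster $CLUSTER_NAME

# Push the updated spec out to the cluster
kops update cluster $CLUSTER_NAME --yes
```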

Once your cluster is up, we’ll need to make some slight modifications to our flannel DaemonSet and configuration. Edit your flannel DaemonSet (kubectl edit ds -n kube-system kube-flannel-ds if you’re using all defaults) and add kubernetes.io/os: linux under the DaemonSet’s nodeSelector. This keeps flannel containers from being scheduled onto our Windows nodes.

```yaml
...
nodeSelector:
  beta.kubernetes.io/arch: amd64
  kubernetes.io/os: linux
...
```
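If you’d rather not edit interactively, the same nodeSelector change can be applied with a strategic-merge patch. This is a sketch assuming the default DaemonSet name:

```shell
# Add the linux nodeSelector to the flannel DaemonSet's Pod template
kubectl -n kube-system patch ds kube-flannel-ds --type merge \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/os":"linux"}}}}}'
```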

The reason we don’t want flannel containers on our Windows nodes is simply that they won’t work there; instead, flannel must run on the Windows nodes themselves alongside the kubelet and kube-proxy services.

Once your DaemonSet is modified, we need to make some slight changes to flannel’s configuration, which consists of two files: cni-conf.json and net-conf.json. Both are defined in the kube-flannel-cfg ConfigMap in the kube-system namespace. They should resemble the files below, with the exception of the Network field in net-conf.json, which might differ for your cluster:

```yaml
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "vxlan0",
      "type": "flannel",
      "delegate": {
        "forceAddress": true,
        "isDefaultGateway": true,
        "hairpinMode": true
      }
    }
  net-conf.json: |-
    {
      "Network": "100.64.0.0/10",
      "Backend": {
        "Name": "vxlan0",
        "Type": "vxlan",
        "VNI": 4096,
        "Port": 4789
      }
    }
kind: ConfigMap
...
```
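One way to apply the ConfigMap changes and have the Linux flannel Pods pick them up (the app=flannel label is an assumption based on the stock flannel manifests; verify it against your DaemonSet):

```shell
# Edit cni-conf.json and net-conf.json in place
kubectl -n kube-system edit cm kube-flannel-cfg

# Recreate the flannel Pods so they reload the updated ConfigMap
kubectl -n kube-system delete pod -l app=flannel
```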

Your changes to flannel’s setup are now complete, but we still need to do one more task in order for flannel to work properly on Windows. Flannel authenticates with Kubernetes using the flannel ServiceAccount, which is assigned to every flannel Pod from the DaemonSet. However, flannel will not be running as a Pod on our Windows nodes, so we need to generate a kubeconfig file for the Windows nodes to pull from the kops S3 state store for use by flannel.

I won’t go into how to generate a kubeconfig file for a ServiceAccount here, but I’ll reference this script in case you need some guidance. Alternatively, you can always copy your admin kubeconfig file into the S3 state store for Windows to pull if you’re lazy. This is obviously a security risk, but if you’re just following this for a proof-of-concept project then that’ll be just fine.
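For reference, a minimal sketch of generating such a kubeconfig from the flannel ServiceAccount’s token secret. The ServiceAccount and output names are assumptions; adapt them to your cluster:

```shell
#!/bin/bash
set -euo pipefail

# Assumed names; adjust for your cluster
NAMESPACE=kube-system
SA=flannel
OUT=flannel.kcfg

# Find the token secret Kubernetes created for the ServiceAccount
SECRET=$(kubectl -n "$NAMESPACE" get sa "$SA" -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl -n "$NAMESPACE" get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d)

# Reuse the API server endpoint and CA from the current context
SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
kubectl config view --minify --raw \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d > ca.crt

# Assemble the kubeconfig around the ServiceAccount token
kubectl config --kubeconfig="$OUT" set-cluster kubernetes \
  --server="$SERVER" --certificate-authority=ca.crt --embed-certs=true
kubectl config --kubeconfig="$OUT" set-credentials "$SA" --token="$TOKEN"
kubectl config --kubeconfig="$OUT" set-context default --cluster=kubernetes --user="$SA"
kubectl config --kubeconfig="$OUT" use-context default
```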

The kubeconfig for flannel needs to be placed under serviceaccount/flannel.kcfg underneath your cluster’s base S3 prefix. The full path under default kops state store settings should look something like s3://$BUCKET/clusterconfigs/$CLUSTER/serviceaccount/flannel.kcfg .
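Uploading the file might look like this with the AWS CLI (bucket and cluster names are placeholders):

```shell
# Place the flannel kubeconfig where the Windows startup script expects it
aws s3 cp flannel.kcfg "s3://$BUCKET/clusterconfigs/$CLUSTER/serviceaccount/flannel.kcfg"
```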

Finally, we need to add a few additional role policies for our nodes so that the startup script for our Windows nodes can work properly. Our Windows nodes need the ability to read AWS EC2 tags as well as the ability to read the flannel service account file we just uploaded:

```yaml
additionalPolicies:
  node: |
    [
      {
        "Effect": "Allow",
        "Action": ["ec2:DescribeTags"],
        "Resource": ["*"]
      },
      {
        "Effect": "Allow",
        "Action": ["s3:*"],
        "Resource": [
          "arn:aws:s3:::$BUCKET/clusterconfigs/$CLUSTER/serviceaccount/flannel.kcfg"
        ]
      }
    ]
```

That concludes the setup of our kops-managed Kubernetes cluster; now it’s time to actually create the Windows InstanceGroup. If you haven’t already, apply the changes to the cluster and perform a rolling update just to ensure that everything took effect.
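Under default kops settings, that apply-and-roll step might look like the following (cluster name is a placeholder):

```shell
# Apply the spec changes, then roll the nodes so every change takes effect
kops update cluster $CLUSTER --yes
kops rolling-update cluster $CLUSTER --yes
```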