TL;DR This blog post explains how to set up a Kubernetes cluster on AWS that runs the Cloud Controller Manager (CCM) for cloud provider integration.

Some of you may already be familiar with my work in Kubernetes. I proposed and authored the Cloud Controller Manager (CCM) in Kubernetes. It was introduced in Kubernetes 1.6 and is headed towards beta in Kubernetes 1.11.

As CCM becomes more prominent and the existing mechanisms are retired, this guide will come in handy for setting up a Kubernetes cluster that runs it. This guide is specifically aimed at running CCM on AWS using Kops.

Background Information

The motivation behind CCM and its utility are explained in two blog posts that I wrote in May 2017:

These articles discuss the new architecture of Kubernetes and how it benefits users and the ecosystem. Here’s a diagram detailing the new architecture:

Kubernetes architecture with CCM running

CCM as a Core Addon

Kops is an especially useful tool when setting up a Kubernetes cluster on AWS. By default, Kops sets up a Kubernetes cluster that runs the following components:

Kube-API-Server

Kube-Controller-Manager

Kube-Scheduler

etcd

Kubelet

Kube-Proxy

It also starts some core addons and other services that are vital to the functioning of Kops itself.
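Once a cluster is up, most of these components are visible as pods in the kube-system namespace (the kubelet itself runs as a host service rather than a pod). A quick way to see them, assuming kubectl is pointed at the cluster:

$ kubectl -n kube-system get pods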

In this guide, we’ll set up a Kubernetes cluster that runs an additional component along with the above components. The additional component is

Cloud-Controller-Manager

The Cloud-Controller-Manager was added as a core addon in Kops.

Kops core addons, also known as system addons, are services that are required for the functioning of Kubernetes itself. Examples of system addons are:

DNS controller, the service that maintains the DNS records for the various services running in Kubernetes

Cloud-Controller-Manager
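Once the cluster described later in this guide is running, these system addons appear as regular workloads in the kube-system namespace; dns-controller, for example, runs as a deployment there. A quick way to confirm, assuming kubectl is configured for the cluster:

$ kubectl -n kube-system get deployments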

Downloading and Installing Kops

In order for the cluster to work correctly, you need the right version of Kops. As of yesterday, Kops includes a bugfix I added so that it runs CCM the right way.

Let’s start by downloading Kops and checking out the right version:

$ git clone https://github.com/kubernetes/kops

$ cd kops

$ git checkout 9d9646d0ce97e089b84f9aca5048eb4c84e23c46 # The bugfix

Let’s build it:

$ make

Kops will be installed in your $GOPATH/bin directory. You can verify this by running the following command:

$ kops version

Version 1.8.0 (git-9d9646d)

Creating a New Cluster with CCM

There are two ways to create a cluster using Kops.

Command Line

Cluster Spec

The Command Line method is the most commonly used and preferred method today. The Cluster Spec method is a declarative way to achieve the same results.

Cloud-Controller-Manager support is only available through the Cluster Spec method. You can find more information about the cluster spec here.

Since CCM is an alpha feature in Kubernetes core, Kops gates the CCM feature behind a feature flag, which you need to enable explicitly. Let’s start by opening this feature gate:

export KOPS_FEATURE_FLAGS=EnableExternalCloudController

This environment variable should be set in any shell that you use to create Kops Kubernetes clusters.
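If you create clusters from more than one shell, it may help to persist the flag in your shell profile (a minimal sketch, assuming bash):

$ echo 'export KOPS_FEATURE_FLAGS=EnableExternalCloudController' >> ~/.bashrc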

Next, let’s create a Cluster Spec for the new cluster. The Cluster Spec is a Kubernetes-style resource. Start by setting a name for the cluster:

apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: ccm.example.com # The name of the cluster

Let’s expand on this Cluster Spec by first setting the CCM-related parameters. In order to run CCM:

Every component other than CCM should set cloudProvider: external

The Persistent Volume Label (PVL) admission controller should not be run

Let’s configure these in the Cluster Spec:



apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: ccm.example.com
spec:
  api:
    dns: {}
  authorization:
    alwaysAllow: {}
  channel: stable
  cloudProvider: aws # This should be AWS
  clusterDNSDomain: cluster.local
  docker:
    bridge: ""
    ipMasq: false
    ipTables: false
    logDriver: json-file
    logLevel: warn
    logOpt:
    - max-size=10m
    - max-file=5
    storage: overlay,aufs
    version: 1.13.1
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-us-east-1a
      name: a
    name: main
    version: 3.0.17
    enableEtcdTLS: true
  - etcdMembers:
    - instanceGroup: master-us-east-1a
      name: a
    name: events
    version: 3.0.17
    enableEtcdTLS: true
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeAPIServer:
    address: 127.0.0.1
    admissionControl:
    - Initializers
    - NamespaceLifecycle
    - LimitRanger
    - ServiceAccount
    #- PersistentVolumeLabel # Should not run PVL controller
    - DefaultStorageClass
    - DefaultTolerationSeconds
    - NodeRestriction
    - Priority
    - ResourceQuota
    allowPrivileged: true
    anonymousAuth: false
    apiServerCount: 1
    authorizationMode: AlwaysAllow
    cloudProvider: external
    etcdServers:
    - http://127.0.0.1:4001
    etcdServersOverrides:
    - /events#http://127.0.0.1:4002
    image: gcr.io/google_containers/kube-apiserver:v1.8.6
    insecurePort: 8080
    kubeletPreferredAddressTypes:
    - InternalIP
    - Hostname
    - ExternalIP
    logLevel: 2
    requestheaderAllowedNames:
    - aggregator
    requestheaderExtraHeaderPrefixes:
    - X-Remote-Extra-
    requestheaderGroupHeaders:
    - X-Remote-Group
    requestheaderUsernameHeaders:
    - X-Remote-User
    securePort: 443
    serviceClusterIPRange: 100.64.0.0/13
    storageBackend: etcd3
  cloudControllerManager: # This should be non-empty
    cloudProvider: aws # CCM CloudProvider should be AWS
  kubernetesVersion: 1.8.6
  masterInternalName: api.internal.ccm.example.com
  masterPublicName: api.example.com
  #networkCIDR: 172.32.0.0/16
  networking:
    kubenet: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  secretStore: s3://kops-k8s-state-store/ccm.example.com/secrets
  serviceClusterIPRange: 100.64.0.0/13
  subnets:
  - name: us-east-1a
    type: Public
    zone: us-east-1a
    cidr: 172.32.32.0/19
  sshAccess:
  - 0.0.0.0/0
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

The cloud provider for the Kubelet and the Kube-Controller-Manager will be set to external automatically if left unset.
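If you prefer to be explicit rather than rely on that default, the equivalent settings can be spelled out in the Cluster Spec (a sketch; these fields sit at the same level as kubeAPIServer above):

  kubelet:
    cloudProvider: external
  kubeControllerManager:
    cloudProvider: external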

A state store is needed before creating this cluster. I’ve used the following S3 bucket as my state store: s3://kops-k8s-state-store.
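If you don’t already have a state store bucket, you can create one with the AWS CLI (a minimal sketch; S3 bucket names are globally unique, so substitute your own):

$ aws s3api create-bucket --bucket kops-k8s-state-store --region us-east-1

$ aws s3api put-bucket-versioning --bucket kops-k8s-state-store --versioning-configuration Status=Enabled

With the state store in place, let’s create a new cluster using the above Cluster Spec: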

$ export KOPS_FEATURE_FLAGS=EnableExternalCloudController

$ kops create -f ccm-cluster-spec.yaml --state=s3://kops-k8s-state-store

I’ve repeated the feature gate environment variable for anyone copying the above command. That variable must be set for Kops to run CCM.
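To confirm that the cluster configuration landed in the state store, you can list the registered clusters (a quick sanity check):

$ kops get clusters --state=s3://kops-k8s-state-store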

The kops create command above creates a config in the S3 bucket for the new cluster. Kops needs some more information before it can start the cluster, namely:

Master Instance Group

Node Instance Group

SSH Key Secret

The first two define the number and size of the machines that form your Kubernetes cluster. The last holds the SSH key used to log in to these hosts. Let’s create all three:

$ cat << EOF > master-ig.yaml
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: ccm.example.com
  name: master-us-east-1a
spec:
  image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2017-12-02
  machineType: c4.large
  maxSize: 2
  minSize: 2
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-east-1a
  role: Master
  subnets:
  - us-east-1a
  zones:
  - us-east-1a
EOF

$ cat << EOF > nodes-ig.yaml
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: ccm.example.com
  name: nodes
spec:
  image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2017-12-02
  machineType: t2.medium
  maxSize: 3
  minSize: 3
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  subnets:
  - us-east-1a
  zones:
  - us-east-1a
EOF

$ kops create -f master-ig.yaml --state=s3://kops-k8s-state-store

$ kops create -f nodes-ig.yaml --state=s3://kops-k8s-state-store

$ kops create secret --name ccm.example.com sshpublickey admin -i ~/.ssh/id_rsa.pub --state=s3://kops-k8s-state-store
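Before proceeding, you can confirm that both instance groups were registered against the cluster (a quick check using the same state store):

$ kops get ig --name ccm.example.com --state=s3://kops-k8s-state-store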

There are a few nuances to note in the above commands.

There should be at least two c4.large master instances; otherwise, Kops will not start CCM due to a lack of compute resources.

The secret should be saved in the same state store, since the original Cluster Spec expects the secret to be saved there.

The cluster can finally be instantiated using the following command:

$ kops update cluster ccm.example.com --state=s3://kops-k8s-state-store --yes

You can verify that the Kubernetes cluster is running CCM once it is ready. It takes about 5–10 minutes for the cluster to come up.
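Kops also ships a validation command that reports when the masters and nodes are healthy (assuming kops update exported a kubeconfig for the cluster):

$ kops validate cluster ccm.example.com --state=s3://kops-k8s-state-store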

$ kubectl -n kube-system get pods

The output of this command should show one cloud-controller-manager pod running without restarts, and all dns-controller pods running.
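To dig deeper, you can tail the CCM’s logs; the exact pod name suffix will differ on your cluster, so treat the second command as a sketch:

$ kubectl -n kube-system get pods | grep cloud-controller-manager

$ kubectl -n kube-system logs cloud-controller-manager-<pod-suffix>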

Conclusion

This article serves as a quick introduction to setting up and running Cloud-Controller-Manager, the newest Kubernetes component.

Stay tuned for more Kubernetes deep dives and information on bleeding edge features in Kubernetes!