MultiCassKop starts by iterating over every context passed as a parameter, then registers the controller. The controller needs to be able to interact with MultiCasskop and CassandraCluster CRD objects. In addition, the controller needs to watch MultiCasskop objects, as it must react to any change made to them in the given namespace.

Installation

Create namespaces on each k8s cluster

We need to create a namespace with our pre-configured Calico routable-1 IP pool for cluster k8s-cluster1, and the routable-2 IP pool for k8s-cluster2.
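These IP pools are assumed to be configured beforehand on each cluster. As a sketch only (the CIDR and ipipMode are placeholders to adapt to your network plan), such a pool could be declared with calicoctl:

cat <<EOF | calicoctl apply -f -
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: routable-1
spec:
  cidr: 10.100.146.0/24   # placeholder: a CIDR routable between both clusters
  ipipMode: Never          # assumption: pods are directly routable, no encapsulation
  natOutgoing: false
EOF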

Example for k8s-cluster1:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: cassandra-demo
  annotations:
    "cni.projectcalico.org/ipv4pools": '["routable-1"]'
  labels:
    app: cassandra
EOF

kubens cassandra-demo
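The same namespace must then be created on k8s-cluster2, this time with the routable-2 pool; for example, using the cluster2 kubectl context used below:

cat <<EOF | kubectl --context cluster2 apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: cassandra-demo
  annotations:
    "cni.projectcalico.org/ipv4pools": '["routable-2"]'
  labels:
    app: cassandra
EOF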

Bootstrap API access to k8s-cluster2 from k8s-cluster1

Multi-CassKop will be deployed in k8s-cluster1, so change your kubectl context to point to this cluster.

To give our Multi-CassKop controller access to k8s-cluster2 from k8s-cluster1, we use kubemcsa from Admiralty to export the secret of the cassandra-operator service account from k8s-cluster2 to k8s-cluster1:

kubemcsa export --context=cluster2 --namespace cassandra-demo cassandra-operator --as k8s-cluster2 | kubectl apply -f -

This creates, in the current k8s cluster (which must be k8s-cluster1), the secret associated with the cassandra-operator service account of the cassandra-demo namespace in k8s-cluster2.

/!\ The secret will be created with the name k8s-cluster2, and this name must be used when starting Multi-CassKop and in the MultiCassKop CRD definition (see below).
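You can verify that the secret is now present in k8s-cluster1:

kubectl get secret k8s-cluster2 --namespace cassandra-demo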

Install CassKop

CassKop must be deployed on each targeted Kubernetes cluster.

Add the Helm repository for CassKop

$ helm repo add casskop https://Orange-OpenSource.github.io/cassandra-k8s-operator/helm

$ helm repo update
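Then install the operator on each cluster (switching kubectl/Helm context accordingly). Assuming the chart in this repo is named cassandra-operator, the commands would look like:

$ helm install --name casskop casskop/cassandra-operator
$ helm --kube-context cluster2 install --name casskop casskop/cassandra-operator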

Install External-DNS

External-DNS must be installed in each Kubernetes cluster.

Configure External-DNS with a custom values file pointing to your zone and deploy it in your namespace:

helm install -f ~/private/externaldns-values.yaml --name casskop-dns external-dns
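The contents of this values file depend on your DNS provider. As an illustration only, a minimal sketch for the external-dns chart with a hypothetical AWS Route53 zone could be:

cat <<EOF > ~/private/externaldns-values.yaml
# Sketch only: provider and zone are placeholders to adapt to your environment
provider: aws
domainFilters:
  - my.zone.dns.net   # the zone used by the seedlist example below
txtOwnerId: casskop-dns
EOF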

External-DNS listens to the Kubernetes services created by CassKop and creates a DNS entry in your zone for each pod associated with a service. That means we automatically get an external DNS entry for each of the Cassandra pods created by CassKop. And because our pods use Calico routable IP addresses, they can be reached from outside the cluster.

Install Multi-CassKop

Proceed with the Multi-CassKop installation only when the above prerequisites are fulfilled.

Deployment is done with Helm. Multi-CassKop and CassKop share the same GitHub/Helm repo and semantic version.

helm install --name multi-casskop casskop/multi-casskop --set k8s.local=k8s-cluster1 --set k8s.remote={k8s-cluster2}

If you get an error complaining that the CRD already exists, replay the command with --no-hooks.

When starting Multi-CassKop, we need to give some parameters:

- k8s.local is the name of the k8s cluster we refer to when talking to this (local) cluster.
- k8s.remote is a list of the other Kubernetes clusters we want to connect to.

The names used there must map to the names used in the MultiCassKop CRD definition, and the names in k8s.remote must match the names of the secrets exported with the kubemcsa command.
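For instance, with a second, hypothetical remote cluster k8s-cluster3 whose secret had been exported the same way, the install command would become:

helm install --name multi-casskop casskop/multi-casskop --set k8s.local=k8s-cluster1 --set k8s.remote={k8s-cluster2,k8s-cluster3}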

When starting, our MultiCassKop controller should log something similar to:

time="2019-11-28T14:51:57Z" level=info msg="Configuring Client 1 for local cluster k8s-cluster1 (first in arg list). using local k8s api access"

time="2019-11-28T14:51:57Z" level=info msg="Configuring Client 2 for distant cluster k8s-cluster2. using imported secret of same name"

time="2019-11-28T14:51:57Z" level=info msg="Creating Controller"

time="2019-11-28T14:51:57Z" level=info msg="Create Client 1 for Cluster k8s-cluster1"

time="2019-11-28T14:51:57Z" level=info msg="Add CRDs to Cluster k8s-cluster1 Scheme"

time="2019-11-28T14:51:57Z" level=info msg="Create Client 2 for Cluster k8s-cluster2"

time="2019-11-28T14:51:58Z" level=info msg="Add CRDs to Cluster k8s-cluster2 Scheme"

time="2019-11-28T14:51:58Z" level=info msg="Configuring Watch for MultiCasskop"

time="2019-11-28T14:51:58Z" level=info msg="Configuring Watch for MultiCasskop"

time="2019-11-28T14:51:58Z" level=info msg="Writing ready file."

time="2019-11-28T14:51:58Z" level=info msg="Starting Manager."

Create the MultiCassKop CRD

Multi-CassKop introduces a new custom resource and is in charge of creating CassandraCluster resources in each k8s cluster.

The Spec field of MultiCasskop has a base parameter which contains a valid CassandraCluster object. It also has an override section that allows specific parts of the base CassandraCluster definition to be overridden depending on the target cluster.

You can find an example of MultiCassKop in the multi-casskop/samples/multi-casskop.yaml file. This is the structure of a MultiCasskop:

apiVersion: db.orange.com/v1alpha1
kind: MultiCasskop
metadata:
  name: multi-casskop-e2e
spec:
  # Add fields here
  deleteCassandraCluster: true
  base:
    <The base of the CassandraCluster object you want Multi-CassKop to create>
    ...
    status: #<-- At this time the seedlist must be provided manually. We know the domain
            #    names in advance: <pod-name>.<your-external-dns-domain>
      seedlist:
        - cassandra-e2e-dc1-rack1-0.my.zone.dns.net
        - cassandra-e2e-dc1-rack1-1.my.zone.dns.net
        - cassandra-e2e-dc2-rack4-0.my.zone.dns.net
        - cassandra-e2e-dc2-rack4-1.my.zone.dns.net
  override:
    k8s-cluster1: #<-- this name must correspond to the helm argument `k8s.local`
      spec: #<-- here we define the override parts of the CassandraCluster for k8s-cluster1
        pod:
          annotations:
            cni.projectcalico.org/ipv4pools: '["routable"]' #<-- if using external DNS, change with your current zone
        topology:
          dc:
            ...
    k8s-cluster2: #<-- this name must correspond to the helm argument `k8s.remote`
      spec:
        pod:
          annotations:
            cni.projectcalico.org/ipv4pools: '["routable"]' #<-- if using external DNS, change with your current zone
        imagepullpolicy: IfNotPresent
        topology:
          dc:
            ...

You can create the cluster with:

kubectl apply -f multi-casskop/samples/multi-casskop.yaml

Then you can see the MultiCassKop logs:

time="2019-11-28T15:46:19Z" level=info msg="Just Update CassandraCluster, returning for now.." cluster=cassandra-e2e kubernetes=k8s-cluster1 namespace=cassandra-e2e

time="2019-11-28T15:46:19Z" level=info msg="Cluster is not Ready, we requeue [phase= / action= / status=]" cluster=cassandra-e2e kubernetes=k8s-cluster1 namespace=cassandra-e2e

time="2019-11-28T15:46:49Z" level=info msg="Cluster is not Ready, we requeue [phase=Initializing / action=Initializing / status=Ongoing]" cluster=cassandra-e2e kubernetes=k8s-cluster1 namespace=cassandra-e2e

time="2019-11-28T15:47:19Z" level=info msg="Cluster is not Ready, we requeue [phase=Initializing / action=Initializing / status=Ongoing]" cluster=cassandra-e2e kubernetes=k8s-cluster1 namespace=cassandra-e2e

time="2019-11-28T15:47:49Z" level=info msg="Cluster is not Ready, we requeue [phase=Initializing / action=Initializing /status=Ongoing]" cluster=cassandra-e2e kubernetes=k8s-cluster1 namespace=cassandra-e2e

time="2019-11-28T15:47:19Z" level=info msg="Cluster is not Ready, we requeue [phase=Initializing / action=Initializing / status=Ongoing]" cluster=cassandra-e2e kubernetes=k8s-cluster1 namespace=cassandra-e2e

time="2019-11-28T15:47:49Z" level=info msg="Cluster is not Ready, we requeue [phase=Initializing / action=Initializing / status=Ongoing]" cluster=cassandra-e2e kubernetes=k8s-cluster1 namespace=cassandra-e2e

time="2019-11-28T15:48:19Z" level=info msg="Cluster is not Ready, we requeue [phase=Initializing / action=Initializing / status=Ongoing]" cluster=cassandra-e2e kubernetes=k8s-cluster1 namespace=cassandra-e2e

time="2019-11-28T15:48:49Z" level=info msg="Cluster is not Ready, we requeue [phase=Initializing / action=Initializing / status=Ongoing]" cluster=cassandra-e2e kubernetes=k8s-cluster1 namespace=cassandra-e2e

time="2019-11-28T15:49:19Z" level=info msg="Just Update CassandraCluster, returning for now.." cluster=cassandra-e2e kubernetes=k8s-cluster2 namespace=cassandra-e2e

time="2019-11-28T15:49:49Z" level=info msg="Cluster is not Ready, we requeue [phase=Initializing / action=Initializing / status=Ongoing]" cluster=cassandra-e2e kubernetes=k8s-cluster2 namespace=cassandra-e2e

time="2019-11-28T15:50:19Z" level=info msg="Cluster is not Ready, we requeue [phase=Initializing / action=Initializing / status=Ongoing]" cluster=cassandra-e2e kubernetes=k8s-cluster2 namespace=cassandra-e2e

time="2019-11-28T15:50:49Z" level=info msg="Cluster is not Ready, we requeue [phase=Initializing / action=Initializing / status=Ongoing]" cluster=cassandra-e2e kubernetes=k8s-cluster2 namespace=cassandra-e2e

time="2019-11-28T15:51:19Z" level=info msg="Cluster is not Ready, we requeue [phase=Initializing / action=Initializing / status=Ongoing]" cluster=cassandra-e2e kubernetes=k8s-cluster2 namespace=cassandra-e2e

time="2019-11-28T15:51:49Z" level=info msg="Cluster is not Ready, we requeue [phase=Initializing / action=Initializing / status=Ongoing]" cluster=cassandra-e2e kubernetes=k8s-cluster2 namespace=cassandra-e2e

This is the sequence of operations:

- MultiCassKop first creates the CassandraCluster in k8s-cluster1.
- The local CassKop then starts creating the associated Cassandra cluster.
- When CassKop has finished creating its cluster, it updates the CassandraCluster status with phase=Running, meaning that all is OK.
- MultiCassKop then starts creating the other CassandraCluster in k8s-cluster2.
- The local CassKop there starts creating the associated Cassandra cluster.
- Thanks to the routable seed-list configured with external DNS names, the new Cassandra pods connect to the already existing Cassandra nodes from k8s-cluster1, so as to form a single Cassandra ring.
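You can check that the seed entries are resolvable, for example with dig against the placeholder zone used in the sample:

dig +short cassandra-e2e-dc1-rack1-0.my.zone.dns.net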

As a result, we can see that each cluster has the required pods.
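For example, in the cassandra-demo namespace created earlier:

kubectl get pods -n cassandra-demo
kubectl --context cluster2 get pods -n cassandra-demo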

If we exec into one of the created pods, we can see that nodetool sees the pods of both clusters:

cassandra@cassandra-e2e-dc1-rack2-0:/$ nodetool status
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load        Tokens  Owns (effective)  Host ID                               Rack
UN  10.100.146.150  93.95 KiB   256     49.8%             cfabcef2-3f1b-492d-b028-0621eb672ec7  rack2
UN  10.100.146.108  108.65 KiB  256     48.3%             d1185b37-af0a-42f9-ac3f-234e541f14f0  rack1
Datacenter: dc2
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load        Tokens  Owns (effective)  Host ID                               Rack
UN  10.100.151.38   69.89 KiB   256     51.4%             ec9003e0-aa53-4150-b4bb-85193d9fa180  rack5
UN  10.100.150.34   107.89 KiB  256     50.5%             a28c3c59-786f-41b6-8eca-ca7d7d14b6df  rack4
cassandra@cassandra-e2e-dc1-rack2-0:/$

Delete the Cassandra cluster

If you have set the deleteCassandraCluster key to true, then deleting the MultiCassKop object cascades the deletion of the CassandraCluster objects in the targeted k8s clusters. Each local CassKop will then delete its Cassandra cluster.
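For example, deleting the sample resource created earlier:

kubectl delete -f multi-casskop/samples/multi-casskop.yaml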

You can follow the deletion in the MultiCassKop logs.