MySQL replication is a valuable asset in Kubernetes (K8S), providing redundancy, increased availability, and potential performance improvements. Follow this guide to learn how to set up your MySQL replication cluster.

We will use a traditional master/slave setup with asynchronous replication: the slaves pull changes from the master according to their configuration, with no requirement for a constant connection. For the official line on MySQL and more technical details around replication, check out the MySQL documentation.

1. First Step

In this example, we are using Vagrant as a configuration manager on Mac OS X to create the Kubernetes environment. To follow this guide, please install Vagrant on your host machine. You will also need a virtualization tool, for example Oracle's VirtualBox or VMware Fusion.

2. Setting up the K8S Environment

With a variety of ways to configure and set up a Kubernetes cluster, your existing system may vary. In this instance, we are using Vagrant (the Vagrantfile is provided by the CoreOS setup guide) to set up the environment, making use of a single controller node and a set of three worker (slave) nodes.

After setting up Vagrant, check the state of the environment by running these commands:

➜ vagrant git:(master) ✗ vagrant status
Current machine states:

e1                        running (virtualbox)
c1                        running (virtualbox)
w1                        running (virtualbox)
w2                        running (virtualbox)
w3                        running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

➜ vagrant git:(master) ✗ kubectl get nodes
NAME           STATUS                     AGE
172.17.4.101   Ready,SchedulingDisabled   23m
172.17.4.201   Ready                      23m
172.17.4.202   Ready                      23m
172.17.4.203   Ready                      21m
➜ vagrant git:(master) ✗

Here, e1 is the etcd node, c1 is the controller node, and w1, w2, and w3 are the worker nodes.

3. Prepare Your MySQL Docker Image

Build the Docker Image

Having set up the Kubernetes cluster, we'll modify the Dockerfile and build a Docker image with MySQL. Again, we follow the official guidance based on the MySQL Docker image.

For the master:

Edit the Dockerfile and add the following lines:

RUN sed -i '/\[mysqld\]/a server-id=1
log-bin' /etc/mysql/mysql.conf.d/mysqld.cnf
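To see exactly what this sed expression does, you can run the same idiom against a minimal sample config (the file below is illustrative; the real mysqld.cnf contains more settings):

```shell
# Illustrative sample config standing in for /etc/mysql/mysql.conf.d/mysqld.cnf.
printf '[mysqld]\npid-file=/var/run/mysqld/mysqld.pid\n' > /tmp/mysqld.cnf
# Same sed idiom as the Dockerfile RUN line: append two lines directly
# under the [mysqld] section header.
sed -i '/\[mysqld\]/a server-id=1\
log-bin' /tmp/mysqld.cnf
cat /tmp/mysqld.cnf
```

The output shows server-id=1 and log-bin inserted immediately under [mysqld]; log-bin enables binary logging, which is a prerequisite for acting as a replication master.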

Edit the docker-entrypoint.sh file to create a user and password for replication:

echo "CREATE USER '$MYSQL_REPLICATION_USER'@'%' IDENTIFIED BY '$MYSQL_REPLICATION_PASSWORD' ;" | "${mysql[@]}"
echo "GRANT REPLICATION SLAVE ON *.* TO '$MYSQL_REPLICATION_USER'@'%' IDENTIFIED BY '$MYSQL_REPLICATION_PASSWORD' ;" | "${mysql[@]}"
echo 'FLUSH PRIVILEGES ;' | "${mysql[@]}"

We use the environment variables MYSQL_REPLICATION_USER and MYSQL_REPLICATION_PASSWORD for user and password configuration. These environment variables will be set when the pod is created in Kubernetes.
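The "${mysql[@]}" array in the entrypoint holds the mysql client command line. As a quick sanity check of the pattern, you can substitute cat for the client and inspect the SQL that would be sent (a sketch for inspection only, not part of the real entrypoint):

```shell
# Stand-in demo: 'cat' replaces the real mysql client invocation so the
# generated statements can be inspected without a running server.
MYSQL_REPLICATION_USER=demo
MYSQL_REPLICATION_PASSWORD=demo
mysql=(cat)
echo "CREATE USER '$MYSQL_REPLICATION_USER'@'%' IDENTIFIED BY '$MYSQL_REPLICATION_PASSWORD' ;" | "${mysql[@]}"
echo 'FLUSH PRIVILEGES ;' | "${mysql[@]}"
```

This prints the fully expanded statements, which is a convenient way to verify quoting before baking the script into an image.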

Click the link to view the Dockerfile and docker-entrypoint.sh for building the master.

For the slave:

Edit the Dockerfile and add the following lines:

RUN RAND="$(date +%s | rev | cut -c 1-2)$(echo ${RANDOM})" && sed -i '/\[mysqld\]/a server-id='$RAND'
log-bin' /etc/mysql/mysql.conf.d/mysqld.cnf

The server-id is set to a random number so that each slave gets a (very likely) unique ID.
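You can reproduce the expression outside Docker to confirm it yields a numeric ID. Note that randomness does not guarantee uniqueness across slaves; for larger fleets a deterministic scheme would be safer.

```shell
# Same expression as the slave Dockerfile: the last two digits of the
# reversed epoch seconds, concatenated with bash's $RANDOM.
RAND="$(date +%s | rev | cut -c 1-2)$(echo ${RANDOM})"
echo "server-id=$RAND"
# server-id must be a plain positive integer.
case "$RAND" in
  ''|*[!0-9]*) echo "invalid server-id" ;;
  *)           echo "valid server-id" ;;
esac
```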

Edit docker-entrypoint.sh to point the slave at the master:

echo "STOP SLAVE;" | "${mysql[@]}"
echo "CHANGE MASTER TO master_host='$MYSQL_MASTER_SERVICE_HOST', master_user='$MYSQL_REPLICATION_USER', master_password='$MYSQL_REPLICATION_PASSWORD' ;" | "${mysql[@]}"
echo "START SLAVE;" | "${mysql[@]}"

The master host is MYSQL_MASTER_SERVICE_HOST, the environment variable Kubernetes injects for the master's service (named mysql-master below). For more information, visit Kubernetes' container lifecycle hooks guide.
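MYSQL_MASTER_SERVICE_HOST follows the Kubernetes service environment variable convention: the Service name is upper-cased, dashes become underscores, and _SERVICE_HOST is appended. (The variable is only injected into pods created after the Service exists, so the master service must be created before the slave pods.) The naming rule can be reproduced like this:

```shell
# Derive the injected variable name from the Service name, mirroring the
# Kubernetes convention: dashes -> underscores, lower -> upper case.
svc="mysql-master"
var="$(echo "$svc" | tr 'a-z-' 'A-Z_')_SERVICE_HOST"
echo "$var"
```

Inside a running slave pod you could confirm the injected value with env | grep MYSQL_MASTER.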

Click the link to view the docker-entrypoint.sh for building the slave.

Finally, we can build the MySQL master and slave images from their respective Dockerfiles:

docker build -t mysql-master:0.1 .
docker build -t mysql-slave:0.1 .

4. Deploy to Kubernetes

We now deploy the MySQL replication cluster to Kubernetes using the kubectl command.

4.1 Deploy MySQL Master

Create a replication controller and service for the MySQL master node. The YAML files we use to create the replication controller and service are:

$ more mysql-master-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-master
  labels:
    name: mysql-master
spec:
  replicas: 1
  selector:
    name: mysql-master
  template:
    metadata:
      labels:
        name: mysql-master
    spec:
      containers:
      - name: master
        image: mysql-master:0.1
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "test"
        - name: MYSQL_REPLICATION_USER
          value: 'demo'
        - name: MYSQL_REPLICATION_PASSWORD
          value: 'demo'

$ more mysql-master-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-master
  labels:
    name: mysql-master
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    name: mysql-master

Now, we will use kubectl to create the controller and service:

$ kubectl create -f mysql-master-rc.yaml
$ kubectl create -f mysql-master-service.yaml

It will take some time to create the pod because the Docker image needs to be downloaded.

$ kubectl get pods
NAME                 READY     STATUS    RESTARTS   AGE
mysql-master-95j7d   1/1       Running   0          29m

$ kubectl get svc
NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes     10.3.0.1     <none>        443/TCP    23h
mysql-master   10.3.0.29    <none>        3306/TCP   25m

4.2 Deploy MySQL Slave

As with the master node, we will use two YAML files to create the replication controller and service for the MySQL slave.

$ more mysql-slave-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-slave
  labels:
    name: mysql-slave
spec:
  replicas: 1
  selector:
    name: mysql-slave
  template:
    metadata:
      labels:
        name: mysql-slave
    spec:
      containers:
      - name: slave
        image: mysql-slave:0.1
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "test"
        - name: MYSQL_REPLICATION_USER
          value: 'demo'
        - name: MYSQL_REPLICATION_PASSWORD
          value: 'demo'

$ more mysql-slave-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-slave
  labels:
    name: mysql-slave
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    name: mysql-slave

Now, we will use kubectl to add the controller and service:

$ kubectl create -f mysql-slave-rc.yaml
$ kubectl create -f mysql-slave-service.yaml

After these tasks have been performed, we should check the status through kubectl.

$ kubectl get pods -o wide
NAME                 READY     STATUS    RESTARTS   AGE       IP          NODE
mysql-master-95j7d   1/1       Running   0          33m       10.2.64.5   172.17.4.201
mysql-slave-gr41w    1/1       Running   0          23m       10.2.45.3   172.17.4.202

$ kubectl get svc
NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes     10.3.0.1     <none>        443/TCP    23h
mysql-master   10.3.0.29    <none>        3306/TCP   28m
mysql-slave    10.3.0.5     <none>        3306/TCP   22m


5. Test Your Setup

5.1 Create Your Database on Master

First, we will check the MySQL status both on master and slave. Go to the master pod and execute the following commands to check the MySQL status (follow the same steps for the slave node).

$ kubectl exec -it mysql-master-95j7d /bin/bash
root@mysql-master-95j7d:/# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 8.0.0-dmr-log MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

mysql>

Note

mysql-master-95j7d is the name of the master pod, and the MySQL root password is test.

Then, create a database and add one table with several rows.

On the master node, we do:

mysql> create database demo;
Query OK, 1 row affected (0.02 sec)

mysql> use demo;
Database changed
mysql> create table user(id int(10), name char(20));
Query OK, 0 rows affected (0.03 sec)

mysql> insert into user values(100, 'user1');
Query OK, 1 row affected (0.00 sec)

mysql> select * from user;
+------+-------+
| id   | name  |
+------+-------+
|  100 | user1 |
+------+-------+
1 row in set (0.00 sec)

mysql>

5.2 Check Synchronization on Slave

Go to the slave pod (kubectl exec -it mysql-slave-gr41w /bin/bash) and ensure the table has been replicated. (The "ERROR: No query specified" below is harmless; it is caused by the trailing ; after the \G terminator.)

mysql> show slave status\G;
Empty set (0.0 sec)

ERROR:
No query specified

mysql>
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| demo               |
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

mysql> use demo;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from user;
+------+-------+
| id   | name  |
+------+-------+
|  100 | user1 |
+------+-------+
1 row in set (0.00 sec)

mysql>

All data should now be synchronized.

5.3 Replication Controller Scaling

Now that we have one MySQL master pod and one MySQL slave pod, we can do some scaling. For example, scale the number of MySQL slaves to three:

$ kubectl scale rc mysql-slave --replicas=3

$ kubectl get pods -o wide
NAME                 READY     STATUS              RESTARTS   AGE       IP          NODE
mysql-master-95j7d   1/1       Running             0          1h        10.2.64.5   172.17.4.201
mysql-slave-4rk62    0/1       ContainerCreating   0          2s        <none>      172.17.4.203
mysql-slave-9fjkl    0/1       ContainerCreating   0          2s        <none>      172.17.4.201
mysql-slave-gr41w    1/1       Running             0          50m       10.2.45.3   172.17.4.202

You can see the new pods are being created; after a short while they will be ready, and we can enter one of them to check MySQL data synchronization.

5.4 Making MySQL Data Persistent

In a production environment, configure a volume mount to make the MySQL data persistent. The YAML file for creating the replication controller is shown below:

apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-master
  labels:
    name: mysql-master
spec:
  replicas: 1
  selector:
    name: mysql-master
  template:
    metadata:
      labels:
        name: mysql-master
    spec:
      containers:
      - name: master
        image: paulliu/mysql-master:0.1
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "test"
        - name: MYSQL_REPLICATION_USER
          value: 'demo'
        - name: MYSQL_REPLICATION_PASSWORD
          value: 'demo'
      volumes:
      - name: mysql-data
        hostPath:
          path: /var/lib/mysql

The mount path /var/lib/mysql is the same as the MySQL data directory defined in the Dockerfile.
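Note that a hostPath volume ties the data to whichever node happens to run the pod, so the data does not follow the pod if it is rescheduled. In production you would more likely bind the mount to a PersistentVolumeClaim; a hypothetical sketch (names and size are illustrative, not from this guide):

```shell
# Write a sample PVC manifest; all values here are illustrative only.
cat > /tmp/mysql-data-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
cat /tmp/mysql-data-pvc.yaml
```

The pod spec would then reference the claim with persistentVolumeClaim: { claimName: mysql-data } in place of the hostPath entry.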

Share your thoughts and questions in the comments section below.

Need a user-friendly tool to set up and manage your K8S cluster? Check out Kublr-in-a-Box. To learn more, visit kublr.com.