Scalable MySQL Cluster with Master-Slave Replication, ProxySQL Load Balancing and Orchestrator

MySQL is one of the most popular open-source relational databases, used by countless projects around the world, including large-scale ones like Facebook, Twitter and YouTube. Obviously, such projects need a truly reliable and highly available data storage system to ensure the appropriate level of service quality. The first and most important way to get the most out of your data storage is database clustering, which allows it to process a large number of requests simultaneously and remain operational under increased load. However, configuring such a solution from scratch can be a rather complicated task.

Thus, the Jelastic team has prepared a one-click installation package for you - a Scalable MySQL Cluster with out-of-the-box master-slave replication, even request distribution and node auto-discovery. It instantly deploys a pair of interconnected MySQL containers, which handle asynchronous data replication and are automatically reconfigured upon cluster scaling (i.e. changing the number of nodes). In addition, this solution is supplied with a ProxySQL load balancer in front of the database nodes and an embedded Orchestrator for convenient management via GUI.

To start with, let's review some details of the Scalable MySQL Cluster implementation to get deeper insight into how it actually works. After that, we'll proceed with instructions on its deployment, review the possibilities of the built-in management panel and explore the implemented database failover mechanism with a simulated node failure.

Scalable MySQL Cluster Package Specifics

So, upon the Scalable MySQL Cluster with Load Balancing package installation, you’ll get a Docker-based environment with the following implementation specifics:

the default topology includes 1 ProxySQL load balancer node (based on the jelastic/proxysql image) and a pair of asynchronously replicated MySQL DB server instances (built over the jelastic/mysql:5.7-latest template)

the first created MySQL container is assigned the master role, whilst the second one (and all further manually added nodes) serve as slaves

by default, each container is assigned a resource limit of 8 cloudlets (which equals 1 GiB of RAM and 3.2 GHz of CPU) for automatic vertical scaling

Being delivered with a set of special preconfigurations, the Scalable MySQL Cluster stands out with the following capabilities and benefits:

efficient load balancing - ProxySQL uses the hostgroups concept to separate the DB master (with read-write access) from the slaves (with read-only permissions); herewith, due to special query rules, all SELECT requests are redirected only to slave servers and distributed between them with the round-robin algorithm to ensure even load

scalability and auto-discovery - new MySQL nodes, added during manual DB server horizontal scaling, are included into the cluster as slaves with all the required adjustments applied automatically

reconfiguration with no downtime - the cluster is designed to run continuously and can be adjusted on the fly without restarting the running services

automated failover - slave DB nodes with high latency, or ones that cannot be reached, are temporarily excluded from the cluster and re-added once the connection is restored

comfortable GUI - the solution includes the pre-installed Orchestrator tool to simplify cluster management (e.g. to refactor replication paths, recover from topology failures, etc.)
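To illustrate the read/write split described above, here is a hypothetical sketch of the ProxySQL admin configuration such a setup relies on. The hostgroup IDs, hostnames, match patterns and lag threshold below are assumptions for illustration only - the actual values shipped with the package may differ:

```sql
-- Illustrative only: hostgroup IDs, hostnames and lag threshold are assumed values.
-- Hostgroup 10 holds the writable master, hostgroup 11 the read-only slaves.
INSERT INTO mysql_servers (hostgroup_id, hostname, port, max_replication_lag)
VALUES (10, 'mysql-master',  3306, 0),
       (11, 'mysql-slave-1', 3306, 30),  -- shunned while replication lag exceeds 30 s
       (11, 'mysql-slave-2', 3306, 30);

-- Route SELECT queries to the reader hostgroup; SELECT ... FOR UPDATE and all
-- other statements fall through to the writer hostgroup.
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT .* FOR UPDATE', 10, 1),
       (2, 1, '^SELECT', 11, 1);

LOAD MYSQL SERVERS TO RUNTIME;     SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;
```

Query rules are evaluated in rule_id order, so the FOR UPDATE pattern must come first to keep locking reads on the master.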

Before proceeding to the package installation, note that the appropriate Platform should run Jelastic version 5.0.5 or higher. Now, let's see how to instantly get such a clustered MySQL solution, run within Docker containers inside the Cloud, in just a few simple clicks.

How to Install Scalable MySQL Cluster into Jelastic Cloud

Deployment of the Scalable MySQL Cluster with Load Balancing is completely automated by Jelastic, allowing you to get a fully configured and ready-to-go database cluster in a matter of minutes.

1. Log into your Jelastic account and import the appropriate manifest.jps file via URL:

https://github.com/jelastic-jps/mysql-cluster/blob/master/mysql-cluster-orchestrator/manifest.jps

Tip: Alternatively, you can find this solution within the Clusters section of Jelastic Marketplace alongside with a number of similar one-click installation packages for other clustered databases, application servers and particular applications.

2. Within the opened installation frame, provide details on the desired environment. Here:

Environment - type any preferred name

Display Name - optionally, provide an environment alias

Region - select a region (if several are available)

Click Install to proceed.

3. Wait a minute while Jelastic configures everything for you. After successful installation, you'll receive the following email notifications with important administration information on your MySQL cluster:

Scalable Database Cluster - provides data to access the phpMyAdmin panel for database management

Database Auto Replication - displays the cluster connection information required to bind it to your application

Orchestrator Configuration - gives credentials to access the Orchestrator panel, intended for convenient cluster management

Now, you are ready to start utilizing your DB cluster.
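Note that applications should connect through the ProxySQL node rather than to any MySQL container directly, so the read/write split and failover stay transparent to them. A minimal sketch of such a connection check follows; the hostname, user and password are placeholders, and 6033 is ProxySQL's default client-facing port, which your particular package may override - use the exact values from the Database Auto Replication email:

```shell
# Hypothetical connection via the ProxySQL frontend; substitute the host, user
# and password received in the "Database Auto Replication" email.
mysql -h proxysql.env-name.example.com -P 6033 -u cluster_user -p \
      -e "SELECT @@hostname, @@read_only;"
# Repeated SELECTs should land on different slave nodes due to round-robin routing.
```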

Cluster Instances Monitoring and Auto Discovery by Orchestrator

To get acquainted with the provided built-in management possibilities, let's refer to the embedded Orchestrator panel and check the cluster operability.

This tool can be accessed by clicking the Open in browser button next to the Proxysql layer of the created environment.

1. The Orchestrator dashboard will open in a separate browser tab, providing you with a number of options & menus at the top. In the large working area below, you can see that your database cluster has already been detected and is currently being tracked.

2. Click on the appropriate panel to review the full cluster topology (i.e. all the DB instances it consists of). According to the default package settings, we've got a pair of interconnected MySQL nodes, one per master and slave role. Here, each instance panel displays some details on the corresponding database server, like its domain, run stack version, assigned role within the cluster, etc.

3. If expecting a high load, it could make sense to extend your clustered storage with a couple of additional slaves to make it more resilient.

For that, return to the Jelastic dashboard and scale out the number of containers in your MySQL cluster - for example, we'll append two more nodes.

4. When the instances are successfully added (track this operation's state within the Tasks panel at the bottom of the dashboard), switch to the Orchestrator browser tab and verify their presence within the shown cluster topology.

Tip: The Orchestrator dashboard is automatically refreshed once per minute while you are inactive (i.e. not interacting with the appropriate browser tab) to display the most up-to-date data by the time you return to managing your cluster. The time remaining until the next update is shown by a dedicated timer at the top pane. Clicking it stops the countdown, disabling the continuous refresh until you click it again.

As a result of our cluster extension, you should see three slave nodes connected to the main MySQL master node.

Checking MySQL Cluster Failover with Simulated Slave Failure

Now, let's halt replication on one of the slave DB servers to simulate its failure and examine how the Orchestrator detects and handles such issues.

1. Connect to any of your MySQL slave containers via SSH and access the database shell with the following command:

mysql -u {user} -p

Here, the {user} placeholder should be replaced with the MySQL admin username (and confirmed with the appropriate password) to authenticate the connection - the corresponding credentials can be found in the cluster info email you received after its creation.

2. First, let's check the actual status of the current DB server by entering the following command:

show slave status\G

As you can see, the slave node is up and running.
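In that output, a handful of fields tell you most of what you need to know about replication health. The values below are what a healthy MySQL 5.7 replica typically reports - they are illustrative, not taken from this particular cluster:

```sql
SHOW SLAVE STATUS\G
-- Fields worth checking on a healthy replica:
--   Slave_IO_Running:      Yes  -- I/O thread is fetching binlog events from the master
--   Slave_SQL_Running:     Yes  -- SQL thread is applying the fetched events locally
--   Seconds_Behind_Master: 0    -- current replication lag, in seconds
--   Last_Error:                 -- empty when no replication error has occurred
```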

3. Now, let's break the main replication process on the chosen database server and check the result - run the following two commands in one line to accomplish both at once:

stop slave; show slave status\G

4. Return to the Orchestrator dashboard to check whether our simulated server unavailability has been detected by the system - the appropriate node should be highlighted in red. Also, note the red warning icon that appears in the top right corner of your working area - here, all cluster problems are gathered into a single list for convenient viewing.
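For reference, once STOP SLAVE has been issued in step 3, the status output changes accordingly - on MySQL 5.7 it halts both replication threads:

```sql
STOP SLAVE;
SHOW SLAVE STATUS\G
-- With replication stopped, expect:
--   Slave_IO_Running:  No   -- binlog events are no longer fetched from the master
--   Slave_SQL_Running: No   -- and nothing is being applied locally
```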

5. To reveal the root of the failure, click on the gear icon on the problematic node's panel. In the frame that opens, you'll be provided with some additional DB instance information to analyze (like its replication status & delay, server ID, read/write permissions, uptime duration, etc.) and options to perform the main management actions.

In our case, you can see that replication is not running. Thus, to restore the full operability of the cluster, click the appropriate Start slave button so that the master node's data is replicated to all slaves once again. Once this process completes, the warning highlight on the node panel will disappear, as well as the list of problems to the right - this means all of the nodes work as intended.

At this point, you can continue exploring the Scalable MySQL Cluster with Load Balancing solution by Jelastic on your own. Do not hesitate to create the exact data storage you need - the package is already available to all users within the Clusters section of Jelastic Marketplace. Haven't tried Jelastic yet? Not a problem - just register at any of our Cloud Platforms and test drive its possibilities completely for free during a two-week trial period.