MySQL database implements MMM high availability cluster architecture

Concept

MMM (Master-Master replication Manager for MySQL) is a flexible, Perl-based set of scripts used to monitor and fail over MySQL replication and to manage the configuration of MySQL master-master replication (only one node is writable at any given time).

MMM High Availability Architecture Description

  • mmm_mond: the monitoring daemon. It performs all monitoring work and decides and handles all node role changes. This script runs on the monitor host.
  • mmm_agentd: an agent daemon that runs on each MySQL server, performs the monitoring probes and carries out simple remote service actions. This script runs on the monitored hosts.
  • mmm_control: a simple script that provides commands for managing the mmm_mond process (see the example after this list).
  • The mysql-mmm monitor provides multiple virtual IPs (VIPs): one writable VIP and several readable VIPs. The monitor binds these VIPs to the available MySQL servers; when a MySQL server goes down, the monitor migrates its VIPs to another server. For the monitor to manage the servers, the relevant users must be authorized in MySQL, namely an mmm_monitor user and an mmm_agent user.
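
For reference, a few commonly used mmm_control commands are sketched below (run on the monitor host; the host and role names are the ones used later in this article, and the exact output depends on your cluster state):

[root@localhost ~]# mmm_control show                   #Display each node's role, state and assigned VIPs
[root@localhost ~]# mmm_control checks all             #Run the ping/mysql/replication checks against every node
[root@localhost ~]# mmm_control set_online db1         #Manually bring a recovered node back online
[root@localhost ~]# mmm_control move_role writer db2   #Manually move the writer role (and its VIP) to another master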

Advantages and Disadvantages of MMM

Advantages: high availability and good scalability; automatic failover when a fault occurs; and because the master-master pair accepts writes on only one node at a time, data consistency is preserved.
Disadvantages: the monitor node is a single point of failure (it can be combined with Keepalived for high availability); the architecture requires a certain number of hosts; and read-write separation has to be implemented on the application side, which adds complexity to the program.

Experimental environment deployment

Step 1: Install the MySQL database on all four servers

1. Configure the Aliyun (Alibaba Cloud) yum repository, then install the epel-release repository

[root@localhost ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

[root@localhost ~]# yum -y install epel-release

[root@localhost ~]# yum clean all && yum makecache

2. Install MariaDB, turn off the firewall, and start the database

#Install the database
[root@localhost ~]# yum -y install mariadb-server mariadb

#Turn off the firewall and SELinux enforcement
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0

#Start the database
[root@localhost ~]# systemctl start mariadb.service

3. Modify the main configuration file on m1

[root@localhost ~]# vim /etc/my.cnf
#Delete the first 9 lines and add the following content
[mysqld]
log_error=/var/lib/mysql/mysql.err               #Error log location
log=/var/lib/mysql/mysql_log.log                 #General (access) log location
log_slow_queries=/var/lib/mysql_slow_queris.log  #Slow query log location
binlog-ignore-db=mysql,information_schema        #Do not write binary logs for the mysql and information_schema databases
character_set_server=utf8                        #Character set
log_bin=mysql_bin                                #Enable binary logging
server_id=1                                      #Must be different on each host
log_slave_updates=true                           #Write updates received from the master to this server's binary log
sync_binlog=1                                    #Flush the binary log to disk after every write
auto_increment_increment=2                       #Auto-increment step
auto_increment_offset=1                          #Auto-increment starting value

[root@localhost ~]# systemctl restart mariadb.service
[root@localhost ~]# netstat -natp | grep 3306
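
As an optional sanity check before configuring replication, confirm on each server that binary logging took effect:

MariaDB [(none)]> show variables like 'log_bin';    #Should report ON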

4. Copy the configuration file to the other three database servers, remembering to change server_id on each

[root@localhost ~]# scp /etc/my.cnf [email protected]:/etc/

5. Enter the database and view the log file information

[root@localhost ~]# mysql

#View the log file name and position value
MariaDB [(none)]> show master status;
+------------------+----------+--------------+--------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB         |
+------------------+----------+--------------+--------------------------+
| mysql_bin.000001 |      245 |              | mysql,information_schema |
+------------------+----------+--------------+--------------------------+
1 row in set (0.00 sec)

6. On m1 and m2, grant replication access to each other and point each at the other's binary log

#Grant replication access (run on both m1 and m2)
MariaDB [(none)]> grant replication slave on *.* to 'replication'@'192.168.142.%' identified by '123456';

#On m1, specify m2's log file name and position
MariaDB [(none)]> change master to master_host='192.168.142.134',master_user='replication',master_password='123456',master_log_file='mysql_bin.000001',master_log_pos=245;

#On m2, specify m1's log file name and position
MariaDB [(none)]> change master to master_host='192.168.142.131',master_user='replication',master_password='123456',master_log_file='mysql_bin.000001',master_log_pos=245;

7. Enable synchronization on m1 and m2

MariaDB [(none)]> start slave;

8. Check the synchronization status; both master servers should show Yes for both threads

MariaDB [(none)]> show slave status\G

       Slave_IO_Running: Yes
      Slave_SQL_Running: Yes

9. Create a database on m1

MariaDB [(none)]> create database school;

10. View the synchronized database on m2

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| school             |
| test               |
+--------------------+
5 rows in set (0.00 sec)

11. Do this on both slaves - note the changes in log file and position parameters (both point to m1)

MariaDB [(none)]> change master to master_host='192.168.142.131',master_user='replication',master_password='123456',master_log_file='mysql_bin.000001',master_log_pos=245;

12. Enable synchronization

MariaDB [(none)]> start slave;

#View the synchronized data on the slave server
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| school             |
| test               |
+--------------------+
5 rows in set (0.00 sec)
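
Optionally, the same status check used on the masters can be run on each slave to confirm both replication threads are up:

MariaDB [(none)]> show slave status\G    #Slave_IO_Running and Slave_SQL_Running should both be Yes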

13. Install MMM related software on four servers

[root@localhost ~]# yum -y install mysql-mmm*

14. Configure the mmm_common.conf configuration file

[root@localhost ~]# vim /etc/mysql-mmm/mmm_common.conf

<host default>
  #Change the network interface to ens33
  cluster_interface ens33
  pid_path /run/mysql-mmm-agent.pid
  bin_path /usr/libexec/mysql-mmm/
  replication_user replication

  #Set the replication password granted earlier
  replication_password 123456
  agent_user mmm_agent

  #Set the agent password
  agent_password 123456
</host>

#Specify the roles and IP addresses of the four servers
<host db1>
  ip 192.168.142.131
  mode master
  peer db2
</host>

<host db2>
  ip 192.168.142.134
  mode master
  peer db1
</host>

<host db3>
  ip 192.168.142.130
  mode slave
</host>

<host db4>
  ip 192.168.142.135
  mode slave
</host>

#Set the main server virtual IP
<role writer>
  hosts db1, db2
  ips 192.168.142.250
  mode exclusive
</role>

#Set the virtual IP of the slave server
<role reader>
  hosts db3, db4
  ips 192.168.142.251, 192.168.142.252
  mode balanced
</role>

15. Copy the configuration file from m1 to the other three servers

[root@localhost ~]# scp /etc/mysql-mmm/mmm_common.conf [email protected]:/etc/mysql-mmm/mmm_common.conf
[root@localhost ~]# scp /etc/mysql-mmm/mmm_common.conf [email protected]:/etc/mysql-mmm/mmm_common.conf
[root@localhost ~]# scp /etc/mysql-mmm/mmm_common.conf [email protected]:/etc/mysql-mmm/mmm_common.conf

Step 2: Configure the monitor server

1. Install the epel-release source and MMM related software

[root@localhost ~]# systemctl stop firewalld.service 
[root@localhost ~]# setenforce 0
[root@localhost ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@localhost ~]# yum -y install epel-release
[root@localhost ~]# yum clean all && yum makecache
[root@localhost ~]# yum -y install mysql-mmm*

2. Copy the configuration file from m1 to the monitoring server

[root@localhost ~]# scp /etc/mysql-mmm/mmm_common.conf [email protected]:/etc/mysql-mmm/

3. Configure the mmm_mon.conf configuration file

[root@localhost ~]# vim /etc/mysql-mmm/mmm_mon.conf

<monitor>
  ip 127.0.0.1
  pid_path /run/mysql-mmm-monitor.pid
  bin_path /usr/libexec/mysql-mmm
  status_path /var/lib/mysql-mmm/mmm_mond.status

  #ping_ips lists the IP addresses of the four database servers
  ping_ips 192.168.142.131, 192.168.142.134, 192.168.142.130, 192.168.142.135
  auto_set_online 10
</monitor>
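
Depending on the package version, mmm_mon.conf also contains a <host default> section holding the credentials the monitor uses to log in to the databases; these must match the mmm_monitor user authorized in step 5 below. A minimal sketch, assuming the password 123456 used throughout this article:

<host default>
  monitor_user mmm_monitor
  monitor_password 123456
</host>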

4. Authorize mmm_agent on all database servers

MariaDB [(none)]> grant super, replication client, process on *.* to 'mmm_agent'@'192.168.142.%' identified by '123456';
Query OK, 0 rows affected (0.02 sec)

5. Authorize mmm_monitor on all database servers

MariaDB [(none)]> grant replication client on *.* to 'mmm_monitor'@'192.168.142.%' identified by '123456'; 
Query OK, 0 rows affected (0.02 sec)

6. Flush privileges and set the agent name on all database servers

MariaDB [(none)]> flush privileges;    #Reload the grant tables

#Modify the agent configuration file
[root@localhost ~]# vim /etc/mysql-mmm/mmm_agent.conf

#m1 keeps the default name
this db1

#On m2
this db2

#On s1
this db3

#On s2
this db4

7. Enable the proxy function on all database servers and set it to start automatically at boot

#Start the agent function
[root@localhost ~]# systemctl start mysql-mmm-agent.service

#Set up automatic startup at boot
[root@localhost ~]# systemctl enable mysql-mmm-agent.service
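
Optionally, confirm that the agent is listening; by default mysql-mmm-agent uses TCP port 9989 (adjust the port if your configuration overrides it):

[root@localhost ~]# netstat -natp | grep 9989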

8. Start the monitoring service on the monitor server and check the status of each node

[root@localhost mysql-mmm]# systemctl start mysql-mmm-monitor.service 
[root@localhost ~]# mmm_control show
 db1(192.168.142.131) master/ONLINE. Roles: writer(192.168.142.250)
 db2(192.168.142.134) master/ONLINE. Roles: 
 db3(192.168.142.130) slave/ONLINE. Roles: reader(192.168.142.252)
 db4(192.168.142.135) slave/ONLINE. Roles: reader(192.168.142.251)
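
As a quick additional check, the writer VIP can be confirmed directly on db1, where it should appear as an extra address on ens33:

[root@localhost ~]# ip addr show ens33 | grep 192.168.142.250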

9. Check the status of all servers

[root@localhost ~]# mmm_control checks all
db4 ping [last change: 2019/11/25 18:23:03] OK
db4 mysql [last change: 2019/11/25 18:23:03] OK
db4 rep_threads [last change: 2019/11/25 18:23:03] OK
db4 rep_backlog [last change: 2019/11/25 18:23:03] OK: Backlog is null
db2 ping [last change: 2019/11/25 18:23:03] OK
db2 mysql [last change: 2019/11/25 18:59:01] OK
db2 rep_threads [last change: 2019/11/25 18:59:01] OK
db2 rep_backlog [last change: 2019/11/25 18:59:01] OK: Backlog is null
db3 ping [last change: 2019/11/25 18:23:03] OK
db3 mysql [last change: 2019/11/25 18:59:01] OK
db3 rep_threads [last change: 2019/11/25 18:59:01] OK
db3 rep_backlog [last change: 2019/11/25 18:59:01] OK: Backlog is null
db1 ping [last change: 2019/11/25 18:23:03] OK
db1 mysql [last change: 2019/11/25 18:59:01] OK
db1 rep_threads [last change: 2019/11/25 18:59:01] OK
db1 rep_backlog [last change: 2019/11/25 18:59:01] OK: Backlog is null

Step 3: Fault testing

1. Simulate a failure of the m1 server by stopping its database service

[root@localhost ~]# systemctl stop mariadb.service

2. When the m1 server goes down, m2 takes over the writer virtual IP and continues to provide service

[root@localhost ~]# mmm_control show
 db1(192.168.142.131) master/ONLINE. Roles: 
 db2(192.168.142.134) master/ONLINE. Roles: writer(192.168.142.250)
 db3(192.168.142.130) slave/ONLINE. Roles: reader(192.168.142.252)
 db4(192.168.142.135) slave/ONLINE. Roles: reader(192.168.142.251)
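
If m1 is later recovered (for example by starting mariadb again), the monitor brings it back online after the auto_set_online interval; it can also be brought online manually, and note that the writer role stays on db2 until it is moved back or db2 fails:

[root@localhost ~]# mmm_control set_online db1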

3. When the s1 server goes down, s2 takes over its reader virtual IP and continues to provide service

[root@localhost ~]# mmm_control show
 db1(192.168.142.131) master/ONLINE. Roles: writer(192.168.142.250)
 db2(192.168.142.134) master/ONLINE. Roles: 
 db3(192.168.142.130) slave/HARD_OFFLINE. Roles: 
 db4(192.168.142.135) slave/ONLINE. Roles: reader(192.168.142.251), reader(192.168.142.252)

4. Authorize login for the monitoring server address on the m1 server

MariaDB [(none)]> grant all on *.* to 'root'@'192.168.142.136' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

5. Install the database client on the monitoring server

[root@localhost ~]# yum -y install mariadb 

6. On the monitoring server, log in to the database through the writer virtual IP with the authorized user and create a database

[root@localhost ~]# mysql -u root -p -h 192.168.142.250
Enter password:    #Enter the password

#Create a database
MariaDB [(none)]> create database BDQN;
Query OK, 1 row affected (0.01 sec)

7. The newly written data is synchronized promptly and can be seen on all databases

#View the databases
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| BDQN               |    #The BDQN database has been synchronized
| mysql              |
| performance_schema |
| school             |
| test               |
+--------------------+
6 rows in set (0.00 sec)
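
Optionally, reads can also be tested through one of the reader VIPs from the monitor host, provided the same login grant exists on the slave that currently holds that VIP:

[root@localhost ~]# mysql -u root -p -h 192.168.142.251 -e 'show databases;'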

The MMM cluster architecture is now complete. Thank you for reading!

The above is the full content of this article. I hope it will be helpful for everyone’s study. I also hope that everyone will support 123WORDPRESS.COM.

