MySQL Series 14: MySQL High Availability Implementation

1. MHA

MHA monitors the master node and, when it fails, performs automatic failover by promoting one of the slave nodes to become the new master. It is built on top of master-slave replication and also requires cooperation from the client side (redirecting writes to the new master). MHA currently supports mainly a one-master, multiple-slave architecture: to build MHA, the replication cluster must contain at least three database servers, one master and two slaves, i.e. one node acts as the master, one as a standby master, and one as an ordinary slave.

1. Working principle of MHA

  • Save binary log events from the crashed master
  • Identify the slave with the most recent updates
  • Apply the differential relay logs to the other slaves
  • Apply the binary log events saved from the master
  • Promote a slave to be the new master

2. MHA software

The MHA software consists of two parts: the Manager toolkit and the Node toolkit.

The Manager toolkit mainly includes the following tools:

  • masterha_check_ssh: check the MHA SSH configuration
  • masterha_check_repl: check the MySQL replication status
  • masterha_manager: start MHA
  • masterha_check_status: check the current MHA running status
  • masterha_master_monitor: check whether the master is down
  • masterha_master_switch: perform failover (automatic or manual)
  • masterha_conf_host: add or delete configured server information

Node toolkit: These tools are usually triggered by the MHA Manager script and do not require human operation.

  • save_binary_logs: save and copy the master's binary logs
  • apply_diff_relay_logs: identify differential relay log events and apply them to other slaves
  • filter_mysqlbinlog: remove unnecessary ROLLBACK events (MHA no longer uses this tool)
  • purge_relay_logs: purge relay logs (does not block the SQL thread)
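
Since the slave configurations later in this article set relay_log_purge=0, relay logs are not removed automatically and are typically purged periodically (for example from cron) with the purge_relay_logs Node tool. A minimal sketch using the accounts from the configuration below; verify the options against your MHA version:

purge_relay_logs --user=mhauser --password=mhapass --disable_relay_log_purge --workdir=/data/mastermha/cluster1/ #Run on each slave, e.g. from a daily cron job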

Tip: In order to minimize data loss caused by hardware damage and downtime of the master database, it is recommended to configure MySQL 5.5 semi-synchronous replication while configuring MHA.
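
As a concrete illustration of that tip (a sketch only; plugin file names can differ between builds), semi-synchronous replication in MySQL/MariaDB 5.5 is enabled by loading the semisync plugins and switching them on:

# On the master (and on every candidate master):
mysql -e "INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';"
mysql -e "SET GLOBAL rpl_semi_sync_master_enabled=1;"
# On the slaves:
mysql -e "INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';"
mysql -e "SET GLOBAL rpl_semi_sync_slave_enabled=1;"
mysql -e "STOP SLAVE IO_THREAD; START SLAVE IO_THREAD;" #Restart the IO thread so the setting takes effect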

Custom extensions:

  • secondary_check_script: check the availability of the master through additional network routes
  • master_ip_failover_script: update the master IP used by the application
  • shutdown_script: force shutdown of the master node
  • report_script: send reports
  • init_conf_load_script: load initial configuration parameters
  • master_ip_online_change_script: update the master node IP address during an online switch
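
master_ip_failover_script is the hook that actually moves the write address; MHA ships a Perl sample that it calls with the old and new master as arguments. The fragment below is only a minimal, hypothetical shell sketch of the usual idea, moving a virtual IP (VIP): the VIP 192.168.0.100 and the interface eth0 are assumptions, not part of the original setup.

#!/bin/bash
# Hypothetical VIP move, not the real MHA script interface
VIP=192.168.0.100           #assumed writer VIP
DEV=eth0                    #assumed network interface
OLD_MASTER=$1               #e.g. 192.168.0.7
NEW_MASTER=$2               #e.g. 192.168.0.8
ssh root@$OLD_MASTER "ip addr del $VIP/24 dev $DEV" 2>/dev/null
ssh root@$NEW_MASTER "ip addr add $VIP/24 dev $DEV && arping -U -c 3 -I $DEV $VIP" #gratuitous ARP (iputils arping) to refresh neighbor caches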

Download address: https://code.google.com/archive/p/mysql-master-ha/downloads

3. Implementation of MHA

Environment: all nodes use SSH key-based authentication (refer to: https://www.cnblogs.com/L-dongf/p/9058265.html), and the clocks must be synchronized, e.g. by running: ntpdate cn.pool.ntp.org
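
A minimal sketch of that prerequisite, run on the manager node (and repeated on each database node if the nodes also need to reach one another over SSH); the IPs are the ones used throughout this article:

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa          #Generate a key pair without a passphrase
for ip in 192.168.0.7 192.168.0.8 192.168.0.9; do
    ssh-copy-id root@$ip                          #Push the public key to every database node
done
ntpdate cn.pool.ntp.org                           #Synchronize the clock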

1) Manager node

[root@manager ~]# yum install mha4mysql-manager-0.56-0.el6.noarch.rpm mha4mysql-node-0.56-0.el6.noarch.rpm -y #Install the MHA manager and node packages
[root@manager ~]# scp mha4mysql-node-0.56-0.el6.noarch.rpm 192.168.0.7:
[root@manager ~]# scp mha4mysql-node-0.56-0.el6.noarch.rpm 192.168.0.8:
[root@manager ~]# scp mha4mysql-node-0.56-0.el6.noarch.rpm 192.168.0.9:
[root@manager ~]# mkdir /etc/mha/
[root@manager ~]# vim /etc/mha/cluster1.cnf
[server default]
user=mhauser
password=mhapass
manager_workdir=/data/mastermha/cluster1/
manager_log=/data/mastermha/cluster1/manager.log
remote_workdir=/data/mastermha/cluster1/
ssh_user=root
repl_user=repluser
repl_password=replpass
ping_interval=1 #Check once per second
[server1]
hostname=192.168.0.7
candidate_master=1 #Can become the master node
[server2]
hostname=192.168.0.8
candidate_master=1 #Can become the master node
[server3]
hostname=192.168.0.9
[root@manager ~]# masterha_check_ssh --conf=/etc/mha/cluster1.cnf #Check the SSH key environment
All SSH connection tests passed successfully.
[root@manager ~]# masterha_check_repl --conf=/etc/mha/cluster1.cnf #Check MySQL replication status
MySQL Replication Health is OK.
[root@manager ~]# yum install screen -y
[root@manager ~]# screen -S mha #masterha_manager runs in the foreground, so start it inside screen to keep it running after the terminal session ends
[root@manager ~]# masterha_manager --conf=/etc/mha/cluster1.cnf #Start monitoring
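
If screen is not available, a common alternative (a sketch; the redirection target simply reuses the manager_log path from cluster1.cnf) is to start the manager with nohup and then query it with masterha_check_status:

nohup masterha_manager --conf=/etc/mha/cluster1.cnf < /dev/null > /data/mastermha/cluster1/manager.log 2>&1 &
masterha_check_status --conf=/etc/mha/cluster1.cnf #Should report the cluster as running and show the current master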

2) Master node

[root@master ~]# yum install mha4mysql-node-0.56-0.el6.noarch.rpm -y #Install the node package
[root@master ~]# vim /etc/my.cnf
[mysqld]
server_id=1
log_bin
binlog_format=row
skip_name_resolve
[root@master ~]# systemctl start mariadb
MariaDB [(none)]> GRANT REPLICATION SLAVE ON *.* TO repluser@'192.168.0.%' IDENTIFIED BY 'replpass'; #Create the replication account; it must exist on every node, since any candidate node may become the master
MariaDB [(none)]> SHOW MASTER LOGS;
+--------------------+-----------+
| Log_name           | File_size |
+--------------------+-----------+
| mariadb-bin.000001 |       401 |
+--------------------+-----------+
MariaDB [(none)]> GRANT ALL ON *.* TO mhauser@'192.168.0.%' IDENTIFIED BY 'mhapass'; #Create an mha management user and ensure that all nodes have synchronized this account
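
Before moving on, it is worth confirming on the master the binary log file and position that the CHANGE MASTER TO statements below will use, and that both accounts exist; a small sketch:

mysql -e 'SHOW MASTER STATUS;' #File and position for CHANGE MASTER TO
mysql -e "SELECT user,host FROM mysql.user WHERE user IN ('repluser','mhauser');"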

3) Slave node

[root@slave ~]# yum install mha4mysql-node-0.56-0.el6.noarch.rpm -y
[root@slave ~]# vim /etc/my.cnf
[mysqld]
read_only=1
log_bin
binlog_format=row
server_id=2
relay_log_purge=0
skip_name_resolve=1
[root@slave ~]# systemctl start mariadb
MariaDB [(none)]> GRANT REPLICATION SLAVE ON *.* TO repluser@'192.168.0.%' IDENTIFIED BY 'replpass';
MariaDB [(none)]> CHANGE MASTER TO
    -> MASTER_HOST='192.168.0.7', #The master node is 0.7
    -> MASTER_USER='repluser',
    -> MASTER_PASSWORD='replpass',
    -> MASTER_PORT=3306,
    -> MASTER_LOG_FILE='mariadb-bin.000001',
    -> MASTER_LOG_POS=401,
    -> MASTER_CONNECT_RETRY=10;
MariaDB [(none)]> START SLAVE;
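
After START SLAVE, a quick check that both replication threads are running and the slave is not lagging; a small sketch:

mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
#Expect Slave_IO_Running: Yes, Slave_SQL_Running: Yes and a low Seconds_Behind_Master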

4) slave2 node

[root@slave2 ~]# yum install mha4mysql-node-0.56-0.el6.noarch.rpm -y
[root@slave2 ~]# vim /etc/my.cnf
[mysqld]
server_id=3
read_only=1
relay_log_purge=0
skip_name_resolve=1
[root@slave2 ~]# systemctl start mariadb
MariaDB [(none)]> CHANGE MASTER TO
    -> MASTER_HOST='192.168.0.7',
    -> MASTER_USER='repluser',
    -> MASTER_PASSWORD='replpass',
    -> MASTER_PORT=3306,
    -> MASTER_LOG_FILE='mariadb-bin.000001',
    -> MASTER_LOG_POS=401,
    -> MASTER_CONNECT_RETRY=10;
MariaDB [(none)]> START SLAVE;

5) Testing

Simulate a failure by stopping the service on the master:
[root@master ~]# systemctl stop mariadb

Run SHOW SLAVE STATUS\G on slave2: the output now shows Master_Server_Id: 2, which means slave2 is replicating from the former standby master (192.168.0.8, server_id=2).

This shows that the failover succeeded.
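
The failover can also be confirmed on the manager node: masterha_manager exits once the switch is finished, and the details are written to the manager_log defined in /etc/mha/cluster1.cnf. A small sketch:

tail -n 50 /data/mastermha/cluster1/manager.log #The last lines report which slave was promoted to the new master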

4. Bringing the failed master back online and resuming MHA monitoring

Repair the failed master, bring it back online, and manually configure it as a slave of the current master node:
[root@master ~]# systemctl start mariadb
MariaDB [(none)]> CHANGE MASTER TO
    -> MASTER_HOST='192.168.0.8', #The master host is 0.8
    -> MASTER_USER='repluser',
    -> MASTER_PASSWORD='replpass',
    -> MASTER_PORT=3306,
    -> MASTER_LOG_FILE='mariadb-bin.000001',
    -> MASTER_LOG_POS=482,
    -> MASTER_CONNECT_RETRY=10;
MariaDB [(none)]> START SLAVE;
Management Node:
[root@manager ~]# screen -S mha
[root@manager ~]# masterha_manager --conf=/etc/mha/cluster1.cnf #Restart monitoring
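
After the repaired node has rejoined as a slave, the health checks from step 3 can be re-run to confirm the new topology (now with 192.168.0.8 as master); a small sketch:

masterha_check_ssh --conf=/etc/mha/cluster1.cnf
masterha_check_repl --conf=/etc/mha/cluster1.cnf #Should again report: MySQL Replication Health is OK.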

2. Galera Cluster

Galera Cluster: a MySQL cluster with the Galera replication plug-in integrated; a shared-nothing, highly redundant high-availability solution. There are currently two main implementations of Galera Cluster: Percona XtraDB Cluster and MariaDB Galera Cluster. Galera is inherently multi-master, i.e. it uses a multi-master cluster architecture, and it is a robust high-availability solution with strong data consistency and integrity and good performance.

Three nodes form a cluster. Unlike the common master-slave architecture, all of them can act as masters; the nodes are equal, which is called a multi-master architecture. It does not matter which instance a client connects to for reading or writing: the data read is the same everywhere, and after a write on one node the cluster synchronizes the new data to the other nodes. No data is shared between nodes, so the architecture is highly redundant.

Characteristics:

  • Multi-master architecture: a true multi-point read-write cluster; data read from any node is always up to date;
  • Synchronous replication: data is replicated between the nodes in the cluster without delay, so no data is lost when a database node fails;
  • Concurrent replication: when a node applies replicated data, it can do so in parallel, giving better performance;
  • Failover: because multi-point writing is supported, switching is easy when a database node fails;
  • Hot plug: if a database node crashes while in service, downtime is very short as long as the monitoring program detects it quickly enough; a failing node has very little impact on the rest of the cluster;
  • Automatic node cloning: when adding a new node or bringing one back after maintenance, there is no need to manually back up incremental or full data; Galera Cluster automatically pulls data from an online node, and the cluster eventually becomes consistent.

Working principle:

Refer to the official documentation:

http://galeracluster.com/documentation-webpages/galera-documentation.pdf

http://galeracluster.com/documentation-webpages/index.html

https://mariadb.com/kb/en/mariadb/getting-started-with-mariadb-galera-cluster/

At least three nodes are required. Do not install the regular mariadb-server package; a dedicated package (MariaDB-Galera-server) must be installed instead.

Implementation of Galera Cluster

Tsinghua open source mirror: https://mirrors.tuna.tsinghua.edu.cn/mariadb/mariadb-5.5.60/yum/centos7-amd64/

# vim /etc/yum.repos.d/mariadb_galera_server.repo
[mariadb]
name=mariadb_galera_server.repo
baseurl=https://mirrors.tuna.tsinghua.edu.cn/mariadb/mariadb-5.5.60/yum/centos7-amd64/
gpgcheck=0
# yum install MariaDB-Galera-server -y

1) mysql-1

[root@mysql-1 ~]# vim /etc/my.cnf.d/server.cnf
[galera]
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.0.7,192.168.0.8,192.168.0.9" #List all node IPs here
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
wsrep_cluster_name = 'my_wsrep_cluster'
wsrep_node_name = 'node1'
wsrep_node_address = '192.168.0.7'

2) mysql-2

[root@mysql-2 ~]# vim /etc/my.cnf.d/server.cnf
[galera]
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.0.7,192.168.0.8,192.168.0.9"
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
wsrep_cluster_name = 'my_wsrep_cluster'
wsrep_node_name = 'node2'
wsrep_node_address = '192.168.0.8'

3) mysql-3

[root@mysql-3 ~]# vim /etc/my.cnf.d/server.cnf
[galera]
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.0.7,192.168.0.8,192.168.0.9"
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
wsrep_cluster_name = 'my_wsrep_cluster'
wsrep_node_name = 'node3'
wsrep_node_address = '192.168.0.9'

4) Start

[root@mysql-1 ~]# /etc/init.d/mysql start --wsrep-new-cluster #Add this parameter only when starting the first node for the first time, to bootstrap the cluster
[root@mysql-2 ~]# /etc/init.d/mysql start #Subsequent nodes can be started directly
[root@mysql-3 ~]# /etc/init.d/mysql start

5) Testing

When you modify data on any node, the change is synchronized to the other nodes; if the same record is modified simultaneously on different nodes, only one of them will succeed.
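
A minimal sketch of such a test (the database name galera_test is just a hypothetical example):

# On node1 (192.168.0.7):
mysql -e 'CREATE DATABASE galera_test;'
# On node2 (192.168.0.8) or node3 (192.168.0.9) the database appears immediately:
mysql -e "SHOW DATABASES LIKE 'galera_test';"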

6) Check the work status

MariaDB [(none)]> SHOW VARIABLES LIKE 'wsrep_%'\G
MariaDB [(none)]> SHOW STATUS LIKE 'wsrep_%'\G
MariaDB [(none)]> SHOW STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     | #Three nodes are online in the cluster
+--------------------+-------+
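
Besides wsrep_cluster_size, a few other status variables give a quick picture of node and cluster health; a small sketch:

mysql -e "SHOW STATUS WHERE Variable_name IN ('wsrep_cluster_status','wsrep_connected','wsrep_ready','wsrep_local_state_comment');"
#Expect Primary / ON / ON / Synced on a healthy node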

This concludes MySQL Series 14: MySQL High Availability Implementation.

