Steps to build MHA architecture deployment in MySQL

MHA

1. Introduction to MHA Architecture

  • MHA (Master High Availability) is currently a relatively mature solution for MySQL high availability. Developed by Yoshinori Matsunobu (youshimaton) of Japan, it is an excellent piece of high-availability software for failover and master-slave promotion in MySQL environments. During a failover, MHA can complete the switch automatically within 0 to 30 seconds, and throughout the process it preserves data consistency to the greatest extent possible, achieving true high availability.
  • MHA consists of two parts: MHA Manager (the management node) and MHA Node (the data node). MHA Manager can be deployed on a dedicated machine to manage multiple master-slave clusters, or it can be deployed on a slave. When the master fails, MHA automatically promotes the slave with the most recent data to be the new master and then redirects all other slaves to it. The entire failover process is completely transparent to the application.

2. Applicable scenarios

Currently, MHA mainly supports a one-master, multiple-slave architecture. To build MHA, a replication cluster must contain at least three database servers: one master and two slaves, i.e. one acts as the Master, one as a standby Master, and one as a Slave. For cost reasons, Taobao has modified this design; its TMHA now supports a one-master, one-slave setup.

3. Working Principle of MHA

1. Save the binary log event from the crashed Master;

2. Identify the slave with the latest update;

3. Apply the difference relay log to other slaves;

4. Apply the binary log events saved from the Master;

5. Promote a Slave to the new Master;

6. Make other slaves connect to the new master for replication;

4. Composition of MHA

  • Manager Toolkit
  • Node Toolkit

1: Manager Toolkit

  • masterha_check_ssh: Check the SSH configuration of MHA
  • masterha_check_repl: Check MySQL replication status
  • masterha_manager: Start MHA
  • masterha_check_status: Check the current MHA running status
  • masterha_master_monitor: Detect whether the Master is down
  • masterha_master_switch: Control failover (automatic or manual; see the example below)
  • masterha_conf_host: Add or delete configured server information
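For example, a manual online switch to a new master could be run as follows. This is a hedged sketch: the flags follow the MHA documentation, and the configuration file path assumes the /etc/mha/app1.cnf created later in this article.

[root@mha-manager ~]# masterha_master_switch --conf=/etc/mha/app1.cnf --master_state=alive --new_master_host=20.0.0.11 --orig_master_is_new_slave

With --master_state=dead instead, the same tool drives a manual failover after the master has crashed.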

2: Node Toolkit

These tools are usually triggered by the MHA Manager scripts; no manual operation is normally required.

  • save_binary_logs: Save and copy the Master's binlog logs
  • apply_diff_relay_logs: Identify differential relay log events and apply the differences to other slaves
  • filter_mysqlbinlog: Remove unnecessary ROLLBACK events (deprecated; MHA no longer uses this tool)
  • purge_relay_logs: Clear relay logs (without blocking the SQL thread)

5. MHA Features

  • During automatic failover, MHA attempts to save the binary logs from the downed master, ensuring to the greatest extent possible that no data is lost.
  • Using semi-synchronous replication can greatly reduce the risk of data loss (a configuration sketch follows this list).
  • Currently, MHA supports a one-master-multiple-slave architecture, with a minimum of three servers, i.e. one master and two slaves.
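Semi-synchronous replication is not configured in this walkthrough, but MySQL 5.6 ships it as loadable plugins. A hedged sketch of enabling it (run the first two statements on the master and the last two on each slave; the plugin file names assume the semisync plugins bundled with the server):

mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
mysql> set global rpl_semi_sync_master_enabled = 1;
mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
mysql> set global rpl_semi_sync_slave_enabled = 1;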

MHA architecture deployment

1. Topology diagram

(Topology diagram omitted: an MHA manager node at 20.0.0.13 manages a master at 20.0.0.10 and two slaves at 20.0.0.11 and 20.0.0.12; the VIP is 20.0.0.200.)

2: Database installation

MySQL version 5.6.36, cmake version 2.8.6

1: Install the compilation dependencies

[root@master ~]# yum -y install ncurses-devel gcc-c++ perl-Module-Install

2: Install the cmake build tool

[root@master ~]# tar zxvf cmake-2.8.6.tar.gz
[root@master ~]# cd cmake-2.8.6
[root@master cmake-2.8.6]# ./configure
[root@master cmake-2.8.6]# gmake && gmake install

3: Install MySQL database

[root@master ~]# tar -zxvf mysql-5.6.36.tar.gz
[root@master ~]# cd mysql-5.6.36
[root@master mysql-5.6.36]# cmake -DCMAKE_INSTALL_PREFIX=/usr/local/mysql -DDEFAULT_CHARSET=utf8 -DDEFAULT_COLLATION=utf8_general_ci -DWITH_EXTRA_CHARSETS=all -DSYSCONFDIR=/etc
[root@master mysql-5.6.36]# make && make install
[root@master mysql-5.6.36]# cp support-files/my-default.cnf /etc/my.cnf
[root@master mysql-5.6.36]# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
[root@master ~]# chmod +x /etc/rc.d/init.d/mysqld
[root@master ~]# chkconfig --add mysqld
[root@master ~]# echo "PATH=$PATH:/usr/local/mysql/bin" >> /etc/profile
[root@master ~]# source /etc/profile
[root@master ~]# groupadd mysql
[root@master ~]# useradd -M -s /sbin/nologin mysql -g mysql
[root@master ~]# chown -R mysql.mysql /usr/local/mysql
[root@master ~]# mkdir -p /data/mysql
[root@master ~]# /usr/local/mysql/scripts/mysql_install_db --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data --user=mysql

4: Modify the Master's main configuration file /etc/my.cnf

Delete all the original configurations

[root@master ~]# vi /etc/my.cnf
[client]
port = 3306
socket = /usr/local/mysql/mysql.sock

[mysql]
port = 3306
socket = /usr/local/mysql/mysql.sock

[mysqld]
user = mysql
basedir = /usr/local/mysql
datadir = /usr/local/mysql/data
port = 3306
pid-file = /usr/local/mysql/mysqld.pid
socket = /usr/local/mysql/mysql.sock
server-id = 1
log_bin = master-bin
log-slave-updates = true

sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,PIPES_AS_CONCAT,ANSI_QUOTES

The other two slave databases

The server-id values of the three servers must all be different; the rest of the configuration is the same as the master and can be copied as-is.

# slave1
server-id = 2
log_bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index

# slave2
server-id = 3
log_bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index
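Once MySQL is running with these settings (see step 6 below), a quick sanity check on each node confirms that binary logging is enabled and the server-id is unique (a hedged check; the values will differ per node):

mysql> show variables like 'log_bin';
mysql> show variables like 'server_id';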

5: Create two soft links on each of the three database servers. The soft links are used by the MHA services.

[root@master ~]# ln -s /usr/local/mysql/bin/mysql /usr/sbin/
[root@master ~]# ln -s /usr/local/mysql/bin/mysqlbinlog /usr/sbin/

6: Start MySQL on the three database servers

[root@master ~]# /usr/local/mysql/bin/mysqld_safe --user=mysql &
[root@master ~]# service mysqld restart 
Shutting down MySQL.. SUCCESS! 
Starting MySQL. SUCCESS!

3: Configure database master-slave synchronization

Log in to the database

[root@master ~]# mysql

1: Authorize two users on all database nodes, one for slave synchronization and the other for manager use

mysql> grant replication slave on *.* to 'myslave'@'20.0.0.%' identified by '123';
mysql> grant all privileges on *.* to 'mha'@'20.0.0.%' identified by 'manager';
mysql> flush privileges;
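To confirm that both accounts were created, an optional, hedged check:

mysql> select user,host from mysql.user where user in ('myslave','mha');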

2: In theory, the following three grants should not be needed. However, when building this test environment, MHA's master-slave check reported that the two slaves could not connect to the master by host name, so the following grants were added on all databases.

mysql> grant all privileges on *.* to 'mha'@'master' identified by 'manager';
mysql> grant all privileges on *.* to 'mha'@'slave1' identified by 'manager';
mysql> grant all privileges on *.* to 'mha'@'slave2' identified by 'manager';

3: View binary files and synchronization points on the master host

mysql> show master status;
+-------------------+----------+--------------+------------------+-------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------+----------+--------------+------------------+-------------------+
| master-bin.000001 |      608 |              |                  |                   |
+-------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

4: Perform synchronization on slave1 and slave2 respectively

mysql> change master to master_host='20.0.0.10',master_user='myslave',master_password='123',master_log_file='master-bin.000001',master_log_pos=608;
mysql> start slave;

5: Check that both the IO and SQL threads show Yes, which indicates that synchronization is normal.

mysql> show slave status\G
*************************** 1. row ***************************
    Slave_IO_State: Waiting for master to send event
     Master_Host: 20.0.0.10
     Master_User: myslave
     Master_Port: 3306
    Connect_Retry: 60
    Master_Log_File: master-bin.000001
   Read_Master_Log_Pos: 608
    Relay_Log_File: relay-log-bin.000002
    Relay_Log_Pos: 284
  Relay_Master_Log_File: master-bin.000001
    Slave_IO_Running: Yes
   Slave_SQL_Running: Yes
    Replicate_Do_DB: 
   Replicate_Ignore_DB:

Both slaves must be set to read-only mode

mysql> set global read_only=1;
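Note that set global read_only=1 does not survive a MySQL restart. If the setting should persist, one hedged option is to also add it to the [mysqld] section of /etc/my.cnf on both slaves:

read_only = 1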

6: Write some test data into the master database to check whether it is synchronized

mysql> create database test_db;
Query OK, 1 row affected (0.00 sec)
mysql> use test_db;
Database changed
mysql> create table test(id int);
Query OK, 0 rows affected (0.13 sec)
mysql> insert into test(id) values (1);
Query OK, 1 row affected (0.03 sec)

7: Query the two slave databases separately; output like the following indicates that master-slave synchronization is normal

mysql> select * from test_db.test;
+------+
| id   |
+------+
|    1 |
+------+
1 row in set (0.00 sec)

4: Install MHA software

1: Install the MHA dependencies on all servers (the three database nodes plus the manager, i.e. "3+1"). First install the epel source.

[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# ll
total 20
drwxr-xr-x. 2 root root  187 Oct 10 18:08 backup
-rw-r--r--. 1 root root 1458 Dec 28 23:07 CentOS7-Base-163.repo
-rw-r--r--. 1 root root  951 Dec 29 14:52 epel.repo
-rw-r--r--. 1 root root 1050 Nov  1 04:33 epel.repo.rpmnew
-rw-r--r--. 1 root root 1149 Nov  1 04:33 epel-testing.repo
-rw-r--r--. 1 root root  228 Oct 27 18:43 local.repo

Three databases plus one mha-manager

[root@mha-manager ~]# yum install epel-release --nogpgcheck
[root@mha-manager ~]# yum install -y perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker perl-CPAN

2: The node component (3+1) must be installed on all servers first

[root@mha-manager ~]# tar zxvf mha4mysql-node-0.57.tar.gz
[root@mha-manager ~]# cd mha4mysql-node-0.57
[root@mha-manager mha4mysql-node-0.57]# perl Makefile.PL
[root@mha-manager mha4mysql-node-0.57]# make && make install

3: Install the manager component on mha-manager

[root@mha-manager ~]# tar zxvf mha4mysql-manager-0.57.tar.gz 
[root@mha-manager ~]# cd mha4mysql-manager-0.57/
[root@mha-manager mha4mysql-manager-0.57]# perl Makefile.PL
[root@mha-manager mha4mysql-manager-0.57]# make && make install

After manager is installed, several tools will be generated under /usr/local/bin


masterha_check_ssh Check the SSH configuration of MHA
masterha_check_repl Check MySQL replication status
masterha_manager starts the manager script
masterha_check_status checks the current MHA running status
masterha_master_monitor detects whether the master is down
masterha_master_switch controls failover (automatic or manual)
masterha_conf_host adds or deletes configured server information
masterha_stop shuts down the manager

After node is installed, several scripts will be generated under /usr/local/bin


save_binary_logs saves and copies the master's binary logs

apply_diff_relay_logs identifies differential relay log events and applies the differential events to other slaves

filter_mysqlbinlog removes unnecessary ROLLBACK events (MHA no longer uses this tool)

purge_relay_logs Clear relay logs (does not block the SQL thread)
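Of these, purge_relay_logs is the one typically scheduled by hand, because MHA needs relay logs to be kept on the slaves (automatic relay-log purging is disabled) and they must be cleaned up periodically. A hedged sketch of a cron entry on each slave, reusing the mha account created in this article (the work directory and log path are assumptions for illustration):

0 4 * * * /usr/local/bin/purge_relay_logs --user=mha --password=manager --disable_relay_log_purge --workdir=/tmp >> /var/log/purge_relay_logs.log 2>&1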

5. Configure password-less authentication

1: Configure password-less authentication for all nodes on the manager

(1) Generate a key

[root@mha-manager ~]# ssh-keygen -t rsa # Keep pressing Enter

(2) After the key is generated, send it to the three database servers

[root@mha-manager ~]# ssh-copy-id 20.0.0.10 # Input: yes Password: 123456
[root@mha-manager ~]# ssh-copy-id 20.0.0.11
[root@mha-manager ~]# ssh-copy-id 20.0.0.12

(3) Login test

[root@mha-manager ~]# ssh [email protected]
Last login: Tue Dec 29 14:52:09 2020 from 20.0.0.1
[root@master ~]# exit
logout
Connection to 20.0.0.10 closed.
[root@mha-manager ~]# ssh [email protected]
Last login: Tue Dec 29 13:20:07 2020 from 20.0.0.1
[root@slave1 ~]# exit
logout
Connection to 20.0.0.11 closed.
[root@mha-manager ~]# ssh [email protected]
Last login: Tue Oct 27 19:45:24 2020 from 20.0.0.1
[root@slave2 ~]# exit
logout
Connection to 20.0.0.12 closed.

2: Configure password-less authentication to the other database nodes on the master

(1) Generate a key

[root@master ~]# ssh-keygen -t rsa

(2) After the key is generated, it is sent to the other two databases

[root@master ~]# ssh-copy-id 20.0.0.11
[root@master ~]# ssh-copy-id 20.0.0.12

(3) Login test

[root@master ~]# ssh [email protected]
Last login: Tue Dec 29 16:40:06 2020 from 20.0.0.13
[root@slave1 ~]# exit
logout
Connection to 20.0.0.11 closed.
[root@master ~]# ssh [email protected]
Last login: Tue Oct 27 23:05:20 2020 from 20.0.0.13
[root@slave2 ~]# exit
logout
Connection to 20.0.0.12 closed.

3: Configure password-less authentication to the other database nodes on slave1

(1) Generate a key

[root@slave1 ~]# ssh-keygen -t rsa

(2) After the key is generated, it is sent to the other two databases

[root@slave1 ~]# ssh-copy-id 20.0.0.10
[root@slave1 ~]# ssh-copy-id 20.0.0.12

(3) Login test

[root@slave1 ~]# ssh [email protected]
Last login: Tue Dec 29 16:39:55 2020 from 20.0.0.13
[root@master ~]# exit
logout
Connection to 20.0.0.10 closed.
[root@slave1 ~]# ssh [email protected]
Last login: Tue Oct 27 23:14:06 2020 from 20.0.0.10
[root@slave2 ~]# exit
logout
Connection to 20.0.0.12 closed.

4: Configure password-less authentication to the other database nodes on slave2

(1) Generate a key

[root@slave2 ~]# ssh-keygen -t rsa

(2) After the key is generated, it is sent to the other two databases

[root@slave2 ~]# ssh-copy-id 20.0.0.10
[root@slave2 ~]# ssh-copy-id 20.0.0.11

(3) Login test

[root@slave2 ~]# ssh [email protected]
Last login: Tue Dec 29 16:59:43 2020 from 20.0.0.11
[root@master ~]# exit
logout
Connection to 20.0.0.10 closed.
[root@slave2 ~]# ssh [email protected]
Last login: Tue Dec 29 16:48:51 2020 from 20.0.0.10
[root@slave1 ~]# exit
logout
Connection to 20.0.0.11 closed.

6. Configure MHA

1: Copy the relevant scripts to the /usr/local/bin directory on the manager node

(1) Copy

[root@mha-manager ~]# cp -ra /root/mha4mysql-manager-0.57/samples/scripts/ /usr/local/bin/

(2) After copying, there will be four executable files


master_ip_failover # Script for VIP management during automatic switching

master_ip_online_change # Script for VIP management during an online switch

power_manager # Script to shut down the host after a failure occurs

send_report # Script that sends an alarm after a failover

(3) Copy the automatic-switching VIP management script master_ip_failover to the /usr/local/bin directory; this article uses that script to manage the VIP

[root@mha-manager scripts]# cp master_ip_failover /usr/local/bin/
[root@mha-manager scripts]# cd ..
[root@mha-manager bin]# ll
total 88


2: Modify the automatic switching script

[root@mha-manager ~]# vi /usr/local/bin/master_ip_failover
# Delete all original contents and write the following:
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';

use Getopt::Long;

my (
$command, $ssh_user, $orig_master_host, $orig_master_ip,
$orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
###############################Add content section#########################################
my $vip = '20.0.0.200';
my $brdc = '20.0.0.255';
my $ifdev = 'ens33';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key down";
my $exit_code = 0;
#my $ssh_start_vip = "/usr/sbin/ip addr add $vip/24 brd $brdc dev $ifdev label $ifdev:$key;/usr/sbin/arping -q -A -c 1 -I $ifdev $vip;iptables -F;";
#my $ssh_stop_vip = "/usr/sbin/ip addr del $vip/24 dev $ifdev label $ifdev:$key";
##################################################################################
GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
);

exit &main();

sub main {

print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";

if ( $command eq "stop" || $command eq "stopssh" ) {

my $exit_code = 1;
eval {
print "Disabling the VIP on old master: $orig_master_host \n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {

my $exit_code = 10;
eval {
print "Enabling the VIP - $vip on the new master - $new_master_host \n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
exit 0;
}
else {
&usage();
exit 1;
}
}
sub start_vip() {
`ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disable the VIP on the old_master
sub stop_vip() {
`ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
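Make sure the script is executable, then give it a quick standalone test (a hedged check; given the code above, the status branch should print the test banner followed by an OK line):

[root@mha-manager ~]# chmod +x /usr/local/bin/master_ip_failover
[root@mha-manager ~]# /usr/local/bin/master_ip_failover --command=status

IN SCRIPT TEST====/sbin/ifconfig ens33:1 down==/sbin/ifconfig ens33:1 20.0.0.200===

Checking the Status of the script.. OK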

3: Create the MHA software directory and copy the configuration file

[root@mha-manager ~]# mkdir /etc/mha
[root@mha-manager ~]# cp mha4mysql-manager-0.57/samples/conf/app1.cnf /etc/mha
[root@mha-manager ~]# vi /etc/mha/app1.cnf
[server default]
manager_workdir=/var/log/masterha/app1
manager_log=/var/log/masterha/app1/manager.log
master_binlog_dir=/usr/local/mysql/data
master_ip_failover_script=/usr/local/bin/master_ip_failover
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
password=manager
user=mha
ping_interval=1
remote_workdir=/tmp
repl_password=123
repl_user=myslave
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 20.0.0.11 -s 20.0.0.12
shutdown_script=""
ssh_user=root

[server1]
hostname=20.0.0.10
port=3306
[server2]
hostname=20.0.0.11
port=3306
candidate_master=1
check_repl_delay=0
[server3]
hostname=20.0.0.12
port=3306
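Before starting the manager, create the working/log directory referenced by manager_workdir and manager_log above, since the shell redirection used when starting MHA later expects it to exist (a small, hedged housekeeping step):

[root@mha-manager ~]# mkdir -p /var/log/masterha/app1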

7. Health Check

1: Test SSH password-less authentication. If everything is normal, the check ends with a success message.

[root@mha-manager ~]# masterha_check_ssh
--conf=<server_config_file> must be set.
[root@mha-manager ~]# masterha_check_ssh --conf=/etc/mha/app1.cnf
Tue Dec 29 20:19:16 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue Dec 29 20:19:16 2020 - [info] Reading application default configuration from /etc/mha/app1.cnf..
Tue Dec 29 20:19:16 2020 - [info] Reading server configuration from /etc/mha/app1.cnf..
Tue Dec 29 20:19:16 2020 - [info] Starting SSH connection tests..
Tue Dec 29 20:19:17 2020 - [debug] 
Tue Dec 29 20:19:16 2020 - [debug] Connecting via SSH from [email protected](20.0.0.10:22) to [email protected](20.0.0.11:22)..
Tue Dec 29 20:19:16 2020 - [debug] ok.
Tue Dec 29 20:19:16 2020 - [debug] Connecting via SSH from [email protected](20.0.0.10:22) to [email protected](20.0.0.12:22)..
Tue Dec 29 20:19:17 2020 - [debug] ok.
Tue Dec 29 20:19:18 2020 - [debug] 
Tue Dec 29 20:19:17 2020 - [debug] Connecting via SSH from [email protected](20.0.0.12:22) to [email protected](20.0.0.10:22)..
Tue Dec 29 20:19:17 2020 - [debug] ok.
Tue Dec 29 20:19:17 2020 - [debug] Connecting via SSH from [email protected](20.0.0.12:22) to [email protected](20.0.0.11:22)..
Tue Dec 29 20:19:18 2020 - [debug] ok.
Tue Dec 29 20:19:18 2020 - [debug] 
Tue Dec 29 20:19:16 2020 - [debug] Connecting via SSH from [email protected](20.0.0.11:22) to [email protected](20.0.0.10:22)..
Tue Dec 29 20:19:17 2020 - [debug] ok.
Tue Dec 29 20:19:17 2020 - [debug] Connecting via SSH from [email protected](20.0.0.11:22) to [email protected](20.0.0.12:22)..
Tue Dec 29 20:19:17 2020 - [debug] ok.
Tue Dec 29 20:19:18 2020 - [info] All SSH connection tests passed successfully.

2: Test the MySQL master-slave connection. Finally, the message "MySQL Replication Health is OK" appears.

[root@mha-manager ~]# masterha_check_repl --conf=/etc/mha/app1.cnf
Tue Dec 29 20:30:29 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue Dec 29 20:30:29 2020 - [info] Reading application default configuration from /etc/mha/app1.cnf..
Tue Dec 29 20:30:29 2020 - [info] Reading server configuration from /etc/mha/app1.cnf..
Tue Dec 29 20:30:29 2020 - [info] MHA::MasterMonitor version 0.57.
Tue Dec 29 20:30:30 2020 - [info] GTID failover mode = 0
Tue Dec 29 20:30:30 2020 - [info] Dead Servers:
Tue Dec 29 20:30:30 2020 - [info] Alive Servers:
Tue Dec 29 20:30:30 2020 - [info] 20.0.0.10 (20.0.0.10:3306)
Tue Dec 29 20:30:30 2020 - [info] 20.0.0.11(20.0.0.11:3306)
Tue Dec 29 20:30:30 2020 - [info] 20.0.0.12 (20.0.0.12:3306)
Tue Dec 29 20:30:30 2020 - [info] Alive Slaves:
Tue Dec 29 20:30:30 2020 - [info] 20.0.0.11(20.0.0.11:3306) Version=5.6.36-log (oldest major version between slaves) log-bin: enabled
....... (output omitted)
Checking the Status of the script.. OK
Tue Dec 29 20:30:55 2020 - [info] OK.
Tue Dec 29 20:30:55 2020 - [warning] shutdown_script is not defined.
Tue Dec 29 20:30:55 2020 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

8: View the VIP address of the master

Check whether 20.0.0.200 exists.

This VIP address does not disappear just because the manager node stops the MHA service.

When MHA is started for the first time, the master database does not automatically bring up the VIP address; it must be enabled manually.

[root@master ~]# ifconfig ens33:1 20.0.0.200/24 up
[root@master ~]# ip addr
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 00:0c:29:8d:e2:af brd ff:ff:ff:ff:ff:ff
 inet 20.0.0.10/24 brd 20.0.0.255 scope global ens33
  valid_lft forever preferred_lft forever
 inet 20.0.0.200/24 brd 20.0.0.255 scope global secondary ens33:1
  valid_lft forever preferred_lft forever
 inet6 fe80::a6c1:f3d4:160:102a/64 scope link 
  valid_lft forever preferred_lft forever

9. Start MHA and check the status

[root@mha-manager ~]# nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
[1] 57152
[root@mha-manager ~]# masterha_check_status --conf=/etc/mha/app1.cnf
app1 (pid:57152) is running(0:PING_OK), master:20.0.0.10
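To stop the manager cleanly later, the masterha_stop tool listed earlier can be used (a hedged example):

[root@mha-manager ~]# masterha_stop --conf=/etc/mha/app1.cnf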

Fault simulation and repair

1. Fault simulation

1: Shut down the master server

[root@master ~]# pkill mysqld

2: View log information

[root@mha-manager ~]# cat /var/log/masterha/app1/manager.log

master 20.0.0.10(20.0.0.10:3306) is down! # 20.0.0.10 is down

Check MHA Manager logs at mha-manager:/var/log/masterha/app1/manager.log for details.

Started automated(non-interactive) failover.
Invalidated master IP address on 20.0.0.10(20.0.0.10:3306)
The latest slave 20.0.0.11(20.0.0.11:3306) has all relay logs for recovery.
Selected 20.0.0.11(20.0.0.11:3306) as a new master. # 20.0.0.11 becomes the new master
20.0.0.11(20.0.0.11:3306): OK: Applying all logs succeeded.
20.0.0.11(20.0.0.11:3306): OK: Activated master IP address.
20.0.0.12(20.0.0.12:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.

3: View the virtual address

The virtual address has moved to 20.0.0.11

[root@slave1 ~]# ip addr
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 00:0c:29:49:77:39 brd ff:ff:ff:ff:ff:ff
 inet 20.0.0.11/24 brd 20.0.0.255 scope global ens33
  valid_lft forever preferred_lft forever
 inet 20.0.0.200/8 brd 20.255.255.255 scope global ens33:1
  valid_lft forever preferred_lft forever
 inet6 fe80::5cbb:1621:4281:3b24/64 scope link 
  valid_lft forever preferred_lft forever

4: Check the master-slave status

View the binary log file and position on the new master server

[root@slave1 ~]# mysql

mysql> show master status;
+-------------------+----------+--------------+------------------+-------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------+----------+--------------+------------------+-------------------+
| master-bin.000003 |      120 |              |                  |                   |
+-------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

View the status on slave2

[root@slave2 ~]# mysql

mysql> show slave status\G
*************************** 1. row ***************************
    Slave_IO_State: Waiting for master to send event
     Master_Host: 20.0.0.11
     Master_User: myslave
     Master_Port: 3306
    Connect_Retry: 60
    Master_Log_File: master-bin.000003
   Read_Master_Log_Pos: 120
    Relay_Log_File: relay-log-bin.000002
    Relay_Log_Pos: 284
  Relay_Master_Log_File: master-bin.000003
    Slave_IO_Running: Yes
   Slave_SQL_Running: Yes
    Replicate_Do_DB: 
   Replicate_Ignore_DB:

2. Fault repair

1: Start the downed database

[root@master ~]# systemctl start mysqld
[root@master ~]# systemctl status mysqld
● mysqld.service - LSB: start and stop MySQL
 Loaded: loaded (/etc/rc.d/init.d/mysqld; bad; vendor preset: disabled)
 Active: active (running) since 2020-12-29 21:50:03 CST; 25s ago
  Docs: man:systemd-sysv-generator(8)
 Process: 977 ExecStart=/etc/rc.d/init.d/mysqld start (code=exited, status=0/SUCCESS)
 CGroup: /system.slice/mysqld.service
   ├─1026 /bin/sh /usr/local/mysql/bin/mysqld_safe --datadir=/usr/local/mysql/data --pid-file...
   └─1358 /usr/local/mysql/bin/mysqld --basedir=/usr/local/mysql --datadir=/usr/local/m

2: Configure master-slave replication on the recovered database

Point it at the new master

[root@master ~]# mysql

mysql> change master to master_host='20.0.0.11',master_user='myslave',master_password='123',master_log_file='master-bin.000003',master_log_pos=120; 
Query OK, 0 rows affected, 2 warnings (0.01 sec)
# 20.0.0.11 is the new master now that the original master has gone down
mysql> start slave;
Query OK, 0 rows affected (0.01 sec)

View Status

mysql> show slave status\G
*************************** 1. row ***************************
    Slave_IO_State: Waiting for master to send event
     Master_Host: 20.0.0.11
     Master_User: myslave
     Master_Port: 3306
    Connect_Retry: 60
    Master_Log_File: master-bin.000003
   Read_Master_Log_Pos: 120
    Relay_Log_File: mysqld-relay-bin.000002
    Relay_Log_Pos: 284
  Relay_Master_Log_File: master-bin.000003
    Slave_IO_Running: Yes
   Slave_SQL_Running: Yes
    Replicate_Do_DB: 
   Replicate_Ignore_DB:

3: Modify the MHA configuration file

[root@mha-manager ~]# vi /etc/mha/app1.cnf

secondary_check_script=/usr/local/bin/masterha_secondary_check -s 20.0.0.10 -s 20.0.0.12
# Since 20.0.0.11 is now the master server, 20.0.0.10 and 20.0.0.12 need to be added as the slave servers to check
[server1]
hostname=20.0.0.10
candidate_master=1
check_repl_delay=0
port=3306
[server2]
hostname=20.0.0.11
port=3306
# When 20.0.0.10 went down, its server1 block was automatically deleted from the file. Re-add server1, set it as the candidate (backup) master, and modify server2 accordingly.
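As an alternative to editing the file by hand, the masterha_conf_host tool from the manager toolkit can add a server block back; a hedged sketch (the flags follow the MHA documentation):

[root@mha-manager ~]# masterha_conf_host --command=add --conf=/etc/mha/app1.cnf --block=server1 --hostname=20.0.0.10 --params="candidate_master=1;check_repl_delay=0;port=3306"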

4: Enter the database and reauthorize

[root@master ~]# mysql

mysql> grant all privileges on *.* to 'mha'@'master' identified by 'manager';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

5: Start MHA again

[root@mha-manager ~]# nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
[1] 58927
[root@mha-manager ~]# masterha_check_status --conf=/etc/mha/app1.cnf
app1 (pid:58927) is running(0:PING_OK), master:20.0.0.11

6: Check the log again

[root@mha-manager ~]# cat /var/log/masterha/app1/manager.log
......
Tue Dec 29 22:16:53 2020 - [info] Dead Servers:  # servers that are down
Tue Dec 29 22:16:53 2020 - [info] Alive Servers: # servers that are alive
Tue Dec 29 22:16:53 2020 - [info] 20.0.0.10(20.0.0.10:3306)
Tue Dec 29 22:16:53 2020 - [info] 20.0.0.11(20.0.0.11:3306)
Tue Dec 29 22:16:53 2020 - [info] 20.0.0.12 (20.0.0.12:3306)
.......

7: Write data to the primary database and verify synchronization

The new database can be seen from the other nodes

mysql> create database ooo;
Query OK, 1 row affected (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| ooo                |
| performance_schema |
| test               |
| test_db            |
+--------------------+
6 rows in set (0.00 sec)

This is the end of this article about the steps to build MHA architecture deployment in MySQL. For more relevant content about building MHA architecture deployment in MySQL, please search for previous articles on 123WORDPRESS.COM. I hope everyone will support 123WORDPRESS.COM in the future!
