MySQL high availability cluster deployment and failover implementation

1. MHA

1. Concept


2. Composition of MHA


3. Characteristics of MHA


2. Build MySQL+MHA

Ideas and preparations

1. Build the MHA architecture: install MySQL with one master and two slaves, then construct MHA on top of it.

2. Fault simulation: simulate a failure of the primary database; the candidate primary is promoted to become the new primary. Then restore the failed original primary and rejoin it to MHA as a slave.

3. Prepare 4 virtual machines for installing the MySQL and MHA high-availability cluster software packages:
MHAmanager IP: 192.168.221.30
MySQL1 IP: 192.168.221.20
MySQL2 IP: 192.168.221.100
MySQL3 IP: 192.168.221.110


1. Turn off the firewall and check whether the port is open

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
netstat -natp | grep 3306


2. Modify the host name of the MySQL node

mysql1 (192.168.221.20)
hostnamectl set-hostname mysql1
su -

mysql2 (192.168.221.100)
hostnamectl set-hostname mysql2
su -

mysql3 (192.168.221.110)
hostnamectl set-hostname mysql3
su -


3. Modify the main configuration file /etc/my.cnf of the three MySQL servers and create a command soft link

MySQL1
vim /etc/my.cnf
[mysqld]
server-id = 1
log_bin = master-bin
log-slave-updates = true

systemctl restart mysqld
ln -s /usr/local/mysql/bin/mysql /usr/sbin/
ln -s /usr/local/mysql/bin/mysqlbinlog /usr/sbin/

MySQL2
vim /etc/my.cnf
[mysqld]
server-id = 2
#On MySQL3 use server-id = 3; the server-id values of the three servers must all be different
log_bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index

systemctl restart mysqld

ln -s /usr/local/mysql/bin/mysql /usr/sbin/
ln -s /usr/local/mysql/bin/mysqlbinlog /usr/sbin/
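MySQL3's configuration follows the same pattern; as noted above, only the server-id differs. A sketch of its /etc/my.cnf, assuming the same binlog and relay-log names:

```ini
[mysqld]
server-id = 3
log_bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index
```

Restart mysqld and create the same command soft links on MySQL3 afterwards.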


4. Configure MySQL with one master and two slaves

(1) Perform MySQL authorization on all MySQL servers
mysql1 (192.168.221.20)
mysql2 (192.168.221.100)
mysql3 (192.168.221.110)

All three machines need this configuration. Log in first:
mysql -uroot -p123
grant replication slave on *.* to 'myslave'@'192.168.221.%' identified by '123';
grant all privileges on *.* to 'mha'@'192.168.221.%' identified by 'manager';
grant all privileges on *.* to 'mha'@'mysql1' identified by 'manager';
grant all privileges on *.* to 'mha'@'mysql2' identified by 'manager';
grant all privileges on *.* to 'mha'@'mysql3' identified by 'manager';
flush privileges;
show master status;


(2) Perform the synchronization operation on the Slave1 and Slave2 nodes:

change master to master_host='192.168.221.20',master_user='myslave',master_password='123',master_log_file='master-bin.000005',master_log_pos=1991;

start slave;

show slave status\G
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

#Common causes of Slave_IO_Running: No:
#Network unreachable
#my.cnf misconfiguration
#Incorrect password, log file name, or pos offset
#Firewall not closed
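As a quick sanity check, the two thread states can be pulled out of saved `show slave status\G` output; the sample file contents below are illustrative:

```shell
# Sketch: extract the replication thread states from saved output of
# "show slave status\G" (the sample content here is illustrative).
cat > /tmp/slave_status.txt <<'EOF'
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
EOF
grep -E 'Slave_(IO|SQL)_Running:' /tmp/slave_status.txt | awk '{print $1, $2}'
```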


(3) Set the Slave1 and Slave2 nodes to read-only mode
mysql2 (192.168.221.100)
mysql3 (192.168.221.110)

set global read_only=1;
#To change back to read-write mode: set global read_only=0;


(4) Master-slave replication verification
mysql1 (192.168.221.20)

#Create a database and table for verification
create database srs;
use srs;
create table test(id int);
insert into test values(1);

mysql2 (192.168.221.100)
mysql3 (192.168.221.110)

#Query the databases to verify replication
show databases;


5. Install MHA software

(1) Install the MHA-dependent environment on all servers
MHAmanager (192.168.221.30)
mysql1 (192.168.221.20)
mysql2 (192.168.221.100)
mysql3 (192.168.221.110)

First install the epel repository (this requires an online source), then install the node component on all servers.
#Restore the online repository files
mv /etc/yum.repos.d/repos.bak/CentOS-* /etc/yum.repos.d/
yum list

yum install epel-release --nogpgcheck -y

yum install -y perl-DBD-MySQL \
perl-Config-Tiny \
perl-Log-Dispatch \
perl-Parallel-ForkManager \
perl-ExtUtils-CBuilder \
perl-ExtUtils-MakeMaker \
perl-CPAN


(2) All servers install the MHA node software package MHAmanager (192.168.221.30)
mysql1 (192.168.221.20)
mysql2 (192.168.221.100)
mysql3 (192.168.221.110)

The required MHA version differs per operating system version; for CentOS 7.4, version 0.57 must be used.
The node component must be installed on all servers first, and the manager component last, on the MHA-manager node only, because the manager depends on the node component.
#Put the package mha4mysql-node-0.57.tar.gz into the /opt directory
cd /opt
tar zxvf mha4mysql-node-0.57.tar.gz
cd mha4mysql-node-0.57
perl Makefile.PL
make && make install

Install the manager component on the MHA manager node.
#Put the package mha4mysql-manager-0.57.tar.gz into the /opt directory
cd /opt
tar zxvf mha4mysql-manager-0.57.tar.gz
cd mha4mysql-manager-0.57
perl Makefile.PL
make && make install

#After the node component is installed, several scripts are generated under /usr/local/bin (these tools are usually triggered by the MHA manager scripts and need no manual operation)
#After the manager component is installed, several tools are generated under /usr/local/bin


6. Configure passwordless authentication on all servers

(1) Configure passwordless authentication to all database nodes on the manager node
MHAmanager (192.168.221.30)

ssh-keygen -t rsa #Press Enter at every prompt
ssh-copy-id 192.168.221.20
ssh-copy-id 192.168.221.100
ssh-copy-id 192.168.221.110

(2) Configure passwordless authentication on mysql1 to the database nodes mysql2 and mysql3
mysql1 (192.168.221.20)

ssh-keygen -t rsa
ssh-copy-id 192.168.221.100
ssh-copy-id 192.168.221.110

(3) Configure passwordless authentication on mysql2 to the database nodes mysql1 and mysql3
mysql2 (192.168.221.100)

ssh-keygen -t rsa
ssh-copy-id 192.168.221.20
ssh-copy-id 192.168.221.110

(4) Configure passwordless authentication on mysql3 to the database nodes mysql1 and mysql2
mysql3 (192.168.221.110)

ssh-keygen -t rsa
ssh-copy-id 192.168.221.20
ssh-copy-id 192.168.221.100
The remaining screenshots are omitted for brevity.
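To confirm the keys were distributed, one approach is to run ssh with BatchMode=yes, which fails instead of prompting when key authentication is missing. The loop below only prints the verification commands (a dry run), using this lab's IPs:

```shell
# Dry run: print one verification command per node. BatchMode=yes makes ssh
# exit with an error instead of asking for a password if key auth is absent.
for ip in 192.168.221.20 192.168.221.100 192.168.221.110; do
  echo "ssh -o BatchMode=yes root@$ip hostname"
done
```

Running each printed command on the relevant machine should return the remote hostname without any password prompt.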


7. Configure MHA on the manager node

MHAmanager (192.168.221.30)
(1) Copy the relevant scripts to the /usr/local/bin directory on the manager node:
cp -rp /opt/mha4mysql-manager-0.57/samples/scripts /usr/local/bin

#After copying, there will be four executable files
ll /usr/local/bin/scripts/

(2) Copy the VIP management script for automatic switching to the /usr/local/bin directory. Here, the master_ip_failover script is used to manage the VIP and failover.
cp /usr/local/bin/scripts/master_ip_failover /usr/local/bin

#Clear the original content first
echo '' > /usr/local/bin/master_ip_failover

#Then paste in the following and modify the VIP-related parameters
vim /usr/local/bin/master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';

use Getopt::Long;

my (
$command, $ssh_user, $orig_master_host, $orig_master_ip,
$orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
###############################Add content section#########################################
my $vip = '192.168.221.200'; #The VIP address
my $brdc = '192.168.221.255'; #The VIP's broadcast address
my $ifdev = 'ens33'; #The network card the VIP binds to
my $key = '1'; #The serial number of the virtual sub-interface bound to the VIP
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip"; #i.e. this variable's value is ifconfig ens33:1 192.168.221.200
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key down"; #This variable value is ifconfig ens33:1 down
my $exit_code = 0; #Specify the exit status code as 0
#my $ssh_start_vip = "/usr/sbin/ip addr add $vip/24 brd $brdc dev $ifdev label $ifdev:$key;/usr/sbin/arping -q -A -c 1 -I $ifdev $vip;iptables -F;";
#my $ssh_stop_vip = "/usr/sbin/ip addr del $vip/24 dev $ifdev label $ifdev:$key";
##################################################################################
GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
);

exit &main();

sub main {

print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";

if ( $command eq "stop" || $command eq "stopssh" ) {

my $exit_code = 1;
eval {
print "Disabling the VIP on old master: $orig_master_host \n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {

my $exit_code = 10;
eval {
print "Enabling the VIP - $vip on the new master - $new_master_host \n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
exit 0;
}
else {
&usage();
exit 1;
}
}
sub start_vip() {
`ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
## A simple system call that disables the VIP on the old_master
sub stop_vip() {
`ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}


mkdir /etc/masterha
cp /opt/mha4mysql-manager-0.57/samples/conf/app1.cnf /etc/masterha
echo '' > /etc/masterha/app1.cnf
vim /etc/masterha/app1.cnf
[server default]
manager_log=/var/log/masterha/app1/manager.log
manager_workdir=/var/log/masterha/app1
master_binlog_dir=/usr/local/mysql/data
master_ip_failover_script=/usr/local/bin/master_ip_failover
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
user=mha
password=manager
ping_interval=1
remote_workdir=/tmp
repl_user=myslave
repl_password=123
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 192.168.221.100 -s 192.168.221.110
shutdown_script=""
ssh_user=root

[server1]
hostname=192.168.221.20
port=3306

[server2]
candidate_master=1
check_repl_delay=0
hostname=192.168.221.100
port=3306

[server3]
hostname=192.168.221.110
port=3306

#--------------------------Configuration file explanation--------------------------------------------------------------------------
[server default]
manager_log=/var/log/masterha/app1/manager.log #manager log
manager_workdir=/var/log/masterha/app1 #manager working directory
master_binlog_dir=/usr/local/mysql/data/ #Where the master stores its binlogs; this path must match the binlog path configured on the master so MHA can find the logs
master_ip_failover_script=/usr/local/bin/master_ip_failover #The switching script for automatic failover, i.e. the script above
master_ip_online_change_script=/usr/local/bin/master_ip_online_change #The switching script for manual switchover
user=mha #The monitoring user
password=manager #The password of the monitoring user created earlier
ping_interval=1 #Interval in seconds for pinging the master (default 3); after three failed attempts, failover is triggered automatically
remote_workdir=/tmp #Where the remote MySQL binlogs are saved during a switchover
repl_user=myslave #The replication user
repl_password=123 #The replication user's password
report_script=/usr/local/send_report #The script for sending alerts after a switchover
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 192.168.221.100 -s 192.168.221.110 #The IP addresses of the slave servers used for secondary checks
shutdown_script="" #The script to power off the failed host to prevent split-brain (not used here)
ssh_user=root #The ssh login username

[server1]
hostname=192.168.221.20
port=3306

[server2]
hostname=192.168.221.100
port=3306
candidate_master=1
#Set as candidate master. With this parameter set, this slave will be promoted to master after a master-slave switchover, even if it is not the most up-to-date slave in the cluster
check_repl_delay=0
#By default, MHA will not select a slave as the new master if it lags behind the master by more than 100MB of relay logs, because recovering that slave would take a long time. Setting check_repl_delay=0 makes MHA ignore replication delay when selecting a new master. This parameter is very useful for hosts with candidate_master=1, because a candidate master must become the new master during a switchover.

[server3]
hostname=192.168.221.110
port=3306


8. The first configuration requires manually enabling the virtual IP on the Master node

Master (192.168.221.20)

/sbin/ifconfig ens33:1 192.168.221.200/24


9. Test on the manager node

(1) Test ssh password-less authentication on the manager node MHAmanager (192.168.221.30)

masterha_check_ssh -conf=/etc/masterha/app1.cnf
#If everything goes well, the final output is "All SSH connection tests passed successfully".
#If it fails, recheck the passwordless-authentication configuration on the servers.

(2) Test the MySQL master-slave connection on the manager node
MHAmanager (192.168.221.30)

masterha_check_repl -conf=/etc/masterha/app1.cnf
#The message "MySQL Replication Health is OK" at the end indicates everything is normal.
#If "MySQL Replication Health is NOT OK!" appears, check whether the soft links on the MySQL servers are missing --> see step 3 above, where /etc/my.cnf is modified and the command soft links are created.

(3) Start MHA on the manager node
MHAmanager (192.168.221.30)
nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
#------------------------Component Explanation----------------------------------------------------------------------------------
--remove_dead_master_conf: when a master-slave switchover occurs, the IP address of the old master database is removed from the configuration file.
--manager_log: log storage location.
--ignore_last_failover: by default, MHA will not perform a failover if two outages occur less than 8 hours apart; this restriction avoids the ping-pong effect. After a failover, MHA records it in an app1.failover.complete file under the working directory set above. If that file still exists at the next switchover, the switchover is not allowed to trigger unless the file is deleted first. For convenience, --ignore_last_failover is set here to ignore that marker file.
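The marker-file behaviour described above can be sketched as follows; a /tmp path is used purely for illustration (the real file lives under manager_workdir):

```shell
# Sketch of the app1.failover.complete marker behaviour (illustrative path).
workdir=/tmp/masterha-demo/app1          # stand-in for manager_workdir
marker="$workdir/app1.failover.complete"
mkdir -p "$workdir" && touch "$marker"   # simulate a just-finished failover
if [ -f "$marker" ]; then
  echo "marker present: next failover is blocked unless it is removed or --ignore_last_failover is set"
  rm -f "$marker"                        # clearing it re-enables failover
fi
[ -f "$marker" ] || echo "marker cleared"
```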


10. Check related status

MHAmanager (192.168.221.30)

Check the MHA status and you can see that the current master is the Mysql1 node.
masterha_check_status --conf=/etc/masterha/app1.cnf

Check the MHA log and you can see that the current master is 192.168.221.20
cat /var/log/masterha/app1/manager.log | grep "current master"

Check the VIP address of Mysql1 and see whether the VIP address 192.168.221.200 exists on Mysql1. This VIP address will not disappear just because the manager node stops the MHA service.
ifconfig

Supplement: To shut down the manager service, you can use the following command.
masterha_stop --conf=/etc/masterha/app1.cnf
Or you can directly kill the process ID to shut it down.


3. Fault Simulation

1. Fault simulation

(1) Monitor and observe the log records on the manager node
MHAmanager (192.168.221.30)

tail -f /var/log/masterha/app1/manager.log


(2) Stop the mysql service on the Master node
MySQL1 (192.168.221.20)

systemctl stop mysqld
or: pkill -9 mysql

After a normal automatic switchover, the MHA process exits. MHA automatically modifies the content of the app1.cnf file and deletes the downed mysql1 node.


(3) Check whether mysql2 has taken over VIP
mysql2 (192.168.221.100)

ifconfig


(4) Return to the manager node to monitor the log records:
tail -f /var/log/masterha/app1/manager.log

Algorithm for selecting the candidate master during failover:
1. Generally, the slaves are ranked by (position/GTID). If their data differs, the slave closest to the master becomes the candidate master.
2. If the data is consistent, the candidate master is selected in the order of the configuration file.
3. A weight (candidate_master=1) can be set to forcibly designate the candidate master.
(1) By default, if a slave lags behind the master by more than 100MB of relay logs, it will not be selected even if it is weighted.
(2) With check_repl_delay=0, it is forcibly selected as the candidate master even if it lags behind by a large number of logs.
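Rule 1 can be illustrated with a toy calculation (the positions below are made up): among slaves whose applied positions differ, the one closest to the master's position wins.

```shell
# Toy illustration of rule 1 with made-up positions: the slave whose applied
# position is closest to the master's position becomes the candidate master.
master_pos=2000
printf 'mysql2 1991\nmysql3 1500\n' |
awk -v m="$master_pos" '{ d = m - $2; if (best == "" || d < bd) { best = $1; bd = d } } END { print best " is promoted" }'
```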


2. Troubleshooting

mysql1 (192.168.221.20)
(1) Repair the master

systemctl restart mysqld
netstat -natp | grep 3306


mysql2 (192.168.221.100)
(2) Repair the master-slave replication. View the binary log file and synchronization point on the current master server Mysql2:
mysql -uroot -p123 -e 'show master status;'
#Or execute show master status; inside the database

Perform the synchronization operation on the original master server
mysql1 (192.168.221.20)
change master to master_host='192.168.221.100',master_user='myslave',master_password='123',master_log_file='master-bin.000001',master_log_pos=1747;

start slave;
show slave status\G


(3) Modify the configuration file app1.cnf on the manager node
MHAmanager (192.168.221.30)

Re-add the record for the failed node, because MHA deletes it automatically when it detects the failure:
vim /etc/masterha/app1.cnf
…
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 192.168.221.20 -s 192.168.221.110
......
[server1]
hostname=192.168.221.100
port=3306

[server2]
candidate_master=1
check_repl_delay=0
hostname=192.168.221.20
port=3306

[server3]
hostname=192.168.221.110
port=3306


(4) Start MHA on the manager node
MHAmanager (192.168.221.30)

masterha_stop --conf=/etc/masterha/app1.cnf

nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &

masterha_check_status --conf=/etc/masterha/app1.cnf


This is the end of this article about MySQL high-availability cluster deployment and failover implementation. For more relevant MySQL high-availability cluster deployment content, please search for previous articles on 123WORDPRESS.COM or continue to browse the following related articles. I hope everyone will support 123WORDPRESS.COM in the future!
