1. Introduction to PXC

1.1 What is PXC

PXC (Percona XtraDB Cluster) is a MySQL high-availability cluster solution. Compared with traditional cluster architectures based on master-slave replication, PXC's most prominent feature is that it eliminates the long-criticized problem of replication delay and achieves essentially real-time synchronization. Moreover, all nodes in the cluster are peers. PXC prioritizes data consistency above all: a transaction either commits on every node or on none of them. This strict handling of consistency is built into its implementation and guarantees data consistency across the MySQL cluster.

1.2 PXC Features and Advantages
The biggest advantage of PXC is strong consistency with no synchronization delay.

1.3 Limitations and Disadvantages of PXC
1.4 Differences between PXC and Replication
1.5 PXC Common Ports
Glossary:
2. Practice

2.1 Building a PXC Cluster

Unlike plain MySQL, PXC has an officially provided Docker image, so we can easily build a PXC cluster.

1) Download the Docker image:

```shell
docker pull percona/percona-xtradb-cluster:5.7
```

2) Rename the image:

```shell
docker tag percona/percona-xtradb-cluster:5.7 pxc:5.7
```

3) Delete the original image:

```shell
docker rmi percona/percona-xtradb-cluster:5.7
```

4) Create a Docker network for exclusive use by the PXC cluster:

```shell
docker network create pxc-network
```

5) Create data volumes for mounting later:

```shell
docker volume create --name v1
docker volume create --name v2
docker volume create --name v3
```

Note: PXC containers only support data volume mounts, not directory mounts.

6) Create the first node:

```shell
docker run -di --name=pn1 --net=pxc-network -p 9000:3306 -v v1:/var/lib/mysql --privileged -e MYSQL_ROOT_PASSWORD=123456 -e CLUSTER_NAME=cluster1 -e XTRABACKUP_PASSWORD=123456 pxc:5.7
```

Because subsequent nodes must join the cluster through the first node, wait until its database has fully started. Check the logs with `docker logs pn1` and wait until they indicate that startup has completed.
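All of the `docker run` invocations in this section follow one pattern: node N gets container name `pnN`, host port `9000 + N - 1`, volume `vN`, and, for every node after the first, a `CLUSTER_JOIN` variable. As a sketch of that pattern, here is a small helper that generates the command string for each node. The helper name `build_node_command` and its defaults are illustrative, not part of PXC or Docker.

```python
# Sketch: generate the docker run command for each PXC node.
# The helper name and structure are illustrative, not part of PXC or Docker.

def build_node_command(index, join_node=None,
                       image="pxc:5.7", network="pxc-network",
                       root_password="123456", cluster="cluster1"):
    """Build the docker run command for node pn<index>.

    Only the first node is started without CLUSTER_JOIN; every later
    node must name an existing member to join through.
    """
    parts = [
        "docker run -di",
        f"--name=pn{index}",
        f"--net={network}",
        f"-p {9000 + index - 1}:3306",   # host ports 9000, 9001, ...
        f"-v v{index}:/var/lib/mysql",   # data volume mount, not a directory
        "--privileged",
        f"-e MYSQL_ROOT_PASSWORD={root_password}",
        f"-e CLUSTER_NAME={cluster}",
        f"-e XTRABACKUP_PASSWORD={root_password}",
    ]
    if join_node is not None:
        parts.append(f"-e CLUSTER_JOIN={join_node}")
    parts.append(image)
    return " ".join(parts)

if __name__ == "__main__":
    print(build_node_command(1))                  # first node: no CLUSTER_JOIN
    print(build_node_command(2, join_node="pn1"))
```

Generating the commands this way makes the one asymmetry in the cluster obvious: only the bootstrap node starts without `CLUSTER_JOIN`.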
Note: Do not use the keyword PXC in the CLUSTER_NAME value, otherwise the node will fail to start.

7) Join the second node:

```shell
docker run -di --name=pn2 --net=pxc-network -p 9001:3306 -v v2:/var/lib/mysql --privileged -e MYSQL_ROOT_PASSWORD=123456 -e CLUSTER_NAME=cluster1 -e XTRABACKUP_PASSWORD=123456 -e CLUSTER_JOIN=pn1 pxc:5.7
```

Note that from the second node onward the CLUSTER_JOIN=pn1 parameter must be added, indicating that this node synchronizes with node pn1; otherwise the pn1 container will shut down automatically. Once a PXC cluster has more than two nodes, there is no concept of a master node. The last node to leave the cluster becomes the bootstrap node, and the safe_to_bootstrap attribute in its /var/lib/mysql/grastate.dat file is changed from 0 to 1 to mark it as such.

8) Add the third node:

```shell
docker run -di --name=pn3 --net=pxc-network -p 9002:3306 -v v3:/var/lib/mysql --privileged -e MYSQL_ROOT_PASSWORD=123456 -e CLUSTER_NAME=cluster1 -e XTRABACKUP_PASSWORD=123456 -e CLUSTER_JOIN=pn2 pxc:5.7
```

Notice that this time CLUSTER_JOIN points at the pn2 container, which confirms what was said above: once a PXC cluster has more than two nodes there is no master node, and a new node can join through any existing member.
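The grastate.dat file mentioned above is a small key-value text file. As a sketch (these helper functions are my own, not part of PXC, which only expects you to edit the file by hand), here is how safe_to_bootstrap could be read and flipped programmatically:

```python
# Sketch: read and modify safe_to_bootstrap in a Galera grastate.dat file.
# The helper functions are illustrative; PXC itself only expects you to
# edit the file manually (e.g. with vim) before bootstrapping.

def parse_grastate(text):
    """Parse grastate.dat content into a dict of its key: value lines."""
    state = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip the "# GALERA saved state" header
        key, _, value = line.partition(":")
        state[key.strip()] = value.strip()
    return state

def mark_safe_to_bootstrap(text):
    """Return the file content with safe_to_bootstrap forced to 1."""
    lines = []
    for line in text.splitlines():
        if line.strip().startswith("safe_to_bootstrap"):
            line = "safe_to_bootstrap: 1"
        lines.append(line)
    return "\n".join(lines) + "\n"

sample = """# GALERA saved state
version: 2.1
uuid: 068dd5e8-cedd-11e9-904d-466e75bd8fe1
seqno: 20
safe_to_bootstrap: 0
"""

if __name__ == "__main__":
    print(parse_grastate(sample)["safe_to_bootstrap"])  # 0
    print(parse_grastate(mark_safe_to_bootstrap(sample))["safe_to_bootstrap"])  # 1
```

The same flag is revisited later in this article when a node that was not the last to leave the cluster refuses to bootstrap.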
9) Enter the pn1 node:

```shell
docker exec -it pn1 /usr/bin/mysql -uroot -p123456
```

View the status:

```
mysql> show status like 'wsrep%';
+----------------------------------+-------------------------------------------------+
| Variable_name                    | Value                                           |
+----------------------------------+-------------------------------------------------+
| wsrep_local_state_uuid           | 068dd5e8-cedd-11e9-904d-466e75bd8fe1            |
| wsrep_protocol_version           | 9                                               |
| wsrep_last_applied               | 16                                              |
| wsrep_last_committed             | 16                                              |
| wsrep_replicated                 | 0                                               |
| wsrep_replicated_bytes           | 0                                               |
| wsrep_repl_keys                  | 0                                               |
| wsrep_repl_keys_bytes            | 0                                               |
| wsrep_repl_data_bytes            | 0                                               |
| wsrep_repl_other_bytes           | 0                                               |
| wsrep_received                   | 10                                              |
| wsrep_received_bytes             | 800                                             |
| wsrep_local_commits              | 0                                               |
| wsrep_local_cert_failures        | 0                                               |
| wsrep_local_replays              | 0                                               |
| wsrep_local_send_queue           | 0                                               |
| wsrep_local_send_queue_max       | 1                                               |
| wsrep_local_send_queue_min       | 0                                               |
| wsrep_local_send_queue_avg       | 0.000000                                        |
| wsrep_local_recv_queue           | 0                                               |
| wsrep_local_recv_queue_max       | 2                                               |
| wsrep_local_recv_queue_min       | 0                                               |
| wsrep_local_recv_queue_avg       | 0.100000                                        |
| wsrep_local_cached_downto        | 0                                               |
| wsrep_flow_control_paused_ns     | 0                                               |
| wsrep_flow_control_paused        | 0.000000                                        |
| wsrep_flow_control_sent          | 0                                               |
| wsrep_flow_control_recv          | 0                                               |
| wsrep_flow_control_interval      | [ 173, 173 ]                                    |
| wsrep_flow_control_interval_low  | 173                                             |
| wsrep_flow_control_interval_high | 173                                             |
| wsrep_flow_control_status        | OFF                                             |
| wsrep_cert_deps_distance         | 0.000000                                        |
| wsrep_apply_oooe                 | 0.000000                                        |
| wsrep_apply_oool                 | 0.000000                                        |
| wsrep_apply_window               | 0.000000                                        |
| wsrep_commit_oooe                | 0.000000                                        |
| wsrep_commit_oool                | 0.000000                                        |
| wsrep_commit_window              | 0.000000                                        |
| wsrep_local_state                | 4                                               |
| wsrep_local_state_comment        | Synced                                          |
| wsrep_cert_index_size            | 0                                               |
| wsrep_cert_bucket_count          | 22                                              |
| wsrep_gcache_pool_size           | 1592                                            |
| wsrep_causal_reads               | 0                                               |
| wsrep_cert_interval              | 0.000000                                        |
| wsrep_open_transactions          | 0                                               |
| wsrep_open_connections           | 0                                               |
| wsrep_ist_receive_status         |                                                 |
| wsrep_ist_receive_seqno_start    | 0                                               |
| wsrep_ist_receive_seqno_current  | 0                                               |
| wsrep_ist_receive_seqno_end      | 0                                               |
| wsrep_incoming_addresses         | 172.19.0.2:3306,172.19.0.3:3306,172.19.0.4:3306 |
| wsrep_cluster_weight             | 3                                               |
| wsrep_desync_count               | 0                                               |
| wsrep_evs_delayed                |                                                 |
| wsrep_evs_evict_list             |                                                 |
| wsrep_evs_repl_latency           | 0/0/0/0/0                                       |
| wsrep_evs_state                  | OPERATIONAL                                     |
| wsrep_gcomm_uuid                 | 11ed51e2-cedd-11e9-b362-af453a7ac074            |
| wsrep_cluster_conf_id            | 3                                               |
| wsrep_cluster_size               | 3                                               |
| wsrep_cluster_state_uuid         | 068dd5e8-cedd-11e9-904d-466e75bd8fe1            |
| wsrep_cluster_status             | Primary                                         |
| wsrep_connected                  | ON                                              |
| wsrep_local_bf_aborts            | 0                                               |
| wsrep_local_index                | 0                                               |
| wsrep_provider_name              | Galera                                          |
| wsrep_provider_vendor            | Codership Oy <[email protected]>                |
| wsrep_provider_version           | 3.37(rff05089)                                  |
| wsrep_ready                      | ON                                              |
+----------------------------------+-------------------------------------------------+
71 rows in set (0.06 sec)
```

You can see that the value of wsrep_incoming_addresses lists the IP addresses of our three containers:

```
| wsrep_incoming_addresses | 172.19.0.2:3306,172.19.0.3:3306,172.19.0.4:3306 |
```

Cluster integrity check:
Node status check:
Replication health check:
Detecting slow network issues:
Number of conflicts or deadlocks:
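Each of the checks listed above reads variables from the output of `show status like 'wsrep%'`. As a sketch, the following parser turns those table rows into a dict and applies one plausible criterion per check. The `wsrep_*` variable names are real Galera status variables; the helper names and thresholds are my own illustrative choices, not official recommendations.

```python
# Sketch: evaluate the health checks listed above from the output of
# SHOW STATUS LIKE 'wsrep%'. The wsrep_* variable names are real Galera
# status variables; the helper names and thresholds are illustrative.

def parse_wsrep_output(text):
    """Turn '| name | value |' rows of the mysql table output into a dict."""
    status = {}
    for line in text.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) == 2 and cells[0].startswith("wsrep_"):
            status[cells[0]] = cells[1]
    return status

def health_report(status):
    """Apply the checks described in this section to a wsrep status dict."""
    return {
        # Cluster integrity: the component this node belongs to is Primary.
        "integrity_ok": status.get("wsrep_cluster_status") == "Primary",
        # Node status: the local node is synced and ready to serve queries.
        "node_ok": (status.get("wsrep_local_state_comment") == "Synced"
                    and status.get("wsrep_ready") == "ON"),
        # Replication health: flow control is not throttling the cluster.
        "replication_ok": float(status.get("wsrep_flow_control_paused", "0")) < 0.1,
        # Slow network hint: the receive queue is not building up.
        "network_ok": float(status.get("wsrep_local_recv_queue_avg", "0")) < 0.5,
        # Conflicts/deadlocks observed so far.
        "conflicts": (int(status.get("wsrep_local_cert_failures", "0"))
                      + int(status.get("wsrep_local_bf_aborts", "0"))),
    }

sample = """
| wsrep_cluster_status       | Primary  |
| wsrep_local_state_comment  | Synced   |
| wsrep_ready                | ON       |
| wsrep_flow_control_paused  | 0.000000 |
| wsrep_local_recv_queue_avg | 0.100000 |
| wsrep_local_cert_failures  | 0        |
| wsrep_local_bf_aborts      | 0        |
"""

if __name__ == "__main__":
    print(health_report(parse_wsrep_output(sample)))
```

Run against the status output shown earlier, all checks pass, which matches a freshly built three-node cluster that has seen no write conflicts yet.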
2.2 Cluster synchronization verification

1) Create a database test on node 1:

```
mysql> create database test;
Query OK, 1 row affected (0.02 sec)
```

2) View it on node 2:

```
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| test               |
+--------------------+
5 rows in set (0.00 sec)
```

3) Create a table on node 2:

```
mysql> use test;
Database changed
mysql> create table sys_user(id int, name varchar(30));
Query OK, 0 rows affected (0.11 sec)
```

4) View the table structure on node 3:

```
mysql> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| sys_user       |
+----------------+
1 row in set (0.00 sec)
```

5) Insert data on node 3:

```
mysql> insert into sys_user values(1,'a');
ERROR 1105 (HY000): Percona-XtraDB-Cluster prohibits use of DML command on a table (test.sys_user) without an explicit primary key with pxc_strict_mode = ENFORCING or MASTER
```

The insert fails because the table has no explicit primary key, so we modify the table structure:

```
mysql> alter table sys_user add primary key (id);
```

Insert the data again:

```
mysql> insert into sys_user values(1,'a');
Query OK, 1 row affected (0.05 sec)
```

6) View the table data on node 1:

```
mysql> select * from sys_user;
+----+------+
| id | name |
+----+------+
|  1 | a    |
+----+------+
1 row in set (0.00 sec)
```

As you can see, the data of the three nodes is synchronized normally, and all nodes are readable and writable.

2.3 Adding database nodes

When database capacity is insufficient, we usually add database nodes to share the load. Let's demonstrate adding a new node.
1) Create a data volume:

```shell
docker volume create --name v4
```

2) Add a new container:

```shell
docker run -di --name=pn4 --net=pxc-network -p 9003:3306 -v v4:/var/lib/mysql --privileged -e MYSQL_ROOT_PASSWORD=123456 -e CLUSTER_NAME=cluster1 -e XTRABACKUP_PASSWORD=123456 -e CLUSTER_JOIN=pn3 pxc:5.7
```

Note that this time CLUSTER_JOIN points at pn3.

3) Enter node 4 and view the data:

```
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| test               |
+--------------------+
5 rows in set (0.00 sec)

mysql> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| sys_user       |
+----------------+
1 row in set (0.00 sec)

mysql> select * from sys_user;
+----+------+
| id | name |
+----+------+
|  1 | a    |
+----+------+
1 row in set (0.00 sec)
```

You can see that the earlier data has been synchronized automatically.

2.4 Downtime operations

1) Stop the pn4 container to simulate downtime:

```shell
docker stop pn4
```

2) Check the cluster status on node pn2:

```
mysql> show status like 'wsrep%';
......
| wsrep_local_state         | 4      |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size     | 3      |
......
| wsrep_incoming_addresses  | 172.19.0.4:3306,172.19.0.3:3306,172.19.0.2:3306 |
```

You can see that the cluster should have 4 nodes, but only 3 are currently connected.
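A dropped node simply disappears from wsrep_incoming_addresses, as seen above. As a sketch, missing members can be detected by comparing an expected node list against the reported addresses. The container-to-IP mapping below is an assumption taken from the earlier status output in this article; pn4's address in particular is hypothetical.

```python
# Sketch: detect which expected cluster members are absent from
# wsrep_incoming_addresses. The container-name -> IP mapping below is an
# assumption based on the earlier 'show status' output in this article;
# the address for pn4 is hypothetical.

EXPECTED = {
    "pn1": "172.19.0.2:3306",
    "pn2": "172.19.0.3:3306",
    "pn3": "172.19.0.4:3306",
    "pn4": "172.19.0.5:3306",  # assumed address for the 4th node
}

def missing_members(incoming_addresses, expected=EXPECTED):
    """Return names of expected nodes absent from the reported address list."""
    present = set(addr.strip() for addr in incoming_addresses.split(","))
    return sorted(name for name, addr in expected.items()
                  if addr not in present)

if __name__ == "__main__":
    # Value reported by pn2 after pn4 was stopped:
    reported = "172.19.0.4:3306,172.19.0.3:3306,172.19.0.2:3306"
    print(missing_members(reported))  # ['pn4']
```

A check like this, run periodically against any surviving node, is enough to alert on a member dropping out without waiting for application errors.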
3) Perform a modification on node pn2:

```
mysql> update sys_user set name='b' where id=1;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
```

4) Start the pn4 container:

```shell
[root@VM_0_15_centos ~]# docker start pn4
```

5) Enter container pn4 and check whether the modification was synchronized:

```
docker exec -it pn4 /usr/bin/mysql -uroot -p123456
mysql> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> select * from sys_user;
+----+------+
| id | name |
+----+------+
|  1 | b    |
+----+------+
1 row in set (0.00 sec)
```

You can see that the node rejoined the cluster normally and the data was synchronized. pn4 rejoined the PXC cluster by designating an existing node to join through. What happens if pn1 is started directly, with itself as the bootstrap node? Let's demonstrate:

1) Shut down the pn1 node:

```shell
docker stop pn1
```

2) Insert a row on the pn2 node:

```
mysql> insert into sys_user values('2','c');
Query OK, 1 row affected (0.01 sec)
```

3) Start the pn1 node:

```shell
docker start pn1
```

4) Wait a minute and check the container list:

```shell
docker ps -a
```

The pn1 node did not start:

```
CONTAINER ID   IMAGE     ......   STATUS                        NAMES
fa123563e787   pxc:5.7   ......   Exited (1) About a minute ago pn1
```

Check the error log:

```shell
docker logs pn1
```

The exception information is as follows:
The error message is clear: the pn1 node was not the last to leave the cluster, so it cannot start as the bootstrap node. If you want to start it as the bootstrap node, you must set the safe_to_bootstrap parameter in the grastate.dat file to 1. Note, however, that because the other nodes in the cluster were never shut down, the restarted container will have no connection with the previous cluster and its data will not be synchronized. Let's verify this:

1) View the path where the data volume is stored:

```
docker volume inspect v1
[
  {
    "CreatedAt": "2019-09-05T09:22:22+08:00",
    "Driver": "local",
    "Labels": {},
    "Mountpoint": "/var/lib/docker/volumes/v1/_data",
    "Name": "v1",
    "Options": {},
    "Scope": "local"
  }
]
```

2) Enter the data volume directory and check for the grastate.dat file:

```
[root@VM_0_15_centos ~]# cd /var/lib/docker/volumes/v1/_data
[root@VM_0_15_centos _data]# ll
total 323444
-rw-r----- 1 1001 1001        56 Sep 5 08:34 auto.cnf
-rw------- 1 1001 1001      1680 Sep 5 08:34 ca-key.pem
-rw-r--r-- 1 1001 1001      1120 Sep 5 08:34 ca.pem
-rw-r--r-- 1 1001 1001      1120 Sep 5 08:34 client-cert.pem
-rw------- 1 1001 1001      1676 Sep 5 08:34 client-key.pem
-rw-r----- 1 1001 1001         2 Sep 5 08:34 fa123563e787.pid
-rw-r----- 1 1001 1001 134219048 Sep 5 09:22 galera.cache
-rw-r----- 1 1001 1001       113 Sep 5 09:21 grastate.dat
-rw-r----- 1 1001 1001      1300 Sep 5 08:34 ib_buffer_pool
-rw-r----- 1 1001 1001  79691776 Sep 5 09:15 ibdata1
-rw-r----- 1 1001 1001  50331648 Sep 5 09:15 ib_logfile0
-rw-r----- 1 1001 1001  50331648 Sep 5 08:34 ib_logfile1
-rw-r----- 1 1001 1001  12582912 Sep 5 08:38 ibtmp1
-rw-r----- 1 1001 1001     34751 Sep 5 08:38 innobackup.backup.log
drwxr-x--- 2 1001 1001      4096 Sep 5 08:34 mysql
drwxr-x--- 2 1001 1001      4096 Sep 5 08:34 performance_schema
-rw------- 1 1001 1001      1676 Sep 5 08:34 private_key.pem
-rw-r--r-- 1 1001 1001       452 Sep 5 08:34 public_key.pem
-rw-r--r-- 1 1001 1001      1120 Sep 5 08:34 server-cert.pem
-rw------- 1 1001 1001      1676 Sep 5 08:34 server-key.pem
drwxr-x--- 2 1001 1001     12288 Sep 5 08:34 sys
drwxr-x--- 2 1001 1001      4096 Sep 5 09:07 test
-rw-r--r-- 1 1001 1001       143 Sep 5 09:22 version_info
-rw-r----- 1 1001 1001   3932160 Sep 5 09:15 xb_doublewrite
```

3) Edit the file:

```shell
vim grastate.dat
```

Change the safe_to_bootstrap parameter to 1, then save and exit:

```
# GALERA saved state
version: 2.1
uuid: 068dd5e8-cedd-11e9-904d-466e75bd8fe1
seqno: 20
safe_to_bootstrap: 1
```

4) Restart the pn1 container:

```shell
docker start pn1
```

5) Enter the container and view the data:

```
docker exec -it pn1 /usr/bin/mysql -uroot -p123456
mysql> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> select * from sys_user;
+----+------+
| id | name |
+----+------+
|  1 | b    |
+----+------+
1 row in set (0.01 sec)
```

The data was not synchronized. So how do we get the pn1 node back into the cluster? We can simply delete the pn1 container and recreate it as a joining node. Because the container's data was mounted on a data volume, there is no risk of data loss. Let's do it:

1) Delete the pn1 container:

```shell
docker stop pn1
docker rm pn1
```

2) Rejoin the cluster as a joining node:

```shell
docker run -di --name=pn1 --net=pxc-network -p 9000:3306 -v v1:/var/lib/mysql --privileged -e MYSQL_ROOT_PASSWORD=123456 -e CLUSTER_NAME=cluster1 -e XTRABACKUP_PASSWORD=123456 -e CLUSTER_JOIN=pn2 pxc:5.7
```

Wait for the container to initialize.

3) Enter the container and check whether the data is synchronized:

```
docker exec -it pn1 /usr/bin/mysql -uroot -p123456
mysql> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> select * from sys_user;
+----+------+
| id | name |
+----+------+
|  1 | b    |
|  2 | c    |
+----+------+
2 rows in set (0.00 sec)
```

The data has been synchronized. This concludes this article on the steps to build a MySQL PXC cluster.