Implementation of Docker deployment of MySQL cluster

Disadvantages of single-node database

  • Large-scale Internet programs have a large user base, so the architecture must be specially designed
  • A single-node database cannot meet performance requirements
  • The single-node database has no redundant design and cannot meet high availability requirements.

Single-node MySQL performance bottleneck

During the 2016 Spring Festival, WeChat red-envelope traffic was enormous and put the database under heavy load.

Common MySQL cluster solutions

Among the common MySQL cluster solutions, PXC is recommended, because weak consistency causes problems: for example, node A's database shows that my purchase succeeded, while node B's database shows that it did not, which is troublesome. PXC reports success only after the write has been committed on every node. In PXC, synchronization is bidirectional, so data can be read and written on any node, whereas Replication only copies data in one direction. The nodes communicate with each other over open ports; if the firewall blocks these ports, PXC cannot synchronize and will not return success.

Replication

  • It is fast, but can only guarantee weak consistency. It is suitable for saving data of low value, such as logs, posts, news, etc.
  • With a master-slave structure, data written to the master will be synchronized to the slave and can be read from the slave; however, data written to the slave cannot be synchronized to the master.
  • With asynchronous replication, the master returns success to the client as soon as its own write succeeds; replication to the slave may still fail, in which case the data cannot be read from the slave.

PXC (Percona XtraDB Cluster)

  • It is slow but can ensure strong consistency. It is suitable for storing high-value data, such as orders, customers, payments, etc.
  • Data synchronization is bidirectional. Writing data on any node will be synchronized to all other nodes, and data can be read and written simultaneously on any node.
  • With synchronous replication, when writing data to any node, success is returned to the client only after all nodes have been successfully synchronized. The transaction is either committed simultaneously on all nodes or not committed.

It is recommended to use PXC with Percona Server (an improved version of MySQL with greatly improved performance).

PXC's strong data consistency

PXC uses synchronous replication: a transaction is either committed on all cluster nodes at the same time or not committed at all. The Replication scheme uses asynchronous replication, which cannot guarantee data consistency.

PXC cluster installation introduction

Install the PXC cluster in Docker, using the PXC official image in the Docker repository: https://hub.docker.com/r/percona/percona-xtradb-cluster

1. Pull down the PXC image from the official Docker repository:

docker pull percona/percona-xtradb-cluster

Or load the image from a local archive

docker load < /home/soft/pxc.tar.gz

Installation Complete:

[root@localhost ~]# docker pull percona/percona-xtradb-cluster
Using default tag: latest
Trying to pull repository docker.io/percona/percona-xtradb-cluster ... 
latest: Pulling from docker.io/percona/percona-xtradb-cluster
ff144d3c0ab1: Pull complete 
eafdff1524b5: Pull complete 
c281665399a2: Pull complete 
c27d896755b2: Pull complete 
c43c51f1cccf: Pull complete 
6eb96f41c54d: Pull complete 
4966940ec632: Pull complete 
2bafadcea292: Pull complete 
3c2c0e21b695: Pull complete 
52a8c2e9228e: Pull complete 
f3f28eb1ce04: Pull complete 
d301ece75f56: Pull complete 
3d24904bec3c: Pull complete 
1053c2982c37: Pull complete 
Digest: sha256:17c64dacbb9b62bd0904b4ff80dd5973b2d2d931ede2474170cbd642601383bd
Status: Downloaded newer image for docker.io/percona/percona-xtradb-cluster:latest
[root@localhost ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/percona/percona-xtradb-cluster latest 70b3670450ef 2 months ago 408 MB

2. Rename (tag) the image, since the original name is too long:

docker tag percona/percona-xtradb-cluster:latest pxc

Then the original image can be deleted

[root@localhost ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/percona/percona-xtradb-cluster latest 70b3670450ef 2 months ago 408 MB
pxc latest 70b3670450ef 2 months ago 408 MB
docker.io/java latest d23bdf5b1b1b 2 years ago 643 MB
[root@localhost ~]# docker rmi docker.io/percona/percona-xtradb-cluster
Untagged: docker.io/percona/percona-xtradb-cluster:latest
Untagged: docker.io/percona/percona-xtradb-cluster@sha256:17c64dacbb9b62bd0904b4ff80dd5973b2d2d931ede2474170cbd642601383bd
[root@localhost ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
pxc latest 70b3670450ef 2 months ago 408 MB
docker.io/java latest d23bdf5b1b1b 2 years ago 643 MB

3. For security reasons, create a Docker internal network for the PXC cluster

# Create a network segment
docker network create --subnet=172.18.0.0/24 net1
# View the network segment
docker network inspect net1
# Delete the network segment
# docker network rm net1

4. Create a Docker volume:
When using Docker, business data should be kept on the host machine via directory mapping, so that the data stays independent of the container. However, PXC running inside the container cannot use a mapped directory directly; the workaround is to map a Docker volume instead.

# Create a data volume named v1 (--name can be omitted)
docker volume create --name v1

View Data Volume

docker inspect v1

result:

[root@localhost ~]# docker inspect v1
[
  {
    "Driver": "local",
    "Labels": {},
    "Mountpoint": "/var/lib/docker/volumes/v1/_data",#This is the save location on the host machine"Name": "v1",
    "Options": {},
    "Scope": "local"
  }
]

Deleting a Data Volume

docker volume rm v1

Create 5 data volumes

# Create 5 data volumes
docker volume create --name v1
docker volume create --name v2
docker volume create --name v3
docker volume create --name v4
docker volume create --name v5
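
Equivalently, the five volumes can be created with one short shell loop (just a convenience sketch; it produces the same v1-v5 volumes as the commands above):

for i in 1 2 3 4 5; do
  docker volume create --name "v$i"   # same as running docker volume create five times
done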

5. Create 5 PXC containers:

# Create 5 PXC containers to form a cluster
# The first node
docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -v v1:/var/lib/mysql --name=node1 --network=net1 --ip 172.18.0.2 pxc
# After the first node has started, wait a while for MySQL to initialize before starting the other nodes.

# The second node
docker run -d -p 3307:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node1 -v v2:/var/lib/mysql --name=node2 --net=net1 --ip 172.18.0.3 pxc
# The third node
docker run -d -p 3308:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node1 -v v3:/var/lib/mysql --name=node3 --net=net1 --ip 172.18.0.4 pxc
# The fourth node
docker run -d -p 3309:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node1 -v v4:/var/lib/mysql --name=node4 --net=net1 --ip 172.18.0.5 pxc
# The fifth node
docker run -d -p 3310:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node1 -v v5:/var/lib/mysql --name=node5 --net=net1 --ip 172.18.0.6 pxc

Check:

[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f4708ce32209 pxc "/entrypoint.sh " About a minute ago Up About a minute 4567-4568/tcp, 0.0.0.0:3309->3306/tcp node4
bf612f9586bc pxc "/entrypoint.sh " 17 minutes ago Up 17 minutes 4567-4568/tcp, 0.0.0.0:3310->3306/tcp node5
9fdde5e6becd pxc "/entrypoint.sh " 17 minutes ago Up 17 minutes 4567-4568/tcp, 0.0.0.0:3308->3306/tcp node3
edd5794175b6 pxc "/entrypoint.sh " 18 minutes ago Up 18 minutes 4567-4568/tcp, 0.0.0.0:3307->3306/tcp node2
33d842de7f42 pxc "/entrypoint.sh " 21 minutes ago Up 21 minutes 0.0.0.0:3306->3306/tcp, 4567-4568/tcp node1
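
Besides docker ps, you can confirm that all five nodes have really joined the same cluster by querying the Galera status variable wsrep_cluster_size on any node (a quick check, using the root password abc123456 set above):

# Run the check from inside node1; the value should be 5 once every node has joined
docker exec -it node1 mysql -uroot -pabc123456 -e "SHOW STATUS LIKE 'wsrep_cluster_size';"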

The necessity of database load balancing

Although a cluster is built, database load balancing is not used. A single node handles all requests, resulting in high load and poor performance.

Send requests evenly to every node in the cluster.

  • All requests are sent to a single node, which is overloaded and has low performance, while other nodes are idle.
  • Using Haproxy for load balancing can evenly send requests to each node, with low single-node load and good performance.

Comparison of load balancing middleware

Load balancing presupposes a database cluster. With five nodes, if every request goes to the first one, that node may be overloaded and fail, so a better approach is to send requests to different nodes, which requires a middleware to forward them. The better-known options are Nginx and HAProxy: Nginx supports plug-ins but has only recently added TCP forwarding, while HAProxy is a long-established forwarding middleware, so HAProxy is used here. To use HAProxy, download the image from the official repository and write the configuration file yourself (the image does not ship with one); then map the configuration folder into the container when running the image. The configuration opens port 3306 for database requests, which are routed to different databases based on heartbeat checks, and port 8888 for monitoring the cluster. The file also defines the user that performs heartbeat detection against the databases (to determine which nodes are available), the load-balancing algorithm (such as round robin), the maximum number of connections, timeouts, and cluster monitoring options. After the configuration file is written, run the image, enter the container, and start HAProxy with that configuration. HAProxy itself behaves like a database endpoint (it stores no data and only forwards requests) and keeps checking the other nodes.

Install Haproxy

1. Pull the haproxy image from the Docker repository: https://hub.docker.com/_/haproxy

docker pull haproxy
[root@localhost ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/haproxy latest 11fa4d7ff427 11 days ago 72.2 MB

2. Create a Haproxy configuration file for the Haproxy container to use (the Docker image does not generate one, so we create it ourselves on the host machine)
For more information about the configuration file, please refer to: https://www.cnblogs.com/wyt007/p/10829184.html

# Use directory mapping when starting the container so that it can read this configuration file
mkdir -p /home/soft/haproxy
touch /home/soft/haproxy/haproxy.cfg

haproxy.cfg

# haproxy.cfg
global
  # Working directory
  chroot /usr/local/etc/haproxy
  # Log file, use the local5 log device (/var/log/local5) in the rsyslog service, level info
  log 127.0.0.1 local5 info
  # Run as a daemon
  daemon

defaults
  log global
  mode http
  # Log format
  option httplog
  # Do not record load-balancing heartbeat checks in the log
  option dontlognull
  # Connection timeout (milliseconds)
  timeout connect 5000
  # Client timeout (milliseconds)
  timeout client 50000
  # Server timeout (milliseconds)
  timeout server 50000

# Monitoring interface
listen admin_stats
  # Monitoring interface access IP and port
  bind 0.0.0.0:8888
  # Access protocol
  mode http
  # URI relative address
  stats uri /dbs
  # Statistics report format
  stats realm Global\ statistics
  # Login account information
  stats auth admin:abc123456

# Database load balancing
listen proxy-mysql
  # Access IP and port
  bind 0.0.0.0:3306
  # Network protocol
  mode tcp
  # Load balancing algorithm (round robin)
  #   Round robin: roundrobin
  #   Static weight: static-rr
  #   Least connections: leastconn
  #   Source IP: source
  balance roundrobin
  # Log format
  option tcplog
  # Create a haproxy user with no permissions and an empty password in MySQL;
  # HAProxy uses this account to perform heartbeat detection on the MySQL databases
  option mysql-check user haproxy
  server MySQL_1 172.18.0.2:3306 check weight 1 maxconn 2000
  server MySQL_2 172.18.0.3:3306 check weight 1 maxconn 2000
  server MySQL_3 172.18.0.4:3306 check weight 1 maxconn 2000
  server MySQL_4 172.18.0.5:3306 check weight 1 maxconn 2000
  server MySQL_5 172.18.0.6:3306 check weight 1 maxconn 2000
  # Use keepalive to detect dead connections
  option tcpka
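
Before starting HAProxy with this file, it can be validated with HAProxy's check mode (a quick sanity test; the path assumes the /usr/local/etc/haproxy mapping used when the container is created below):

# -c only checks the configuration file and exits, without starting the proxy
haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg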

3. In the database cluster, create an unprivileged user named haproxy with an empty password, which Haproxy will use to perform heartbeat detection on the MySQL databases

create user 'haproxy'@'%' identified by '';
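
Because PXC replicates DDL statements to every node, it is enough to run this statement on a single node; for example, through the node1 container (a sketch using the root password from earlier):

# Creating the user on node1 is enough; PXC propagates it to the other nodes
docker exec -it node1 mysql -uroot -pabc123456 -e "CREATE USER 'haproxy'@'%' IDENTIFIED BY '';"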

4. Create a Haproxy container (name=h1 for high availability)

# Here you need to add --privileged
docker run -it -d -p 4001:8888 -p 4002:3306 -v /home/soft/haproxy:/usr/local/etc/haproxy --name h1 --net=net1 --ip 172.18.0.7 --privileged haproxy

5. Enter the container

docker exec -it h1 bash

6. Start Haproxy in container bash

haproxy -f /usr/local/etc/haproxy/haproxy.cfg

Next, you can open the Haproxy monitoring interface in a browser on port 4001; the username admin and password abc123456 are the ones defined in the configuration file.
I visited http://192.168.63.144:4001/dbs and logged in with that username and password (a small aside: the page uses HTTP Basic authentication, and for some reason it was blocked in my Chrome, so I ended up using Firefox).
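
The same statistics page can also be checked from the command line with curl, using the credentials from the configuration file (a quick check; adjust the IP to your own host):

# Fetch the HAProxy stats page with Basic authentication
curl -u admin:abc123456 http://192.168.63.144:4001/dbs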

7. At this point, we can manually stop one of the Docker nodes and observe the changes (the monitoring page will show that node as down).

8. Haproxy does not store data; it only forwards requests. You can connect to the database through Haproxy on port 4002, using the username and password of the database cluster.
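
For example, from the host machine you can connect through HAProxy with an ordinary MySQL client (a sketch, assuming a mysql client is installed on the host; the request is forwarded to one of the PXC nodes):

# Connect to the cluster through HAProxy's forwarded port 4002
mysql -h 192.168.63.144 -P 4002 -uroot -pabc123456 -e "SHOW DATABASES;"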

Why use dual-machine hot standby

Single-node Haproxy does not have high availability and must have a redundant design

Dual-machine means two request-processing programs, for example two Haproxy instances: when one fails, the other takes over. The hot standby here is implemented with Keepalived, which is installed inside each Haproxy container.

Virtual IP address

The Linux system can define multiple IP addresses in one network card and assign these addresses to multiple applications. These addresses are virtual IPs. The most critical technology of Haproxy's dual-machine hot standby solution is virtual IP.

The key is the virtual IP. Define a virtual IP, then install Keepalived in each of the two Haproxy containers; because the Haproxy image is Ubuntu-based, apt-get is used for the installation. Keepalived's job is to compete for the virtual IP: the instance that seizes it becomes the primary server, and the one that does not becomes the backup. The two Keepalived instances then perform heartbeat detection on each other (much like the heartbeat checks between MySQL cluster nodes); if the primary dies, the backup seizes the IP. So before starting Keepalived you must first edit its configuration file: how the IP is preempted, what the weight (priority) is, what the virtual IP is, and what credentials the instances use. After configuring and starting it, you can ping the virtual IP to check that it works, and then map the virtual IP to a LAN IP.

Using Keepalived to implement hot standby

  • Define Virtual IP
  • Start two Haproxy containers in Docker, and install the Keepalived program (hereinafter referred to as KA) in each container.
  • Two KAs will compete for the virtual IP. If one gets it, the other will wait. The one that gets it will be the primary server, and the one that doesn't get it will be the backup server.
  • A heartbeat detection will be performed between the two KAs. If the backup server does not receive a heartbeat response from the primary server, it means that the primary server has failed. Then the backup server can compete for the virtual IP and continue to work.
  • We send database requests to the virtual IP. If one Haproxy fails, another one can take over.

Haproxy dual-machine hot standby solution

Create two Haproxy in Docker and seize the virtual IP in Docker through Keepalived

The virtual IP in Docker cannot be accessed from the external network, so you need to use the host Keepalived to map it to a virtual IP that can be accessed from the external network.

Install Keepalived

1. Enter the Haproxy container and install Keepalived:

docker exec -it h1 bash
apt-get update
apt-get install keepalived

2. Keepalived configuration file (Keepalived.conf):
Keepalived's configuration file is /etc/keepalived/keepalived.conf

# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
  # The identity of Keepalived (the MASTER service seizes the virtual IP; the BACKUP server does not)
  state MASTER
  # Docker network card device where the virtual IP lives
  interface eth0
  # Virtual routing identifier, from 0 to 255; MASTER and BACKUP must use the same value
  virtual_router_id 51
  # MASTER has a higher weight than BACKUP; the larger the number, the higher the priority
  priority 100
  # Interval in seconds between synchronization checks of the MASTER and BACKUP nodes; must be the same on both
  advert_int 1
  # Master/backup authentication; both must use the same password to communicate normally
  authentication {
    auth_type PASS
    auth_pass 123456
  }
  # Virtual IP; multiple virtual IP addresses can be set, one per line
  virtual_ipaddress {
    172.18.0.201
  }
}

3. Start Keepalived

service keepalived start

After it starts successfully, you can use ip a inside the container to check that the virtual IP has been added to the network interface; you should also be able to ping the virtual IP 172.18.0.201 from the host machine.
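
For example, the two checks just described look like this (a minimal sketch):

# Inside the h1 container: the virtual IP should appear on eth0
ip a | grep 172.18.0.201
# On the Docker host: the virtual IP should answer pings
ping -c 3 172.18.0.201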

4. Following the steps above, create another Haproxy container. Note that the mapped host ports must not clash with the ones already in use, while the Haproxy configuration stays the same. Then install Keepalived in that container with essentially the same configuration (the priority weight can be adjusted). With this, the Haproxy dual-machine hot standby solution is basically in place. The commands are as follows:

Create a Haproxy container (name=h2 for high availability)

# Here you need to add --privileged
docker run -it -d -p 4003:8888 -p 4004:3306 -v /home/soft/haproxy:/usr/local/etc/haproxy --name h2 --net=net1 --ip 172.18.0.8 --privileged haproxy

Entering the container

docker exec -it h2 bash

Start Haproxy in container bash

haproxy -f /usr/local/etc/haproxy/haproxy.cfg

Next, you can open the Haproxy monitoring interface in a browser on port 4003; the username admin and password abc123456 are the ones defined in the configuration file.
I visited http://192.168.63.144:4003/dbs and logged in with that username and password (a small aside: the page uses HTTP Basic authentication, and for some reason it was blocked in my Chrome, so I ended up using Firefox).

Install Keepalived:

apt-get update
apt-get install keepalived

Keepalived configuration file (Keepalived.conf):
Keepalived's configuration file is /etc/keepalived/keepalived.conf

# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
  # The identity of Keepalived (the MASTER service seizes the virtual IP; the BACKUP server does not)
  state MASTER
  # Docker network card device where the virtual IP lives
  interface eth0
  # Virtual routing identifier, from 0 to 255; MASTER and BACKUP must use the same value
  virtual_router_id 51
  # MASTER has a higher weight than BACKUP; the larger the number, the higher the priority
  priority 100
  # Interval in seconds between synchronization checks of the MASTER and BACKUP nodes; must be the same on both
  advert_int 1
  # Master/backup authentication; both must use the same password to communicate normally
  authentication {
    auth_type PASS
    auth_pass 123456
  }
  # Virtual IP; multiple virtual IP addresses can be set, one per line
  virtual_ipaddress {
    172.18.0.201
  }
}

Start Keepalived

service keepalived start

After it starts successfully, you can use ip a inside the container to check that the virtual IP has been added to the network interface; you should also be able to ping the virtual IP 172.18.0.201 from the host machine.

Implementing external network access to virtual IP

View the current LAN IP allocation:

yum install nmap -y
nmap -sP 192.168.1.0/24

Install Keepalived on the host

yum install keepalived

The host Keepalived configuration is as follows (/etc/keepalived/keepalived.conf):

vrrp_instance VI_1 {
  state MASTER
  # This is the host's network card; use ip a to check which interface (here ens33) your machine is currently using
  interface ens33
  virtual_router_id 100
  priority 100
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass 1111
  }
  virtual_ipaddress {
    # This is the virtual IP defined on the host machine; it must be in the same network segment as the host's network card.
    # My host network card IP is 192.168.63.144, so the virtual IP is set to .160
    192.168.63.160
  }
}

# Port that accepts monitoring traffic; the web monitoring page is reached through it
virtual_server 192.168.63.160 8888 {
  delay_loop 3
  lb_algo rr
  lb_kind NAT
  persistence_timeout 50
  protocol TCP
  # Forward the received traffic to the virtual IP and port of the Docker-side service; they must match the Docker-side configuration
  real_server 172.18.0.201 8888 {
    weight 1
  }
}

# Port that accepts database traffic; the host database port is 3306, so this must also match the port clients use
virtual_server 192.168.63.160 3306 {
  delay_loop 3
  lb_algo rr
  lb_kind NAT
  persistence_timeout 50
  protocol TCP
  # Likewise, the IP and port forwarded to the Docker-side database service must match the Docker-side configuration
  real_server 172.18.0.201 3306 {
    weight 1
  }
}

Start the Keepalived service

service keepalived start
#service keepalived status
#service keepalived stop

After that, other machines can reach the corresponding ports of 172.18.0.201 inside the host's Docker network through ports 8888 and 3306 of the virtual IP 192.168.63.160.
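
For example, from another computer on the LAN (a sketch, assuming a MySQL client and curl are available there):

# Database traffic enters through the host virtual IP and is forwarded to HAProxy inside Docker
mysql -h 192.168.63.160 -P 3306 -uroot -pabc123456 -e "SHOW DATABASES;"
# The HAProxy monitoring page is reachable the same way
curl -u admin:abc123456 http://192.168.63.160:8888/dbs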

How to suspend the PXC cluster

vi /etc/sysctl.conf
# Add net.ipv4.ip_forward=1 to the file, then restart the network service
systemctl restart network

Then suspend the virtual machine

Hot backup data

Cold backup

  • Cold backup is a backup method when the database is closed. The usual practice is to copy the data files.
  • It is a simple and safe backup method and cannot be backed up while the database is running.
  • Large websites cannot shut down their business to back up data, so cold backup is not the best choice.

Hot Backup

Hot backup is to back up data while the system is running.

Common hot backup solutions for MySQL include LVM and XtraBackup

  • LVM: uses Linux volume snapshots to back up the partition and can back up any database; however, it locks the database (read-only) during the backup, and the commands are complex
  • XtraBackup: No need to lock tables, and free

XtraBackup

XtraBackup is an online hot backup tool based on InnoDB. It is open source and free, supports online hot backup, occupies little disk space, and can back up and restore MySQL databases very quickly.

  • The table is not locked during the backup process, which is fast and reliable
  • The backup process will not interrupt the ongoing transactions
  • Backup data is compressed, taking up less disk space

Full backup and incremental backup

  • Full backup: back up all data. The backup process takes a long time and takes up a lot of space. The first backup should use full backup
  • Incremental backup: only back up the data that has changed since the previous backup. The backup takes less time and less space. From the second backup onward, use incremental backups (a sketch follows this list)
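
As an illustration of the incremental case (not part of the full-backup walkthrough below, just a sketch of xtrabackup's incremental mode, which takes an earlier backup directory as its base via --incremental-basedir):

# Initial full backup
xtrabackup --backup -uroot -pabc123456 --target-dir=/data/backup/full
# Later, back up only what has changed since the full backup
xtrabackup --backup -uroot -pabc123456 --target-dir=/data/backup/inc1 --incremental-basedir=/data/backup/full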

PXC full backup

The backup should be performed in a container of a PXC node, but the backup data should be saved in the host machine. So the directory mapping technology is used. First create a new Docker volume:

docker volume create backup

Select a PXC node node1, stop and delete its container, and then recreate a node1 container with the backup directory mapping added

docker stop node1
docker rm node1  # Database data is stored in Docker volume v1 and will not be lost

# Parameter changes:
# 1. -e CLUSTER_JOIN=node2: originally the other nodes joined the cluster through node1; now that node1 is recreated, it must join the cluster through another node
# 2. -v backup:/data: map the Docker volume backup to the /data directory of the container
docker run -d -u root -p 3306:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node2 -v v1:/var/lib/mysql -v backup:/data --network=net1 --ip 172.18.0.2 --name=node1 pxc

Install percona-xtrabackup-24 in the node1 container

docker exec -it node1 bash
apt-get update
apt-get install percona-xtrabackup-24

After that, you can execute the following command to perform a full backup. The backed-up data will be saved in the /data/backup/full directory:

mkdir /data/backup
mkdir /data/backup/full
# Not recommended (outdated):
# innobackupex --backup -u root -p abc123456 --target-dir=/data/backup/full
xtrabackup --backup -uroot -pabc123456 --target-dir=/data/backup/full

The official documentation no longer recommends using innobackupex, but instead recommends using the xtrabackup command.

PXC full restore

The database can be hot-backed up, but cannot be hot-restored, otherwise it will cause conflicts between business data and restored data.

For the PXC cluster, in order to avoid data synchronization conflicts among nodes during the restoration process, we must first disband the original cluster and delete the nodes. Then create a new node with a blank database, perform the restore, and finally establish other cluster nodes.

Before restoring, you must roll back the uncommitted transactions saved in the hot backup, and restart MySQL after restoring.

Stop and delete all nodes in the PXC cluster

docker stop node1 node2 node3 node4 node5
docker rm node1 node2 node3 node4 node5
docker volume rm v1 v2 v3 v4 v5

Follow the previous steps to recreate the node1 container, enter the container, and perform a cold restore

# Create the volume
docker volume create v1
# Create the container
docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -v v1:/var/lib/mysql -v backup:/data --name=node1 --network=net1 --ip 172.18.0.2 pxc
# Enter the container as root
docker exec -it -uroot node1 bash
# Delete the data
rm -rf /var/lib/mysql/*
# Preparation phase
xtrabackup --prepare --target-dir=/data/backup/full/
# Perform the cold restore
xtrabackup --copy-back --target-dir=/data/backup/full/
# Change the owner of the restored database files
chown -R mysql:mysql /var/lib/mysql
# After exiting the container, restart the container
docker stop node1
docker start node1
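
Once node1 is running again with the restored data, the remaining nodes can be recreated and rejoined through node1, following the same pattern as the original cluster setup (a sketch for node2; node3 to node5 are analogous):

# Recreate the volume and container for node2 and join the cluster through node1
docker volume create --name v2
docker run -d -p 3307:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node1 -v v2:/var/lib/mysql --name=node2 --net=net1 --ip 172.18.0.3 pxc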

This is the end of this article about the implementation of Docker deployment of MySQL cluster. For more relevant content about Docker deployment of MySQL cluster, please search for previous articles on 123WORDPRESS.COM or continue to browse the following related articles. I hope you will support 123WORDPRESS.COM in the future!

You may also be interested in:
  • How to build a MySQL PXC cluster
  • MySQL high availability cluster deployment and failover implementation
  • MySQL 5.7 cluster configuration steps
  • Detailed steps for installing MySQL using cluster rpm
  • Detailed explanation of MySQL cluster: one master and multiple slaves architecture implementation
  • How to deploy MySQL 5.7 & 8.0 master-slave cluster using Docker
  • Detailed explanation of galera-cluster deployment in cluster mode of MySQL
  • Example of how to build a Mysql cluster with docker
  • MySQL Cluster Basic Deployment Tutorial
  • How to build a MySQL high-availability and high-performance cluster
