Abstract: Ever since I started using Docker, I seem to have developed the habit of installing any application software the Docker way. Today I want to try using Docker to build a Redis cluster.

First, some theory. Redis Cluster is the distributed solution for Redis; it removes the limits of a single, centralized Redis instance. A distributed database must first solve the problem of mapping the whole data set onto multiple nodes according to a partitioning rule — here, hash partitioning. Redis Cluster uses the virtual slot variant of hash partitioning: every key is mapped to one of the slots 0~16383 by a hash function, using the formula slot = CRC16(key) & 16383. Each node is responsible for a portion of the slots and for the key-value data mapped to those slots.

1. Create a Redis Docker base image

Download the Redis source package, version 4.0.1:

[root@etcd1 tmp]# mkdir docker_redis_cluster
[root@etcd1 tmp]# cd docker_redis_cluster/
[root@etcd2 docker_redis_cluster]# wget http://download.redis.io/releases/redis-4.0.1.tar.gz

Unpack and compile Redis:

[root@etcd1 docker_redis_cluster]# tar zxvf redis-4.0.1.tar.gz
[root@etcd1 docker_redis_cluster]# cd redis-4.0.1/
[root@etcd1 redis-4.0.1]# make

Modify the Redis configuration:

[root@etcd3 redis-4.0.1]# vi /tmp/docker_redis_cluster/redis-4.0.1/redis.conf

Change the bind address:

# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#bind 127.0.0.1
bind 0.0.0.0

Make sure daemonize is set to no (inside a container, redis-server must run in the foreground):

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize no

Uncomment the requirepass item and set a password:

# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared

becomes

# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
requirepass 123456

Because a password is now required, the master-slave replication connection also has to authenticate, so masterauth must be configured as well:

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>

becomes

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>
masterauth 123456

Set the log path:

# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile "/var/log/redis/redis-server.log"

Configure the cluster-related settings by removing the comment in front of each item:

# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
cluster-enabled yes

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
cluster-config-file nodes-6379.conf

# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
cluster-node-timeout 15000

Create the Dockerfile:

[root@etcd3 docker_redis_cluster]# cd /tmp/docker_redis_cluster
[root@etcd3 docker_redis_cluster]# vi Dockerfile

# Redis
# Version 4.0.1
FROM centos:7
ENV REDIS_HOME /usr/local
ADD redis-4.0.1.tar.gz /                         # The local redis source package is copied into the image root; ADD unpacks it automatically. It must sit next to the Dockerfile and be referenced with a relative path
RUN mkdir -p $REDIS_HOME/redis                   # Create the installation directory
ADD redis-4.0.1/redis.conf $REDIS_HOME/redis/    # Copy the configuration modified above into the installation directory
RUN yum -y update                                # Update installed packages
RUN yum install -y gcc make                      # Install the tools required for compilation
WORKDIR /redis-4.0.1
RUN make
RUN mv /redis-4.0.1/src/redis-server $REDIS_HOME/redis/   # After compilation only the redis-server executable is needed in the container
WORKDIR /
RUN rm -rf /redis-4.0.1                          # Delete the unpacked source
RUN yum remove -y gcc make                       # gcc and make are no longer needed once compilation is done
VOLUME ["/var/log/redis"]                        # Add a data volume for the logs
EXPOSE 6379                                      # Expose port 6379; multiple ports could be exposed, but it is not necessary here

PS. This base image is not directly runnable, so it contains no ENTRYPOINT or CMD instruction.

Build the image:

# Switch to a Chinese registry mirror
[root@etcd3 docker_redis_cluster]# vi /etc/docker/daemon.json
{
    "registry-mirrors": ["https://registry.docker-cn.com"]
}

# Build (tagged 4.0.1 so the node image below can reference it)
[root@etcd3 docker_redis_cluster]# docker build -t hakimdstx/cluster-redis:4.0.1 .
...
Complete!
 ---> 546cb1d34f35
Removing intermediate container 6b6556c5f28d
Step 14/15 : VOLUME /var/log/redis
 ---> Running in 05a6642e4046
 ---> e7e2fb8676b2
Removing intermediate container 05a6642e4046
Step 15/15 : EXPOSE 6379
 ---> Running in 5d7abe1709e2
 ---> 2d1322475f79
Removing intermediate container 5d7abe1709e2
Successfully built 2d1322475f79

The base image is now built.
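Before moving on, it can be worth a quick sanity check that the image really contains a working redis-server binary. This is only a sketch; it assumes the image was tagged hakimdstx/cluster-redis:4.0.1 as above and that redis-server ended up in /usr/local/redis/ as the Dockerfile intends. Since the base image defines no ENTRYPOINT, the command passed to docker run is executed directly:

# Print the compiled Redis version from inside a throw-away container
docker run --rm hakimdstx/cluster-redis:4.0.1 /usr/local/redis/redis-server --version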
During the build you may hit the error "Public key for glibc-headers-2.17-222.el7.x86_64.rpm is not installed". In that case, import the CentOS GPG key in the Dockerfile before the yum steps:

RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
RUN yum -y update              # Update installed packages
RUN yum install -y gcc make    # Install the tools needed for compilation

View the image:

[root@etcd3 docker_redis_cluster]# docker images
REPOSITORY                TAG      IMAGE ID       CREATED          SIZE
hakimdstx/cluster-redis   4.0.1    1fca5a08a4c7   14 seconds ago   435 MB
centos                    7        49f7960eb7e4   2 days ago       200 MB

The Redis base image is now complete.

2. Make a Redis node image

Create a Redis node image based on the base image built above:

[root@etcd3 tmp]# mkdir docker_redis_nodes
[root@etcd3 tmp]# cd docker_redis_nodes
[root@etcd3 docker_redis_nodes]# vi Dockerfile

# Redis Node
# Version 4.0.1
FROM hakimdstx/cluster-redis:4.0.1
# MAINTAINER_INFO
MAINTAINER hakim [email protected]
ENTRYPOINT ["/usr/local/redis/redis-server", "/usr/local/redis/redis.conf"]

Build the Redis node image:

[root@etcd3 docker_redis_nodes]# docker build -t hakimdstx/nodes-redis:4.0.1 .
Sending build context to Docker daemon 2.048 kB
Step 1/3 : FROM hakimdstx/cluster-redis:4.0.1
 ---> 1fca5a08a4c7
Step 2/3 : MAINTAINER hakim [email protected]
 ---> Running in cc6e07eb2c36
 ---> 55769d3bfacb
Removing intermediate container cc6e07eb2c36
Step 3/3 : ENTRYPOINT /usr/local/redis/redis-server /usr/local/redis/redis.conf
 ---> Running in f5dedf88f6f6
 ---> da64da483559
Removing intermediate container f5dedf88f6f6
Successfully built da64da483559

View the images:

[root@etcd3 docker_redis_nodes]# docker images
REPOSITORY                TAG      IMAGE ID       CREATED          SIZE
hakimdstx/nodes-redis     4.0.1    da64da483559   51 seconds ago   435 MB
hakimdstx/cluster-redis   4.0.1    1fca5a08a4c7   9 minutes ago    435 MB
centos                    7        49f7960eb7e4   2 days ago       200 MB

3. Run the Redis cluster

Run the Redis containers:

[root@etcd3 docker_redis_nodes]# docker run -d --name redis-6379 -p 6379:6379 hakimdstx/nodes-redis:4.0.1
1673a7d859ea83257d5bf14d82ebf717fb31405c185ce96a05f597d8f855aa7d
[root@etcd3 docker_redis_nodes]# docker run -d --name redis-6380 -p 6380:6379 hakimdstx/nodes-redis:4.0.1
df6ebce6f12a6f3620d5a29adcfbfa7024e906c3af48f21fa7e1fa524a361362
[root@etcd3 docker_redis_nodes]# docker run -d --name redis-6381 -p 6381:6379 hakimdstx/nodes-redis:4.0.1
396e174a1d9235228b3c5f0266785a12fb1ea49efc7ac755c9e7590e17aa1a79
[root@etcd3 docker_redis_nodes]# docker run -d --name redis-6382 -p 6382:6379 hakimdstx/nodes-redis:4.0.1
d9a71dd3f969094205ffa7596c4a04255575cdd3acca2d47fe8ef7171a3be528
[root@etcd3 docker_redis_nodes]# docker run -d --name redis-6383 -p 6383:6379 hakimdstx/nodes-redis:4.0.1
73e4f843d8cb28595456e21b04f97d18ce1cdf8dc56d1150844ba258a3781933
[root@etcd3 docker_redis_nodes]# docker run -d --name redis-6384 -p 6384:6379 hakimdstx/nodes-redis:4.0.1
10c62aafa4dac47220daf5bf3cec84406f086d5261599b54ec6c56bb7da97d6d

View the container information:

[root@etcd3 redis]# docker ps
CONTAINER ID   IMAGE                         COMMAND                  CREATED          STATUS          PORTS                    NAMES
10c62aafa4da   hakimdstx/nodes-redis:4.0.1   "/usr/local/redis/..."   3 seconds ago    Up 2 seconds    0.0.0.0:6384->6379/tcp   redis-6384
73e4f843d8cb   hakimdstx/nodes-redis:4.0.1   "/usr/local/redis/..."   12 seconds ago   Up 10 seconds   0.0.0.0:6383->6379/tcp   redis-6383
d9a71dd3f969   hakimdstx/nodes-redis:4.0.1   "/usr/local/redis/..."   20 seconds ago   Up 18 seconds   0.0.0.0:6382->6379/tcp   redis-6382
396e174a1d92   hakimdstx/nodes-redis:4.0.1   "/usr/local/redis/..."   3 days ago       Up 3 days       0.0.0.0:6381->6379/tcp   redis-6381
df6ebce6f12a   hakimdstx/nodes-redis:4.0.1   "/usr/local/redis/..."   3 days ago       Up 3 days       0.0.0.0:6380->6379/tcp   redis-6380
1673a7d859ea   hakimdstx/nodes-redis:4.0.1   "/usr/local/redis/..."   3 days ago       Up 3 days       0.0.0.0:6379->6379/tcp   redis-6379
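The six docker run commands above could equally be generated with a small loop. This is just a convenience sketch using the same image and naming scheme as above:

#!/bin/bash
# Start redis-6379 .. redis-6384, each mapping its host port to container port 6379
for port in $(seq 6379 6384); do
    docker run -d --name redis-${port} -p ${port}:6379 hakimdstx/nodes-redis:4.0.1
done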
Run the Redis cluster

Check info replication through a remote connection:

[root@etcd2 ~]# redis-cli -h 192.168.10.52 -p 6379
192.168.10.52:6379> info replication
NOAUTH Authentication required.
192.168.10.52:6379> auth 123456
OK
192.168.10.52:6379> info replication
# Replication
role:master
connected_slaves:0
master_replid:2f0a7b50aed699fa50a79f3f7f9751a070c50ee9
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
192.168.10.52:6379>

# The other nodes report the same basic information

Because a password was set earlier, a client has to authenticate with auth before any command is accepted. The output also shows that every node is currently a master (role:master), which is obviously not what we want. Before configuring the cluster, we need the current IP addresses of all containers:

[root@etcd3 redis]# docker ps
CONTAINER ID   IMAGE                         COMMAND                  CREATED          STATUS          PORTS                    NAMES
10c62aafa4da   hakimdstx/nodes-redis:4.0.1   "/usr/local/redis/..."   3 seconds ago    Up 2 seconds    0.0.0.0:6384->6379/tcp   redis-6384
73e4f843d8cb   hakimdstx/nodes-redis:4.0.1   "/usr/local/redis/..."   12 seconds ago   Up 10 seconds   0.0.0.0:6383->6379/tcp   redis-6383
d9a71dd3f969   hakimdstx/nodes-redis:4.0.1   "/usr/local/redis/..."   20 seconds ago   Up 18 seconds   0.0.0.0:6382->6379/tcp   redis-6382
396e174a1d92   hakimdstx/nodes-redis:4.0.1   "/usr/local/redis/..."   3 days ago       Up 3 days       0.0.0.0:6381->6379/tcp   redis-6381
df6ebce6f12a   hakimdstx/nodes-redis:4.0.1   "/usr/local/redis/..."   3 days ago       Up 3 days       0.0.0.0:6380->6379/tcp   redis-6380
1673a7d859ea   hakimdstx/nodes-redis:4.0.1   "/usr/local/redis/..."   3 days ago       Up 3 days       0.0.0.0:6379->6379/tcp   redis-6379
[root@etcd3 redis]#
[root@etcd3 redis]# docker inspect 10c62aafa4da 73e4f843d8cb d9a71dd3f969 396e174a1d92 df6ebce6f12a 1673a7d859ea | grep IPA
    "SecondaryIPAddresses": null,
    "IPAddress": "172.17.0.7",
        "IPAMConfig": null,
        "IPAddress": "172.17.0.7",
    "SecondaryIPAddresses": null,
    "IPAddress": "172.17.0.6",
        "IPAMConfig": null,
        "IPAddress": "172.17.0.6",
    "SecondaryIPAddresses": null,
    "IPAddress": "172.17.0.5",
        "IPAMConfig": null,
        "IPAddress": "172.17.0.5",
    "SecondaryIPAddresses": null,
    "IPAddress": "172.17.0.4",
        "IPAMConfig": null,
        "IPAddress": "172.17.0.4",
    "SecondaryIPAddresses": null,
    "IPAddress": "172.17.0.3",
        "IPAMConfig": null,
        "IPAddress": "172.17.0.3",
    "SecondaryIPAddresses": null,
    "IPAddress": "172.17.0.2",
        "IPAMConfig": null,
        "IPAddress": "172.17.0.2",

So the mapping is: redis-6379: 172.17.0.2, redis-6380: 172.17.0.3, redis-6381: 172.17.0.4, redis-6382: 172.17.0.5, redis-6383: 172.17.0.6, redis-6384: 172.17.0.7.

Configure the cluster

Cluster-related commands of Redis Cluster:

//cluster
CLUSTER INFO                           Print cluster information.
CLUSTER NODES                          List all nodes currently known to the cluster and related information about them.
//node
CLUSTER MEET <ip> <port>               Add the node at ip:port to the cluster, making it part of the cluster.
CLUSTER FORGET <node_id>               Remove the node identified by node_id from the cluster.
CLUSTER REPLICATE <node_id>            Make the current node a slave of the node identified by node_id.
CLUSTER SAVECONFIG                     Save the node's cluster configuration file to disk.
//slot
CLUSTER ADDSLOTS <slot> [slot ...]     Assign one or more slots to the current node.
CLUSTER DELSLOTS <slot> [slot ...]          Remove one or more slots from the current node.
CLUSTER FLUSHSLOTS                          Remove all slots assigned to the current node, leaving it with no slots at all.
CLUSTER SETSLOT <slot> NODE <node_id>       Assign the slot to the node identified by node_id. If the slot is already assigned to another node, that node must delete it first.
CLUSTER SETSLOT <slot> MIGRATING <node_id>  Migrate the slot from this node to the node identified by node_id.
CLUSTER SETSLOT <slot> IMPORTING <node_id>  Import the slot into this node from the node identified by node_id.
CLUSTER SETSLOT <slot> STABLE               Cancel an import or migration of the slot.
//key
CLUSTER KEYSLOT <key>                       Calculate which slot the key maps to.
CLUSTER COUNTKEYSINSLOT <slot>              Return the number of key-value pairs currently held in the slot.
CLUSTER GETKEYSINSLOT <slot> <count>        Return up to count keys from the slot.

Cluster awareness: the node handshake is the process by which a group of nodes running in cluster mode get to know each other over the Gossip protocol.

192.168.10.52:6379> CLUSTER MEET 172.17.0.3 6379
OK
192.168.10.52:6379> CLUSTER MEET 172.17.0.4 6379
OK
192.168.10.52:6379> CLUSTER MEET 172.17.0.5 6379
OK
192.168.10.52:6379> CLUSTER MEET 172.17.0.6 6379
OK
192.168.10.52:6379> CLUSTER MEET 172.17.0.7 6379
OK
192.168.10.52:6379> CLUSTER NODES
54cb5c2eb8e5f5aed2d2f7843f75a9284ef6785c 172.17.0.3:6379@16379 master - 0 1528697195600 1 connected
f45f9109f2297a83b1ac36f9e1db5e70bbc174ab 172.17.0.4:6379@16379 master - 0 1528697195600 0 connected
ae86224a3bc29c4854719c83979cb7506f37787a 172.17.0.7:6379@16379 master - 0 1528697195600 5 connected
98aebcfe42d8aaa8a3375e4a16707107dc9da683 172.17.0.6:6379@16379 master - 0 1528697194000 4 connected
0bbdc4176884ef0e3bb9b2e7d03d91b0e7e11f44 172.17.0.5:6379@16379 master - 0 1528697194995 3 connected
760e4d0039c5ac13d04aa4791c9e6dc28544d7c7 172.17.0.2:6379@16379 myself,master - 0 1528697195000 2 connected

The six nodes are now joined into one cluster, but the cluster cannot serve requests yet because no slots have been assigned to the nodes.

Assign the slot information

Check the number of slots on 172.17.0.2:6379:

192.168.10.52:6379> CLUSTER INFO
cluster_state:fail
cluster_slots_assigned:0      # The number of assigned slots is 0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:0
cluster_current_epoch:5
cluster_my_epoch:2
cluster_stats_messages_ping_sent:260418
cluster_stats_messages_pong_sent:260087
cluster_stats_messages_meet_sent:10
cluster_stats_messages_sent:520515
cluster_stats_messages_ping_received:260086
cluster_stats_messages_pong_received:260328
cluster_stats_messages_meet_received:1
cluster_stats_messages_received:520415

As shown above, the cluster state is fail because no slots have been assigned; all 16384 slots must be assigned before the cluster becomes usable.

Assign the slots with CLUSTER ADDSLOTS. Each slot can be assigned to only one node, all 16384 slots must be assigned, and the assignments on different nodes must not overlap.
#!/bin/bash
# node1 192.168.10.52 172.17.0.2
n=0
for ((i=n;i<=5461;i++))
do
    /usr/local/bin/redis-cli -h 192.168.10.52 -p 6379 -a 123456 CLUSTER ADDSLOTS $i
done

# node2 192.168.10.52 172.17.0.3
n=5462
for ((i=n;i<=10922;i++))
do
    /usr/local/bin/redis-cli -h 192.168.10.52 -p 6380 -a 123456 CLUSTER ADDSLOTS $i
done

# node3 192.168.10.52 172.17.0.4
n=10923
for ((i=n;i<=16383;i++))
do
    /usr/local/bin/redis-cli -h 192.168.10.52 -p 6381 -a 123456 CLUSTER ADDSLOTS $i
done

Here -a 123456 supplies the password configured earlier.

192.168.10.52:6379> CLUSTER INFO
cluster_state:fail             # The cluster state is still fail
cluster_slots_assigned:16101   # Not all slots are assigned yet
cluster_slots_ok:16101
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:2
cluster_stats_messages_ping_sent:266756
cluster_stats_messages_pong_sent:266528
cluster_stats_messages_meet_sent:10
cluster_stats_messages_sent:533294
cluster_stats_messages_ping_received:266527
cluster_stats_messages_pong_received:266666
cluster_stats_messages_meet_received:1
cluster_stats_messages_received:533194

192.168.10.52:6379> CLUSTER INFO
cluster_state:ok               # The cluster state is now ok
cluster_slots_assigned:16384   # All slots have been assigned
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:2
cluster_stats_messages_ping_sent:266757
cluster_stats_messages_pong_sent:266531
cluster_stats_messages_meet_sent:10
cluster_stats_messages_sent:533298
cluster_stats_messages_ping_received:266530
cluster_stats_messages_pong_received:266667
cluster_stats_messages_meet_received:1
cluster_stats_messages_received:533198

In short, the cluster only becomes usable once every slot has been assigned; if we remove even a single slot, the cluster immediately fails. You can try this yourself with CLUSTER DELSLOTS 0, as sketched below.
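A quick way to see this for yourself, sketched here with the same address and password as above: delete slot 0 from the node that owns it (the one on port 6379), watch cluster_state flip to fail, then add the slot back.

# Remove slot 0, check the cluster state, then restore the slot
/usr/local/bin/redis-cli -h 192.168.10.52 -p 6379 -a 123456 CLUSTER DELSLOTS 0
/usr/local/bin/redis-cli -h 192.168.10.52 -p 6379 -a 123456 CLUSTER INFO | grep cluster_state
/usr/local/bin/redis-cli -h 192.168.10.52 -p 6379 -a 123456 CLUSTER ADDSLOTS 0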
How to make it highly available

We have built a complete, working Redis cluster, but every node is a single point of failure: if one node goes down, the whole cluster can fail because the slot coverage becomes incomplete. So we need to configure a replica (standby) node for each master.

192.168.10.52:6379> CLUSTER INFO
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6     # 6 nodes in total
cluster_size:3            # The cluster has 3 master nodes
cluster_current_epoch:5
cluster_my_epoch:2
cluster_stats_messages_ping_sent:270127
cluster_stats_messages_pong_sent:269893
cluster_stats_messages_meet_sent:10
cluster_stats_messages_sent:540030
cluster_stats_messages_ping_received:269892
cluster_stats_messages_pong_received:270037
cluster_stats_messages_meet_received:1
cluster_stats_messages_received:539930

View all node IDs:

192.168.10.52:6379> CLUSTER NODES
54cb5c2eb8e5f5aed2d2f7843f75a9284ef6785c 172.17.0.3:6379@16379 master - 0 1528704114535 1 connected 5462-10922
f45f9109f2297a83b1ac36f9e1db5e70bbc174ab 172.17.0.4:6379@16379 master - 0 1528704114000 0 connected 10923-16383
ae86224a3bc29c4854719c83979cb7506f37787a 172.17.0.7:6379@16379 master - 0 1528704114023 5 connected
98aebcfe42d8aaa8a3375e4a16707107dc9da683 172.17.0.6:6379@16379 master - 0 1528704115544 4 connected
0bbdc4176884ef0e3bb9b2e7d03d91b0e7e11f44 172.17.0.5:6379@16379 master - 0 1528704114836 3 connected
760e4d0039c5ac13d04aa4791c9e6dc28544d7c7 172.17.0.2:6379@16379 myself,master - 0 1528704115000 2 connected 0-5461

Write a script that attaches the replica nodes:

[root@etcd2 tmp]# vi addSlaveNodes.sh
#!/bin/bash
/usr/local/bin/redis-cli -h 192.168.10.52 -p 6382 -a 123456 CLUSTER REPLICATE 760e4d0039c5ac13d04aa4791c9e6dc28544d7c7
/usr/local/bin/redis-cli -h 192.168.10.52 -p 6383 -a 123456 CLUSTER REPLICATE 54cb5c2eb8e5f5aed2d2f7843f75a9284ef6785c
/usr/local/bin/redis-cli -h 192.168.10.52 -p 6384 -a 123456 CLUSTER REPLICATE f45f9109f2297a83b1ac36f9e1db5e70bbc174ab

Note: a node used as a replica must be empty and hold no assigned slots, otherwise the command fails with (error) ERR To set a master the node must be empty and without assigned slots.

View all node information:

192.168.10.52:6379> CLUSTER NODES
54cb5c2eb8e5f5aed2d2f7843f75a9284ef6785c 172.17.0.3:6379@16379 master - 0 1528705604149 1 connected 5462-10922
f45f9109f2297a83b1ac36f9e1db5e70bbc174ab 172.17.0.4:6379@16379 master - 0 1528705603545 0 connected 10923-16383
ae86224a3bc29c4854719c83979cb7506f37787a 172.17.0.7:6379@16379 slave f45f9109f2297a83b1ac36f9e1db5e70bbc174ab 0 1528705603144 5 connected
98aebcfe42d8aaa8a3375e4a16707107dc9da683 172.17.0.6:6379@16379 slave 54cb5c2eb8e5f5aed2d2f7843f75a9284ef6785c 0 1528705603000 4 connected
0bbdc4176884ef0e3bb9b2e7d03d91b0e7e11f44 172.17.0.5:6379@16379 slave 760e4d0039c5ac13d04aa4791c9e6dc28544d7c7 0 1528705603000 3 connected
760e4d0039c5ac13d04aa4791c9e6dc28544d7c7 172.17.0.2:6379@16379 myself,master - 0 1528705602000 2 connected 0-5461

We now have a high-availability cluster with three masters and three slaves.
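CLUSTER NODES only shows the cluster topology; whether each replication link is actually up is easier to confirm with INFO replication. The commands below are a sketch reusing the host address and password from above: on a master, check connected_slaves; on a replica, check role and master_link_status.

# On a master (6379): expect role:master and connected_slaves:1
/usr/local/bin/redis-cli -h 192.168.10.52 -p 6379 -a 123456 INFO replication | grep -E 'role|connected_slaves'
# On its replica (6382): expect role:slave; master_link_status shows whether the sync is healthy
/usr/local/bin/redis-cli -h 192.168.10.52 -p 6382 -a 123456 INFO replication | grep -E 'role|master_link_status'

If master_link_status reports down, replication itself is failing — which, as the failover test below shows, is exactly what a missing masterauth causes.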
High availability test: failover

Check the current running state:

192.168.10.52:6379> CLUSTER NODES
54cb5c2eb8e5f5aed2d2f7843f75a9284ef6785c 172.17.0.3:6379@16379 master - 0 1528705604149 1 connected 5462-10922
f45f9109f2297a83b1ac36f9e1db5e70bbc174ab 172.17.0.4:6379@16379 master - 0 1528705603545 0 connected 10923-16383
ae86224a3bc29c4854719c83979cb7506f37787a 172.17.0.7:6379@16379 slave f45f9109f2297a83b1ac36f9e1db5e70bbc174ab 0 1528705603144 5 connected
98aebcfe42d8aaa8a3375e4a16707107dc9da683 172.17.0.6:6379@16379 slave 54cb5c2eb8e5f5aed2d2f7843f75a9284ef6785c 0 1528705603000 4 connected
0bbdc4176884ef0e3bb9b2e7d03d91b0e7e11f44 172.17.0.5:6379@16379 slave 760e4d0039c5ac13d04aa4791c9e6dc28544d7c7 0 1528705603000 3 connected
760e4d0039c5ac13d04aa4791c9e6dc28544d7c7 172.17.0.2:6379@16379 myself,master - 0 1528705602000 2 connected 0-5461

Everything is running normally. Now shut down one master: pick the container mapped to port 6380 and stop it.

192.168.10.52:6379> CLUSTER NODES
54cb5c2eb8e5f5aed2d2f7843f75a9284ef6785c 172.17.0.3:6379@16379 master,fail - 1528706408935 1528706408000 1 connected 5462-10922
f45f9109f2297a83b1ac36f9e1db5e70bbc174ab 172.17.0.4:6379@16379 master - 0 1528706463000 0 connected 10923-16383
ae86224a3bc29c4854719c83979cb7506f37787a 172.17.0.7:6379@16379 slave f45f9109f2297a83b1ac36f9e1db5e70bbc174ab 0 1528706462980 5 connected
98aebcfe42d8aaa8a3375e4a16707107dc9da683 172.17.0.6:6379@16379 slave 54cb5c2eb8e5f5aed2d2f7843f75a9284ef6785c 0 1528706463000 4 connected
0bbdc4176884ef0e3bb9b2e7d03d91b0e7e11f44 172.17.0.5:6379@16379 slave 760e4d0039c5ac13d04aa4791c9e6dc28544d7c7 0 1528706463985 3 connected
760e4d0039c5ac13d04aa4791c9e6dc28544d7c7 172.17.0.2:6379@16379 myself,master - 0 1528706462000 2 connected 0-5461
192.168.10.52:6379>
192.168.10.52:6379> CLUSTER INFO
cluster_state:fail
cluster_slots_assigned:16384
cluster_slots_ok:10923
cluster_slots_pfail:0
cluster_slots_fail:5461
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:2
cluster_stats_messages_ping_sent:275112
cluster_stats_messages_pong_sent:274819
cluster_stats_messages_meet_sent:10
cluster_stats_messages_fail_sent:5
cluster_stats_messages_sent:549946
cluster_stats_messages_ping_received:274818
cluster_stats_messages_pong_received:275004
cluster_stats_messages_meet_received:1
cluster_stats_messages_fail_received:1
cluster_stats_messages_received:549824

The whole cluster has failed, and the slave was not automatically promoted to master. What happened? The slave's log shows:

1:S 11 Jun 09:57:46.712 # Cluster state changed: ok
1:S 11 Jun 09:57:46.718 * (Non critical) Master does not understand REPLCONF listening-port: -NOAUTH Authentication required.
1:S 11 Jun 09:57:46.718 * (Non critical) Master does not understand REPLCONF capa: -NOAUTH Authentication required.
1:S 11 Jun 09:57:46.719 * Partial resynchronization not possible (no cached master)
1:S 11 Jun 09:57:46.719 # Unexpected reply to PSYNC from master: -NOAUTH Authentication required.
1:S 11 Jun 09:57:46.719 * Retrying with SYNC...
1:S 11 Jun 09:57:46.719 # MASTER aborted replication with an error: NOAUTH Authentication required.
1:S 11 Jun 09:57:46.782 * Connecting to MASTER 172.17.0.6:6379
1:S 11 Jun 09:57:46.782 * MASTER <-> SLAVE sync started
1:S 11 Jun 09:57:46.782 * Non blocking connect for SYNC fired the event.

As you can see, replication between master and slave also requires authentication. I had forgotten to set masterauth <master-password> in redis.conf earlier, so the slaves could never actually sync from their masters. After fixing that configuration, automatic failover works normally.
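As an aside, if rebuilding the image is inconvenient, the same fix can in principle be applied to the running nodes with CONFIG SET. This is only a sketch, assuming the 123456 password and the port mappings used above; note that a CONFIG SET change is lost when a container is recreated unless it is also written into redis.conf:

#!/bin/bash
# Set masterauth on every node so replicas can authenticate against their master
for port in $(seq 6379 6384); do
    /usr/local/bin/redis-cli -h 192.168.10.52 -p ${port} -a 123456 CONFIG SET masterauth 123456
done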
Sometimes a manual failover is also needed. Log in to the slave of the stopped 6380 master, i.e. the node on port 6383, and run CLUSTER FAILOVER:

192.168.10.52:6383> CLUSTER FAILOVER
(error) ERR Master is down or failed, please use CLUSTER FAILOVER FORCE

Because the master is already down, the failover has to be forced:

192.168.10.52:6383> CLUSTER FAILOVER FORCE
OK

View the current cluster node state:

192.168.10.52:6383> CLUSTER NODES
0bbdc4176884ef0e3bb9b2e7d03d91b0e7e11f44 172.17.0.5:6379@16379 slave 760e4d0039c5ac13d04aa4791c9e6dc28544d7c7 0 1528707535332 3 connected
ae86224a3bc29c4854719c83979cb7506f37787a 172.17.0.7:6379@16379 slave f45f9109f2297a83b1ac36f9e1db5e70bbc174ab 0 1528707534829 5 connected
f45f9109f2297a83b1ac36f9e1db5e70bbc174ab 172.17.0.4:6379@16379 master - 0 1528707534527 0 connected 10923-16383
98aebcfe42d8aaa8a3375e4a16707107dc9da683 172.17.0.6:6379@16379 myself,master - 0 1528707535000 6 connected 5462-10922
760e4d0039c5ac13d04aa4791c9e6dc28544d7c7 172.17.0.2:6379@16379 master - 0 1528707535834 2 connected 0-5461
54cb5c2eb8e5f5aed2d2f7843f75a9284ef6785c 172.17.0.3:6379@16379 master,fail - 1528707472833 1528707472000 1 connected

The slave has been promoted to master. Now restart the Redis node on 6380 (in practice, start the stopped container again):

192.168.10.52:6383> CLUSTER NODES
0bbdc4176884ef0e3bb9b2e7d03d91b0e7e11f44 172.17.0.5:6379@16379 slave 760e4d0039c5ac13d04aa4791c9e6dc28544d7c7 0 1528707556044 3 connected
ae86224a3bc29c4854719c83979cb7506f37787a 172.17.0.7:6379@16379 slave f45f9109f2297a83b1ac36f9e1db5e70bbc174ab 0 1528707555000 5 connected
f45f9109f2297a83b1ac36f9e1db5e70bbc174ab 172.17.0.4:6379@16379 master - 0 1528707556000 0 connected 10923-16383
98aebcfe42d8aaa8a3375e4a16707107dc9da683 172.17.0.6:6379@16379 myself,master - 0 1528707556000 6 connected 5462-10922
760e4d0039c5ac13d04aa4791c9e6dc28544d7c7 172.17.0.2:6379@16379 master - 0 1528707556000 2 connected 0-5461
54cb5c2eb8e5f5aed2d2f7843f75a9284ef6785c 172.17.0.3:6379@16379 slave 98aebcfe42d8aaa8a3375e4a16707107dc9da683 0 1528707556547 6 connected

The 6380 node has come back as a slave of the 6383 node. The cluster should be complete again, so its state should be back to ok. Check it:

192.168.10.52:6383> CLUSTER INFO
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:6
cluster_stats_messages_ping_sent:19419
cluster_stats_messages_pong_sent:19443
cluster_stats_messages_meet_sent:1
cluster_stats_messages_auth-req_sent:5
cluster_stats_messages_update_sent:1
cluster_stats_messages_sent:38869
cluster_stats_messages_ping_received:19433
cluster_stats_messages_pong_received:19187
cluster_stats_messages_meet_received:5
cluster_stats_messages_fail_received:4
cluster_stats_messages_auth-ack_received:2
cluster_stats_messages_received:38631

OK, no problem.
Client access and MOVED redirects

When a cluster client initializes, it only needs the address of one node. The client sends a command, e.g. get key, to that node; if the slot of the key happens to live on that node, the command executes directly. If the slot is not on that node, the node returns a MOVED error telling the client which node owns the slot, and the client can re-issue the command against that node:

192.168.10.52:6383> get hello
(error) MOVED 866 172.17.0.2:6379
192.168.10.52:6379> set number 20004
(error) MOVED 7743 172.17.0.3:6379

Also note that Redis in cluster mode only uses db0. select 0 still works, but selecting any other database returns an error:

192.168.10.52:6383> select 0
OK
192.168.10.52:6383> select 1
(error) ERR SELECT is not allowed in cluster mode
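Two small client-side tips, shown as a sketch against the cluster above: CLUSTER KEYSLOT reports the slot a key hashes to (the CRC16(key) & 16383 from the introduction), and starting redis-cli with -c enables cluster mode so it follows MOVED redirects automatically instead of returning them as errors.

# Which slot does the key "hello" map to? It should match the 866 shown in the MOVED reply above
/usr/local/bin/redis-cli -h 192.168.10.52 -p 6383 -a 123456 CLUSTER KEYSLOT hello
# In cluster mode (-c) the client follows the MOVED redirect transparently
/usr/local/bin/redis-cli -c -h 192.168.10.52 -p 6383 -a 123456 set number 20004

Following the redirect only works if the advertised node address is reachable from the client (for example when running on the Docker host itself); from another machine this runs into exactly the cross-host problem discussed next.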
Recently a reader ran into a connection error with a Docker Redis cluster built this way. My first thought was that not all nodes had been added, but after adding them the problem remained. Since the access was cross-host, it had to be caused by clients being unable to reach the advertised (bridge network) addresses. When I wrote the tutorial above, Docker was running in its default bridge mode — the goal was mainly to learn and document the steps for single-machine access — but in real scenarios cross-host access over a public address is usually required. The fix, then, is to let the cluster nodes share the host's IP address, i.e. run them with host networking:

docker run -d --name redis-6380 --net host -v /tmp/redis.conf:/usr/local/redis/redis.conf hakimdstx/nodes-redis:4.0.1

With that, the network problem is resolved.
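For completeness: with --net host every node shares the host's network namespace, so each container needs its own configuration file with a distinct port (and its own cluster-config-file). The loop below is only a sketch under that assumption, using hypothetical per-node files /tmp/redis-63xx.conf rather than anything from the original setup:

#!/bin/bash
# One config per node, e.g. /tmp/redis-6379.conf containing "port 6379",
# "cluster-config-file nodes-6379.conf" and the other settings shown earlier
for port in $(seq 6379 6384); do
    docker run -d --name redis-${port} --net host \
        -v /tmp/redis-${port}.conf:/usr/local/redis/redis.conf \
        hakimdstx/nodes-redis:4.0.1
done

From there, the same handshake, slot-assignment and replica steps from the sections above apply unchanged, just using the host IP and the per-node ports.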