1. Introduction

In today's Internet architecture, middleware such as MQ, Redis, and ZooKeeper keeps emerging. These components are generally deployed as a master-slave or cluster architecture, and companies typically run one set each in the development, test, and production environments. During development we usually connect to the development environment, but a company's development environment is normally reachable only from the intranet: working from home is impossible unless the company provides a VPN, and even with a VPN it can be inconvenient. For example, our current MQ middleware is Pulsar, whose tenants and namespaces cannot be created automatically, so day-to-day development is awkward and we have to ask a DBA every time we need one. The usual workaround is to deploy a set of middleware locally, but doing that by hand has two difficulties:

- Middleware comes with its own implementation language. To deploy RabbitMQ, for instance, we must first install Erlang.
- Deploying a cluster architecture consumes a lot of system resources, making an already resource-constrained laptop even more sluggish and development very unpleasant.

2. Docker

Docker solves both of these problems and makes deployment extremely simple. The following walks through the middleware I use myself.

A. Pulsar

As mentioned above, Pulsar tenants and namespaces cannot be created automatically; normally I would have to ask the colleague in charge to create them. The features I worked on recently (broadcast center and member export) both use Pulsar, and since I often work from home and didn't want to trouble my colleagues, I went straight to the official website to see how to deploy it locally. The site describes several deployment methods: a binary tarball, Docker, and Kubernetes.
Of course, since there is a Docker deployment method, that is the one to use: just pull an image and start a container, which is quite convenient.

```shell
docker run -it -d -p 6650:6650 -p 8080:8080 -v data -v conf --name=mypulsar apachepulsar/pulsar:2.6.1 bin/pulsar standalone
```

The command is simple: expose Pulsar's ports 6650 and 8080 and bind them to the corresponding ports on the host, so that we can reach the container via the host's ip:port; mount Pulsar's data and conf to the host so that data is not lost; then start a standalone Pulsar with the `bin/pulsar standalone` command. Whenever we need to create a tenant or a namespace, we can enter the container and do it directly:

```shell
docker exec -it mypulsar /bin/bash
```

The commands for managing tenants and namespaces:

```shell
## 1 Tenant
# List tenants (public is the system default tenant)
pulsar-admin tenants list
# Create a tenant
pulsar-admin tenants create my-tenant
# Delete a tenant
pulsar-admin tenants delete my-tenant

## 2 Namespace
# List the namespaces under the specified tenant
pulsar-admin namespaces list my-tenant
# Create a namespace under the specified tenant
pulsar-admin namespaces create my-tenant/my-namespace
# Delete a namespace under the specified tenant
pulsar-admin namespaces delete my-tenant/my-namespace
```

B. Redis

In production our Redis architecture is generally Cluster, but deploying a Redis cluster yourself is fairly troublesome (I have written about it before: Deploy Redis Cluster on Linux). With Docker it becomes very simple.

1 Custom Network

1.1 Create a dedicated network for Redis

All nodes of the Redis cluster share a dedicated network, so the nodes can reach each other and we avoid needing --link for communication between containers. A network created with `docker network create` uses bridge mode by default.
```shell
winfun@localhost ~ % docker network create redis-net --subnet 172.26.0.0/16
5001355940f43474d59f5cb2d78e4e9eeb0a9827e53d8f9e5b55e7d3c5285a09
winfun@localhost ~ % docker network list
NETWORK ID     NAME        DRIVER    SCOPE
4d88d473e947   bridge      bridge    local
79a915fafbb5   host        host      local
f56e362d3c68   none        null      local
5001355940f4   redis-net   bridge    local
```

1.2 View custom network details

We can use `docker network inspect redis-net` to view the details of the custom network; at this point there are no containers in it:

```shell
winfun@localhost mydata % docker network inspect redis-net
[
    {
        "Name": "redis-net",
        "Id": "aed8340bbf8ab86cedc1d990eb7612854ba2b0bd4eae0f978ff95eadc3dbcf65",
        "Created": "2020-10-22T08:46:55.695434Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.26.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
```

2 Start deployment

2.1 Create the configuration for six Redis nodes

```shell
for port in $(seq 1 6); \
do \
mkdir -p /Users/winfun/mydata/redis/node-${port}/conf
touch /Users/winfun/mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/Users/winfun/mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.26.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
```

2.2 Start the containers

```shell
for port in $(seq 1 6); \
do \
docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
-v /Users/winfun/mydata/redis/node-${port}/data:/data \
-v /Users/winfun/mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis-net --ip 172.26.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
done
```

2.3 View the six containers that were successfully started
```shell
winfun@localhost mydata % docker ps | grep redis
ed5972e988e8   redis:5.0.9-alpine3.11   "docker-entrypoint.s..."   11 seconds ago   Up 10 seconds   0.0.0.0:6376->6379/tcp, 0.0.0.0:16376->16379/tcp   redis-6
61cd467bc803   redis:5.0.9-alpine3.11   "docker-entrypoint.s..."   12 seconds ago   Up 11 seconds   0.0.0.0:6375->6379/tcp, 0.0.0.0:16375->16379/tcp   redis-5
113943ba6586   redis:5.0.9-alpine3.11   "docker-entrypoint.s..."   12 seconds ago   Up 11 seconds   0.0.0.0:6374->6379/tcp, 0.0.0.0:16374->16379/tcp   redis-4
5fc3c838851c   redis:5.0.9-alpine3.11   "docker-entrypoint.s..."   13 seconds ago   Up 12 seconds   0.0.0.0:6373->6379/tcp, 0.0.0.0:16373->16379/tcp   redis-3
f7d4430f752b   redis:5.0.9-alpine3.11   "docker-entrypoint.s..."   13 seconds ago   Up 12 seconds   0.0.0.0:6372->6379/tcp, 0.0.0.0:16372->16379/tcp   redis-2
bd3e4a593427   redis:5.0.9-alpine3.11   "docker-entrypoint.s..."   14 seconds ago   Up 13 seconds   0.0.0.0:6371->6379/tcp, 0.0.0.0:16371->16379/tcp   redis-1
```

3 Check the network again

3.1 View containers in the network

When we started the containers above, we told them to use the redis-net network, so let's look at the redis-net network details again:

```shell
winfun@localhost mydata % docker network inspect redis-net
[
    {
        "Name": "redis-net",
        "Id": "aed8340bbf8ab86cedc1d990eb7612854ba2b0bd4eae0f978ff95eadc3dbcf65",
        "Created": "2020-10-22T08:46:55.695434Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.26.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "113943ba6586a4ac21d1c068b0535d5b4ef37da50141d648d30dab47eb47d3af": {
                "Name": "redis-4",
                "EndpointID": "3fe3b4655f39f90ee4daf384254d3f7548cddd19c384e0a26edb6a32545e5b30",
                "MacAddress": "02:42:ac:1a:00:0e",
                "IPv4Address": "172.26.0.14/16",
                "IPv6Address": ""
            },
            "5fc3c838851c0ca2f629457bc3551135567b4e9fb155943711e07a91ebe9827f": {
                "Name": "redis-3",
                "EndpointID": "edd826ca267714bea6bfddd8c5d6a5f3c71c50bd50381751ec40e9f8e8160dce",
                "MacAddress": "02:42:ac:1a:00:0d",
                "IPv4Address": "172.26.0.13/16",
                "IPv6Address": ""
            },
            "61cd467bc8030c4db9a4404b718c5c927869bed71609bec91e17ff0da705ae26": {
                "Name": "redis-5",
                "EndpointID": "7612c44ab2479ab62341eba2e30ab26f4c523ccbe1aa357fc8b7c17a368dba61",
                "MacAddress": "02:42:ac:1a:00:0f",
                "IPv4Address": "172.26.0.15/16",
                "IPv6Address": ""
            },
            "bd3e4a593427aab4750358330014422500755552c8b470f0fd7c1e88221db984": {
                "Name": "redis-1",
                "EndpointID": "400153b712859c5c17d99708586f30013bb28236ba0dead516cf3d01ea071909",
                "MacAddress": "02:42:ac:1a:00:0b",
                "IPv4Address": "172.26.0.11/16",
                "IPv6Address": ""
            },
            "ed5972e988e8301179249f6f9e82c8f9bb4ed801213fe49af9d3f31cbbe00db7": {
                "Name": "redis-6",
                "EndpointID": "b525b7bbdd0b0150f66b87d55e0a8f1208e113e7d1d421d1a0cca73dbb0c1e47",
                "MacAddress": "02:42:ac:1a:00:10",
                "IPv4Address": "172.26.0.16/16",
                "IPv6Address": ""
            },
            "f7d4430f752b5485c5a90f0dc6d1d9a826d782284b1badbd203c12353191bc57": {
                "Name": "redis-2",
                "EndpointID": "cbdc77cecda1c8d80f566bcc3113f37c1a7983190dbd7ac2e9a56f6b7e4fb21f",
                "MacAddress": "02:42:ac:1a:00:0c",
                "IPv4Address": "172.26.0.12/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
```

3.2 Check whether the containers can communicate with each other

We can also ping redis-2 from redis-1 to see whether the network is reachable:

```shell
winfun@localhost mydata % docker exec -it redis-1 ping redis-2
PING redis-2 (172.26.0.12): 56 data bytes
64 bytes from 172.26.0.12: seq=0 ttl=64 time=0.136 ms
64 bytes from 172.26.0.12: seq=1 ttl=64 time=0.190 ms
64 bytes from 172.26.0.12: seq=2 ttl=64 time=0.483 ms
^C
--- redis-2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.136/0.269/0.483 ms
```

4 Create a cluster

4.1 Create the cluster with redis-cli:

```shell
winfun@localhost conf % docker exec -it redis-1 /bin/bash
OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown
# Only sh is available; the redis image does not include bash
winfun@localhost mydata % docker exec -it redis-1 /bin/sh
/data # cd /usr/local/bin/
/usr/local/bin # redis-cli --cluster create 172.26.0.11:6379 172.26.0.12:6379 172.26.0.13:6379 172.26.0.14:6379 172.26.0.15:6379 172.26.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.26.0.15:6379 to 172.26.0.11:6379
Adding replica 172.26.0.16:6379 to 172.26.0.12:6379
Adding replica 172.26.0.14:6379 to 172.26.0.13:6379
M: 6de9e9eef91dbae773d8ee1d629c87e1e7e19b82 172.26.0.11:6379
   slots:[0-5460] (5461 slots) master
M: 43e173849bed74f5bd389f9b272ecf0399ae448f 172.26.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: 1e504dc62b7ccc426d513983ca061d1657532fb6 172.26.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: 92b95f18226903349fb860262d2fe6932d5a8dc2 172.26.0.14:6379
   replicates 1e504dc62b7ccc426d513983ca061d1657532fb6
S: 7e5116ba9ee7bb70a68f4277efcbbbb3dcfd18af 172.26.0.15:6379
   replicates 6de9e9eef91dbae773d8ee1d629c87e1e7e19b82
S: 203e3e33b9f4233b58028289d0ad2dd56e7dfe45 172.26.0.16:6379
   replicates 43e173849bed74f5bd389f9b272ecf0399ae448f
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 172.26.0.11:6379)
M: 6de9e9eef91dbae773d8ee1d629c87e1e7e19b82 172.26.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 92b95f18226903349fb860262d2fe6932d5a8dc2 172.26.0.14:6379
   slots: (0 slots) slave
   replicates 1e504dc62b7ccc426d513983ca061d1657532fb6
S: 203e3e33b9f4233b58028289d0ad2dd56e7dfe45 172.26.0.16:6379
   slots: (0 slots) slave
   replicates 43e173849bed74f5bd389f9b272ecf0399ae448f
M: 1e504dc62b7ccc426d513983ca061d1657532fb6 172.26.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 7e5116ba9ee7bb70a68f4277efcbbbb3dcfd18af 172.26.0.15:6379
   slots: (0 slots) slave
   replicates 6de9e9eef91dbae773d8ee1d629c87e1e7e19b82
M: 43e173849bed74f5bd389f9b272ecf0399ae448f 172.26.0.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
```

4.2 Connect to the current node with redis-cli and view the cluster information:

```shell
/usr/local/bin # redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:91
cluster_stats_messages_pong_sent:95
cluster_stats_messages_sent:186
cluster_stats_messages_ping_received:90
cluster_stats_messages_pong_received:91
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:186
```

4.3 Try to add a key

```shell
# Setting a key returns a prompt that the key was assigned to slot [12539] on node redis-3 [172.26.0.13]
127.0.0.1:6379> set key hello
-> Redirected to slot [12539] located at 172.26.0.13:6379
OK
# redis-cli then switches to node redis-3 [172.26.0.13]
172.26.0.13:6379>
```

5 Testing

At this point we can say that we have successfully deployed a Redis cluster locally with Docker.
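As an aside before testing with a client: the slot redirection seen in 4.3 can be reproduced offline. Redis Cluster maps every key to one of 16384 slots with CRC16 (the XMODEM variant) mod 16384, hashing only the substring inside `{...}` when such a "hash tag" is present. A minimal sketch (the class name is mine):

```java
// Sketch: why "set key" was redirected to slot 12539 above.
public class HashSlot {

    // CRC16-XMODEM: poly 0x1021, init 0x0000, no reflection (as used by Redis Cluster).
    static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0)
                        ? ((crc << 1) ^ 0x1021) & 0xFFFF
                        : (crc << 1) & 0xFFFF;
            }
        }
        return crc;
    }

    // If the key contains a non-empty {hash tag}, only the tag is hashed,
    // so related keys can be forced into the same slot.
    static int slot(String key) {
        int open = key.indexOf('{');
        if (open != -1) {
            int close = key.indexOf('}', open + 1);
            if (close > open + 1) {
                key = key.substring(open + 1, close);
            }
        }
        return crc16(key.getBytes(java.nio.charset.StandardCharsets.UTF_8)) % 16384;
    }

    public static void main(String[] args) {
        System.out.println("slot(\"key\") = " + slot("key")); // the slot redis-cli reported above
        System.out.println("slot(\"{user}.following\") = " + slot("{user}.following"));
        System.out.println("slot(\"{user}.followers\") = " + slot("{user}.followers"));
    }
}
```

Hash tags are also what make multi-key operations possible in a cluster: keys sharing a tag are guaranteed to land in the same slot.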
```java
import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

/**
 * Testing Redis Cluster
 * @author winfun
 * @date 2020/10/21 5:48 PM
 **/
public class TestCluster {
    public static void main(String[] args) throws Exception {
        Set<HostAndPort> nodes = new HashSet<>(3);
        nodes.add(new HostAndPort("127.0.0.1", 6371));
        nodes.add(new HostAndPort("127.0.0.1", 6372));
        nodes.add(new HostAndPort("127.0.0.1", 6373));
        JedisCluster cluster = new JedisCluster(nodes);
        String value = cluster.get("key");
        System.out.println("get: key is key, value is " + value);
        String result = cluster.set("key2", "hello world");
        System.out.println("set: key is key2, result is " + result);
        cluster.close();
    }
}
```

But the result is not satisfactory: an exception is thrown. From this we can guess that the host probably cannot connect to the Redis cluster we deployed in Docker, even though we configured JedisCluster with the local IP and the mapped ports. The catch is that JedisCluster itself re-fetches the cluster metadata, and at that point the node IPs are all addresses from the subnet of the custom network redis-net, which the host may not be able to reach (one option for this problem is a host-mode network).

6 Deploy the application to the custom network too

So how do we test it? I will write a simple SpringBoot project, build an image from the project's jar with a Dockerfile, deploy it with Docker, attach the deployed container to the custom network redis-net, and finally test it.

6.1 Create a SpringBoot project

The configuration is as follows:

6.1.1 pom.xml: mainly introduces the web and redis starters.
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
```

6.1.2 application.properties:

```properties
server.port=8080
# We can also write the container names here directly, because the application will be deployed to the same network as the Redis cluster.
#spring.redis.cluster.nodes=redis-1:6379,redis-2:6379,redis-3:6379
spring.redis.cluster.nodes=172.26.0.11:6379,172.26.0.12:6379,172.26.0.13:6379
```

6.1.3 The Controller is as follows:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * RedisCluster Test
 * @author winfun
 * @date 2020/10/22 3:19 PM
 **/
@RequestMapping("/redisCluster")
@RestController
public class RedisClusterController {

    @Autowired
    private StringRedisTemplate redisTemplate;

    /***
     * String: get the value for a key
     * @param key key
     * @return {@link String }
     **/
    @GetMapping("/get/{key}")
    public String get(@PathVariable("key") String key) {
        return redisTemplate.opsForValue().get(key);
    }

    /***
     * String: set a key/value pair
     * @param key key
     * @param value value
     * @return {@link String }
     **/
    @GetMapping("/set/{key}/{value}")
    public String set(@PathVariable("key") String key, @PathVariable("value") String value) {
        redisTemplate.opsForValue().set(key, value);
        return "success";
    }
}
```

6.2 Package the project and generate an image

6.2.1 Package the project into a jar:

```shell
mvn clean package
```

6.2.2 Generate an image. Write the Dockerfile:

```dockerfile
FROM java:8
MAINTAINER winfun
# The jar is the one packaged from the project
ADD redis-cluster-test-0.0.1-SNAPSHOT.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","app.jar"]
```

Then go to the directory containing the Dockerfile and execute:

```shell
winfun@localhost redis-cluster-test % docker build -t winfun/rediscluster .
```
```shell
Sending build context to Docker daemon  25.84MB
Step 1/5 : FROM java:8
8: Pulling from library/java
5040bd298390: Pull complete
fce5728aad85: Pull complete
76610ec20bf5: Pull complete
60170fec2151: Pull complete
e98f73de8f0d: Pull complete
11f7af24ed9c: Pull complete
49e2d6393f32: Pull complete
bb9cdec9c7f3: Pull complete
Digest: sha256:c1ff613e8ba25833d2e1940da0940c3824f03f802c449f3d1815a66b7f8c0e9d
Status: Downloaded newer image for java:8
 ---> d23bdf5b1b1b
Step 2/5 : MAINTAINER winfun
 ---> Running in a99086ed7e68
Removing intermediate container a99086ed7e68
 ---> f713578122fc
Step 3/5 : ADD redis-cluster-test-0.0.1-SNAPSHOT.jar app.jar
 ---> 12ca98d789b8
Step 4/5 : EXPOSE 8080
 ---> Running in 833a06f2dd32
Removing intermediate container 833a06f2dd32
 ---> 82f4e078510d
Step 5/5 : ENTRYPOINT ["java","-jar","app.jar"]
 ---> Running in 517a1ea7f138
Removing intermediate container 517a1ea7f138
 ---> ed8a66ef4eb9
Successfully built ed8a66ef4eb9
Successfully tagged winfun/rediscluster:latest
```

6.3 Start the container and test it

6.3.1 Start the container and add it to the custom network redis-net created above.

After building, we can view the image:

```shell
winfun@localhost ~ % docker images | grep rediscluster
winfun/rediscluster   latest   ed8a66ef4eb9   52 minutes ago   669MB
```

Start the container:

```shell
winfun@localhost ~ % docker run -it -d -p 8787:8080 --name myrediscluster winfun/rediscluster
705998330f7e6941f5f96d187050d29c4a59f1b16348ebeb5ab0dbc6a1cd63e1
```

Add it to the custom network:

```shell
winfun@localhost ~ % docker network connect redis-net myrediscluster
```

If we now view the details of redis-net, we can find the myrediscluster container under Containers, with an IP address from the redis-net subnet assigned to it.

6.3.2 At this point we can call the RedisClusterController endpoints directly from the browser: set a key/value, then get the value by key. As you can see, it works without any problem. However, every time we want to test the interface we have to rebuild the image and redeploy it.
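Before looking at network modes, the asymmetry behind the JedisCluster failure in section 5 can be made visible with a plain TCP probe from the host: the mapped 127.0.0.1 ports accept connections, while the 172.26.0.x container IPs the cluster announces generally do not (at least on Docker Desktop). A small diagnostic sketch (class name and timeouts are mine; hosts and ports are the ones used above):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ReachabilityCheck {

    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean reachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The host-mapped port of redis-1 should accept connections...
        System.out.println("127.0.0.1:6371  -> " + reachable("127.0.0.1", 6371, 500));
        // ...while the announced container IP is typically unreachable from a macOS/Windows host.
        System.out.println("172.26.0.11:6379 -> " + reachable("172.26.0.11", 6379, 500));
    }
}
```

If the second probe fails while the first succeeds, JedisCluster is bound to break as soon as it follows the announced topology.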
6.4 Bridge and host modes

The custom network redis-net we created above uses bridge mode. Its defining characteristic is that the containers' network is isolated from the host: container IPs and the host IP cannot reach each other directly. That is exactly why JedisCluster fails: when it obtains the cluster's node information, the addresses are container IPs inside Docker, which are inaccessible from the host.

To solve this we can try host mode, whose principle is that the container shares the host's network environment. In that case JedisCluster should have no problem reaching the Redis cluster. "Should", because I have tried deploying the Redis cluster in host mode several times: deployment and operation via redis-cli work fine, but JedisCluster could not even fetch the cluster's node information! So you will need to try it yourself. (One plausible explanation, as a note from my side: on Docker Desktop for Mac/Windows, containers run inside a VM, so host mode shares the VM's network rather than the physical host's.)

7. Last

At this point, I believe everyone has experienced the power of Docker. Used well during development, it is simply a development artifact: with minimal system resources you can easily build a complete local development environment, including all kinds of middleware.