1. Docker cross-host communication

Docker's native cross-host network solutions include overlay and macvlan. libnetwork is Docker's container network library, and its core is the Container Network Model (CNM) it defines. This model abstracts the container network into the following three components:

1.1 Sandbox: the container's network stack (interfaces, routing table, DNS configuration).
1.2 Endpoint: connects a sandbox to a network; a typical implementation is a veth pair.
1.3 Network: a group of endpoints that can communicate with each other directly.

Docker network architecture (image courtesy of the CLOUDMAN blog). libnetwork contains the native drivers above as well as other third-party drivers.

2. Docker overlay network

2.1 Start the key-value database Consul

The Docker overlay network requires a key-value database to store network state, including Network, Endpoint and IP information. Consul, Etcd and ZooKeeper are all key-value stores supported by Docker. Consul is a key-value database that can be used to store system state information. We don't need to write any code here; we just install Consul, and Docker then stores its state automatically. The easiest way to install Consul is to run the consul container directly with Docker.
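As a minimal single-node sketch (using the progrium/consul image that the walkthrough later in this article also uses, and assuming port 8500 is free on the host):

[root@docker01 ~]# docker run -d -p 8500:8500 -h consul --name consul --restart always progrium/consul -server -bootstrap // run Consul as a single-server container; the same command appears again in the experiment below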
In order for Consul to discover each Docker host node, the Docker daemon on every node needs to be configured. Modify each node's docker daemon configuration file /etc/systemd/system/docker.service and append the following to the end of the ExecStart line:
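A hedged example of what the amended ExecStart line might look like (the Consul address 192.168.1.11:8500 and the interface name ens33 match the lab environment used later in this article; substitute your own values; note that --cluster-store and --cluster-advertise only exist on older Docker Engine releases):

ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://192.168.1.11:8500 --cluster-advertise=ens33:2376
[root@docker01 ~]# systemctl daemon-reload // reload the unit file
[root@docker01 ~]# systemctl restart docker // restart the daemon so the options take effect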
The above is the stand-alone installation of Consul. Cluster mode is recommended in practice; for the cluster installation procedure see https://www.consul.io/intro/getting-started/join.html.

2.2 Creating an overlay network

Creating an overlay network is similar to creating a bridge network, except that the -d parameter is set to overlay, as follows:
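For example (ov_net2 is the network name referred to below; run this on any one of the Consul-backed hosts):

[root@docker01 ~]# docker network create -d overlay ov_net2 // create the overlay network
[root@docker01 ~]# docker network ls // the new network appears with driver overlay and scope global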
You only need to perform the creation on one node; the other nodes recognize the network automatically thanks to Consul's service discovery. When creating containers later, you only need to set the --network parameter to ov_net2.
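A hedged sketch (container names are illustrative; busybox is the image also used in the experiment later in this article):

[root@docker01 ~]# docker run -itd --name c1 --network ov_net2 busybox
[root@docker02 ~]# docker run -itd --name c2 --network ov_net2 busybox
[root@docker01 ~]# docker exec -it c1 ping c2 // containers on the same overlay network reach each other, even across hosts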
This way, containers created on different hosts but using the same overlay network can access each other directly.

2.3 Overlay network principles

After creating an overlay network, docker network ls shows not only the ov_net2 network we created (type overlay, scope global) but also a network called docker_gwbridge (type bridge, scope local). This is in fact how overlay networks work: brctl show reveals that every time a container is created on an overlay network, a vethxxx interface is attached to docker_gwbridge, which means the overlay container connects to the outside world through this bridge. Simply put, outbound overlay traffic still leaves through the docker_gwbridge bridge network; but because Consul records the overlay network's endpoint, sandbox and network information, Docker knows this network is of the overlay type, so containers on the same overlay network can reach each other across hosts, while the external egress remains the docker_gwbridge bridge. The none and bridge networks were introduced earlier: bridge is a network bridge, a virtual switch, connected to the sandbox through a veth pair.
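To verify this on a host that runs an overlay-attached container (assuming the bridge-utils package is installed so that brctl is available; interface names will differ per host):

[root@docker01 ~]# docker network ls // shows ov_net2 (overlay, global) and docker_gwbridge (bridge, local)
[root@docker01 ~]# brctl show docker_gwbridge // each overlay container attaches a vethxxx interface to this bridge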
3. Letting the external network access the container: port mapping methods

1) Manually specify the port mapping
[root@localhost ~]# docker run -itd nginx:latest // start an nginx container with no extra parameters
[root@localhost ~]# docker ps // view container information
[root@localhost ~]# docker inspect vigorous_shannon // view container details (check the container IP here)
[root@localhost ~]# docker run -itd --name web1 -p 90:80 nginx:latest // start a container with a specified port mapping (host port 90 to container port 80)

Access test:

[root@localhost ~]# curl 192.168.1.11:90

2) Randomly map a port from the host to the container

[root@localhost ~]# docker run -itd --name web2 -p 80 nginx:latest // start a container with a randomly assigned host port for container port 80
[root@localhost ~]# docker ps

Access test:
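The host port for web2 is assigned randomly, so read it from docker ps first; a hedged example (the port 32768 below is illustrative only):

[root@localhost ~]# curl 192.168.1.11:32768 // replace 32768 with the host port shown by docker ps for web2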
3) Randomly map ports from the host to the container, with every port exposed by the container mapped one by one to a random host port.
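A hedged sketch for this case (the container name web3 is illustrative): the -P flag publishes every exposed port to a random host port.

[root@localhost ~]# docker run -itd --name web3 -P nginx:latest // -P maps each EXPOSEd container port to a random host port
[root@localhost ~]# docker ps // shows which host ports were assigned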
Access test (curl the host IP with the randomly assigned port shown by docker ps).
4. Join container: container (shared network protocol stack)

Containers share a network protocol stack with each other.

[root@localhost ~]# docker run -itd --name web5 busybox:latest // start a container based on busybox
[root@localhost ~]# docker inspect web5
[root@localhost ~]# docker run -itd --name web6 --network container:web5 busybox:latest // start another container that joins web5's network stack
[root@localhost ~]# docker exec -it web6 /bin/sh // enter web6
/ # ip a
/ # echo 123456 > /tmp/index.html
/ # httpd -h /tmp/ // simulate starting an httpd service
[root@localhost ~]# docker exec -it web5 /bin/sh // enter web5
/ # ip a
/ # wget -O - -q 127.0.0.1 // at this point you will find that the two containers have the same IP address

Use scenarios for this method:

5. Docker's cross-host network solution: overlay

Experimental environment:

| docker01 | docker02 | docker03 |
|---|---|---|
| 192.168.1.11 | 192.168.1.12 | 192.168.1.20 |
Firewall and selinux security issues are not considered for the time being.
Disable the firewall and selinux on all three Docker hosts, and change their host names respectively.
[root@localhost ~]# systemctl stop firewalld // turn off the firewall
[root@localhost ~]# setenforce 0 // turn off selinux
[root@localhost ~]# hostnamectl set-hostname docker01 // change the host name (docker02 and docker03 on the other hosts)
[root@localhost ~]# su - // switch to the root user
Operations on docker01
[root@docker01 ~]# docker pull progrium/consul // pull the consul image used below
[root@docker01 ~]# docker images
Run the consul service
[root@docker01 ~]# docker run -d -p 8500:8500 -h consul --name consul --restart always progrium/consul -server -bootstrap // run a container based on progrium/consul (restart docker if an error occurs)

-h: host name
-server -bootstrap: indicates that it is a server
Once the container is running, we can access the Consul service in a browser to verify that it is working: browse to the Docker host's address with the mapped port.
[root@docker01 ~]# docker inspect consul // view container details (check the container IP here)
[root@docker01 ~]# curl 172.17.0.7
Browser View
Modify the docker configuration files of docker02 and docker03
[root@docker02 ~]# vim /usr/lib/systemd/system/docker.service // line 13: add to ExecStart
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://192.168.1.11:8500 --cluster-advertise=ens33:2376 // register this daemon with the Consul store at 192.168.1.11:8500 and advertise this node via ens33 on port 2376
[root@docker02 ~]# systemctl daemon-reload
[root@docker02 ~]# systemctl restart docker
Return to the Consul service interface in the browser and navigate to KEY/VALUE ---> DOCKER ---> NODES.
You can see nodes docker02 and docker03
Customize a network on docker02
[root@docker02 ~]# docker network create -d overlay ov_net1 // create an overlay network
[root@docker02 ~]# docker network ls // check the network
Check the network on docker03 and you can see that the ov_net1 network is also generated.
[root@docker03 ~]# docker network ls
Check in the browser as well.
Modify the docker configuration file of docker01 in the same way; then check the network on docker01, and you can see that the ov_net1 network is generated there too.
[root@docker01 ~]# vim /usr/lib/systemd/system/docker.service // line 13: add to ExecStart
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://192.168.1.11:8500 --cluster-advertise=ens33:2376 // register this daemon with the Consul store at 192.168.1.11:8500 and advertise this node via ens33 on port 2376
[root@docker01 ~]# systemctl daemon-reload
[root@docker01 ~]# systemctl restart docker // restart docker
[root@docker01 ~]# docker network ls // check the network
Run one container based on the ov_net1 network on each of the three Docker hosts, then test whether the three containers can ping each other.
[root@docker01 ~]# docker run -itd --name t1 --network ov_net1 busybox
[root@docker02 ~]# docker run -itd --name t2 --network ov_net1 busybox
[root@docker03 ~]# docker run -itd --name t3 --network ov_net1 busybox
[root@docker01 ~]# docker exec -it t1 /bin/sh
[root@docker02 ~]# docker exec -it t2 /bin/sh
[root@docker03 ~]# docker exec -it t3 /bin/sh
/# ping 10.0.0.2
/# ping 10.0.0.3
/# ping 10.0.0.4
For the network created on docker02, we can see that its SCOPE is global, which means that any Docker daemon joined to the Consul service can see our custom network.
Likewise, a container created on this network will have two network interfaces: one on the overlay network and one on docker_gwbridge.
By default, the subnet of the overlay interface is 10.0.0.0. If you want docker01 to see this network as well, just add the corresponding options to docker01's docker configuration file.
Also, because it is a custom network, it has the characteristics of custom networks: containers on it can communicate with each other directly by container name. Of course, you can also specify the subnet when creating the network, in which case containers using the network can also be assigned specific IP addresses.
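A hedged sketch (the network name, subnet and addresses below are illustrative only):

[root@docker02 ~]# docker network create -d overlay --subnet 10.10.0.0/24 ov_net3 // create an overlay network with an explicit subnet
[root@docker02 ~]# docker run -itd --name t4 --network ov_net3 --ip 10.10.0.10 busybox // a fixed IP can be assigned because the subnet was specified
[root@docker02 ~]# docker run -itd --name t5 --network ov_net3 busybox
[root@docker02 ~]# docker exec -it t5 ping t4 // containers on a custom network can also reach each other by name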
The above is the full content of this article. I hope it will be helpful for everyone’s study. I also hope that everyone will support 123WORDPRESS.COM.