Implementation of Docker cross-host network (overlay)

1. Docker cross-host communication

Docker cross-host network solutions include:

Docker-native drivers: overlay and macvlan.
Third-party solutions: the commonly used ones are flannel, weave, and calico.
Docker integrates these solutions through libnetwork and the Container Network Model (CNM).

libnetwork is Docker's container network library. Its core is the Container Network Model (CNM), which abstracts the container network into the following three components:

1.1 Sandbox
Sandbox is the container's network stack, which includes the container's interfaces, routing table, and DNS settings. The Linux network namespace is the standard implementation of Sandbox. A Sandbox isolates one container's network from another's through namespaces; each container has exactly one Sandbox, and a Sandbox can contain multiple Endpoints belonging to different networks.

1.2 Endpoint
The function of Endpoint is to connect Sandbox to Network. The typical implementation of Endpoint is veth pair. An Endpoint can only belong to one network and one Sandbox.

1.3 Network
A Network contains a group of Endpoints. Endpoints in the same Network can communicate directly. The implementation of Network can be Linux Bridge, VLAN, etc.
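To see where these three abstractions surface on a running system, the commands below are a minimal sketch (the container name c1 is a hypothetical example, attached to the default bridge network):

[root@localhost ~]# docker run -itd --name c1 busybox
[root@localhost ~]# docker inspect --format '{{.NetworkSettings.SandboxKey}}' c1
//Path of the network namespace that backs c1's Sandbox
[root@localhost ~]# docker inspect --format '{{range .NetworkSettings.Networks}}{{.EndpointID}}{{end}}' c1
//ID of the Endpoint connecting the Sandbox to the bridge network
[root@localhost ~]# docker network inspect bridge
//The Network itself: its "Containers" section lists every attached Endpoint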


Docker Network Architecture

Image courtesy of CLOUDMAN blog.

libnetwork includes the native drivers shown above and also supports third-party drivers.
The none and bridge networks have been introduced before. A bridge network is a Linux bridge (a virtual switch) that connects to the Sandbox through a veth pair.

2. Docker overlay network

2.1 Start the key-value database Consul

The Docker overlay network requires a key-value database to store network state information, including Network, Endpoint, and IP data. Consul, etcd, and ZooKeeper are all key-value stores supported by Docker.

Consul is a key-value store that can hold system state information. We do not need to write any code here: once Consul is installed, Docker stores the network state in it automatically. The easiest way to install Consul is to run the consul container directly with Docker.

docker run -d -p 8500:8500 -h consul --name consul progrium/consul -server -bootstrap

After startup, you can access the Consul web UI through port 8500 of the host IP.
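You can also check Consul from the command line instead of the browser; the following is a sketch using Consul's standard HTTP API (run on the host where the container was started):

[root@localhost ~]# curl http://127.0.0.1:8500/v1/catalog/nodes
//A JSON reply listing the Consul node confirms the service is answering on the mapped port
[root@localhost ~]# docker logs consul
//If the port does not respond, check the consul container's logs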

For Consul to discover each Docker host, every node must be configured. Edit the docker daemon unit file /etc/systemd/system/docker.service on each node and append the following to the ExecStart line:

--cluster-store=consul://<consul_ip>:8500 --cluster-advertise=ens3:2376
Here <consul_ip> is the IP of the node running the consul container, and ens3 is the network interface that carries the current node's IP address (you can also specify the IP address directly).
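For reference, the edited unit file and the reload steps might look like the sketch below (the binary path /usr/bin/dockerd and the extra -H listeners follow the configuration used later in this article; adjust <consul_ip> and the interface name to your environment):

[root@localhost ~]# vim /etc/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://<consul_ip>:8500 --cluster-advertise=ens3:2376
[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker
//Reload systemd and restart the daemon so the cluster-store options take effect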

The above is the single-node installation of Consul. Cluster mode is recommended; for the cluster installation steps, see https://www.consul.io/intro/getting-started/join.html.

2.2 Creating an overlay network

Creating an overlay network is similar to creating a bridge network, except that the -d parameter is set to overlay, as follows:

docker network create -d overlay ov_net2

docker network create -d overlay ov_net3 --subnet 172.19.0.0/24 --gateway 172.19.0.1

You only need to create the network on one node; the other nodes will recognize it automatically thanks to Consul's service discovery.

When creating a container later, you only need to specify the --network parameter as ov_net2.

docker run --network ov_net2 busybox

This way, containers created on different hosts but attached to the same overlay network can access each other directly.
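As a quick cross-host test (a sketch; host1/host2 and the container names bbox1/bbox2 are hypothetical, and both hosts must point at the same Consul store):

[root@host1 ~]# docker run -itd --name bbox1 --network ov_net2 busybox
[root@host2 ~]# docker network ls
//ov_net2 is visible here too, even though it was created on host1
[root@host2 ~]# docker run -itd --name bbox2 --network ov_net2 busybox
[root@host2 ~]# docker exec bbox2 ping -c 3 bbox1
//Containers on the same user-defined overlay network can reach each other by name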

2.3 Overlay Network Principles

After creating an overlay network, docker network ls shows not only the ov_net2 we created (driver overlay, scope global) but also a network called docker_gwbridge (driver bridge, scope local). This is part of how overlay networks work.

brctl show reveals that every time a container is attached to an overlay network, a vethxxx interface is attached to docker_gwbridge; the overlay container reaches the outside world through this bridge.

Simply put, outbound traffic from an overlay container still leaves through the bridge network docker_gwbridge. Consul records the overlay network's endpoint, sandbox, and network information, so Docker knows the network is of type overlay and containers on different hosts attached to it can reach each other, but the egress to the outside world is still the docker_gwbridge bridge.
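The following commands sketch how to observe this layout (interface names and IDs will differ on your system; <overlay_container> is a placeholder):

[root@localhost ~]# brctl show docker_gwbridge
//One vethxxx interface appears here for every running overlay container on this host
[root@localhost ~]# docker exec -it <overlay_container> ip addr
//Two interfaces: eth0 on the overlay subnet, eth1 on the docker_gwbridge subnet
[root@localhost ~]# docker network inspect docker_gwbridge
//Confirms the same containers are also attached to the local gateway bridge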


3. Letting the external network access the container: port mapping

[root@localhost ~]# ss -lnt
//Check the listening sockets (IP addresses and ports)

1) Manually specify port mapping

[root@localhost ~]# docker pull nginx

[root@localhost ~]# docker pull busybox

[root@localhost ~]# docker run -itd nginx:latest
//Start an nginx container without any extra parameters
[root@localhost ~]# docker ps
//View container information

 [root@localhost ~]# docker inspect vigorous_shannon
//View container details (now look at the IP) 

[root@localhost ~]# curl 172.17.0.2

[root@localhost ~]# docker run -itd --name web1 -p 90:80 nginx:latest
//Start a container with a specified port mapping (host port 90 to container port 80)

Access again, this time through the host IP and the mapped port:

[root@localhost ~]# curl 192.168.1.11:90 
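To see where the 90 -> 80 mapping actually lives, the following sketch inspects the mapping and the NAT rule Docker installs (output will vary):

[root@localhost ~]# docker port web1
//Shows 80/tcp -> 0.0.0.0:90
[root@localhost ~]# iptables -t nat -nL DOCKER
//The DNAT rule that forwards host port 90 to the container's port 80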

2) Map a random host port to a specified container port.

[root@localhost ~]# docker run -itd --name web2 -p 80 nginx:latest
//Start a container with a random host port mapped to container port 80
[root@localhost ~]# docker ps


Access again through the randomly assigned port:

[root@localhost ~]# curl 192.168.1.11:32768

3) Map random host ports to all ports exposed by the container, one by one.

[root@localhost ~]# docker run -itd --name web3 -P nginx:latest
//With -P (uppercase), every port exposed by the container is mapped to a random host port one by one
[root@localhost ~]# docker ps

Access again:

[root@localhost ~]# curl 192.168.1.11:32769
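Rather than reading the random ports out of docker ps, you can ask Docker directly; a short sketch:

[root@localhost ~]# docker port web2
//Prints the randomly assigned host port for container port 80
[root@localhost ~]# docker port web3 80
//Prints only the host port mapped to container port 80 of web3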

4. Join container: container mode (shared network stack)

This mode shares one network stack between containers.

[root@localhost ~]# docker run -itd --name web5 busybox:latest
//Start a container based on busybox
[root@localhost ~]# docker inspect web5

[root@localhost ~]# docker run -itd --name web6 --network container:web5 busybox:latest
//Start another container that shares web5's network stack
[root@localhost ~]# docker exec -it web6 /bin/sh
//Enter web6
/ # ip a

/ # echo 123456 > /tmp/index.html
/ # httpd -h /tmp/
//Start the httpd service to simulate a web server
[root@localhost ~]# docker exec -it web5 /bin/sh
//Enter web5
/ # ip a

/ # wget -O - -q 127.0.0.1
//web5 can fetch web6's httpd page via 127.0.0.1, and ip a shows that the two containers share the same IP address

Use cases for this mode:
Because the containers share a single network stack, this mode is a good choice when you need to monitor an existing service, collect its logs, or perform network monitoring alongside it.
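As an illustration, the sketch below attaches a throwaway "monitoring" container to web6's network stack (it assumes the busybox image's netstat applet and reuses the web5/web6 pair from above):

[root@localhost ~]# docker run -it --rm --network container:web6 busybox netstat -lnt
//This temporary container sees web6's listening sockets (the httpd on port 80), because it shares the same network stack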

5. Docker's cross-host network solution

Overlay Solution

Experimental environment:

docker01: 192.168.1.11
docker02: 192.168.1.12
docker03: 192.168.1.20

Firewall and SELinux security issues are not considered for now.
Disable the firewall and SELinux on all three Docker hosts, and set their host names as shown below.

[root@localhost ~]# systemctl stop firewalld
//Turn off the firewall
[root@localhost ~]# setenforce 0
//Turn off SELinux
[root@localhost ~]# hostnamectl set-hostname docker01   (docker02, docker03 on the other nodes)
//Change the host name
[root@localhost ~]# su -
//Switch to the root user

Operations on docker01

[root@docker01 ~]# docker pull progrium/consul
[root@docker01 ~]# docker images 

Run the consul service

[root@docker01 ~]# docker run -d -p 8500:8500 -h consul --name consul --restart always progrium/consul -server -bootstrap
// -h: container host name; -server -bootstrap: run as a single bootstrap server
//Run a container based on progrium/consul (restart docker if an error occurs)

After the container is up, we can access the Consul service through a browser to verify that it is working, using the Docker host's IP and the mapped port.

[root@docker01 ~]# docker inspect consul
//View container details (now look at the IP)
[root@docker01 ~]# curl 172.17.0.7 


Browser View

Modify the docker configuration files of docker02 and docker03

[root@docker02 ~]# vim /usr/lib/systemd/system/docker.service
#Line 13: change ExecStart to
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://192.168.1.11:8500 --cluster-advertise=ens33:2376
//Register this Docker daemon (listening on /var/run/docker.sock and tcp port 2376) with the Consul service at 192.168.1.11:8500, advertising itself via ens33:2376
[root@docker02 ~]# systemctl daemon-reload
[root@docker02 ~]# systemctl restart docker

Return to the Consul web interface in the browser and navigate to KEY/VALUE ---> DOCKER ---> NODES.



You can see nodes docker02 and docker03
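The same registration data can be read through Consul's HTTP key-value API instead of the browser (a sketch; docker/nodes is the key prefix shown in the KV browser above):

[root@docker01 ~]# curl -s 'http://192.168.1.11:8500/v1/kv/docker/nodes?keys'
//Returns the keys of the registered Docker nodes, i.e. docker02 and docker03 at this point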

Customize a network on docker02

[root@docker02 ~]# docker network create -d overlay ov_net1
//Create an overlay network
[root@docker02 ~]# docker network ls
//Check the networks


Check the network on docker03 and you can see that the ov_net1 network is also generated.

[root@docker03 ~]# docker network ls

Check it in the browser as well.

Modify the docker configuration file of docker01 in the same way; then check the network on docker01 and you can see that the ov_net1 network has also been generated.

[root@docker01 ~]# vim /usr/lib/systemd/system/docker.service
#Line 13: change ExecStart to
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://192.168.1.11:8500 --cluster-advertise=ens33:2376
//Register this Docker daemon with the Consul service at 192.168.1.11:8500, advertising itself via ens33:2376
[root@docker01 ~]# systemctl daemon-reload
[root@docker01 ~]# systemctl restart docker
//Restart docker
[root@docker01 ~]# docker network ls
//Check the network

Each of the three Docker hosts runs a container attached to the ov_net1 network to test whether the three containers can ping each other.

[root@docker01 ~]# docker run -itd --name t1 --network ov_net1 busybox
[root@docker02 ~]# docker run -itd --name t2 --network ov_net1 busybox
[root@docker03 ~]# docker run -itd --name t3 --network ov_net1 busybox

[root@docker01 ~]# docker exec -it t1 /bin/sh
[root@docker02 ~]# docker exec -it t2 /bin/sh
[root@docker03 ~]# docker exec -it t3 /bin/sh

/# ping 10.0.0.2

/# ping 10.0.0.3

/# ping 10.0.0.4
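To look up each container's overlay address before pinging, the following sketch can be run on the host that owns the container (the 10.0.0.x values are simply the addresses assigned in this example):

[root@docker01 ~]# docker inspect -f '{{(index .NetworkSettings.Networks "ov_net1").IPAddress}}' t1
//Prints t1's address on ov_net1, e.g. 10.0.0.2
[root@docker02 ~]# docker exec t2 ip addr show eth0
//eth0 is the interface on the overlay network; eth1 sits on docker_gwbridge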

For the network created on docker02, its SCOPE is global, which means that any Docker daemon registered with the Consul service can see our custom network.
Likewise, a container created on this network has two network interfaces: one on the overlay network and one on docker_gwbridge.
By default, the overlay interface is on the 10.0.0.0 segment. If you want docker01 to see this network too, simply add the same configuration to docker01's docker configuration file.
Because it is a user-defined network, it also has the usual custom-network features: containers can reach each other directly by container name, and if you specify a subnet when creating the network, containers attached to it can also be given fixed IP addresses.
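Both features can be sketched as follows (the network name ov_net4 and the 10.10.0.0/24 subnet are hypothetical examples):

[root@docker02 ~]# docker network create -d overlay --subnet 10.10.0.0/24 --gateway 10.10.0.1 ov_net4
[root@docker02 ~]# docker run -itd --name t4 --network ov_net4 --ip 10.10.0.10 busybox
//A fixed IP can only be assigned on a network created with a user-defined subnet
[root@docker03 ~]# docker run -itd --name t5 --network ov_net4 busybox
[root@docker03 ~]# docker exec t5 ping -c 3 t4
//Containers on the same user-defined network resolve each other by container name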

The above is the full content of this article. I hope it will be helpful for everyone’s study. I also hope that everyone will support 123WORDPRESS.COM.
