Cross-host communication between docker containers: an overlay-based implementation method

Overlay network analysis

Built-in cross-host network communication has long been a highly anticipated feature of Docker. Before version 1.9, the community already offered many third-party tools and methods that tried to solve this problem, such as Macvlan, Pipework, Flannel, and Weave.

Although these solutions differ in many implementation details, their ideas fall into two categories: Layer 2 VLAN networks and overlay networks.

Simply put, the Layer 2 VLAN approach to cross-host communication transforms the original network architecture into a large, interconnected Layer 2 network and routes directly through specific network devices to achieve point-to-point communication between containers. This solution beats the overlay network in transmission efficiency, but it also has some inherent problems.

This method requires support from Layer 2 network devices, so it is less general and flexible than the overlay approach.

Since a switch typically supports only about 4,096 VLANs (the 12-bit VLAN ID allows 2^12 = 4,096 identifiers), this limits the scale of the container cluster and falls far short of the deployment requirements of public clouds or large private clouds. Moreover, deploying VLANs across a large data center lets the broadcast traffic of any single VLAN flood the entire data center, consuming a large amount of network bandwidth and making maintenance difficult.

In contrast, an overlay network encapsulates Layer 2 frames inside IP packets using an agreed-upon communication protocol, defining a new data format without changing the existing network infrastructure. This not only reuses the mature IP routing process for data distribution, but also uses an extended isolation identifier to break through the 4,096-VLAN limit: for example, VXLAN's 24-bit network identifier supports up to 2^24 ≈ 16.7 million isolated networks. When necessary, broadcast traffic can also be converted into multicast traffic to avoid broadcast flooding.

Therefore, the overlay network is currently the most mainstream solution for cross-node data transmission and routing between containers.

When containers on two different hosts communicate, they use the overlay network mode; alternatively, in host network mode, cross-host communication can be achieved directly through the hosts' physical IP addresses. The overlay driver creates a virtual network in which a container receives an address such as 10.0.2.3. In this mode there is an address that acts like a service gateway: the packet is encapsulated and forwarded to the address of the physical server, and finally reaches the other server through routing and switching.
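If you want to verify this encapsulation later, one quick way (a sketch, assuming the hosts' physical NIC is ens33 as in the environment below) is to capture the overlay traffic on the physical interface; Docker's overlay driver carries container frames in VXLAN, by default over UDP port 4789:

[root@cdh1 /]# tcpdump -i ens33 -nn udp port 4789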


Environment Introduction

hostname    IP             system version
cdh1        10.30.10.111   CentOS 7
cdh2        10.30.10.112   CentOS 7

consul installation configuration

To implement the overlay network, we need a service-discovery backend. Consul, for example, holds the network state together with an IP address pool, such as 10.0.2.0/24, from which containers on the overlay obtain their IP addresses. Once an address is assigned, the actual traffic between hosts travels over the physical NIC (ens33 here), and in this way cross-host communication is achieved.


Consul is deployed on cdh1 through docker. First, you need to modify the docker configuration on cdh1 and restart it.

[root@cdh1 /]# vim /etc/docker/daemon.json
# Add the following configuration:
#   "live-restore": true
[root@cdh1 /]# systemctl restart docker

"live-restore": true This configuration allows the container to continue running when the Docker daemon is stopped or restarted.

Download the consul image on cdh1 and start it

[root@cdh1 /]# docker pull consul
[root@cdh1 /]# docker run -d -p 8500:8500 -h consul --name consul consul
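Optionally, you can verify that the Consul agent is up using its standard HTTP status API; a non-empty quoted address in the response means a cluster leader has been elected:

[root@cdh1 /]# curl http://10.30.10.111:8500/v1/status/leader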

Modify the docker configuration on cdh1 and restart

[root@cdh1 /]# vim /etc/docker/daemon.json
# Add the following two lines of configuration:
#   "cluster-store": "consul://10.30.10.111:8500",
#   "cluster-advertise": "10.30.10.111:2375"
[root@cdh1 /]# systemctl restart docker

Modify the docker configuration on cdh2 and restart

[root@cdh2 /]# vim /etc/docker/daemon.json
# Add the following two lines of configuration:
#   "cluster-store": "consul://10.30.10.111:8500",
#   "cluster-advertise": "10.30.10.112:2375"
[root@cdh2 /]# systemctl restart docker

cluster-store specifies the address of the key-value store. Since the consul service runs on port 8500 of cdh1, the cluster-store value on both machines is consul://10.30.10.111:8500.
cluster-advertise specifies the address and port that the local docker daemon advertises to the other members of the cluster, so each machine specifies its own IP together with docker's TCP port 2375.
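Putting the pieces together, /etc/docker/daemon.json on cdh1 should now look roughly like this (a minimal sketch, assuming the file contains no other options; on cdh2 only the cluster-advertise IP differs):

{
    "live-restore": true,
    "cluster-store": "consul://10.30.10.111:8500",
    "cluster-advertise": "10.30.10.111:2375"
}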

At this time, you can open the consul UI at http://10.30.10.111:8500/. Under the Key/Value menu, in the docker/nodes directory, you should see the two docker nodes cdh1 and cdh2, which means consul is configured successfully.
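The same check can be done from the command line against Consul's KV HTTP API (an optional verification; docker/nodes is assumed here to be the key prefix under which the daemons register themselves):

[root@cdh1 /]# curl 'http://10.30.10.111:8500/v1/kv/docker/nodes?keys'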


Creating an overlay network

At this point we can create an overlay network. First, check the networks that currently exist on the node.

[root@cdh1 /]# docker network ls
NETWORK ID NAME DRIVER SCOPE
ab0f335423a1 bridge bridge local
b12e70a8c4e3 host host local
0dd357f3ecae none null local

Then create an overlay network on the docker node of cdh1. Because consul service discovery is running normally and the docker daemons of cdh1 and cdh2 are connected to it, the overlay network has global scope and only needs to be created once, on any one of the hosts.

[root@cdh1 /]# docker network create -d overlay my_overlay
cafa97c5cf9d30dd6cef08a5e9710074c828cea3fdd72edb45315fb4b1bfd84c
[root@cdh1 /]# docker network ls
NETWORK ID NAME DRIVER SCOPE
ab0f335423a1 bridge bridge local
b12e70a8c4e3 host host local
cafa97c5cf9d my_overlay overlay global
0dd357f3ecae none null local

At this point, you can see that the created overlay network is marked as global. If we check the networks on cdh2, we find that the overlay network has appeared there as well.

[root@cdh2 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
90d99658ee8f bridge bridge local
19f844200737 host host local
cafa97c5cf9d my_overlay overlay global
3986fe51b271 none null local
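If you want the overlay to draw container addresses from a specific pool, such as the 10.0.2.0/24 range mentioned earlier, you can pass an explicit subnet at creation time (a sketch; my_overlay2 is just a hypothetical name):

[root@cdh1 /]# docker network create -d overlay --subnet 10.0.2.0/24 my_overlay2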

Network Testing

After the creation is complete, we can create a docker container on each of cdh1 and cdh2, attach it to the overlay network, and test whether the two can communicate across hosts.

Create a container named master on cdh1 and view its IP

[root@cdh1 /]# docker run -itd -h master --name master --network my_overlay centos7_update /bin/bash
[root@cdh1 /]# docker inspect -f "{{ .NetworkSettings.Networks.my_overlay.IPAddress}}" master
10.0.0.2

Create a container named slaver on cdh2 and view its IP

[root@cdh2 ~]# docker run -itd -h slaver --name slaver --network my_overlay centos7_update /bin/bash
[root@cdh2 ~]# docker inspect -f "{{ .NetworkSettings.Networks.my_overlay.IPAddress}}" slaver
10.0.0.3
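Before pinging, you can optionally inspect the network on either host to confirm its subnet and the container attached there (the Containers section of the output should list the local endpoint):

[root@cdh1 /]# docker network inspect my_overlay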

At this time, enter the two containers and ping each other's IP to see if communication is successful.

[root@cdh1 ~]# docker exec -it master /bin/bash
[root@master /]# ping 10.0.0.3
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.587 ms
64 bytes from 10.0.0.3: icmp_seq=2 ttl=64 time=0.511 ms
64 bytes from 10.0.0.3: icmp_seq=3 ttl=64 time=0.431 ms
64 bytes from 10.0.0.3: icmp_seq=4 ttl=64 time=0.551 ms
64 bytes from 10.0.0.3: icmp_seq=5 ttl=64 time=0.424 ms
^C
--- 10.0.0.3 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.424/0.500/0.587/0.070 ms
[root@cdh2 ~]# docker exec -it slaver /bin/bash
[root@slaver /]# ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.500 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.410 ms
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=0.370 ms
^C
--- 10.0.0.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.370/0.444/0.500/0.062 ms

Successful communication!
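Note that since Docker's embedded DNS resolves container names on user-defined networks (which should apply to this overlay as well), the two containers can also reach each other by name instead of by IP:

[root@master /]# ping -c 2 slaver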

This concludes this article on the overlay-based method for cross-host communication between docker containers.
