Docker provides several network drivers, such as bridge, host and overlay. Multiple networks of different types can exist on the same Docker host at the same time, and containers in different networks cannot communicate with each other. Docker implements this cross-network isolation (and the accompanying communication rules) on top of the iptables mechanism. The filter table of iptables has three built-in chains: INPUT, FORWARD and OUTPUT. Docker adds its own chains, reached by jumps from the FORWARD chain, to implement isolation and communication between bridge networks.

Basic Docker network configuration

When Docker starts, a docker0 virtual bridge is automatically created on the host. It is actually a Linux bridge and can be understood as a software switch: it forwards traffic between the network interfaces attached to it. Docker assigns the docker0 interface an address from a locally unoccupied private segment; a typical address is 172.17.0.1 with netmask 255.255.0.0. The network interfaces of containers started afterwards are automatically assigned addresses from the same segment (172.17.0.0/16).

When a Docker container is created, a veth pair is created along with it (a packet sent into one end of the pair is received at the other end). One end of the pair sits inside the container as eth0; the other end stays on the host, attached to the docker0 bridge, with a name starting with veth (for example, veth1). In this way the host can communicate with the containers and the containers can communicate with each other: Docker creates a virtual shared network between the host and all containers.

1. Docker chains in the filter table of iptables

In Docker 18.05.0 (May 2018) and later versions, the following four chains are provided: DOCKER, DOCKER-ISOLATION-STAGE-1, DOCKER-ISOLATION-STAGE-2 and DOCKER-USER.
Currently, Docker's default iptables settings for the host can be seen in the /etc/sysconfig/iptables file. The nat table (address translation) looks like this:

```
*nat
# Default policies of the nat table chains
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [4:272]
:POSTROUTING ACCEPT [4:272]
:DOCKER - [0:0]

# PREROUTING: -m uses an extension module to match packets. Packets arriving at
# this machine whose destination address type is LOCAL are handed to the DOCKER chain
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER

# OUTPUT: locally generated packets to LOCAL addresses (except loopback) go to DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER

# POSTROUTING: lets containers communicate with the external network. Packets whose
# source is 192.168.0.0/20 (i.e. generated by Docker containers) and that do NOT
# leave via docker0 have their source rewritten to the outgoing host interface
-A POSTROUTING -s 192.168.0.0/20 ! -o docker0 -j MASQUERADE

# DOCKER: packets entering through the docker0 interface return to the calling
# chain; -i specifies the interface a packet arrived on
-A DOCKER -i docker0 -j RETURN
```

The filter table contains the following chains and default policies:

```
*filter
:INPUT DROP [4:160]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [59:48132]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]

# FORWARD: all packets first pass through the DOCKER-USER chain ...
-A FORWARD -j DOCKER-USER
# ... then through the DOCKER-ISOLATION-STAGE-1 chain
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
# Replies belonging to established connections toward docker0 are accepted
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# Packets leaving via the docker0 interface are handed to the DOCKER chain
-A FORWARD -o docker0 -j DOCKER
# Packets entering via docker0 but not leaving via docker0 are allowed
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
# Packets both entering and leaving via docker0 are allowed
-A FORWARD -i docker0 -o docker0 -j ACCEPT

# DOCKER-ISOLATION-STAGE-1: packets entering from docker0 but not leaving via
# docker0 are handed to DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
# Everything else returns to the calling chain
-A DOCKER-ISOLATION-STAGE-1 -j RETURN

# DOCKER-ISOLATION-STAGE-2: packets (from another bridge) leaving via docker0 are dropped
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
# Everything else returns to the calling chain
-A DOCKER-ISOLATION-STAGE-2 -j RETURN

# DOCKER-USER: returns directly to the calling chain by default
-A DOCKER-USER -j RETURN
```

2. Docker's DOCKER chain

Only IP packets from the host to docker0 are processed here.

3. Docker's DOCKER-ISOLATION chains (isolating communication between different bridge networks)

As you can see, to isolate communication between different bridge networks, Docker implements the isolation in two stages. The DOCKER-ISOLATION-STAGE-1 chain matches packets whose source is a bridge network (docker0 by default); matching packets are then processed by the DOCKER-ISOLATION-STAGE-2 chain, and everything else returns to the parent FORWARD chain. The DOCKER-ISOLATION-STAGE-2 chain then matches packets whose destination is a bridge network (docker0 by default). A match at this point means the packet came from one bridge network and is headed into another; such packets are dropped directly. Unmatched packets return to the parent FORWARD chain for further processing.

4. Docker's DOCKER-USER chain

When Docker starts, the filtering rules in the DOCKER chain and the DOCKER-ISOLATION (now DOCKER-ISOLATION-STAGE-1) chain are loaded and take effect. You should never modify those rules directly. If you want to supplement Docker's filtering rules, it is strongly recommended to add them to the DOCKER-USER chain instead.
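Putting sections 2-4 together, the order in which a forwarded packet traverses these chains can be mimicked in a few lines of Python. This is a simplified simulation for clarity, not Docker code; the interface names, including the second bridge `br-custom`, are illustrative assumptions:

```python
def forward_verdict(in_iface: str, out_iface: str,
                    bridges=("docker0", "br-custom")) -> str:
    """Simplified walk of Docker's FORWARD chain for one forwarded packet."""
    # 1. DOCKER-USER: empty by default, it just RETURNs (users may add DROPs here).
    # 2. DOCKER-ISOLATION-STAGE-1/2: a packet entering from one bridge and
    #    leaving via a *different* bridge is dropped.
    if in_iface in bridges and out_iface != in_iface and out_iface in bridges:
        return "DROP"
    # 3. Traffic entering from or leaving via a bridge is then allowed
    #    (published-port traffic toward a bridge is handled by the DOCKER chain).
    if in_iface in bridges or out_iface in bridges:
        return "ACCEPT"
    return "CONTINUE"  # fall through to any later FORWARD rules

print(forward_verdict("docker0", "br-custom"))  # DROP: two different bridges
print(forward_verdict("docker0", "eth0"))       # ACCEPT: container to the outside
print(forward_verdict("docker0", "docker0"))    # ACCEPT: same-bridge traffic
```

The key point the sketch captures is ordering: isolation is decided before the accept rules, which is why two bridge networks can never reach each other even though each can reach the outside world.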
The filtering rules in the DOCKER-USER chain are evaluated before the default rules created by Docker (in the listing above, the FORWARD chain jumps to DOCKER-USER first), so they take precedence over Docker's default filtering rules in the DOCKER and DOCKER-ISOLATION chains. For example, after Docker starts, forwarding from any external source IP is allowed by default, so any Docker container instance on the host can be reached from any source address. If you want only specific IPs to reach the container instances, insert filtering rules into the DOCKER-USER chain so that they are evaluated before the DOCKER chain. Some examples (the rules are inserted with -I so that they sit before the chain's final RETURN; eth0 here stands for the host's external interface):

```
# Only allow 192.168.1.1 to access the containers
iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.1 -j DROP
# Only allow IPs in the 192.168.1.0/24 segment to access the containers
iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP
# Only allow IPs in the range 192.168.1.1-192.168.1.3 to access the containers (requires the iprange module)
iptables -I DOCKER-USER -m iprange -i eth0 ! --src-range 192.168.1.1-192.168.1.3 -j DROP
```

5. Docker rules in the nat table of iptables

So that containers can reach hosts outside the Docker host, Docker inserts a masquerading rule into the POSTROUTING chain of the nat table, for example:

```
iptables -t nat -A POSTROUTING -s 172.18.0.0/16 -j MASQUERADE
```

Limiting the rule to the bridge's own IP range keeps the rules for different bridges apart when there are multiple bridge networks on the same Docker host.

6. Preventing Docker from modifying iptables

When dockerd starts, the --iptables parameter defaults to true, which means dockerd is allowed to modify the host's iptables rules.
To disable this behavior, you have two options:
Set the startup parameter --iptables=false
Modify the configuration file /etc/docker/daemon.json to set "iptables": false (a JSON boolean, not the string "false"), then restart the daemon (systemctl restart docker) for the change to take effect

Additional knowledge: Docker's default bridge network mode

As mentioned above, Docker has several network modes: bridge, host, overlay, macvlan, none, plus third-party network plugins. Here we mainly introduce the default bridge mode.

1. Introduction

In networking, a bridge is a link-layer device that forwards traffic between network segments. A bridge can be a hardware device or a software device running in the host kernel. In Docker, bridge networks use a software bridge. Containers connected to the same bridge network can communicate with each other directly on all ports, and are isolated from containers not connected to that bridge network. In this way the bridge network manages connectivity and isolation for all containers on the same host. Docker's bridge driver automatically installs rules on the host so that containers on the same bridge network can communicate with each other while containers on different bridge networks are isolated from each other.

Bridge networking applies to containers run by the Docker daemon on the same host. For interaction between containers on different hosts, either use operating-system-level routing or use the overlay network driver.

When we start Docker with systemctl start docker, the default bridge docker0 is automatically created, and subsequently started containers connect to it. (The virbr0 bridge that also appears on this host is created by libvirt for KVM virtual machines, not by Docker.) We can view the local bridges; each time a container is created, a new veth interface is attached to docker0, connecting the container to the default bridge network.
```
[root@localhost hadoop]# brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242b47b550d       no
virbr0          8000.52540092a4f4       yes             virbr0-nic
[root@localhost hadoop]#
```

The virbr0 bridge, together with its virbr0-nic interface, belongs to libvirt's default virtual-machine network and is independent of docker0. Each time a container is started, a new veth interface is attached to docker0, and the address of docker0 is set as the container's gateway:

```
[root@localhost hadoop]# docker run --rm -tdi nvidia/cuda:9.0-base
[root@localhost hadoop]# docker ps
CONTAINER ID        IMAGE                  COMMAND       CREATED          STATUS          PORTS   NAMES
9f9c2b80062f        nvidia/cuda:9.0-base   "/bin/bash"   15 seconds ago   Up 14 seconds           quizzical_mcnulty
[root@localhost hadoop]# brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242b47b550d       no              vethabef17b
virbr0          8000.52540092a4f4       yes             virbr0-nic
[root@localhost hadoop]#
```

View the local network information with ifconfig -a:

```
[root@localhost hadoop]# ifconfig -a
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.1  netmask 255.255.240.0  broadcast 192.168.15.255
        inet6 fe80::42:b4ff:fe7b:550d  prefixlen 64  scopeid 0x20<link>
        ether 02:42:b4:7b:55:0d  txqueuelen 0  (Ethernet)
        RX packets 37018  bytes 2626776 (2.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 46634  bytes 89269512 (85.1 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.252.130  netmask 255.255.255.0  broadcast 192.168.252.255
        ether 00:25:90:e5:7f:20  txqueuelen 1000  (Ethernet)
        RX packets 14326014  bytes 17040043512 (15.8 GiB)
        RX errors 0  dropped 34  overruns 0  frame 0
        TX packets 10096394  bytes 3038002364 (2.8 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device memory 0xfb120000-fb13ffff

eth1: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 00:25:90:e5:7f:21  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device memory 0xfb100000-fb11ffff

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 3304  bytes 6908445 (6.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3304  bytes 6908445 (6.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

oray_vnc: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1200
        inet 172.1.225.211  netmask 255.0.0.0  broadcast 172.255.255.255
        ether 00:25:d2:e1:01:00  txqueuelen 500  (Ethernet)
        RX packets 1944668  bytes 227190815 (216.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2092320  bytes 2232228527 (2.0 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vethabef17b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::e47d:4eff:fe87:39d3  prefixlen 64  scopeid 0x20<link>
        ether e6:7d:4e:87:39:d3  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:92:a4:f4  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

virbr0-nic: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 52:54:00:92:a4:f4  txqueuelen 500  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```

Docker combines the bridge with routing rules to set up interaction between containers on the same host.
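The addressing visible in this output can be sketched with Python's standard ipaddress module: docker0 holds the first usable address of its segment (192.168.0.1 with netmask 255.255.240.0, i.e. 192.168.0.0/20, on this host) and acts as every container's gateway, while containers receive the following addresses. A small illustration (the container count is arbitrary):

```python
import ipaddress

# docker0's segment as shown by ifconfig: inet 192.168.0.1, netmask 255.255.240.0
bridge_net = ipaddress.ip_network("192.168.0.0/20")
hosts = bridge_net.hosts()

gateway = next(hosts)  # docker0 takes the first usable address and serves as gateway
containers = [str(next(hosts)) for _ in range(2)]  # containers get the following ones

print(gateway)     # 192.168.0.1
print(containers)  # ['192.168.0.2', '192.168.0.3']
```

These are exactly the container addresses that show up later in this article's docker inspect output.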
Under Docker's bridge network driver, external access to a container goes through port mapping on the host. A port mapping is specified when starting the container, for example mapping container port 80 to host port 8080; you can also bind the mapping to a specific address, where 0.0.0.0:8080 listens on all host interfaces (e.g. docker run -p 0.0.0.0:8080:80 -tdi nvidia/cuda:9.0-base).
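Conceptually, such a published port is a destination rewrite performed by a DNAT rule. A minimal Python sketch of that mapping, assuming (as in the example that follows) that the container's address is 192.168.0.3:

```python
# Published-port table as a DNAT rule expresses it:
# host port 8080 -> container 192.168.0.3, port 80 (assumed container address)
published = {8080: ("192.168.0.3", 80)}

def dnat(dst_ip: str, dst_port: int):
    """Rewrite the destination of an inbound TCP packet if its port is published."""
    # Unpublished ports fall through with the destination unchanged
    return published.get(dst_port, (dst_ip, dst_port))

print(dnat("192.168.252.130", 8080))  # ('192.168.0.3', 80)
print(dnat("192.168.252.130", 22))    # ('192.168.252.130', 22)
```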
Then check the NAT table.
You can see that the forwarding rules have been added:

```
[root@localhost hadoop]# iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 55 packets, 2470 bytes)
 pkts bytes target                     prot opt in     out      source            destination
 161K 8056K PREROUTING_direct          all  --  *      *        0.0.0.0/0         0.0.0.0/0
 161K 8056K PREROUTING_ZONES_SOURCE    all  --  *      *        0.0.0.0/0         0.0.0.0/0
 161K 8056K PREROUTING_ZONES           all  --  *      *        0.0.0.0/0         0.0.0.0/0
    0     0 DOCKER                     all  --  *      *        0.0.0.0/0         0.0.0.0/0         ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target                     prot opt in     out      source            destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target                     prot opt in     out      source            destination
 3442  258K OUTPUT_direct              all  --  *      *        0.0.0.0/0         0.0.0.0/0
    0     0 DOCKER                     all  --  *      *        0.0.0.0/0         !127.0.0.0/8      ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target                     prot opt in     out      source            destination
    0     0 MASQUERADE                 all  --  *      !docker0 192.168.0.0/20    0.0.0.0/0
    0     0 RETURN                     all  --  *      *        192.168.122.0/24  224.0.0.0/24
    0     0 RETURN                     all  --  *      *        192.168.122.0/24  255.255.255.255
    0     0 MASQUERADE                 tcp  --  *      *        192.168.122.0/24  !192.168.122.0/24  masq ports: 1024-65535
    0     0 MASQUERADE                 udp  --  *      *        192.168.122.0/24  !192.168.122.0/24  masq ports: 1024-65535
    0     0 MASQUERADE                 all  --  *      *        192.168.122.0/24  !192.168.122.0/24
 3442  258K POSTROUTING_direct         all  --  *      *        0.0.0.0/0         0.0.0.0/0
 3442  258K POSTROUTING_ZONES_SOURCE   all  --  *      *        0.0.0.0/0         0.0.0.0/0
 3442  258K POSTROUTING_ZONES          all  --  *      *        0.0.0.0/0         0.0.0.0/0
    0     0 MASQUERADE                 tcp  --  *      *        192.168.0.3       192.168.0.3      tcp dpt:80

Chain DOCKER (2 references)
 pkts bytes target                     prot opt in       out    source            destination
    0     0 RETURN                     all  --  docker0  *      0.0.0.0/0         0.0.0.0/0
    0     0 DNAT                       tcp  --  !docker0 *      0.0.0.0/0         0.0.0.0/0        tcp dpt:8080 to:192.168.0.3:80
```

Published ports default to the TCP protocol.

2. Inter-container access configuration

First start two containers, then enter each container and check its IP information.
```
[root@localhost hadoop]# docker ps
CONTAINER ID        IMAGE                  COMMAND       CREATED          STATUS          PORTS                  NAMES
462751a70444        nvidia/cuda:9.0-base   "/bin/bash"   17 minutes ago   Up 17 minutes   0.0.0.0:8080->80/tcp   sad_heyrovsky
9f9c2b80062f        nvidia/cuda:9.0-base   "/bin/bash"   41 minutes ago   Up 41 minutes                          quizzical_mcnulty
[root@localhost hadoop]#
```

Two containers are running here; run docker inspect with each container ID to view its IP.
Our two containers are 192.168.0.2 and 192.168.0.3. Enter one of the containers and ping the other: you will find that pinging by address works, e.g. ping 192.168.0.3.
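As a quick sanity check of why this direct access works: both addresses fall inside docker0's subnet (192.168.0.0/20 in the ifconfig output shown earlier), so packets between the two containers are switched on the bridge itself, without involving the gateway or NAT. A one-liner with the standard ipaddress module confirms the membership:

```python
import ipaddress

# docker0's subnet, taken from the ifconfig output shown earlier
bridge_net = ipaddress.ip_network("192.168.0.0/20")
container_a = ipaddress.ip_address("192.168.0.2")
container_b = ipaddress.ip_address("192.168.0.3")

# Same bridge subnet -> traffic is forwarded directly by docker0
print(container_a in bridge_net and container_b in bridge_net)  # True
```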
If, on the other hand, you give the other container an alias and try to ping it by name, you will find that it cannot be reached: the default bridge network does not provide automatic name resolution between containers.
As for the reason, and for how user-defined bridge networks solve this problem, the next article will continue the discussion.