A high-concurrency site must consider not only the stability of its backend services, but also whether the access layer can absorb huge traffic. The overall architecture works as follows:

1. Traffic access: an Lvs+Nginx cluster at the access layer can handle QPS in the millions.
2. Lvs load-balances a cluster of Nginx servers, and Nginx+Tomcat forms the backend service cluster, covering the whole path from access-layer traffic handling to high-concurrency processing in the backend services.

1. Lvs Introduction

LVS (Linux Virtual Server) is a virtual server built on Linux. It load-balances requests across multiple servers and works at layer 4 of the network stack, providing a high-performance, high-availability server cluster that is stable and reliable: even if one server in the cluster fails, the cluster as a whole keeps working. It forwards traffic at the TCP/IP layer, with extremely high stability and efficiency. An LVS cluster typically includes the following roles:

1. DS (Director Server): the front-end load balancer, also called the distributor or scheduler, which receives client requests.
2. RS (Real Server): the back-end servers that actually process the requests.
3. VIP (Virtual IP): the externally visible IP address that clients send requests to.
2. Lvs load balancing modes

LVS provides three load balancing modes; each suits different scenarios. Let's walk through them.

2.1 NAT

After a user's request reaches the distributor, the request packet is forwarded to a backend RS according to preset iptables rules. Each RS must set its gateway to the distributor's internal IP. Both the packets the user sends and the packets returned to the user pass through the distributor, so the distributor becomes the bottleneck. In NAT mode only the distributor needs a public IP, which saves public IP resources.

2.2 TUN

This mode requires a public IP, which we call the VIP, to be configured on the distributor and on every RS. The client sends requests to the VIP. When the distributor receives a request packet, it encapsulates the packet and changes the target IP to the IP of an RS, so the packet reaches that RS. The RS then unpacks the original packet, whose target IP is still the VIP; since the VIP is configured on every RS, the RS treats the packet as addressed to itself.

2.3 DR mode

DR is similar to IP tunneling, except that the distributor rewrites the packet's MAC address to the MAC address of the chosen RS, and the real server returns its response directly to the client. This avoids the overhead of IP tunnels and does not require the real servers to support the IP tunnel protocol, but it does require the scheduler and the real servers to have a network card on the same physical network segment.

3. Lvs DR mode configuration

From the analysis above, DR mode offers relatively high performance and security, so most companies recommend it. We will also configure DR mode here to implement an Lvs+Nginx cluster. We have prepared three machines; first, make sure Nginx is installed on all three:

1. 192.168.183.133 (DS), VIP 192.168.183.150: provides the external service
2. 192.168.183.134 (RS), VIP 192.168.183.150: real server that processes business requests
3. 192.168.183.135 (RS), VIP 192.168.183.150: real server that processes business requests

VIP: 192.168.183.150

3.1 VIP Configuration

Stop the network configuration manager (do this on every machine):

systemctl stop NetworkManager
systemctl disable NetworkManager

Configure the virtual IP on the DS (192.168.183.133). Create the file /etc/sysconfig/network-scripts/ifcfg-ens33:1 (the standard location on CentOS-style systems for the ens33 NIC used here) with the following content:

BOOTPROTO=static
DEVICE=ens33:1
ONBOOT=yes
IPADDR=192.168.183.150
NETMASK=255.255.255.0

Restart the network service:

service network restart

Checking the interfaces now shows the virtual IP ending in 150 added alongside the original network card.

At the same time, the two RS machines need the VIP bound to the loopback interface. 127.0.0.1 is the local loopback address; it does not belong to any classful address range and represents the device's local virtual interface, so by default it is considered an interface that never goes down. On each RS, create the file /etc/sysconfig/network-scripts/ifcfg-lo:1 with:

DEVICE=lo:1
IPADDR=192.168.183.150
NETMASK=255.255.255.255

Refresh lo:

ifup lo

Checking the IPs, the VIP ending in 150 now appears under lo. A consolidated sketch of these RS-side steps follows below.
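For reference, here is the RS-side loopback setup condensed into one shell session; a minimal sketch assuming a CentOS-style system where the network-scripts directory and ifup are available (run as root on both 192.168.183.134 and 192.168.183.135):

# Stop the network manager so it does not overwrite our configuration
systemctl stop NetworkManager
systemctl disable NetworkManager

# Bind the VIP to a loopback alias; the /32 netmask keeps the RS
# from answering for the whole subnet
cat > /etc/sysconfig/network-scripts/ifcfg-lo:1 <<'EOF'
DEVICE=lo:1
IPADDR=192.168.183.150
NETMASK=255.255.255.255
EOF

# Bring the alias up and confirm the VIP appears under lo
ifup lo
ip addr show lo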
3.2 LVS cluster management tool installation

ipvsadm is used to manage the lvs cluster and must be installed manually on the DS. Installation command:

yum install ipvsadm

Version check:

ipvsadm -Ln

This prints the ipvsadm version and the current (initially empty) rule list.

3.3 Address Resolution Protocol configuration

The arp_ignore and arp_announce kernel parameters are both related to the ARP protocol; they control how the system answers arp requests and what it announces when sending arp requests. These two parameters are very important, especially in the LVS DR scenario: their configuration directly determines whether DR forwarding works.

arp_ignore controls whether the system returns an arp response when it receives an external arp request (values 0~8; 2-8 are rarely used). Setting it to 1 makes an RS answer only for addresses configured on the receiving interface, so the RSs do not respond to arp requests for the VIP bound on lo. arp_announce = 2 makes the system always use the best local address when sending arp requests, so the RSs never announce the VIP themselves. On each RS, add the following to /etc/sysctl.conf:

net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2

Refresh the configuration:

sysctl -p

Add a route (if the route command is not recognized, install the relevant tools first, e.g. yum install net-tools):

route add -host 192.168.183.150 dev lo:1

This adds a host route so that packets for the VIP received by the RS are handed to lo:1 for processing. (So the route is not lost on reboot, add the above command to /etc/rc.local.) After adding it, you can verify with route -n.

The same configuration must also be applied on the other RS (192.168.183.135). A consolidated sketch of these RS-side ARP and routing steps follows below.
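Here is the same RS-side ARP and routing setup as a single sketch, assuming the six sysctl entries above have already been added to /etc/sysctl.conf (run as root on each RS):

# Apply the arp_ignore / arp_announce settings from /etc/sysctl.conf
sysctl -p

# Route VIP traffic arriving at this RS to the loopback alias
route add -host 192.168.183.150 dev lo:1

# Persist the route across reboots
echo 'route add -host 192.168.183.150 dev lo:1' >> /etc/rc.local

# Verify: the VIP should show up as a host route on lo
route -n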
3.4 Cluster Configuration

The cluster is configured on the DS with ipvsadm; each command is explained as it is used below.

Add the cluster TCP service address (external requests are handled by the VIP specified in this command):

ipvsadm -A -t 192.168.183.150:80 -s rr

Parameter description:

-A: add a virtual service (the cluster's service address)
-t: the TCP service address, in ip:port form
-s: the load balancing (scheduling) algorithm to use; rr means round robin
Load balancing algorithms supported by LVS include:

rr: round robin, distribute requests to the real servers in turn
wrr: weighted round robin, like rr but servers with a higher weight receive more requests
lc: least connections, send the request to the RS with the fewest active connections
wlc: weighted least connections
sh: source hashing, requests from the same client IP go to the same RS
dh: destination hashing

An example of switching algorithms is sketched below.
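As an illustration only (not a step in this setup), the scheduler of an existing virtual service can be changed in place with ipvsadm -E; for example, weighted round robin:

# Example only: switch the virtual service's scheduler to weighted round robin
ipvsadm -E -t 192.168.183.150:80 -s wrr
# Per-server weights are then given with -w when adding each real server, e.g. -w 2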
Configure the two RS nodes on the DS:

ipvsadm -a -t 192.168.183.150:80 -r 192.168.183.134:80 -g
ipvsadm -a -t 192.168.183.150:80 -r 192.168.183.135:80 -g

Parameter description:

-a: add a real server to a virtual service
-t: the TCP service (VIP:port) the real server belongs to
-r: the real server's address, in ip:port form
-g: use DR (direct routing) mode, the default (-i selects TUN and -m selects NAT)

A consolidated sketch of the full DS-side session follows below.
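Putting the DS-side commands together, a minimal sketch of the whole cluster configuration session (run as root on 192.168.183.133):

# Define the virtual service on the VIP with round-robin scheduling
ipvsadm -A -t 192.168.183.150:80 -s rr

# Attach both real servers in DR mode
ipvsadm -a -t 192.168.183.150:80 -r 192.168.183.134:80 -g
ipvsadm -a -t 192.168.183.150:80 -r 192.168.183.135:80 -g

# List the rules; both RS entries should appear under the virtual service
ipvsadm -Ln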
After adding the nodes, checking with ipvsadm -Ln shows the two new real-server entries. By default, LVS keeps state for established client connections for quite a while, so repeated requests from the same client keep landing on the same RS. To see the round-robin effect more clearly, shorten the connection timeouts to 2 seconds:

ipvsadm --set 2 2 2

The three values are the timeouts, in seconds, for TCP sessions, TCP sessions after receiving a FIN, and UDP packets.

Now request the VIP, http://192.168.183.150/, repeatedly: the requests alternate between the two Nginx servers in round-robin fashion, as the test sketch below shows.
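A quick way to watch the polling, assuming each Nginx serves a page that identifies its host (for example, a different index.html on each RS):

# Hit the VIP several times; responses should alternate between the two RSs
for i in 1 2 3 4; do
  curl -s http://192.168.183.150/
  sleep 3   # longer than the 2-second timeouts set above, so connection state expires
done

This concludes this article on using an Lvs+Nginx cluster to build a high-concurrency architecture.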