An example of using Lvs+Nginx cluster to build a high-concurrency architecture

A high-concurrency site must consider not only the stability of its backend services, but also whether the access layer can take in and withstand huge traffic.

1: For traffic access, an Lvs+Nginx cluster can be used; this approach can take in QPS on the order of millions.

2: Lvs load-balances the Nginx cluster, and Nginx+Tomcat implements the backend service cluster, covering the whole path from access-layer traffic handling to high-concurrency processing by the backend services.

1. Lvs Introduction

LVS (Linux Virtual Server) is a virtual server facility built into Linux. It load-balances multiple servers and works at layer 4 (the transport layer) of the network stack, providing high-performance, high-availability server clusters. It is stable and reliable: even if one server in the cluster stops working properly, the cluster as a whole is unaffected. Based on TCP/IP routing and forwarding, it is extremely stable and efficient.

An LVS cluster usually includes the following roles:

1: DS: Director Server, the virtual (dispatching) server responsible for scheduling

2: RS: Real Server, the real backend server that does the work

3: VIP: Virtual IP, the externally visible IP address that user requests target

4: DIP: Director Server IP, the IP of the DS

5: RIP: Real Server IP, the IP address of a backend server

6: CIP: Client IP, the IP address of the requesting client

2. Lvs load balancing modes

LVS provides three load balancing modes, each suited to different scenarios. The three modes are explained below.

2.1 NAT

After a user's request reaches the distributor (DS), the request packet is forwarded to a backend RS according to preset iptables rules. Each RS must set its gateway to the distributor's internal IP. Both the packets of the user's request and the packets returned to the user pass through the distributor, so the distributor becomes the bottleneck. In NAT mode only the distributor needs a public IP, which saves public IP resources.
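As a rough sketch of what NAT mode looks like in practice (the RS addresses here are hypothetical private IPs; the ipvsadm tool itself is introduced in section 3.2), IP forwarding is enabled on the distributor and the real servers are added with the -m (masquerading, i.e. NAT) flag:

echo 1 > /proc/sys/net/ipv4/ip_forward                # the DS must forward packets in NAT mode
ipvsadm -A -t 192.168.183.150:80 -s rr                # virtual service on the VIP
ipvsadm -a -t 192.168.183.150:80 -r 10.0.0.11:80 -m   # -m = NAT mode, hypothetical RS
ipvsadm -a -t 192.168.183.150:80 -r 10.0.0.12:80 -m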

2.2 TUN

This mode requires a public IP, which we call the VIP, to be configured on the distributor and on every RS. The destination IP requested by the client is the VIP. When the distributor receives a request packet, it processes the packet by encapsulating it with the RS's IP as the new destination, so the packet reaches the RS. The RS then unpacks the original packet, whose destination IP is the VIP; since the VIP is configured on every RS, the RS regards the packet as its own and processes it.
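A hedged sketch of the corresponding ipvsadm configuration (addresses reused from this article's environment): real servers are added with the -i flag, which selects IP tunneling; each RS must additionally support the ipip protocol and carry the VIP on its tunnel interface:

ipvsadm -A -t 192.168.183.150:80 -s rr
ipvsadm -a -t 192.168.183.150:80 -r 192.168.183.134:80 -i   # -i = tunneling mode
ipvsadm -a -t 192.168.183.150:80 -r 192.168.183.135:80 -i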

2.3 DR Mode

DR mode is similar to IP Tunnel mode, except that the distributor rewrites the destination MAC address of the packet to the MAC address of an RS. The real server then returns its response directly to the client.

This method avoids the overhead of IP tunnels and does not require the real servers in the cluster to support the IP tunneling protocol, but it does require that the scheduler and the real servers have network cards on the same physical network segment.

3. Lvs DR mode configuration

From the analysis above, DR mode offers relatively high performance and security, so most companies recommend it. We will use DR mode here to implement the Lvs+Nginx cluster.

We have prepared three machines. First, make sure Nginx is installed on all three.

1: 192.168.183.133 (DS) VIP 192.168.183.150, schedules and provides service to the outside world
2: 192.168.183.134 (RS) VIP 192.168.183.150, real server that processes business requests
3: 192.168.183.135 (RS) VIP 192.168.183.150, real server that processes business requests

VIP: 192.168.183.150
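Since we will later want to observe requests alternating between the two real servers, it helps to give each RS's Nginx a page that identifies it. A minimal sketch, assuming the default document root of a yum-installed Nginx, /usr/share/nginx/html (adjust the path if your installation differs):

echo "RS1: 192.168.183.134" > /usr/share/nginx/html/index.html   # run on 192.168.183.134
echo "RS2: 192.168.183.135" > /usr/share/nginx/html/index.html   # run on 192.168.183.135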

3.1 VIP Configuration

Stop and disable the NetworkManager service (do this on every machine):

systemctl stop NetworkManager
systemctl disable NetworkManager

Configure the virtual IP (on the DS, 192.168.183.133).

Create the file ifcfg-ens33:1 in /etc/sysconfig/network-scripts with the following content:

BOOTPROTO=static
DEVICE=ens33:1
ONBOOT=yes
IPADDR=192.168.183.150
NETMASK=255.255.255.0

Restart the network service:

service network restart

We can see that the virtual IP 192.168.183.150 has been added to the original network card as ens33:1.
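You can verify this with, for example:

ip addr show ens33   # the output should list 192.168.183.150 labeled ens33:1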

At the same time, the VIP also needs to be configured on 192.168.183.134 and 192.168.183.135, but on the real servers it is only used for returning response data and must not be directly reachable by users. For this we operate on ifcfg-lo.

In ifcfg-lo, IPADDR=127.0.0.1; 127.0.0.1 is the local loopback address, representing the device's local virtual interface, which by default is treated as an interface that never goes down. For the VIP copy we use NETMASK=255.255.255.255, a host mask, so the address covers only that single IP and does not affect routing.

On 192.168.183.134:
Copy ifcfg-lo to ifcfg-lo:1 and modify ifcfg-lo:1 so that lo:1 carries the VIP.
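A typical ifcfg-lo:1 for this setup is sketched below; the essential points, as explained above, are that IPADDR becomes the VIP and NETMASK is a /32 host mask (other fields may differ in your environment):

DEVICE=lo:1
IPADDR=192.168.183.150
NETMASK=255.255.255.255
ONBOOT=yes
NAME=loopback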

Refresh lo:

ifup lo

Checking the IPs, you can see that one more address, 192.168.183.150, now appears under lo.

Perform the same operations on 192.168.183.135.

3.2 LVS cluster management tool installation

ipvsadm is used to manage the lvs cluster and must be installed manually. It only needs to be installed on the DS.

Installation command:

yum install ipvsadm

View the version and the current rules:

ipvsadm -Ln

The output shows the IP Virtual Server version banner and the current (initially empty) rule list.

3.3 Address Resolution Protocol

Perform the following operations on 192.168.183.134 and 192.168.183.135.

The arp_ignore and arp_announce kernel parameters are both related to the ARP protocol; they control how the system answers incoming ARP requests and how it sends ARP requests and announcements. These parameters are very important, especially in the LVS DR scenario, where their configuration directly determines whether DR forwarding works.

arp_ignore controls whether the system responds when it receives an external ARP request (values 0~8; 2~8 are rarely used). A value of 1 means the system only answers if the target IP is configured on the interface that received the request. arp_announce controls which source IP the system uses when sending ARP messages; a value of 2 means it always chooses the most appropriate local address and, in particular, never advertises the VIP configured on lo.

Edit the configuration file /etc/sysctl.conf and append the following lines:

net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2

Refresh configuration:

sysctl -p

Add a route (if the route command is not recognized, install the net-tools package with yum install net-tools):

route add -host 192.168.183.150 dev lo:1

This adds a host route so that packets arriving for the VIP are handed to lo:1 for processing. (To keep the route from being lost after a reboot, add the above command to /etc/rc.local.)

After adding the route, you can check it with route -n and clearly see the new entry.

Apply the same configuration on 192.168.183.135 as well.

3.4 Cluster Configuration

Explanation of ipvsadm command:

ipvsadm -A: create a virtual service (cluster)
ipvsadm -E: modify a virtual service
ipvsadm -D: delete a virtual service
ipvsadm -C: clear all cluster rules
ipvsadm -R: restore saved cluster rules
ipvsadm -S: save the current cluster rules
ipvsadm -a: add an RS node
ipvsadm -e: modify an RS node
ipvsadm -d: delete an RS node

Add the cluster's TCP service address (external requests are handled via the VIP specified in this configuration):

ipvsadm -A -t 192.168.183.150:80 -s rr

Parameter Description:

  • -A: Add cluster configuration
  • -t: TCP request address (VIP)
  • -s: Load balancing algorithm

Load balancing algorithms:

rr: Round-robin. Requests are distributed to the RS nodes in turn, i.e. spread evenly across them. This algorithm is simple, but only suitable when the RS nodes have similar processing capacity.
wrr: Weighted round-robin. Tasks are allocated according to each RS's weight; RSs with higher weights get tasks first and receive more connections than those with lower weights, while RSs with equal weights receive equal numbers of connections (see the sketch after this list).
wlc: Weighted least-connection. If the weight of each RS is Wi and its current TCP connection count is Ti, the RS with the smallest Ti/Wi is chosen as the next RS.
dh: Destination hashing. The destination address is used as the key into a static hash table to find the RS to use.
sh: Source hashing. The source address is used as the key into a static hash table to find the RS to use.
lc: Least-connection. The IPVS table stores all active connections, and the load balancer sends new connection requests to the RS with the fewest current connections.
lblc: Locality-based least-connection. Requests for the same destination address are assigned to the same RS as long as that server is not overloaded; otherwise the request goes to the RS with the fewest connections, which is then considered first for subsequent assignments.
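For example, to send roughly three times as much traffic to one RS with weighted round-robin, a sketch using the -w (weight) option and the addresses from this article:

ipvsadm -A -t 192.168.183.150:80 -s wrr
ipvsadm -a -t 192.168.183.150:80 -r 192.168.183.134:80 -g -w 3   # weight 3
ipvsadm -a -t 192.168.183.150:80 -r 192.168.183.135:80 -g -w 1   # weight 1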

Configure the two RS nodes on the DS:

ipvsadm -a -t 192.168.183.150:80 -r 192.168.183.134:80 -g
ipvsadm -a -t 192.168.183.150:80 -r 192.168.183.135:80 -g

Parameter Description:

  • -a: Add a real server node to the cluster
  • -t: The cluster's VIP address
  • -r: The real server (RS) address
  • -g: Use DR (direct routing) mode

After adding the nodes, check with ipvsadm -Ln; you can see the two new RS entries.
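The listing should look similar to the following (a sketch; the version banner and counters will vary):

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.183.150:80 rr
  -> 192.168.183.134:80           Route   1      0          0
  -> 192.168.183.135:80           Route   1      0          0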

By default, LVS keeps connection state (client requests and TCP session data) in its tables for a while, so consecutive requests from the same client tend to stick to the same RS. To see the round-robin effect more clearly, we can shorten the TCP, TCP-FIN, and UDP timeouts to 2 seconds each:

ipvsadm --set 2 2 2

Now request http://192.168.183.150/

You will find that requests alternate between the two Nginx servers in round-robin fashion.
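If each RS serves the identifying page suggested in section 3, the alternation can be observed from a separate client machine, for example:

for i in $(seq 1 6); do curl -s http://192.168.183.150/; done
# expected output alternates between the RS1 and RS2 lines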

This concludes this example of using an Lvs+Nginx cluster to build a high-concurrency architecture. For more on Lvs and Nginx clustering, please search for previous articles on 123WORDPRESS.COM. I hope everyone will continue to support 123WORDPRESS.COM!
