A brief look at several scheduling algorithms for Nginx layer-7 load balancing

This article introduces several scheduling algorithms for Nginx layer-7 load balancing. The example configurations are detailed and should serve as a useful reference for study or work.

Nginx is a lightweight, high-performance web server and also an excellent load balancer and reverse proxy. It is often used as a layer-7 load balancer because it supports powerful regular-expression matching rules, separation of dynamic and static content, and URL rewriting, is simple to install and configure, and depends very little on network stability. On reasonable hardware it can usually sustain tens of thousands of concurrent connections; with good hardware and tuned kernel parameters and Nginx configuration, it can even exceed 100,000 concurrent connections.
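
As a rough illustration of the kind of Nginx-side tuning mentioned above, the directives below raise the worker and connection limits. This is only a sketch; the specific numbers are assumptions and need to be sized for the actual machine and workload.

worker_processes auto;          # one worker process per CPU core
worker_rlimit_nofile 65535;     # raise the open-file limit for each worker process

events {
    worker_connections 10240;   # maximum concurrent connections per worker
    use epoll;                  # efficient event notification on Linux
}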

The following are several commonly used scheduling algorithms for Nginx as a layer-7 load balancer, together with the business scenarios each one suits.

1. Round Robin (default scheduling algorithm)

Features: Requests are distributed to the backend servers one by one, in the order they arrive.
Applicable business scenarios: backend servers with identical hardware configurations and no special business requirements.

upstream backendserver {
    server 192.168.0.14:80 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 max_fails=2 fail_timeout=10s;
}
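
An upstream block only defines the server group; a server/location block with proxy_pass is still needed to send traffic to it. Below is a minimal sketch in which the listen port, server_name and forwarded headers are illustrative assumptions, not part of the original example.

server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_pass http://backendserver;                 # hand the request to the upstream group defined above
        proxy_set_header Host $host;                     # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;         # pass the real client IP to the backend
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}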

2. Weighted Round Robin

Features: Each server is assigned a weight, and requests are distributed in proportion to the weights; the higher the weight, the larger the share of requests a server receives. In the configuration below, 192.168.0.15 with weight 10 receives roughly twice as many requests as 192.168.0.14 with weight 5.
Applicable business scenarios: backend servers whose hardware processing capabilities differ.

upstream backendserver {
    server 192.168.0.14:80 weight=5 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 weight=10 max_fails=2 fail_timeout=10s;
}

3. ip_hash

Features: Requests are distributed according to a hash of the client IP, so each visitor is always sent to the same backend server; this solves the session-persistence problem.
Applicable business scenarios: systems that require account login and services that keep session state on the backend.

upstream backendserver {
    ip_hash;
    server 192.168.0.14:80 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 max_fails=2 fail_timeout=10s;
}
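
One detail worth noting from the Nginx documentation: with ip_hash, a backend that has to be taken out of rotation temporarily should be marked with the down parameter rather than deleted, so that the IP-to-server hash mapping of the remaining clients stays stable. A minimal sketch:

upstream backendserver {
    ip_hash;
    server 192.168.0.14:80 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 max_fails=2 fail_timeout=10s down;   # temporarily removed; hash mapping preserved
}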

4. Least connections (least_conn)

Features: Nginx tracks the number of active connections between the reverse proxy and each backend server, and sends each new request to the server with the fewest connections.

Applicable business scenarios: services where clients hold long-lived connections to the backend servers.

upstream backendserver {
    least_conn;
    server 192.168.0.14:80 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 max_fails=2 fail_timeout=10s;
}
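
least_conn also takes server weights into account, so it can be combined with the weight parameter when the backends differ in capacity. A sketch with illustrative weights:

upstream backendserver {
    least_conn;
    server 192.168.0.14:80 weight=1 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 weight=2 max_fails=2 fail_timeout=10s;   # allowed roughly twice as many active connections
}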

5. fair (requires compiling in the third-party module ngx_http_upstream_fair_module)

Features: Requests are distributed according to the response time of each backend server; servers with shorter response times are given priority.
Applicable business scenarios: services with requirements on response latency.

upstream backendserver {
    fair;
    server 192.168.0.14:80 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 max_fails=2 fail_timeout=10s;
}

6. url_hash (on older Nginx versions this required compiling in the third-party module ngx_http_upstream_hash_module; since Nginx 1.7.2 the hash directive is built into the standard ngx_http_upstream_module)

Features: Requests are distributed according to a hash of the requested URL, so the same URL is always sent to the same backend server.

Applicable business scenarios: most effective when the backend servers are cache servers, because it raises the cache hit rate.

upstream backendserver {
    hash $request_uri;
    server 192.168.0.14:80 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 max_fails=2 fail_timeout=10s;
}
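
On Nginx 1.7.2 and later, the built-in hash directive also accepts the consistent parameter, which enables ketama consistent hashing so that adding or removing a cache server remaps only a small portion of the keys. A minimal sketch:

upstream backendserver {
    hash $request_uri consistent;   # consistent (ketama) hashing of the request URI
    server 192.168.0.14:80 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 max_fails=2 fail_timeout=10s;
}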

The above is the full content of this article. I hope it will be helpful for everyone’s study. I also hope that everyone will support 123WORDPRESS.COM.
