This article walks through Nginx rate-limiting configuration with examples, from the simplest case to more complex ones, as a practical supplement to the rather brief official documentation. Nginx rate limiting is based on the leaky bucket algorithm; if you are curious about the algorithm, Wikipedia has a good description, but you do not need to understand it to follow this article.

Empty Bucket

Let's start with the simplest rate-limiting configuration:

limit_req_zone $binary_remote_addr zone=ip_limit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=ip_limit;
        proxy_pass http://login_upstream;
    }
}
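To make the leaky-bucket accounting concrete, here is a toy Python model. This is an illustrative sketch of the idea, not nginx's actual implementation; the class name and the simplified arithmetic are my own.

```python
class LeakyBucket:
    """Toy model of leaky-bucket admission (hypothetical, not nginx code).

    rate: requests leaked per second; burst: extra requests the bucket can hold.
    """
    def __init__(self, rate, burst=0):
        self.rate = rate
        self.burst = burst
        self.excess = 0.0   # requests currently sitting in the bucket
        self.last = None    # arrival time of the previous request

    def allow(self, now):
        if self.last is not None:
            # The bucket leaks at a constant rate between arrivals.
            self.excess = max(self.excess - (now - self.last) * self.rate, 0.0)
        self.last = now
        if self.excess > self.burst:    # bucket (queue) already full
            return False
        self.excess += 1.0              # the new request occupies one slot
        return True

# 10 requests arriving at the same instant against rate=10r/s, burst=0:
bucket = LeakyBucket(rate=10, burst=0)
results = [bucket.allow(now=0.0) for _ in range(10)]
print(results.count(True))   # prints 1
```

With burst=0 only the first of the 10 simultaneous requests is admitted; a request arriving 100 ms later would be admitted again, because one request has leaked out by then.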
The rate is limited to 10 requests per second. If 10 requests reach an idle nginx at the same moment, can they all be executed?

A leaky bucket leaks requests at a constant rate. What does a constant 10r/s mean? One request is leaked every 100 ms.

With this configuration the bucket size is zero (burst defaults to 0), so any request that cannot be leaked immediately is rejected. If 10 requests arrive at the same time, only one is executed and the other nine are rejected. That is not very friendly; in most business scenarios we would like all 10 requests to be served.

Burst

Let's change the configuration to solve the problem above:

limit_req_zone $binary_remote_addr zone=ip_limit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=ip_limit burst=12;
        proxy_pass http://login_upstream;
    }
}

burst=12 sets the size of the leaky bucket to 12.

Although it is called a leaky bucket, it is implemented as a FIFO queue that temporarily holds requests that cannot be executed yet. The leak rate is still one request per 100 ms, but concurrent requests that cannot be executed right away can be queued first; only when the queue is full are new requests rejected. The leaky bucket therefore not only limits the rate but also smooths out traffic peaks.

With this configuration, if 10 requests arrive at the same time, they are executed sequentially, one every 100 ms. They all get executed, but queuing greatly increases latency, which is still unacceptable in many scenarios.

NoDelay

Let's modify the configuration again to solve the latency introduced by queuing:

limit_req_zone $binary_remote_addr zone=ip_limit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=ip_limit burst=12 nodelay;
        proxy_pass http://login_upstream;
    }
}

nodelay moves forward the moment a request starts executing. Previously, execution waited until the request leaked out of the bucket.
Now there is no waiting: a request starts executing as soon as it enters the bucket. It is either executed immediately or rejected; requests are never delayed by rate limiting.

Because requests still leak out of the bucket at a constant rate and the bucket size is fixed, over time at most 10 requests per second are executed on average, so the purpose of rate limiting is still achieved.

The downside is that the limiting is no longer so uniform. With the configuration above, if 12 requests arrive at the same time, all 12 can be executed immediately, and subsequent requests can only enter the bucket at the leak rate, one every 100 ms. And if there are no requests for a while and the bucket empties, another 12 concurrent requests may be executed together.

In most cases this unevenness is not a big problem. Nginx does, however, provide a parameter to control the number of requests that may execute concurrently without delay: the delay parameter (available since nginx 1.15.7).

limit_req_zone $binary_remote_addr zone=ip_limit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=ip_limit burst=12 delay=4;
        proxy_pass http://login_upstream;
    }
}

delay=4 means delaying starts from the fifth request in the bucket.

By tuning the delay parameter, you can adjust how many requests are allowed to execute concurrently and make the traffic more even. For resource-intensive services it is still worth controlling this number.

Reference

http://nginx.org/en/docs/http/ngx_http_limit_req_module.html

Summary

That covers the Nginx rate-limiting configuration. I hope it is helpful to everyone; if you have any questions, leave me a message and I will reply in time. Thanks for supporting 123WORDPRESS.COM!
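As a closing illustration, the delay=4 behavior described above can be sketched numerically. This is a rough model of the documented semantics for 12 simultaneous arrivals; the function, constant names, and simplified queue-position arithmetic are my own assumptions, not nginx code.

```python
# Hypothetical scenario: 12 requests arrive at once, rate=10r/s, burst=12, delay=4.
RATE, BURST, DELAY = 10, 12, 4

def wait_time(position):
    """Delay (seconds) imposed on the request at this 1-based bucket position,
    per the delay= semantics: the first DELAY requests in the bucket run
    immediately, later ones wait for the bucket to leak down to them."""
    excess = position          # requests in the bucket including this one
    if excess <= DELAY:
        return 0.0
    return (excess - DELAY) / RATE

waits = [wait_time(p) for p in range(1, 13)]
print(waits[:5])   # [0.0, 0.0, 0.0, 0.0, 0.1]
```

The first four requests start at once; the fifth waits 100 ms, the sixth 200 ms, and so on up to 800 ms for the twelfth, so throughput beyond the first four is smoothed back to the configured 10r/s.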