The expected load is generally estimated during system design. When a system is exposed to the public network, malicious attacks or legitimate traffic bursts can overwhelm it, and rate limiting is one of the standard protection measures. Rate limiting simply means controlling the flow of traffic. This article covers two rate-limiting mechanisms in Nginx.

"Rate limiting" in everyday life

Rate limiting is nothing new; it is everywhere in daily life. A few examples:

Museums: limit the total number of visitors per day to protect the exhibits.

High-speed rail security checks: there are several checkpoints, passengers queue in turn, and staff admit people at the pace the checks allow. During holidays, more checkpoints can be added to increase processing capacity (horizontal scaling), and the waiting area can be lengthened (buffering pending tasks).

Banking: every customer takes a number first, and each window calls numbers in order. How fast each window moves depends on each customer's business; everyone else simply waits to be called. Near closing time, remaining customers are told to come back tomorrow (rejecting traffic).

Dam discharge: a dam controls the discharge rate through its gates (controlling processing speed).

In each of these examples, "rate limiting" lets the service provider deliver a stable service to its customers.

Nginx rate limiting

Nginx provides two rate-limiting mechanisms: one controls the request rate, the other controls the number of concurrent connections.

Controlling the rate

The ngx_http_limit_req_module module limits the request processing rate using the leaky bucket algorithm. The following example uses the limit_req_zone and limit_req directives to limit the request rate of a single IP.

Add the rate-limiting configuration to the http block of nginx.conf:
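Before looking at the configuration, the core idea of rate limiting at a fixed drain rate can be sketched in a few lines of Python. This is an illustrative model only, not Nginx's implementation; the `RateLimiter` class and its timestamps are invented for the example:

```python
class RateLimiter:
    """Minimal fixed-rate limiter: accepts at most one request per interval.

    Illustrative sketch only -- not Nginx's actual implementation.
    """

    def __init__(self, rate_per_sec):
        self.interval = 1.0 / rate_per_sec   # 10 r/s -> 0.1 s between requests
        self.last_allowed = float("-inf")    # time of the last accepted request

    def allow(self, now):
        # Reject any request arriving within `interval` of the last accepted one.
        if now - self.last_allowed >= self.interval:
            self.last_allowed = now
            return True
        return False

limiter = RateLimiter(10)        # analogous to rate=10r/s
print(limiter.allow(0.00))       # True  - first request is accepted
print(limiter.allow(0.05))       # False - only 50 ms since the last accepted one
print(limiter.allow(0.10))       # True  - a full 100 ms has elapsed
```

This mirrors the behavior described below: 10r/s does not mean "any 10 requests per second" but "one request per 100 ms slot".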
```nginx
http {
    limit_req_zone $binary_remote_addr zone=myRateLimit:10m rate=10r/s;
}
```

Then apply it in a server block with the limit_req directive:

```nginx
server {
    location / {
        limit_req zone=myRateLimit;
        proxy_pass http://my_upstream;
    }
}
```

key: defines the object being limited. $binary_remote_addr here means limiting by remote_addr (client IP); the binary_ form is used to reduce memory usage.

zone: defines a shared memory zone that stores access state. myRateLimit:10m declares a 10 MB zone named myRateLimit. 1 MB can hold state for about 16,000 IP addresses, so 10 MB can hold about 160,000.

rate: sets the maximum request rate. rate=10r/s means at most 10 requests are processed per second. Nginx actually tracks requests at millisecond granularity, so 10r/s really means one request every 100 ms: if another request arrives within 100 ms of the previous one being accepted, it is rejected.

Handling traffic bursts

The example above caps the rate at 10r/s. If normal traffic suddenly spikes, requests beyond the limit are rejected and the burst cannot be served. The burst parameter addresses this:

```nginx
server {
    location / {
        limit_req zone=myRateLimit burst=20;
        proxy_pass http://my_upstream;
    }
}
```

burst is the number of extra requests that can be queued once the configured rate is exceeded. With rate=10r/s, each second is divided into 10 slots, i.e. one request every 100 ms. With burst=20, if 21 requests arrive at the same instant, Nginx processes the first one immediately, puts the remaining 20 into the queue, and then takes one request from the queue every 100 ms.
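The queueing behavior just described can be simulated to check the arithmetic. This is a simplified model under the stated assumptions (one slot per 100 ms, a queue of `burst` slots); the function name and its rejection rule are invented for the illustration:

```python
def simulate_burst(arrival_ms, rate=10, burst=20):
    """Simulate a burst queue in the spirit of limit_req (sketch only).

    `arrival_ms` lists arrival times in milliseconds. For each request,
    return the time it starts being processed, or "rejected" (a 503 in
    Nginx) when more than `burst` requests are already waiting.
    """
    interval = 1000 // rate        # 10 r/s -> one processing slot every 100 ms
    next_free = 0                  # earliest time (ms) the next slot opens
    results = []
    for t in arrival_ms:
        queued = max(0, next_free - t) // interval   # requests already waiting
        if queued > burst:
            results.append("rejected")
            continue
        start = max(t, next_free)
        results.append(start)
        next_free = start + interval
    return results

# 22 requests arriving at the same instant with burst=20:
res = simulate_burst([0] * 22)
print(res[0])    # 0        - first request processed immediately
print(res[20])   # 2000     - the 20th queued request runs after 20 * 100 ms
print(res[21])   # rejected - exceeds the burst capacity
```

The simulation reproduces the text: 21 simultaneous requests are all served (the last one 2 seconds late), and the 22nd is rejected.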
If more than 21 requests arrive, the excess requests are rejected and receive a 503.

However, burst alone is rarely practical. Suppose burst=50 with rate still 10r/s. Although the 50 queued requests are processed one every 100 ms, the 50th request must wait 50 * 100 ms = 5 s. Such a long wait is naturally unacceptable, which is why burst is usually combined with nodelay:

```nginx
server {
    location / {
        limit_req zone=myRateLimit burst=20 nodelay;
        proxy_pass http://my_upstream;
    }
}
```

nodelay modifies the burst parameter. burst=20 nodelay means those 20 burst requests are processed immediately rather than queued, special handling for a special situation. Even so, subsequent requests are not processed immediately: the 20 burst requests occupy 20 slots in the queue, and even after a request finishes, its slot is only released at a rate of one per 100 ms. The result is a stable processing rate that can still absorb sudden traffic bursts.

Limiting the number of connections

The ngx_http_limit_conn_module module limits the number of connections via the limit_conn_zone and limit_conn directives. The following is the official Nginx example:

```nginx
limit_conn_zone $binary_remote_addr zone=perip:10m;
limit_conn_zone $server_name zone=perserver:10m;

server {
    ...
    limit_conn perip 10;
    limit_conn perserver 100;
}
```

limit_conn perip 10 uses the key $binary_remote_addr, limiting a single IP to at most 10 concurrent connections.

limit_conn perserver 100 uses the key $server_name, limiting the total number of concurrent connections the virtual host (server) handles at once.

Note that a connection is counted only after Nginx has fully read the request header.
Setting up a whitelist

Rate limiting mainly targets external access. Intranet traffic is relatively safe and usually does not need limiting; a simple whitelist handles this. It can be built with two Nginx utility modules, ngx_http_geo_module and ngx_http_map_module. Add the whitelist to the http block of nginx.conf:

```nginx
geo $limit {
    default          1;
    10.0.0.0/8       0;
    192.168.0.0/24   0;
    172.20.0.35      0;
}

map $limit $limit_key {
    0 "";
    1 $binary_remote_addr;
}

limit_req_zone $limit_key zone=myRateLimit:10m rate=10r/s;
```

geo returns 0 for whitelisted addresses (subnets or single IPs) and 1 for everything else. map then converts $limit into $limit_key: if $limit is 0 (whitelisted), it yields an empty string; if 1, it yields the client's actual address. The limit_req_zone key is no longer $binary_remote_addr but $limit_key, and requests with an empty key are not counted, so whitelisted clients bypass the rate limit entirely.

Further reading

Beyond request limiting, ngx_http_core_module can also limit the data transfer rate (commonly thought of as download speed). For example:

```nginx
location /flv/ {
    flv;
    limit_rate_after 20m;
    limit_rate       100k;
}
```

This limit applies per request: the client downloads the first 20 MB at full speed, after which the transfer is capped at 100 KB/s.

That concludes this brief look at the two rate-limiting methods in Nginx. I hope it serves as a useful reference, and I hope you will continue to support 123WORDPRESS.COM.
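The geo + map lookup above amounts to a simple function from client IP to rate-limit key. The following sketch reproduces that mapping in Python using the standard ipaddress module; the `limit_key` function name is invented for the illustration, and the networks mirror the geo block above:

```python
import ipaddress

# Whitelisted networks, mirroring the geo block (illustrative only).
WHITELIST = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.0.0/24"),
    ipaddress.ip_network("172.20.0.35/32"),
]

def limit_key(client_ip):
    """Return the rate-limit key for a client IP.

    Whitelisted addresses map to "" (empty key -> not rate limited),
    everything else maps to the address itself, just as the
    geo + map blocks produce $limit_key.
    """
    addr = ipaddress.ip_address(client_ip)
    if any(addr in net for net in WHITELIST):
        return ""                  # empty key: request is not counted
    return client_ip               # stands in for $binary_remote_addr

print(limit_key("10.1.2.3"))       # "" - intranet address, exempt
print(limit_key("8.8.8.8"))        # "8.8.8.8" - subject to rate limiting
```

A single host like 172.20.0.35 is just a /32 network, which is why geo accepts bare IPs and CIDR ranges interchangeably.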