High-concurrency systems have three powerful tools: caching, degradation, and rate limiting. The purpose of rate limiting is to protect the system by capping the rate of concurrent access/requests. Once the limit is reached, the system can deny service (redirect to an error page), queue requests (flash sales), or degrade (return backup or default data).

Common rate limiting approaches in high-concurrency systems include: limiting the total number of concurrent connections (e.g., a database connection pool), limiting the number of instantaneous concurrent connections (nginx's limit_conn module), and limiting the average rate within a time window (nginx's limit_req module, which limits the average rate per second). Beyond these, you can also throttle based on the number of network connections, network traffic, CPU or memory load, and so on.

1. Rate limiting algorithms

The simplest and crudest rate limiting algorithm is the counter method; the more commonly used ones are the leaky bucket algorithm and the token bucket algorithm.

1.1 Counter

The counter method is the simplest rate limiting algorithm to implement. For example, suppose we stipulate that interface A may be visited at most 100 times per minute. We can set a counter with a validity period of 1 minute (that is, the counter is reset to 0 every minute). Every time a request arrives, the counter increases by 1; if the counter exceeds 100, further requests are rejected.

Although this algorithm is simple, it has a fatal flaw: the boundary problem. As shown in the figure below, if 100 requests arrive just before 1:00, the counter is reset at 1:00, and another 100 requests arrive just after 1:00, the counter never exceeds 100 and no request is blocked.
However, 200 requests have arrived within this short period, far exceeding the limit of 100.

1.2 Leaky bucket algorithm

As shown in the figure below, picture a leaky bucket of fixed capacity from which water drips out at a constant, fixed rate; if the bucket is empty, no water flows out. Water may flow into the bucket at an arbitrary rate, but if the inflow exceeds the bucket's capacity, the excess overflows (is discarded). The leaky bucket algorithm therefore inherently enforces a constant output rate and can be used for traffic shaping and rate limiting.

1.3 Token bucket algorithm

A token bucket is a bucket that stores a fixed number of tokens. Tokens are added to the bucket at a fixed rate r, and the bucket holds at most b tokens; when the bucket is full, newly added tokens are discarded. When a request arrives, it tries to take a token from the bucket: if one is available, the request proceeds; if not, the request is queued or discarded.

Note the difference: the outflow rate of the leaky bucket algorithm is always constant (or 0), while the token bucket allows bursts, so its outflow rate can momentarily exceed r.

2. nginx basics

Nginx has two main rate limiting modules: limiting by number of connections (ngx_http_limit_conn_module) and limiting by request rate (ngx_http_limit_req_module). Before studying these modules, you also need to understand how nginx processes HTTP requests and how its event loop works.

2.1 HTTP request processing

Nginx divides HTTP request processing into 11 phases. Most HTTP modules register their own handler with a particular phase (four of the phases do not accept custom handlers). When processing an HTTP request, nginx calls all registered handlers one by one.
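To make the phase mechanism concrete before looking at the real phase list, here is a toy model of handlers being called phase by phase. This is my own illustrative sketch, not nginx's code: the real engine uses per-phase checker functions, and phases can jump (e.g., POST_REWRITE back to FIND_CONFIG); the `limit_handler` below is a hypothetical stand-in for a PREACCESS-phase handler.

```python
# Toy model of nginx's phase dispatch (heavily simplified: the real engine
# uses per-phase checker functions and can jump between phases).
OK, DECLINED, BUSY = "OK", "DECLINED", "BUSY"

def run_phases(request, phases):
    """Call the registered handlers phase by phase; a handler returning BUSY
    (as a rate-limit handler does when rejecting) terminates the request."""
    for name, handlers in phases:
        for handler in handlers:
            rc = handler(request)
            if rc == BUSY:
                return 503      # rejected, e.g. rate limited
            if rc == OK:
                break           # this phase is done, move on to the next
            # DECLINED: try the next handler of the same phase
    return 200

# A hypothetical PREACCESS handler in the spirit of ngx_http_limit_req_handler:
def limit_handler(request):
    return BUSY if request.get("over_limit") else DECLINED

phases = [("PREACCESS", [limit_handler]),
          ("CONTENT", [lambda r: OK])]
```

With this model, an unremarkable request walks through every phase and gets a 200, while a request flagged as over the limit is cut short in PREACCESS with a 503.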
```c
typedef enum {
    NGX_HTTP_POST_READ_PHASE = 0,   // currently only the realip module registers a handler here
                                    // (useful when nginx sits behind a proxy; the backend uses it
                                    // to obtain the client's original IP)
    NGX_HTTP_SERVER_REWRITE_PHASE,  // rewrite directives configured in a server block (URL rewriting)
    NGX_HTTP_FIND_CONFIG_PHASE,     // find the matching location; no custom handlers allowed
    NGX_HTTP_REWRITE_PHASE,         // rewrite directives configured in a location block (URL rewriting)
    NGX_HTTP_POST_REWRITE_PHASE,    // if the URL was rewritten, jump back to FIND_CONFIG;
                                    // no custom handlers allowed
    NGX_HTTP_PREACCESS_PHASE,       // access control; the rate limiting modules register their handlers here
    NGX_HTTP_ACCESS_PHASE,          // access permission control
    NGX_HTTP_POST_ACCESS_PHASE,     // act on the result of the access phase; no custom handlers allowed
    NGX_HTTP_TRY_FILES_PHASE,       // only entered when try_files is configured; no custom handlers allowed
    NGX_HTTP_CONTENT_PHASE,         // content generation; return the response to the client
    NGX_HTTP_LOG_PHASE              // logging
} ngx_http_phases;
```

Nginx uses the structure ngx_module_s to represent a module; its ctx field points to the module's context structure. The HTTP module context structure looks like this (all of its fields are function pointers):

```c
typedef struct {
    ngx_int_t   (*preconfiguration)(ngx_conf_t *cf);
    ngx_int_t   (*postconfiguration)(ngx_conf_t *cf);  // registers handlers with the corresponding phase
    void       *(*create_main_conf)(ngx_conf_t *cf);   // main configuration in the http block
    char       *(*init_main_conf)(ngx_conf_t *cf, void *conf);
    void       *(*create_srv_conf)(ngx_conf_t *cf);    // server configuration
    char       *(*merge_srv_conf)(ngx_conf_t *cf, void *prev, void *conf);
    void       *(*create_loc_conf)(ngx_conf_t *cf);    // location configuration
    char       *(*merge_loc_conf)(ngx_conf_t *cf, void *prev, void *conf);
} ngx_http_module_t;
```

Taking ngx_http_limit_req_module as an example, its postconfiguration method is (simplified):

```c
static ngx_int_t
ngx_http_limit_req_init(ngx_conf_t *cf)
{
    h = ngx_array_push(&cmcf->phases[NGX_HTTP_PREACCESS_PHASE].handlers);

    // the module's rate limiting method; while processing an HTTP request, nginx calls
    // this handler to decide whether to continue processing or reject the request
    *h = ngx_http_limit_req_handler;

    return NGX_OK;
}
```

2.2 A brief introduction to nginx event processing

Assume nginx uses epoll. Nginx registers every fd it cares about with epoll; the method is declared as follows:

```c
static ngx_int_t ngx_epoll_add_event(ngx_event_t *ev, ngx_int_t event, ngx_uint_t flags);
```

The first parameter points to an ngx_event_t structure, which represents a read or write event of interest. Nginx may set a timeout timer on an event to handle event timeouts; the (abridged) definition is:

```c
struct ngx_event_s {
    ngx_event_handler_pt  handler;     // function pointer: the event handler
    ngx_rbtree_node_t     timer;       // timeout timer, stored in a red-black tree
                                       // (the node key is the event's expiry time)
    unsigned              timedout:1;  // records whether the event has timed out
};
```

Nginx generally calls epoll_wait in a loop to monitor all fds and handle the read and write events that occur. epoll_wait is a blocking call whose last parameter is a timeout: if no event occurs within that time, the call returns. When setting this timeout, nginx looks up the node closest to expiry in the timer red-black tree mentioned above and uses it as the epoll_wait timeout, as in the following code:

```c
ngx_msec_t
ngx_event_find_timer(void)
{
    node = ngx_rbtree_min(root, sentinel);

    timer = (ngx_msec_int_t) (node->key - ngx_current_msec);

    return (ngx_msec_t) (timer > 0 ? timer : 0);
}
```

At the end of each loop iteration, nginx also checks the red-black tree for expired events; for each expired event it sets timedout = 1 and calls the event's handler:

```c
void
ngx_event_expire_timers(void)
{
    for ( ;; ) {
        node = ngx_rbtree_min(root, sentinel);

        if ((ngx_msec_int_t) (node->key - ngx_current_msec) <= 0) {  // the event has timed out
            ev = (ngx_event_t *) ((char *) node - offsetof(ngx_event_t, timer));
            ev->timedout = 1;
            ev->handler(ev);
            continue;
        }

        break;
    }
}
```

This is how nginx handles both socket events and timed events.

3. Analysis of the ngx_http_limit_req_module module

The ngx_http_limit_req_module module limits the request rate, that is, the rate of a user's requests within a given period, using the leaky bucket algorithm.

3.1 Configuration directives

The ngx_http_limit_req_module module provides the following directives for configuring a rate limiting policy:

```c
// each directive mainly contains two fields: the name and the method that parses the configuration
static ngx_command_t ngx_http_limit_req_commands[] = {

    // typical usage: limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
    // $binary_remote_addr is the remote client IP;
    // zone configures a named storage area (space is needed to record each client's access
    //   rate; entries are evicted with an LRU policy when the space limit is hit; note that
    //   this area is allocated in shared memory and is accessible to all worker processes)
    // rate is the rate limit, here 1 qps
    { ngx_string("limit_req_zone"),
      ngx_http_limit_req_zone, },

    // usage: limit_req zone=one burst=5 nodelay;
    // zone selects which shared area to use;
    // burst handles traffic bursts: it is the maximum number of queued requests. When a
    //   client exceeds the configured rate, its requests are queued, and only requests
    //   beyond burst are rejected outright.
    // nodelay must be used together with burst: queued requests are then served
    //   immediately; otherwise, if queued requests were still served at the limited rate,
    //   the client might have timed out by the time the server finishes
    { ngx_string("limit_req"),
      ngx_http_limit_req, },

    // log level when a request is limited; usage: limit_req_log_level info | notice | warn | error;
    { ngx_string("limit_req_log_level"),
      ngx_conf_set_enum_slot, },

    // status code returned to the client when a request is limited; usage: limit_req_status 503
    { ngx_string("limit_req_status"),
      ngx_conf_set_num_slot, },
};
```

Note: $binary_remote_addr is a variable provided by nginx that can be used directly in configuration files. Nginx provides many such variables; see the ngx_http_core_variables array in ngx_http_variables.c:

```c
static ngx_http_variable_t ngx_http_core_variables[] = {

    { ngx_string("http_host"), NULL, ngx_http_variable_header,
      offsetof(ngx_http_request_t, headers_in.host), 0, 0 },

    { ngx_string("http_user_agent"), NULL, ngx_http_variable_header,
      offsetof(ngx_http_request_t, headers_in.user_agent), 0, 0 },

    …………
};
```

3.2 Source code analysis

During postconfiguration, ngx_http_limit_req_module registers the ngx_http_limit_req_handler method with the NGX_HTTP_PREACCESS_PHASE phase of HTTP processing. ngx_http_limit_req_handler runs the leaky bucket algorithm to decide whether the configured rate is exceeded, and then discards, queues, or passes the request.

When a user makes their first request, a new record is created (mainly storing an access count and access time), keyed by the hash of the client IP address (for the configuration $binary_remote_addr), and inserted into a red-black tree (for fast lookup) as well as an LRU queue (when storage runs out, records are evicted, always deleted from the tail). On each subsequent request, the record is looked up in the red-black tree, updated, and moved to the head of the LRU queue.

3.2.1 Data structures

limit_req_zone configures the storage area (name and size), the limit rate, and the limit variable (client IP, etc.) needed by the algorithm. The corresponding structure is:

```c
typedef struct {
    ngx_http_limit_req_shctx_t  *sh;
    ngx_slab_pool_t             *shpool;  // memory pool
    ngx_uint_t                   rate;    // limit rate (qps, stored multiplied by 1000)
    ngx_int_t                    index;   // variable index (nginx provides a set of variables;
                                          // this is the index of the user-configured limit variable)
    ngx_str_t                    var;     // limit variable name
    ngx_http_limit_req_node_t   *node;
} ngx_http_limit_req_ctx_t;

// the shared storage area
struct ngx_shm_zone_s {
    void                 *data;  // points to the ngx_http_limit_req_ctx_t structure
    ngx_shm_t             shm;   // shared memory
    ngx_shm_zone_init_pt  init;  // initialization function pointer
    void                 *tag;   // points to the ngx_http_limit_req_module structure
};
```

limit_req configures which storage area to use, the queue size, and whether queued requests are served immediately:

```c
typedef struct {
    ngx_shm_zone_t  *shm_zone;  // shared storage area
    ngx_uint_t       burst;     // queue size
    ngx_uint_t       nodelay;   // whether queued requests are served immediately; used with
                                // burst (if set, queued requests are processed at once
                                // instead of at the limit rate)
} ngx_http_limit_req_limit_t;
```

As mentioned earlier, user access records are stored in both the red-black tree and the LRU queue.
The structures are as follows:

```c
// record structure
typedef struct {
    u_char       color;
    u_char       dummy;
    u_short      len;      // data length
    ngx_queue_t  queue;
    ngx_msec_t   last;     // last access time
    ngx_uint_t   excess;   // number of requests currently waiting to be processed
                           // (nginx uses this to implement the leaky bucket algorithm)
    ngx_uint_t   count;    // total number of requests for this record
    u_char       data[1];  // data content (lookup is first by key (the hash),
                           // then by comparing the data for equality)
} ngx_http_limit_req_node_t;

// red-black tree node; the key is the hash of the user-configured limit variable
struct ngx_rbtree_node_s {
    ngx_rbtree_key_t   key;
    ngx_rbtree_node_t *left;
    ngx_rbtree_node_t *right;
    ngx_rbtree_node_t *parent;
    u_char             color;
    u_char             data;
};

typedef struct {
    ngx_rbtree_t       rbtree;    // red-black tree
    ngx_rbtree_node_t  sentinel;  // NIL node
    ngx_queue_t        queue;     // LRU queue
} ngx_http_limit_req_shctx_t;

// the queue node only has prev and next pointers
struct ngx_queue_s {
    ngx_queue_t *prev;
    ngx_queue_t *next;
};
```

Observation 1: the ngx_http_limit_req_node_t records form a doubly linked list through the prev and next pointers, implementing the LRU queue. The most recently accessed node is always inserted at the head of the list, and nodes are evicted from the tail:

```c
ngx_http_limit_req_ctx_t *ctx;
ngx_queue_t              *q;

q = ngx_queue_last(&ctx->sh->queue);
lr = ngx_queue_data(q, ngx_http_limit_req_node_t, queue);

// ngx_queue_data recovers the address of the enclosing ngx_http_limit_req_node_t
// from an ngx_queue_t pointer; it is implemented as follows:
#define ngx_queue_data(q, type, link) \
    (type *) ((u_char *) q - offsetof(type, link))
// the queue field's address minus its offset within the structure
// is the structure's address
```

Observation 2: the limiting algorithm first looks up the red-black tree node by key to find the corresponding record. How is the red-black tree node associated with the ngx_http_limit_req_node_t record? The ngx_http_limit_req_module contains the following code:

```c
// allocate memory for the new record; compute the required size
size = offsetof(ngx_rbtree_node_t, color)
       + offsetof(ngx_http_limit_req_node_t, data)
       + len;

node = ngx_slab_alloc_locked(ctx->shpool, size);

node->key = hash;

// color is of type u_char; why can its address be cast to an
// ngx_http_limit_req_node_t pointer?
lr = (ngx_http_limit_req_node_t *) &node->color;

lr->len = (u_char) len;
lr->excess = 0;

ngx_memcpy(lr->data, data, len);

ngx_rbtree_insert(&ctx->sh->rbtree, node);
ngx_queue_insert_head(&ctx->sh->queue, &lr->queue);
```

This code shows that the color and data fields of ngx_rbtree_node_s are not used as declared: the declared form of the structure differs from its final storage form. Each record is actually stored as an ngx_rbtree_node_t header (up to color) immediately followed by an ngx_http_limit_req_node_t.

3.2.2 The rate limiting algorithm

As mentioned above, ngx_http_limit_req_handler is registered with the NGX_HTTP_PREACCESS_PHASE phase during postconfiguration, so it is executed for every HTTP request to decide whether the request must be limited.

3.2.2.1 Implementation of the leaky bucket algorithm

Users may configure several rate limits at the same time, so for each HTTP request nginx has to traverse all limit policies and check each one. The ngx_http_limit_req_lookup method implements the leaky bucket algorithm and can return three results:
```c
// limit: the limit policy; hash: hash of the key; data/len: the key's content and
// length; ep: out-parameter, number of pending requests; account: whether this is
// the last limit policy
static ngx_int_t
ngx_http_limit_req_lookup(ngx_http_limit_req_limit_t *limit, ngx_uint_t hash,
    u_char *data, size_t len, ngx_uint_t *ep, ngx_uint_t account)
{
    // standard red-black tree lookup
    while (node != sentinel) {

        if (hash < node->key) {
            node = node->left;
            continue;
        }

        if (hash > node->key) {
            node = node->right;
            continue;
        }

        // equal hashes: compare the data for equality
        lr = (ngx_http_limit_req_node_t *) &node->color;

        rc = ngx_memn2cmp(data, lr->data, len, (size_t) lr->len);

        if (rc == 0) {  // found
            ngx_queue_remove(&lr->queue);
            ngx_queue_insert_head(&ctx->sh->queue, &lr->queue);  // move the record to the LRU head

            ms = (ngx_msec_int_t) (now - lr->last);  // time elapsed since the last access

            // pending requests - limit rate * elapsed time + the current request
            // (rate, request counts, etc. are all scaled by 1000)
            excess = lr->excess - ctx->rate * ngx_abs(ms) / 1000 + 1000;

            if (excess < 0) {
                excess = 0;
            }

            *ep = excess;

            // pending requests exceed burst (the waiting queue size; 0 when burst is
            // not configured): return NGX_BUSY to reject the request
            if ((ngx_uint_t) excess > limit->burst) {
                return NGX_BUSY;
            }

            if (account) {
                // last limit policy: update the last access time and the pending
                // count, then return NGX_OK
                lr->excess = excess;
                lr->last = now;
                return NGX_OK;
            }

            lr->count++;  // increase the access count

            ctx->node = lr;

            // not the last limit policy: return NGX_AGAIN and check the next policy
            return NGX_AGAIN;
        }

        node = (rc < 0) ? node->left : node->right;
    }

    // no node found: a new record must be created
    *ep = 0;

    // for the size calculation, see the data structures in section 3.2.1
    size = offsetof(ngx_rbtree_node_t, color)
           + offsetof(ngx_http_limit_req_node_t, data)
           + len;

    ngx_http_limit_req_expire(ctx, 1);  // try to evict records (LRU)

    node = ngx_slab_alloc_locked(ctx->shpool, size);  // allocate space

    if (node == NULL) {  // out of space: allocation failed
        ngx_http_limit_req_expire(ctx, 0);  // forcibly evict records

        node = ngx_slab_alloc_locked(ctx->shpool, size);  // allocate again
        if (node == NULL) {  // still failing: return NGX_ERROR
            return NGX_ERROR;
        }
    }

    node->key = hash;

    lr = (ngx_http_limit_req_node_t *) &node->color;

    lr->len = (u_char) len;
    lr->excess = 0;

    ngx_memcpy(lr->data, data, len);

    // insert the record into the red-black tree and the LRU queue
    ngx_rbtree_insert(&ctx->sh->rbtree, node);
    ngx_queue_insert_head(&ctx->sh->queue, &lr->queue);

    if (account) {
        // last limit policy: update the last access time and the pending count,
        // then return NGX_OK
        lr->last = now;
        lr->count = 0;
        return NGX_OK;
    }

    lr->last = 0;
    lr->count = 1;

    ctx->node = lr;

    // not the last limit policy: return NGX_AGAIN and check the next policy
    return NGX_AGAIN;
}
```

As an example, suppose burst is configured to 0, the initial pending count is excess, and the token generation period is T; the process is shown in the figure below.

3.2.2.2 LRU eviction strategy

In the leaky bucket algorithm of the previous section, ngx_http_limit_req_expire is called to evict records, always deleting from the tail of the LRU queue. Its second parameter n works as follows: when n == 0, the tail record is deleted unconditionally and then one or two more deletions are attempted; when n == 1, only one or two deletions are attempted. The implementation is:

```c
static void
ngx_http_limit_req_expire(ngx_http_limit_req_ctx_t *ctx, ngx_uint_t n)
{
    // delete up to 3 records
    while (n < 3) {
        // tail node of the LRU queue
        q = ngx_queue_last(&ctx->sh->queue);

        // get the record
        lr = ngx_queue_data(q, ngx_http_limit_req_node_t, queue);

        // note: when n is 0, the if block is skipped, so the tail node is deleted
        // unconditionally; otherwise the block checks whether deletion is allowed
        if (n++ != 0) {
            ms = (ngx_msec_int_t) (now - lr->last);
            ms = ngx_abs(ms);

            // accessed recently: must not be deleted, return
            if (ms < 60000) {
                return;
            }

            // still has pending requests: must not be deleted, return
            excess = lr->excess - ctx->rate * ms / 1000;
            if (excess > 0) {
                return;
            }
        }

        // delete the record
        ngx_queue_remove(q);

        node = (ngx_rbtree_node_t *)
                   ((u_char *) lr - offsetof(ngx_rbtree_node_t, color));

        ngx_rbtree_delete(&ctx->sh->rbtree, node);

        ngx_slab_free_locked(ctx->shpool, node);
    }
}
```

3.2.2.3 Burst implementation

Burst exists to deal with traffic spikes: when a burst arrives, the server should be allowed to handle more requests. With burst at 0, requests exceeding the limit rate are rejected; with burst greater than 0, they are queued for processing instead of being rejected outright. How is this queueing implemented, given that nginx must also process the queued requests later? Section 2.2 mentioned that each event can have a timer; nginx implements request queueing and deferred processing with events and timers.
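Before reading the handler code, the bookkeeping from sections 3.2.2.1 and 3.2.2.3 can be condensed into a short sketch. This is a simplified single-policy model of my own, not nginx's code; like nginx, it keeps rate, burst, and excess in thousandths of a request to avoid floating point.

```python
def limit_req(records, key, now_ms, rate_rps=1, burst=0, nodelay=False):
    """Simplified single-policy sketch of the accounting in
    ngx_http_limit_req_lookup / ngx_http_limit_req_account.
    Returns (verdict, delay_ms)."""
    rate, scaled_burst = rate_rps * 1000, burst * 1000

    if key not in records:                       # first request from this client
        records[key] = {'excess': 0, 'last': now_ms}
        return ('pass', 0)

    rec = records[key]
    # Leak the backlog for the elapsed time, then add the current request (1000 = 1 request).
    excess = max(0, rec['excess'] - rate * (now_ms - rec['last']) // 1000 + 1000)

    if excess > scaled_burst:                    # over the queue: reject (503 by default)
        return ('reject', 0)

    rec['excess'], rec['last'] = excess, now_ms

    if nodelay or excess == 0:
        return ('pass', 0)                       # served immediately

    return ('delay', excess * 1000 // rate)      # wait until the backlog drains

# 10 simultaneous requests with rate=1r/s and burst=5:
records = {}
outcomes = [limit_req(records, 'client', 0, rate_rps=1, burst=5) for _ in range(10)]
# 1 request passes immediately, 5 are delayed by 1s..5s, and 4 are rejected.
```

Note the detail that rejected requests do not update the stored record, mirroring the NGX_BUSY path in the lookup function.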
The ngx_http_limit_req_handler method contains the following code:

```c
// compute how long the current request has to wait in the queue before processing
delay = ngx_http_limit_req_account(limits, n, &excess, &limit);

// register the read event
if (ngx_handle_read_event(r->connection->read, 0) != NGX_OK) {
    return NGX_HTTP_INTERNAL_SERVER_ERROR;
}

r->read_event_handler = ngx_http_test_reading;
r->write_event_handler = ngx_http_limit_req_delay;  // write event handler

// add a timer to the write event (the response cannot go out before it fires)
ngx_add_timer(r->connection->write, delay);
```

The delay calculation is straightforward: traverse all limit policies, compute the time needed to drain the pending requests of each, and take the maximum:

```c
if (limits[n].nodelay) {  // with nodelay configured the request is not delayed; delay stays 0
    continue;
}

delay = excess * 1000 / ctx->rate;

if (delay > max_delay) {
    max_delay = delay;
    *ep = excess;
    *limit = &limits[n];
}
```

A quick look at the implementation of the write event handler ngx_http_limit_req_delay:

```c
static void
ngx_http_limit_req_delay(ngx_http_request_t *r)
{
    wev = r->connection->write;

    if (!wev->timedout) {  // not timed out yet: nothing to do
        if (ngx_handle_write_event(wev, 0) != NGX_OK) {
            ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);
        }

        return;
    }

    wev->timedout = 0;

    r->read_event_handler = ngx_http_block_reading;
    r->write_event_handler = ngx_http_core_run_phases;

    ngx_http_core_run_phases(r);  // the timer fired: resume processing the HTTP request
}
```

4. Hands-on testing

4.1 Testing plain rate limiting

1) Configure nginx with a limit of 1 qps, keyed on the client IP address (the default status code for rejected requests is 503):

```
http {
    limit_req_zone $binary_remote_addr zone=test:10m rate=1r/s;

    server {
        listen 80;
        server_name localhost;

        location / {
            limit_req zone=test;
            root html;
            index index.html index.htm;
        }
    }
}
```

2) Fire several requests concurrently.

3) Check the server access log: 3 requests arrive in second 22, but only 1 is processed; 2 requests arrive in second 23, of which the first is processed and the second rejected.
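The log pattern above can be reproduced with a toy model of the module's per-client bookkeeping. This is my own illustrative sketch with assumed request timestamps (in milliseconds), not nginx code; it mirrors the excess computation from section 3.2.2.1 for the burst=0 case.

```python
def allow(rec, now_ms, rate_rps=1):
    """Toy model of the burst=0 case: a request passes only when the client's
    backlog ("excess", scaled by 1000 as in nginx) has fully drained."""
    if 'last' not in rec:                        # first request: create the record
        rec.update(excess=0, last=now_ms)
        return True

    rate = rate_rps * 1000
    excess = max(0, rec['excess'] - rate * (now_ms - rec['last']) // 1000 + 1000)

    if excess > 0:                               # burst=0: any backlog means a 503
        return False                             # (rejections do not update the record)

    rec.update(excess=excess, last=now_ms)
    return True

# Three requests within second 22, two within second 23, as in the log above:
rec = {}
log = [(t, allow(rec, t)) for t in (22000, 22300, 22600, 23000, 23400)]
# Only the first request of each second is accepted.
```

Running this gives exactly the observed behavior: one accepted request in second 22 and one in second 23, everything else rejected.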
4.2 Testing burst

1) With the limit at 1 qps, requests over the limit are rejected outright. To cope with burst traffic, requests should instead be allowed to queue, so configure burst=5, allowing at most 5 requests to wait in the queue:

```
http {
    limit_req_zone $binary_remote_addr zone=test:10m rate=1r/s;

    server {
        listen 80;
        server_name localhost;

        location / {
            limit_req zone=test burst=5;
            root html;
            index index.html index.htm;
        }
    }
}
```

2) Use ab to fire 10 concurrent requests: ab -n 10 -c 10 http://xxxxx;

3) Check the server access log. In log order, the first line is a processed request, lines 2 to 5 are rejections, and lines 6 to 10 are processed requests. Why this order? Looking at ngx_http_log_module, it registers its handler in the NGX_HTTP_LOG_PHASE phase, the last phase of HTTP request processing, so the log reflects completion order rather than arrival order. What actually happens is this: 10 requests arrive at the same time; the first is processed immediately; requests 2 to 6 are queued (one is processed per second); requests 7 to 10 are rejected immediately, so their log lines appear first. Requests 2 to 6 are then served at one per second, each logged after completion, i.e., one request per second from second 49 to second 53.
4) The ab response time statistics are shown below: minimum 87 ms, maximum 5128 ms, mean 1609 ms:

```
              min  mean[+/-sd] median   max
Connect:       41    44   1.7     44     46
Processing:    46  1566 1916.6  1093   5084
Waiting:       46  1565 1916.7  1092   5084
Total:         87  1609 1916.2  1135   5128
```

4.3 Testing nodelay

1) Section 4.2 shows that with burst configured, burst requests are queued rather than rejected, but the response time is very long and the client may already have timed out. So add nodelay, which makes nginx serve the waiting requests immediately and reduces the response time:

```
http {
    limit_req_zone $binary_remote_addr zone=test:10m rate=1r/s;

    server {
        listen 80;
        server_name localhost;

        location / {
            limit_req zone=test burst=5 nodelay;
            root html;
            index index.html index.htm;
        }
    }
}
```

2) Use ab to fire 10 concurrent requests: ab -n 10 -c 10 http://xxxx/;

3) Check the server access log: the first request is processed immediately, requests 2 to 6 occupy queue slots but are served at once (nodelay), and requests 7 to 10 are rejected.

4) The ab response time statistics are shown below: minimum 85 ms, maximum 92 ms, mean 88 ms:

```
              min  mean[+/-sd] median   max
Connect:       42    43   0.5     43     43
Processing:    43    46   2.4     47     49
Waiting:       42    45   2.5     46     49
Total:         85    88   2.8     90     92
```

Summary

This article first analyzed the commonly used rate limiting algorithms (the leaky bucket and token bucket algorithms) and briefly introduced nginx's HTTP request processing flow and its timer implementation; it then analyzed the basic data structures of the ngx_http_limit_req_module module and its limiting process in detail, and used examples to illustrate nginx's rate limiting configuration and behavior. The other module, ngx_http_limit_conn_module, limits the number of connections; it is comparatively easy to understand and is not covered in detail here.