Detailed explanation of the mechanism and implementation of accept lock in Nginx

Preface

Nginx uses a multi-process model. When a request arrives, the worker processes use a lock to ensure that only one of them accepts the connection.

This article analyzes the Nginx 0.8.55 source code, focusing on the epoll event mechanism.

1. Implementation of accept lock

1.1 What is an accept lock?

When talking about the accept lock, we first have to mention the thundering herd problem.

The so-called thundering herd problem arises in a multi-process server like Nginx: after forking, all worker processes listen on the same port. When a connection comes in, every dormant child process is woken up, but only one of them can successfully handle the accept event; the others simply go back to sleep. This causes a lot of unnecessary scheduling and context switches, and all of that overhead is avoidable.

In newer versions of the Linux kernel, the thundering herd problem caused by the accept call itself has been solved. In Nginx, however, accept is driven by the epoll mechanism, and the thundering herd caused by epoll has not been solved (epoll_wait itself has no way to tell whether a read event comes from a listen socket, so every process waiting on that event is woken up by epoll_wait). Nginx's accept thundering herd therefore still requires its own solution.

The accept lock is Nginx's solution. In essence, it is a cross-process mutex that guarantees only one process is listening for accept events at any given time.

In practice, the accept lock is a cross-process lock. It is a global variable in Nginx, declared as follows:

ngx_shmtx_t ngx_accept_mutex;

This lock is allocated when the event module is initialized and placed in shared memory between processes, so that every process can access the same instance. Locking and unlocking are done with a CAS on a Linux atomic variable. If locking fails, the call returns immediately: it is a non-blocking lock. The relevant code is as follows:

static ngx_inline ngx_uint_t
ngx_shmtx_trylock(ngx_shmtx_t *mtx)
{
    return (*mtx->lock == 0 && ngx_atomic_cmp_set(mtx->lock, 0, ngx_pid));
}

#define ngx_shmtx_lock(mtx)    ngx_spinlock((mtx)->lock, ngx_pid, 1024)

#define ngx_shmtx_unlock(mtx)  (void) ngx_atomic_cmp_set((mtx)->lock, ngx_pid, 0)

As you can see, when ngx_shmtx_trylock fails it returns immediately instead of blocking.
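To make the CAS mechanics concrete, here is a minimal, self-contained sketch of the same trylock/unlock state machine. All names (shmtx_t, demo) are hypothetical, not nginx's, and the lock word lives in a local variable here; in nginx it would live in mmap'd shared memory so all workers see the same word.

```c
#include <stdint.h>

/* Hypothetical miniature of nginx's shmtx logic: trylock CAS-es the
 * lock word from 0 to the caller's pid; unlock CAS-es it back to 0.
 * Failure returns immediately -- a non-blocking lock. */

typedef struct {
    volatile int64_t *lock;   /* would point into process-shared memory */
} shmtx_t;

static int
shmtx_trylock(shmtx_t *mtx, int64_t pid)
{
    /* same shape as ngx_shmtx_trylock: succeed only if currently free */
    return (*mtx->lock == 0
            && __sync_bool_compare_and_swap(mtx->lock, 0, pid));
}

static void
shmtx_unlock(shmtx_t *mtx, int64_t pid)
{
    /* only the owner's CAS (pid -> 0) actually releases the lock */
    (void) __sync_bool_compare_and_swap(mtx->lock, pid, 0);
}

/* single-process walk through the state transitions */
static int
demo(void)
{
    int64_t word = 0;
    shmtx_t mtx = { &word };

    if (!shmtx_trylock(&mtx, 100)) return -1;  /* first lock succeeds   */
    if (shmtx_trylock(&mtx, 200))  return -2;  /* second caller fails   */
    shmtx_unlock(&mtx, 200);                   /* non-owner: a no-op    */
    if (word != 100)               return -3;  /* still owned by 100    */
    shmtx_unlock(&mtx, 100);                   /* owner releases        */
    return (word == 0) ? 0 : -4;
}
```

The non-owner unlock being a no-op is the point of using CAS rather than a plain store: a worker can never accidentally release a lock held by a sibling.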

1.2 How does the accept lock ensure that only one process can handle new connections?

Solving the accept thundering herd caused by epoll is also quite simple: just ensure that only one process has the accept event registered in epoll at any given time.
The approach Nginx takes is nothing special. The logic is roughly:

Try to acquire the accept lock
If acquired:
    Register the accept event in epoll
Else:
    Deregister the accept event in epoll
Process all events
Release the accept lock (if held)

Of course, deferred event processing is ignored here; we will come back to it later.

Both the handling of the accept lock and the registration and deregistration of the accept event in epoll are performed in ngx_trylock_accept_mutex. This whole sequence runs inside void ngx_process_events_and_timers(ngx_cycle_t *cycle), which is called repeatedly from the worker's main loop.

In other words, each round of event processing first competes for the accept lock: on success the accept event is registered in epoll, on failure it is deregistered, and after the events are processed the accept lock is released. In this way only one process at a time listens on the listen sockets, which avoids the thundering herd.
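The round-by-round register/deregister dance can be modeled with a toy single-threaded simulation. Every name below is made up for illustration; in real nginx the lock is the shared-memory mutex above and the (de)registration is done via ngx_enable_accept_events / ngx_disable_accept_events.

```c
/* Toy model (hypothetical names) of the per-round accept-lock logic:
 * the winner registers the accept event in its epoll set; the losers
 * make sure it is deregistered from theirs. */

typedef struct {
    int id;                 /* worker id, nonzero                       */
    int held;               /* ~ ngx_accept_mutex_held                  */
    int accept_registered;  /* accept event present in this epoll set?  */
} worker_t;

static int lock_word = 0;   /* shared accept mutex: 0 = free, else owner */

static int  try_lock(int id)        { if (lock_word == 0) { lock_word = id; return 1; } return 0; }
static void unlock_if_owner(int id) { if (lock_word == id) lock_word = 0; }

/* start of an event-loop round: compete, then (de)register */
static void round_begin(worker_t *w)
{
    if (try_lock(w->id)) {
        w->accept_registered = 1;   /* ~ ngx_enable_accept_events  */
        w->held = 1;
        return;
    }
    w->accept_registered = 0;       /* ~ ngx_disable_accept_events */
    w->held = 0;
}

/* end of the round: the winner releases the lock */
static void round_end(worker_t *w)
{
    if (w->held) { unlock_if_owner(w->id); w->held = 0; }
}

static int round_invariants(void)
{
    worker_t ws[2] = { {1, 0, 0}, {2, 0, 0} };

    round_begin(&ws[0]);                             /* worker 1 wins  */
    round_begin(&ws[1]);                             /* worker 2 loses */
    if (ws[0].accept_registered + ws[1].accept_registered != 1) return 0;

    round_end(&ws[0]);                               /* lock released  */
    round_begin(&ws[1]);                             /* now 2 can win  */
    return ws[1].accept_registered == 1;
}
```

Note that, as in nginx, a loser only deregisters the accept event at the start of its next round; the registration is not revoked the instant the lock changes hands.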

1.3 What efforts does the event handling mechanism make to prevent the accept lock from being occupied for a long time?

Using the accept lock to handle the thundering herd looks elegant, but the logic above, used as-is, has a problem: if the server is very busy and has many events to process, "process all events" can take a very long time. The process holding the accept lock then has no time to accept new connections, while the other processes, which do not hold the lock, cannot accept them either. New connections end up with nobody handling them, which is fatal for the responsiveness of the service.

To solve this problem, Nginx defers event processing. That is, ngx_process_events does not run the handlers; it only places events into two queues:

ngx_thread_volatile ngx_event_t *ngx_posted_accept_events;        
ngx_thread_volatile ngx_event_t *ngx_posted_events;

After ngx_process_events returns, the events in ngx_posted_accept_events are processed first, the accept lock is released immediately, and only then are the remaining events processed at leisure.

In other words, ngx_process_events only performs the epoll_wait step; the actual consumption of events is moved to after the accept lock is released. This minimizes the time the lock is held and gives the other processes enough opportunity to handle accept events.

So how is this implemented? Simply by passing the NGX_POST_EVENTS flag in the flags parameter of static ngx_int_t ngx_epoll_process_events(ngx_cycle_t *cycle, ngx_msec_t timer, ngx_uint_t flags), and checking that flag when events are collected.
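As a rough illustration of the posted-queue idea, the toy sketch below (not nginx's actual code; nginx uses an intrusive event list via ngx_locked_post_event) only queues events during collection, drains the accept queue, releases the lock, and then drains everything else. It checks that non-accept handlers run only after the lock is gone.

```c
#include <stddef.h>

/* Hypothetical miniature of the two deferred-event queues. */

typedef struct toy_event_s toy_event_t;
struct toy_event_s {
    int           accept;               /* event on a listen socket?   */
    void        (*handler)(toy_event_t *);
    toy_event_t  *next;
};

static toy_event_t *posted_accept_events;  /* ~ ngx_posted_accept_events */
static toy_event_t *posted_events;         /* ~ ngx_posted_events        */

/* collection phase: queue only, never call the handler */
static void post_event(toy_event_t *ev)
{
    toy_event_t **q = ev->accept ? &posted_accept_events : &posted_events;
    ev->next = *q;
    *q = ev;
}

/* consumption phase: drain one queue, running the handlers */
static void process_posted(toy_event_t **q)
{
    while (*q) {
        toy_event_t *ev = *q;
        *q = ev->next;
        ev->handler(ev);
    }
}

static int accept_handled, other_handled, lock_released_before_others;
static int lock_held;

static void on_accept(toy_event_t *ev) { (void) ev; accept_handled++; }
static void on_other(toy_event_t *ev)
{
    (void) ev;
    other_handled++;
    if (!lock_held) lock_released_before_others = 1;
}

static int demo_posted(void)
{
    toy_event_t a = { 1, on_accept, NULL }, b = { 0, on_other, NULL };

    lock_held = 1;                          /* we "won" the accept lock */
    post_event(&a);                         /* epoll round: queue only  */
    post_event(&b);

    process_posted(&posted_accept_events);  /* accepts first            */
    lock_held = 0;                          /* release the accept lock  */
    process_posted(&posted_events);         /* then everything else     */

    return accept_handled == 1 && other_handled == 1
           && lock_released_before_others;
}
```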

This only prevents event consumption from holding the accept lock too long; what if epoll_wait itself blocks for a long time? That can certainly happen. The remedy is equally simple: epoll_wait takes a timeout, so just cap its value. The cap is stored in the global variable ngx_accept_mutex_delay.
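A minimal sketch of that timeout capping (the names TOY_TIMER_INFINITE and cap_timer are made up for illustration): when a worker failed to get the lock and its next timer would be infinite or longer than the delay, it clamps the wait so it wakes up soon and can compete for the lock again.

```c
#include <stdint.h>

/* Hypothetical version of the timer clamp done when the accept lock
 * was not acquired this round. */

#define TOY_TIMER_INFINITE  ((int64_t) -1)

static int64_t
cap_timer(int64_t timer, int64_t accept_mutex_delay)
{
    if (timer == TOY_TIMER_INFINITE || timer > accept_mutex_delay) {
        /* wake within accept_mutex_delay ms to re-compete for the lock */
        timer = accept_mutex_delay;
    }
    return timer;
}
```

The capped value is then what gets passed down to epoll_wait as its millisecond timeout.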

Below is the implementation of ngx_process_events_and_timers, which gives a rough picture of the processing involved:

void
ngx_process_events_and_timers(ngx_cycle_t *cycle)
{
    ngx_uint_t  flags;
    ngx_msec_t  timer, delta;

    /* Some timer-related code is omitted here; it initializes timer and flags */

    /* This is where the load-balancing token and the accept lock are handled */
    if (ngx_use_accept_mutex) {

        /* If the load-balancing token is greater than 0, the worker is
         * considered fully loaded: skip accepting and decrement the token */
        if (ngx_accept_disabled > 0) {
            ngx_accept_disabled--;

        } else {
            /* Try to acquire the accept lock */
            if (ngx_trylock_accept_mutex(cycle) == NGX_ERROR) {
                return;
            }

            /* After getting the lock, set the POST flag so that all events
             * are deferred, to avoid holding the accept lock too long */
            if (ngx_accept_mutex_held) {
                flags |= NGX_POST_EVENTS;

            } else {
                if (timer == NGX_TIMER_INFINITE
                    || timer > ngx_accept_mutex_delay)
                {
                    /* Wait at most ngx_accept_mutex_delay milliseconds,
                     * to prevent the accept lock from being held too long */
                    timer = ngx_accept_mutex_delay;
                }
            }
        }
    }

    delta = ngx_current_msec;

    /* Call the event module's process_events: one epoll_wait round */
    (void) ngx_process_events(cycle, timer, flags);

    delta = ngx_current_msec - delta;   /* time consumed by event processing */

    ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                   "timer delta: %M", delta);

    /* Process the deferred accept events first */
    if (ngx_posted_accept_events) {
        ngx_event_process_posted(cycle, &ngx_posted_accept_events);
    }

    /* Release the accept lock */
    if (ngx_accept_mutex_held) {
        ngx_shmtx_unlock(&ngx_accept_mutex);
    }

    /* Handle all expired timer events */
    if (delta) {
        ngx_event_expire_timers();
    }

    ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                   "posted events %p", ngx_posted_events);

    if (ngx_posted_events) {
        if (ngx_threaded) {
            ngx_wakeup_worker_thread(cycle);

        } else {
            /* Process all remaining deferred events */
            ngx_event_process_posted(cycle, &ngx_posted_events);
        }
    }
}

Let's take a look at the related processing of ngx_epoll_process_events:

    /* Read event */
    if ((revents & EPOLLIN) && rev->active) {

        if ((flags & NGX_POST_THREAD_EVENTS) && !rev->accept) {
            rev->posted_ready = 1;

        } else {
            rev->ready = 1;
        }

        if (flags & NGX_POST_EVENTS) {
            queue = (ngx_event_t **) (rev->accept ?
                        &ngx_posted_accept_events : &ngx_posted_events);

            ngx_locked_post_event(rev, queue);

        } else {
            rev->handler(rev);
        }
    }

    wev = c->write;

    /* Write event */
    if ((revents & EPOLLOUT) && wev->active) {

        if (flags & NGX_POST_THREAD_EVENTS) {
            wev->posted_ready = 1;

        } else {
            wev->ready = 1;
        }

        if (flags & NGX_POST_EVENTS) {
            ngx_locked_post_event(wev, &ngx_posted_events);

        } else {
            wev->handler(wev);
        }
    }

The handling here is also straightforward: if the accept lock was obtained, the NGX_POST_EVENTS flag is set and the event is placed in the corresponding queue; otherwise the event's handler is called directly.

Summary

This is the full content of this article. I hope it has some reference value for your study or work. If you have any questions, feel free to leave a message. Thank you for your support of 123WORDPRESS.COM.
