Detailed explanation of the implementation of nginx process lock

1. The role of nginx process lock

Nginx uses a multi-process concurrency model: in plain terms, there are multiple worker processes listening for network requests, and whichever worker receives a request completes the subsequent work. Without a lock, the scenario would be this: when a request arrives, every process monitoring the port is woken up and tries to handle it at the same time. The operating system does prevent the request from actually being handled more than once, but all of those processes are still woken up for nothing, which is the so-called thundering herd problem.

Therefore, to avoid having many processes contend for the listening socket at the same time, the workers should take turns listening. The process lock described in this article exists to enforce that order: only the worker that grabs the lock may accept new network requests.

The overall flow is as follows:

// worker core transaction framework
// ngx_event.c
void
ngx_process_events_and_timers(ngx_cycle_t *cycle)
{
    ngx_uint_t flags;
    ngx_msec_t timer, delta;

    if (ngx_timer_resolution) {
        timer = NGX_TIMER_INFINITE;
        flags = 0;

    } else {
        timer = ngx_event_find_timer();
        flags = NGX_UPDATE_TIME;

#if (NGX_WIN32)

        /* handle signals from master in case of network inactivity */

        if (timer == NGX_TIMER_INFINITE || timer > 500) {
            timer = 500;
        }

#endif
    }

    if (ngx_use_accept_mutex) {
        // To ensure fairness, avoid repeated lock contention
        if (ngx_accept_disabled > 0) {
            ngx_accept_disabled--;

        } else {
            // Only the process that grabs the lock performs the accept() operation on the socket
            // Other workers process previously connected requests and read/write operations
            if (ngx_trylock_accept_mutex(cycle) == NGX_ERROR) {
                return;
            }

            if (ngx_accept_mutex_held) {
                flags |= NGX_POST_EVENTS;

            } else {
                if (timer == NGX_TIMER_INFINITE
                    || timer > ngx_accept_mutex_delay)
                {
                    timer = ngx_accept_mutex_delay;
                }
            }
        }
    }

    // Other core transaction processing
    if (!ngx_queue_empty(&ngx_posted_next_events)) {
        ngx_event_move_posted_next(cycle);
        timer = 0;
    }

    delta = ngx_current_msec;

    (void) ngx_process_events(cycle, timer, flags);

    delta = ngx_current_msec - delta;

    ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                   "timer delta: %M", delta);

    ngx_event_process_posted(cycle, &ngx_posted_accept_events);

    if (ngx_accept_mutex_held) {
        ngx_shmtx_unlock(&ngx_accept_mutex);
    }

    if (delta) {
        ngx_event_expire_timers();
    }

    ngx_event_process_posted(cycle, &ngx_posted_events);
}

// Get the lock and register the socket accept() events; the process is as follows
ngx_int_t
ngx_trylock_accept_mutex(ngx_cycle_t *cycle)
{
    if (ngx_shmtx_trylock(&ngx_accept_mutex)) {

        ngx_log_debug0(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                       "accept mutex locked");

        if (ngx_accept_mutex_held && ngx_accept_events == 0) {
            return NGX_OK;
        }

        if (ngx_enable_accept_events(cycle) == NGX_ERROR) {
            // Unlock operation
            ngx_shmtx_unlock(&ngx_accept_mutex);
            return NGX_ERROR;
        }

        ngx_accept_events = 0;
        ngx_accept_mutex_held = 1;

        return NGX_OK;
    }

    ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                   "accept mutex lock failed: %ui", ngx_accept_mutex_held);

    if (ngx_accept_mutex_held) {
        if (ngx_disable_accept_events(cycle, 0) == NGX_ERROR) {
            return NGX_ERROR;
        }

        ngx_accept_mutex_held = 0;
    }

    return NGX_OK;
}

The rest needs little explanation. The core point is that only the worker holding the lock registers the listening sockets and performs accept(), while a worker that fails to grab the lock must actively give up any accept() rights it previously held. As a result, only one worker handles accept events at any given time.

2. Entry-level lock usage

Locks usually come with interfaces defined by the programming language itself, or at least with a fixed usage pattern.

For example, in Java there is synchronized, plus the Lock-related classes of the concurrent package such as CountDownLatch, CyclicBarrier, ReentrantLock, ReentrantReadWriteLock, Semaphore...

For example, threading.Lock(), threading.RLock() in Python...

For example, flock() in PHP...

The reason this is called entry-level is that these are all ready-made interface APIs: you only need to call them according to their usage specifications, without any deeper knowledge. Using them well in practice, however, is not that easy.
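For instance, C itself offers the same kind of ready-made interface in the POSIX pthread API. The following is only a minimal sketch of this entry-level usage, not anything from nginx: two threads increment a shared counter and the mutex calls do all the work.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

/* each thread runs this; the mutex makes the increment safe */
static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);    /* acquire */
        counter++;                            /* critical section */
        pthread_mutex_unlock(&counter_lock);  /* release */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("counter = %ld\n", counter);   /* always 200000 */
    return 0;
}

Nothing here requires knowing how the mutex works internally, which is exactly what makes it "entry-level".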

3. Implementation of nginx process lock

Since nginx is written in C, its implementation sits much closer to the underlying system. Looking at how it implements locks should give us a better understanding of what a lock really is.

Generally speaking, a lock involves the following aspects: the lock data structure, the locking logic, the unlocking logic, plus notification and timeout mechanisms. Let's look at how nginx implements each of these:

3.1 Lock data structure

First we need to define the data structure that represents the lock, and then instantiate it in a place that multiple processes can share.

// event/ngx_event.c
// Global accept lock variable definition
ngx_shmtx_t ngx_accept_mutex;

// The lock's atomicity is implemented with a volatile-qualified atomic type
typedef volatile ngx_atomic_uint_t ngx_atomic_t;

typedef struct {
#if (NGX_HAVE_ATOMIC_OPS)
    // Atomic update variable used to implement the lock; behind it is the shared memory area
    ngx_atomic_t *lock;
#if (NGX_HAVE_POSIX_SEM)
    ngx_atomic_t *wait;
    ngx_uint_t semaphore;
    sem_t sem;
#endif
#else
    // fd used to implement the lock; behind the fd is a file instance
    ngx_fd_t fd;
    u_char *name;
#endif
    ngx_uint_t spin;
} ngx_shmtx_t;

// Shared memory data structure definition
typedef struct {
    u_char *addr;
    size_t size;
    ngx_str_t name;
    ngx_log_t *log;
    ngx_uint_t exists; /* unsigned exists:1; */
} ngx_shm_t;

3.2 fd-based locking/unlocking implementation

Once you have a lock instance, you can lock and unlock it. Nginx has two lock implementations, chosen mainly by platform capabilities: one based on a file (fd) and one based on shared memory with atomic operations. The fd-based, file-backed implementation is the heavier of the two. It looks like this:

// ngx_shmtx.c
ngx_uint_t
ngx_shmtx_trylock(ngx_shmtx_t *mtx)
{
    ngx_err_t err;

    err = ngx_trylock_fd(mtx->fd);

    if (err == 0) {
        return 1;
    }

    if (err == NGX_EAGAIN) {
        return 0;
    }

#if __osf__ /* Tru64 UNIX */

    if (err == NGX_EACCES) {
        return 0;
    }

#endif

    ngx_log_abort(err, ngx_trylock_fd_n " %s failed", mtx->name);

    return 0;
}
// core/ngx_shmtx.c
// 1. Locking process
ngx_err_t
ngx_trylock_fd(ngx_fd_t fd)
{
    struct flock fl;

    ngx_memzero(&fl, sizeof(struct flock));
    fl.l_type = F_WRLCK;
    fl.l_whence = SEEK_SET;

    if (fcntl(fd, F_SETLK, &fl) == -1) {
        return ngx_errno;
    }

    return 0;
}
// os/unix/ngx_file.c
ngx_err_t
ngx_lock_fd(ngx_fd_t fd)
{
    struct flock fl;

    ngx_memzero(&fl, sizeof(struct flock));
    fl.l_type = F_WRLCK;
    fl.l_whence = SEEK_SET;
    // Call the locking facility provided by the system (blocking variant)
    if (fcntl(fd, F_SETLKW, &fl) == -1) {
        return ngx_errno;
    }

    return 0;
}

// 2. Unlock implementation
// core/ngx_shmtx.c
void
ngx_shmtx_unlock(ngx_shmtx_t *mtx)
{
    ngx_err_t err;

    err = ngx_unlock_fd(mtx->fd);

    if (err == 0) {
        return;
    }

    ngx_log_abort(err, ngx_unlock_fd_n " %s failed", mtx->name);
}
// os/unix/ngx_file.c
ngx_err_t
ngx_unlock_fd(ngx_fd_t fd)
{
    struct flock fl;

    ngx_memzero(&fl, sizeof(struct flock));
    fl.l_type = F_UNLCK;
    fl.l_whence = SEEK_SET;

    if (fcntl(fd, F_SETLK, &fl) == -1) {
        return ngx_errno;
    }

    return 0;
}

The key point is the call to the fcntl() system API; there is nothing else to it. Cross-process locking works here because operations on the same file are visible to every process. Note the semantic difference between trylock and lock: a trylock (F_SETLK) returns immediately with an indication of whether the lock was acquired, whereas a plain lock (F_SETLKW) gives no such indication and generally blocks until the lock is obtained.
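To make the mechanism concrete, here is a minimal standalone sketch, not nginx code: the lock file path is made up, and the point is only that fcntl() record locks are owned per process and are not inherited across fork(), so a lock held by the parent makes the child's trylock fail.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* try to take an exclusive lock on the whole file; returns 0 on success */
static int try_lock(int fd)
{
    struct flock fl;

    memset(&fl, 0, sizeof(fl));
    fl.l_type = F_WRLCK;     /* exclusive (write) lock */
    fl.l_whence = SEEK_SET;  /* l_start = l_len = 0 covers the whole file */

    return fcntl(fd, F_SETLK, &fl);   /* non-blocking "trylock" */
}

int main(void)
{
    /* illustrative lock file, analogous in spirit to nginx's lock_file */
    int fd = open("/tmp/demo.lock", O_RDWR | O_CREAT, 0600);

    if (fd == -1) {
        return 1;
    }

    if (try_lock(fd) == 0) {
        printf("parent %d holds the lock\n", (int) getpid());
    }

    if (fork() == 0) {
        /* record locks are not inherited, and the parent's lock conflicts */
        if (try_lock(fd) == -1) {
            printf("child %d failed to grab the lock\n", (int) getpid());
        }
        _exit(0);
    }

    wait(NULL);
    return 0;
}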

3.3. Initialization of nginx lock instance

In many systems, initializing a lock instance is just a simple variable assignment, but in nginx it is a little different. First of all, every worker must see the same instance (or an equivalent one). Since each worker is forked from the master process, instantiating the lock in the master guarantees that every worker inherits the same value. Is that all there is to it? Let's look at the code:

// Initialization of the shared lock is performed in the nginx master, which then fork()s the worker processes
// event/ngx_event.c
static ngx_int_t
ngx_event_module_init(ngx_cycle_t *cycle)
{
    void ***cf;
    u_char *shared;
    size_t size, cl;
    // Define a shared memory segment
    ngx_shm_t shm;
    ngx_time_t *tp;
    ngx_core_conf_t *ccf;
    ngx_event_conf_t *ecf;

    cf = ngx_get_conf(cycle->conf_ctx, ngx_events_module);
    ecf = (*cf)[ngx_event_core_module.ctx_index];

    if (!ngx_test_config && ngx_process <= NGX_PROCESS_MASTER) {
        ngx_log_error(NGX_LOG_NOTICE, cycle->log, 0,
                      "using the \"%s\" event method", ecf->name);
    }

    ccf = (ngx_core_conf_t *) ngx_get_conf(cycle->conf_ctx, ngx_core_module);

    ngx_timer_resolution = ccf->timer_resolution;

#if !(NGX_WIN32)
    {
    ngx_int_t limit;
    struct rlimit rlmt;

    if (getrlimit(RLIMIT_NOFILE, &rlmt) == -1) {
        ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno,
                      "getrlimit(RLIMIT_NOFILE) failed, ignored");

    } else {
        if (ecf->connections > (ngx_uint_t) rlmt.rlim_cur
            && (ccf->rlimit_nofile == NGX_CONF_UNSET
                || ecf->connections > (ngx_uint_t) ccf->rlimit_nofile))
        {
            limit = (ccf->rlimit_nofile == NGX_CONF_UNSET) ?
                         (ngx_int_t) rlmt.rlim_cur : ccf->rlimit_nofile;

            ngx_log_error(NGX_LOG_WARN, cycle->log, 0,
                          "%ui worker_connections exceed"
                          "open file resource limit: %i",
                          ecf->connections, limit);
        }
    }
    }
#endif /* !(NGX_WIN32) */


    if (ccf->master == 0) {
        return NGX_OK;
    }

    if (ngx_accept_mutex_ptr) {
        return NGX_OK;
    }


    /* cl should be equal to or greater than cache line size */

    cl = 128;

    size = cl /* ngx_accept_mutex */
           + cl /* ngx_connection_counter */
           + cl; /* ngx_temp_number */

#if (NGX_STAT_STUB)

    size += cl /* ngx_stat_accepted */
           + cl /* ngx_stat_handled */
           + cl /* ngx_stat_requests */
           + cl /* ngx_stat_active */
           + cl /* ngx_stat_reading */
           + cl /* ngx_stat_writing */
           + cl; /* ngx_stat_waiting */

#endif

    shm.size = size;
    ngx_str_set(&shm.name, "nginx_shared_zone");
    shm.log = cycle->log;
    // Allocate the shared memory, implemented with mmap()
    if (ngx_shm_alloc(&shm) != NGX_OK) {
        return NGX_ERROR;
    }

    shared = shm.addr;

    ngx_accept_mutex_ptr = (ngx_atomic_t *) shared;
    ngx_accept_mutex.spin = (ngx_uint_t) -1;
    // Create the process lock on top of the shared file or shared memory so that multiple processes can coordinate
    if (ngx_shmtx_create(&ngx_accept_mutex, (ngx_shmtx_sh_t *) shared,
                         cycle->lock_file.data)
        != NGX_OK)
    {
        return NGX_ERROR;
    }

    ngx_connection_counter = (ngx_atomic_t *) (shared + 1 * cl);

    (void) ngx_atomic_cmp_set(ngx_connection_counter, 0, 1);

    ngx_log_debug2(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                   "counter: %p, %uA",
                   ngx_connection_counter, *ngx_connection_counter);

    ngx_temp_number = (ngx_atomic_t *) (shared + 2 * cl);

    tp = ngx_timeofday();

    ngx_random_number = (tp->msec << 16) + ngx_pid;

#if (NGX_STAT_STUB)

    ngx_stat_accepted = (ngx_atomic_t *) (shared + 3 * cl);
    ngx_stat_handled = (ngx_atomic_t *) (shared + 4 * cl);
    ngx_stat_requests = (ngx_atomic_t *) (shared + 5 * cl);
    ngx_stat_active = (ngx_atomic_t *) (shared + 6 * cl);
    ngx_stat_reading = (ngx_atomic_t *) (shared + 7 * cl);
    ngx_stat_writing = (ngx_atomic_t *) (shared + 8 * cl);
    ngx_stat_waiting = (ngx_atomic_t *) (shared + 9 * cl);

#endif

    return NGX_OK;
}
// core/ngx_shmtx.c
// 1. File-based (fd) version: the processes share the lock through the file system
ngx_int_t
ngx_shmtx_create(ngx_shmtx_t *mtx, ngx_shmtx_sh_t *addr, u_char *name)
{
    // Created by the master process, so this is process-safe and each worker can use it directly
    if (mtx->name) {
        // If it has already been created, the fd is already assigned; do not create it again, just keep sharing the fd
        // Behind the fd is a file instance
        if (ngx_strcmp(name, mtx->name) == 0) {
            mtx->name = name;
            return NGX_OK;
        }

        ngx_shmtx_destroy(mtx);
    }
    // Create/open the file whose fd will be used to share the lock
    mtx->fd = ngx_open_file(name, NGX_FILE_RDWR, NGX_FILE_CREATE_OR_OPEN,
                            NGX_FILE_DEFAULT_ACCESS);

    if (mtx->fd == NGX_INVALID_FILE) {
        ngx_log_error(NGX_LOG_EMERG, ngx_cycle->log, ngx_errno,
                      ngx_open_file_n " \"%s\" failed", name);
        return NGX_ERROR;
    }
    // Once created, the file can be deleted; subsequent lock operations are performed only on this fd
    if (ngx_delete_file(name) == NGX_FILE_ERROR) {
        ngx_log_error(NGX_LOG_ALERT, ngx_cycle->log, ngx_errno,
                      ngx_delete_file_n " \"%s\" failed", name);
    }

    mtx->name = name;

    return NGX_OK;
}

// 2. Creation of the shared lock based on shared memory
// ngx_shmtx.c
ngx_int_t
ngx_shmtx_create(ngx_shmtx_t *mtx, ngx_shmtx_sh_t *addr, u_char *name)
{
    mtx->lock = &addr->lock;

    if (mtx->spin == (ngx_uint_t) -1) {
        return NGX_OK;
    }

    mtx->spin = 2048;

#if (NGX_HAVE_POSIX_SEM)

    mtx->wait = &addr->wait;

    if (sem_init(&mtx->sem, 1, 0) == -1) {
        ngx_log_error(NGX_LOG_ALERT, ngx_cycle->log, ngx_errno,
                      "sem_init() failed");
    } else {
        mtx->semaphore = 1;
    }

#endif

    return NGX_OK;
}
// os/unix/ngx_shmem.c
ngx_int_t
ngx_shm_alloc(ngx_shm_t *shm)
{
    shm->addr = (u_char *) mmap(NULL, shm->size,
                                PROT_READ|PROT_WRITE,
                                MAP_ANON|MAP_SHARED, -1, 0);

    if (shm->addr == MAP_FAILED) {
        ngx_log_error(NGX_LOG_ALERT, shm->log, ngx_errno,
                      "mmap(MAP_ANON|MAP_SHARED, %uz) failed", shm->size);
        return NGX_ERROR;
    }

    return NGX_OK;
}

The essence of the fd-based lock is the file system behind it: because the underlying file is visible to all processes, whoever controls the lock on that shared fd controls the common lock.

3.4. Locking/unlocking implementation based on shared memory

So-called shared memory is a region of memory that lives outside any single process and is managed by the operating system. The mmap() call we saw earlier (ngx_shm_alloc) creates exactly such a region.
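As another minimal standalone sketch (again not nginx code), memory mapped with MAP_ANON|MAP_SHARED before fork() remains shared afterwards, so a value written by the child is visible to the parent:

#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* one shared integer, visible to both parent and child */
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_ANON | MAP_SHARED, -1, 0);

    if (shared == MAP_FAILED) {
        return 1;
    }

    *shared = 0;

    if (fork() == 0) {
        *shared = 42;          /* written in the child ... */
        _exit(0);
    }

    wait(NULL);
    printf("parent sees %d\n", *shared);   /* ... observed by the parent */
    return 0;
}

Once such a region exists, all nginx needs is an atomic compare-and-set on a value inside it, which is what the trylock/unlock code below does.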

// ngx_shmtx.c
ngx_uint_t
ngx_shmtx_trylock(ngx_shmtx_t *mtx)
{
    // Directly change the value in the shared memory area:
    // a successful CAS means the lock was acquired
    return (*mtx->lock == 0 && ngx_atomic_cmp_set(mtx->lock, 0, ngx_pid));
}

// Unlock operation of the shm version: CAS release, with notification
void
ngx_shmtx_unlock(ngx_shmtx_t *mtx)
{
    if (mtx->spin != (ngx_uint_t) -1) {
        ngx_log_debug0(NGX_LOG_DEBUG_CORE, ngx_cycle->log, 0, "shmtx unlock");
    }

    if (ngx_atomic_cmp_set(mtx->lock, ngx_pid, 0)) {
        ngx_shmtx_wakeup(mtx);
    }
}

// Notify a waiting process
static void
ngx_shmtx_wakeup(ngx_shmtx_t *mtx)
{
#if (NGX_HAVE_POSIX_SEM)
    ngx_atomic_uint_t wait;

    if (!mtx->semaphore) {
        return;
    }

    for ( ;; ) {

        wait = *mtx->wait;

        if ((ngx_atomic_int_t) wait <= 0) {
            return;
        }

        if (ngx_atomic_cmp_set(mtx->wait, wait, wait - 1)) {
            break;
        }
    }

    ngx_log_debug1(NGX_LOG_DEBUG_CORE, ngx_cycle->log, 0,
                   "shmtx wake %uA", wait);

    if (sem_post(&mtx->sem) == -1) {
        ngx_log_error(NGX_LOG_ALERT, ngx_cycle->log, ngx_errno,
                      "sem_post() failed while wake shmtx");
    }

#endif
}

The shared memory version of the lock is basically just CAS updates of a memory variable; the only difference is that the variable lives in the shared region rather than in ordinary process memory.
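To see the same idea in isolation, here is a minimal, hypothetical sketch of a CAS-based trylock/unlock pair over an anonymous shared mapping. The helper names shm_trylock and shm_unlock are invented for this example, and the GCC builtin __sync_bool_compare_and_swap plays the role of ngx_atomic_cmp_set; like nginx, the holder stores its pid in the lock word.

#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

typedef volatile long atomic_t;   /* rough analogue of ngx_atomic_t */

/* the lock is free when *lock == 0; on success the holder's pid is stored */
static int shm_trylock(atomic_t *lock)
{
    return *lock == 0 && __sync_bool_compare_and_swap(lock, 0, getpid());
}

static void shm_unlock(atomic_t *lock)
{
    /* only the current holder (matching pid) can release it */
    (void) __sync_bool_compare_and_swap(lock, getpid(), 0);
}

int main(void)
{
    atomic_t *lock = mmap(NULL, sizeof(atomic_t), PROT_READ | PROT_WRITE,
                          MAP_ANON | MAP_SHARED, -1, 0);

    if (lock == MAP_FAILED) {
        return 1;
    }

    *lock = 0;

    if (shm_trylock(lock)) {
        printf("parent %d got the lock\n", (int) getpid());
    }

    if (fork() == 0) {
        /* the parent still holds the lock, so this trylock fails */
        printf("child %d trylock: %s\n", (int) getpid(),
               shm_trylock(lock) ? "acquired" : "busy");
        _exit(0);
    }

    wait(NULL);
    shm_unlock(lock);
    return 0;
}

The real nginx version adds a spin count and an optional POSIX semaphore so that a worker that fails to get the lock can sleep and be woken by ngx_shmtx_wakeup() instead of busy-looping.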

4. What is the meaning of lock?

We have seen plenty of locks by now, yet it is still easy to miss what a lock really is.

What exactly is a lock? Essentially, a lock is a flag. When a participant sees this flag it voluntarily stops, or proceeds, and that cooperative behavior is what makes it act like a lock. The flag can live in an object, in a global variable, or in some external medium such as a file, Redis, or ZooKeeper; it makes little difference. The key issue is not where the flag is stored, but how to set it safely.

Implementing a lock generally requires a strong underlying guarantee, such as CAS instructions at the CPU level or serialized, queued atomic operations at the application level.
As for memory locks, file locks, and higher-level locks, each has its own application scenarios; choosing the right one for the job is the real skill, and by now you should be able to make that judgment.

The above is a detailed explanation of the implementation of the nginx process lock.
