Detailed explanation of Nginx process scheduling problem

Nginx uses a multi-process model with a fixed number of processes: one master process and a set of worker processes (by default, as many as the host has CPU cores) cooperate to handle events.

The master process is responsible for loading configuration and for starting and stopping worker processes; the worker processes handle the actual requests. Each process's resources are independent. A worker process handles many connections, and each connection is handled entirely by one worker process, so no process switching is required and no resources are wasted on context switches. In the default configuration, the number of worker processes equals the number of CPU cores on the host, and CPU affinity is used to bind each worker to a core, maximizing the processing power of a multi-core CPU.
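As a sketch, the default behavior described above maps onto configuration like the following (the directive values here are illustrative, not taken from the article):

```nginx
# Spawn one worker per CPU core (this is what "auto" computes)
worker_processes auto;

# Bind each worker to its own core via CPU affinity.
# On a 4-core host the explicit bitmask form would be:
#   worker_cpu_affinity 0001 0010 0100 1000;
worker_cpu_affinity auto;
```

worker_cpu_affinity auto has been available since nginx 1.9.10; on older versions the bitmask form must be written out by hand.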

The Nginx master process monitors external control signals and forwards the corresponding operations to the worker processes through a channel mechanism, while the worker processes exchange data and state with one another through shared memory.

Tips: Process affinity binds a process or thread so that it runs on a designated CPU (core).

Nginx worker processes support the following scheduling modes:

  • No scheduling mode: when a connection event is triggered, all worker processes compete to establish the connection with the client, and whichever succeeds then processes the client's requests. In this mode every process contends for the event, but only one can ultimately accept the connection, so the system momentarily wastes a large amount of resources; this is the so-called thundering herd problem.
  • Mutex lock mode: each worker process periodically competes for an accept mutex. The worker that acquires the mutex gains the right to accept new HTTP connections, and it registers its listening socket with the event engine (such as epoll) to receive external connection events. The other workers can only continue processing read and write events on connections they have already established, periodically polling the mutex's status; only after the mutex is released can another worker seize it and obtain the right to accept new connections. When 1/8 of a worker's maximum connection count minus its number of free connections (free_connection) is greater than or equal to 1, that worker skips the current round of mutex competition, accepts no new connection requests, and only processes read and write events on established connections. Mutex lock mode effectively avoids the thundering herd: for large numbers of short HTTP connections, it prevents the resource consumption caused by workers competing for the right to process connection events. For large numbers of long-lived (keep-alive) HTTP connections, however, the mutex concentrates the load on a few workers, and the resulting uneven load across workers lowers QPS.
  • Socket sharding: socket sharding is an allocation mechanism provided by the kernel that lets each worker process own an identical listening socket. When an external connection request arrives, the kernel decides which worker's listening socket receives it. This avoids the thundering herd and, compared with the mutex mechanism, improves performance on multi-core systems. This feature requires the reuseport parameter on the listen directive.
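As an illustration, enabling socket sharding is a one-parameter change to the listen directive (the port and the surrounding server block are placeholders):

```nginx
http {
    server {
        # reuseport creates a separate listening socket per worker
        # (SO_REUSEPORT); the kernel then load-balances incoming
        # connections across the workers' sockets.
        listen 80 reuseport;
    }
}
```

The reuseport parameter has been supported since nginx 1.9.1 and relies on the kernel's SO_REUSEPORT support (Linux 3.9+).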

Tips: In Nginx 1.11.3 and later, mutex lock mode is disabled by default. Socket sharding performs best because the Linux kernel itself handles the scheduling of connections across processes.
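For completeness, here is a sketch of explicitly re-enabling mutex lock mode on a newer version (directive values are illustrative):

```nginx
events {
    worker_connections 1024;
    accept_mutex on;           # off by default since nginx 1.11.3
    accept_mutex_delay 500ms;  # how long a worker waits before retrying
                               # the lock (500ms is the default)
}
```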


