Detailed explanation of Nginx process scheduling problem

Nginx uses a multi-process model with a fixed number of processes: one master process and a set of worker processes (by default, one per host CPU core) cooperate to handle events.

The master process is responsible for loading configuration and for starting and stopping the worker processes; the worker processes handle the actual requests. Each process's resources are independent. A worker process handles many connections, and each connection is handled entirely by one worker process, so no process switching is needed and none of the resource overhead that switching would cause is incurred. In the default configuration the number of worker processes equals the number of CPU cores on the host, and CPU affinity is used to bind each worker process to a core, making full use of a multi-core CPU's processing power.
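These defaults can be made explicit in nginx.conf; a minimal sketch (the directive values shown are illustrative):

```nginx
# One worker per CPU core; "auto" makes Nginx detect the core count.
worker_processes auto;

# Bind each worker to a core. "auto" (available since Nginx 1.9.10)
# lets Nginx compute the affinity masks itself; explicit bitmasks
# such as "0001 0010 0100 1000" could be given instead on a 4-core host.
worker_cpu_affinity auto;
```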

The Nginx master process monitors external control signals and passes the corresponding operations to the worker processes through a channel mechanism. The worker processes share data and state with each other through shared memory.

Tips: Process affinity enables a process or thread to run on a specified CPU (core).
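As a rough illustration of affinity itself (a Python sketch using the Linux-only `os.sched_setaffinity`, not part of Nginx), a process can be pinned to a specific core like this:

```python
import os

# Pin the calling process (pid 0 means "this process") to CPU core 0,
# then read the affinity mask back to confirm the binding.
os.sched_setaffinity(0, {0})

print(os.sched_getaffinity(0))  # the mask now contains only core 0
```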

Nginx's worker processes support the following scheduling modes:

  • No scheduling mode: when a connection event is triggered, all worker processes compete to establish the connection with the client, and the one that succeeds starts processing the client's requests. In this mode every process contends for the event, but only one can actually establish the connection, so the system momentarily burns a large amount of resources on the contention. This is the so-called thundering herd problem.
  • Mutex lock mode: each worker process periodically competes for a mutex (the accept mutex). The worker that grabs the mutex gains the right to accept new HTTP connections, and it registers its listening socket with the event engine (such as epoll) to receive external connection events. The other workers can only continue processing read/write events on already-established connections while periodically polling the mutex's status; only after the mutex is released can another worker seize it and take over accepting new connections. When 1/8 of a worker's maximum connection count minus its free connections (free_connection) is greater than or equal to 1, that worker skips the current round of mutex contention, accepts no new connections, and handles only the read/write events of its established connections. The mutex mode effectively avoids the thundering herd. For large numbers of short HTTP connections it avoids the resource cost of workers competing for event-processing rights; but with many long-lived (keep-alive) connections, the mutex mode concentrates pressure on a few workers, and the uneven load across workers causes QPS to drop.
  • Socket sharding: socket sharding is an allocation mechanism provided by the kernel that gives each worker process its own identical listening socket. When an external connection request arrives, the kernel decides which worker's listening socket receives it. This also avoids the thundering herd and, compared with the mutex mechanism, improves performance on multi-core systems. This feature requires the reuseport parameter on the listen directive.
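The last two modes map onto nginx.conf directives; a hedged sketch (the port and the minimal server block are illustrative):

```nginx
events {
    # Mutex mode: workers take turns acquiring the accept mutex.
    # Off by default since Nginx 1.11.3.
    accept_mutex on;
}

http {
    server {
        # Socket sharding: each worker gets its own listening socket
        # via SO_REUSEPORT (requires Linux 3.9+ and Nginx 1.9.1+).
        listen 80 reuseport;
    }
}
```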

Tips: In Nginx 1.11.3 and later, the mutex mode is disabled by default. The socket sharding mode has the best performance because the scheduling is done by the Linux kernel.

This is the end of this article about Nginx process scheduling. For more information about Nginx process scheduling, please search for previous articles on 123WORDPRESS.COM or continue to browse the following related articles. I hope you will support 123WORDPRESS.COM in the future!


