Detailed explanation of how Nginx works

How Nginx works

Nginx consists of a core and modules.

Nginx itself does surprisingly little work. When it receives an HTTP request, it simply maps the request to a location block by consulting the configuration file, and the directives configured in that location invoke different modules to do the actual work. The modules can therefore be regarded as the real workhorses of Nginx. The directives in a location usually involve one handler module and several filter modules (and of course, multiple locations can reuse the same modules): the handler module processes the request and generates the response content, while the filter modules post-process that content.
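
As a sketch of this division of labor, consider a minimal configuration (the paths and values are illustrative, not from the original article). The root and index directives engage a handler module (the static file handler), while gzip engages a filter module that compresses whatever content the handler produced:

```nginx
http {
    # The gzip filter module post-processes response bodies
    # produced by whichever handler served the request.
    gzip on;

    server {
        listen 80;

        # Each request is mapped to a location block; the directives
        # inside decide which handler module does the real work.
        location / {
            # The static file handler module serves files from disk.
            root  /var/www/html;
            index index.html;
        }
    }
}
```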

Modules developed by users for their own needs are called third-party modules. It is the support of this large ecosystem of modules that makes Nginx so powerful.

Nginx modules are structurally divided into core modules, basic modules, and third-party modules:

  • Core modules: the HTTP module, EVENT module, and MAIL module
  • Basic modules: the HTTP Access module, HTTP FastCGI module, HTTP Proxy module, and HTTP Rewrite module
  • Third-party modules: the HTTP Upstream Request Hash module, Notice module, and HTTP Access Key module

Nginx modules are divided into three categories by function:

  • Handlers (processor modules). These modules process requests directly, performing operations such as generating output content and modifying header information. Normally only one handler module acts on a given request.
  • Filters (filter modules). These modules modify the content produced by the handler modules before Nginx finally sends it out.
  • Proxies (proxy modules). These are modules such as Nginx's HTTP Upstream. They interact with backend services such as FastCGI to implement service proxying, load balancing, and similar functions.
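
The proxy category can be sketched with a hedged example (the backend addresses and the upstream name app_backend are illustrative): an upstream block drives the load-balancing modules, and proxy_pass hands the request to the proxy module.

```nginx
http {
    # The upstream block feeds the load-balancing machinery;
    # these backend addresses are placeholders for real servers.
    upstream app_backend {
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
    }

    server {
        listen 80;
        location / {
            # The proxy module forwards the request to one of the
            # upstream servers chosen by the balancing policy.
            proxy_pass http://app_backend;
        }
    }
}
```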

Nginx process model

By default Nginx uses a multi-process working model. After Nginx starts, it runs one master process and multiple worker processes. The master acts as the interface between the process group and the administrator: it monitors and manages the worker processes to implement functions such as restarting the service, smooth upgrades, log file rotation, and reloading the configuration on the fly. The workers handle the actual network events; they are peers and compete with each other to process requests from clients.
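
The worker count is set in the main configuration context; a common sketch (the values are illustrative) looks like this:

```nginx
# Run one worker process per CPU core; the master process is
# always created in addition to these workers, and handles
# administrative signals (e.g. `nginx -s reload` re-reads this file).
worker_processes auto;

# Optionally pin workers to cores (shown here for a 4-core machine).
# worker_cpu_affinity 0001 0010 0100 1000;
```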

The process model of nginx is shown in the figure:

When the master process starts, it first creates the socket to be listened on (listenfd) and then fork()s the worker processes, so every worker inherits and can listen on the same socket. In general, when a connection arrives, all workers are woken up, but only one of them can accept() the connection; the others fail. This is the so-called thundering herd problem. Nginx provides accept_mutex, a mutual-exclusion lock: with this lock held, only one worker at a time will try to accept() a connection, so the thundering herd is avoided.

With the accept_mutex option enabled, only the worker that holds accept_mutex registers the accept event. Nginx uses a variable called ngx_accept_disabled to decide whether a worker should even compete for the lock: ngx_accept_disabled = (total connections of a single worker process) / 8 - (number of free connections). When ngx_accept_disabled is greater than 0, the worker does not try to acquire accept_mutex; the larger the value, the more often it yields, and the better the chance that other workers acquire the lock. By declining to accept new connections, a heavily loaded worker keeps its own connection count in check while the connection pools of the other workers are put to use. In this way, Nginx balances connections across its worker processes.
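
The lock itself is controlled in the events context; a minimal sketch:

```nginx
events {
    # Serialize accept() across workers to avoid the thundering
    # herd. Note that since nginx 1.11.3 this defaults to off,
    # as modern kernels can serialize wakeups themselves.
    accept_mutex on;

    # How long a worker that failed to grab the lock waits
    # before trying again.
    accept_mutex_delay 500ms;
}
```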

Each worker process has an independent connection pool whose size is worker_connections. The pool does not store real, established connections; it is simply an array of worker_connections ngx_connection_t structures. In addition, Nginx keeps all idle ngx_connection_t entries on a linked list, free_connections: whenever a connection is needed, one is taken from this free list, and after use it is returned to it. The maximum number of connections an Nginx instance can hold is therefore worker_connections * worker_processes. For HTTP requests to local resources this is also the maximum supported concurrency; when Nginx is used as a reverse proxy, the maximum concurrency is worker_connections * worker_processes / 2, because each proxied request occupies two connections: one to the client and one to the backend service.
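
The arithmetic above can be made concrete with a sketch (the numbers are illustrative, not recommendations):

```nginx
worker_processes  4;

events {
    # Size of each worker's ngx_connection_t pool.
    worker_connections 1024;
}

# Rough capacity with these numbers:
#   local HTTP:    4 * 1024     = 4096 concurrent connections
#   reverse proxy: 4 * 1024 / 2 = 2048 concurrent requests
# (each proxied request holds both a client-side and an
#  upstream-side connection)
```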

Nginx processes HTTP requests

HTTP is a typical request-response network protocol. Because it is a text-based, line-oriented protocol, the request line and request headers are parsed line by line, and the response line and response headers are emitted line by line as well. Usually, after a connection is established, Nginx reads a line of data and parses out the method, URI, and HTTP version from the request line. It then processes the request headers line by line, determines from the method and the header information whether there is a request body and how long it is, and reads the request body. Once the full request has been received, Nginx processes it to generate the output data, then produces the response line, response headers, and response body. After the response is sent to the client, a complete request has been handled.

Processing flow chart:

The above is a detailed explanation of how Nginx works.
