Detailed explanation of Nginx configuration file

The main configuration file of Nginx is nginx.conf, which consists of three parts: global block, events block and http block. The http block also includes the http global block and multiple server blocks. Each server block can contain a server global block and multiple location blocks. There is no order relationship between the configuration blocks nested in the same configuration block.

The configuration file supports a large number of directives, most of which are not specific to a particular block. The same directive placed in blocks of different levels has a different scope. Generally, a directive in a higher-level block applies to that block and to all lower-level blocks it contains. If the same directive appears in two blocks of different levels, the "proximity principle" applies: the configuration in the lower-level block takes precedence. For example, if a directive appears in both the http global block and a server block with different values, the value in the server block prevails.
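The proximity principle can be sketched as follows (the timeout values and server names here are illustrative, not from the original example):

```nginx
http {
    # http global block: applies to every server below unless overridden
    keepalive_timeout 65;

    server {
        listen 8000;
        server_name a.example.com;
        # no keepalive_timeout here, so this server inherits 65s from the http block
    }

    server {
        listen 8000;
        server_name b.example.com;
        # the lower-level block takes precedence: this server uses 30s
        keepalive_timeout 30;
    }
}
```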

The structure of the entire configuration file is as follows:

#global block
#user nobody;
worker_processes 1;

#events block
events {
 worker_connections 1024;
}

#http block
http {
 #http global block
 include mime.types;
 default_type application/octet-stream;
 sendfile on;
 keepalive_timeout 65;
 #server block
 server {
  #server global block
  listen 8000;
  server_name localhost;
  #location block
  location / {
   root html;
   index index.html index.htm;
  }
  error_page 500 502 503 504 /50x.html;
  location = /50x.html {
   root html;
  }
 }
 #There can be multiple server blocks here
 server {
  ...
 }
}

Global Block

The global block is the part of the default configuration file from the beginning to the events block. It mainly sets some configuration instructions that affect the overall operation of the Nginx server. Therefore, the scope of these instructions is the global Nginx server.

It usually includes configuring the user (group) running the Nginx server, the number of worker processes allowed to be generated, the Nginx process PID storage path, the log storage path and type, and the introduction of configuration files.

# Specify the user and group that run the nginx worker processes. Can only be configured in the global block.
# user [user] [group];
# Comment out the user directive, or set it to nobody, so that any user can run nginx:
# user nobody nobody;
# The user directive does not work on Windows. If a specific user and group are configured there, a warning is reported:
# nginx: [warn] "user" is not supported, ignored in D:\software\nginx-1.18.0/conf/nginx.conf:2

# Specify the number of worker processes, either a specific number or automatic mode. Can only be configured in the global block.
# worker_processes number | auto;
# Example: four worker processes. One master process and four worker processes will be created.
# worker_processes 4;

# Specify the path where the pid file is stored. Can only be configured in the global block.
# pid logs/nginx.pid;

# Specify the path and level of the error log. This directive can be configured in the global block, http block, server block and location block; configured in a lower-level block, it applies only to that context.
# Debug-level logging requires nginx to be compiled with --with-debug.
# error_log [path] [debug | info | notice | warn | error | crit | alert | emerg];
# error_log logs/error.log notice;
# error_log logs/error.log info;

Events Block

The instructions involved in the events block mainly affect the network connection between the Nginx server and the user. Commonly used settings include whether to enable serialization of network connections under multiple worker processes, whether to allow receiving multiple network connections at the same time, which event-driven model to select to handle connection requests, and the maximum number of connections that each worker process can support simultaneously.

The instructions in this part have a great impact on the performance of the Nginx server and should be flexibly adjusted according to the actual situation in actual configuration.

# On a multi-process Nginx server, when a single connection arrives, several sleeping worker processes may be woken up at the same time, although only one of them can obtain the connection. Waking too many processes each time wastes part of the system's performance.
# When accept_mutex is enabled, the worker processes accept new connections in turn (serialized), so that multiple processes do not compete for the same connection.
# Enabled by default; can only be configured in the events block.
# accept_mutex on | off;

# If multi_accept is disabled, an nginx worker process accepts only one new connection at a time; otherwise a worker process accepts all new connections at once.
# This directive is ignored when nginx uses the kqueue connection method, because kqueue reports the number of new connections waiting to be accepted.
# Default off; can only be configured in the events block.
# multi_accept on | off;

# Specify which network I/O model to use. The available methods are select, poll, kqueue, epoll, rtsig, /dev/poll and eventport. In general, no operating system supports all of these models.
# Can only be configured in the events block.
# use method;
# use epoll;

# Set the maximum number of connections each worker process may have open at the same time; beyond this value a worker process accepts no more connections.
# When all worker processes are full, new connections wait in the listen backlog; when the backlog is full, connections are refused.
# Can only be configured in the events block.
# Note: this value must not exceed the maximum number of open files supported by the system or by a single process. For details see: https://cloud.tencent.com/developer/article/1114773
# worker_connections 1024;

http Block

The http block is an important part of the Nginx server configuration. Most functions such as proxy, cache and log definition and the configuration of third-party modules can be placed in this module.

As mentioned earlier, the http block can contain its own global blocks and server blocks, and the server block can further contain location blocks. In this book, we use "http global block" to represent the global block in http, that is, the part of the http block that is not included in the server block.

The instructions that can be configured in the http global block include file import, MIME-Type definition, log customization, whether to use sendfile to transfer files, connection timeout, and the upper limit of single connection requests.

# Common browsers can display a wide variety of text, media and other resources, including HTML, XML, GIF and Flash. In order to distinguish these resources, the browser needs to use MIME Type. In other words, MIME Type is the media type of a network resource. As a web server, the Nginx server must be able to identify the resource type requested by the front-end.

# The include directive is used to include other configuration files. It can be placed anywhere in the configuration file. However, please note that the configuration file you include must comply with the configuration specifications. For example, if the configuration you include is the configuration of the worker_processes directive, and you include this directive in the http block, this will definitely not work. As mentioned above, the worker_processes directive can only be in the global block.
# The following directive includes mime.types. Here mime.types and nginx.conf are in the same directory; if they are at different levels, the full path must be specified.
# include mime.types;

# Configure the default type. If this directive is not added, the default value is text/plain.
# This directive can also be configured in the http block, server block or location block.
# default_type application/octet-stream;

# access_log configuration. This directive can be set in the http block, server block or location block.
# In the global block we introduced the error_log directive, which configures the storage and level of the log that the Nginx process writes while running. The log referred to here is different: it records how the Nginx server handled front-end requests.
# access_log path [format [buffer=size]];
# To turn off the access log, use:
# access_log off;

# The log_format directive defines a log format. It can only be configured in the http block.
# log_format main '$remote_addr - $remote_user [$time_local] "$request" '
#                 '$status $body_bytes_sent "$http_referer" '
#                 '"$http_user_agent" "$http_x_forwarded_for"';
# After defining the format above, the log can be used like this:
# access_log logs/access.log main;

# Enable or disable sendfile file transfer. Can be configured in the http block, server block or location block.
# sendfile on | off;

# Set the maximum amount of data transferred per sendfile() call. Can be configured in the http block, server block or location block.
# sendfile_max_chunk size;
# If size is greater than 0, the amount of data each worker process transfers in a single sendfile() call cannot exceed this value (with 128k below, at most 128k per call); if set to 0, there is no limit. The default is 0.
# sendfile_max_chunk 128k;
# sendfile_max_chunk 128k;

# Configure the connection keep-alive timeout. This directive can be configured in the http block, server block or location block.
# After establishing a connection with a client, the Nginx server can keep the connection open for a period of time. timeout is the server-side keep-alive time, 75s by default. header_timeout is optional and sets the timeout in the Keep-Alive field of the response header: "Keep-Alive: timeout=header_timeout". This header field can be recognized by Mozilla and Konqueror.
# keepalive_timeout timeout [header_timeout];
# The following configuration keeps connections open for 120 seconds on the server side and sets the Keep-Alive timeout sent to the client to 100 seconds.
# keepalive_timeout 120s 100s;

# Configure the upper limit on the number of requests per connection. Can be configured in the http block, server block or location block.
# After the Nginx server and a client establish a connection, the client sends requests through it. The keepalive_requests directive limits how many requests a client may send to the Nginx server through one connection. The default is 100.
# keepalive_requests number;

Server Block

Server blocks are closely related to the concept of "virtual hosts".

Virtual hosting, also known as virtual server, hosting space or web space, is a technology. This technology emerged to save the cost of Internet server hardware. The "host" or "space" here is extended from the physical server. The hardware system can be based on a server cluster or a single server, etc. Virtual host technology is mainly used in multiple services such as HTTP, FTP and EMAIL. It logically divides one or all service contents of a server into multiple service units, which appear as multiple servers to the outside world, thereby making full use of server hardware resources. From the user's perspective, a virtual host is exactly the same as a standalone hardware host.

When using the Nginx server to provide Web services, the use of virtual host technology can avoid providing a separate Nginx server for each website to be run, and there is no need to run a set of Nginx processes for each website. Virtual host technology allows the Nginx server to run only one set of Nginx processes on the same server and run multiple websites.

As mentioned earlier, each http block can contain multiple server blocks, and each server block is equivalent to a virtual host. Multiple hosts can jointly provide services within it, together providing a group of logically closely related services (or websites) to the outside world.

Like the http block, the server block can also contain its own global block and can contain multiple location blocks. In the server global block, the two most common configuration items are the listener configuration of this virtual host and the name or IP configuration of this virtual host.

listen Directive

The most important instruction in the server block is the listen instruction, which has three configuration syntaxes. The default configuration value of this directive is: listen *:80 | *:8000; this directive can only be configured in the server block.

# First form:
listen address[:port] [default_server] [ssl] [http2 | spdy] [proxy_protocol] [setfib=number] [fastopen=number] [backlog=number] [rcvbuf=size] [sndbuf=size] [accept_filter=filter] [deferred] [bind] [ipv6only=on|off] [reuseport] [so_keepalive=on|off|[keepidle]:[keepintvl]:[keepcnt]];

# Second form:
listen port [default_server] [ssl] [http2 | spdy] [proxy_protocol] [setfib=number] [fastopen=number] [backlog=number] [rcvbuf=size] [sndbuf=size] [accept_filter=filter] [deferred] [bind] [ipv6only=on|off] [reuseport] [so_keepalive=on|off|[keepidle]:[keepintvl]:[keepcnt]];

# Third form (rarely needed):
listen unix:path [default_server] [ssl] [http2 | spdy] [proxy_protocol] [backlog=number] [rcvbuf=size] [sndbuf=size] [accept_filter=filter] [deferred] [bind] [so_keepalive=on|off|[keepidle]:[keepintvl]:[keepcnt]];
The configuration of the listen instruction is very flexible. You can specify an IP address, a port, or both an IP address and a port.

listen 127.0.0.1:8000; # Listen only for requests to IP 127.0.0.1, port 8000
listen 127.0.0.1;      # Listen only for requests to IP 127.0.0.1, port 80 (no port given, so 80 is the default)
listen 8000;           # Listen for requests from all IPs to port 8000
listen *:8000;         # Same as above
listen localhost:8000; # Same effect as the first line

The following are some important parameters:

  • address: the IP address to listen on (a local address of the server). If it is an IPv6 address, it must be enclosed in square brackets "[]", such as [fe80::1].
  • port: port number. If only the IP address is defined but not the port number, port 80 will be used. A note here: if you don't configure the listen directive at all, then if nginx is running with superuser privileges, use *:80, otherwise use *:8000. Multiple virtual hosts can listen to the same port at the same time, but the server_name needs to be set to different ones;
  • default_server: If no virtual host is matched through the request's Host header, the request is processed by this virtual host.
  • backlog=number: Sets the maximum number of network connections that the listen function listen() allows to be in suspended state at the same time. The default is -1 in FreeBSD and 511 on other platforms.
  • accept_filter=filter, set the listening port to filter requests. The filtered content cannot be received and processed. This command is only valid on FreeBSD and NetBSD 5.0+ platforms. The filter can be set to dataready or httpready. Interested readers can refer to the official documentation of Nginx.
  • bind: identifier, use a separate bind() to process this address:port; generally, for multiple connections with the same port but different IP addresses, the Nginx server will only use one listening command and use bind() to process all connections with the same port.
  • ssl: Identifier, set the session connection to use SSL mode. This identifier is related to the HTTPS service provided by the Nginx server.

The use of the listen command seems complicated, but in fact, in general use, it is relatively simple and does not require too complicated configuration.
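As a small illustration of default_server (the server names are illustrative; 444 is a special nginx status code that closes the connection without sending a response):

```nginx
server {
    # handles any request on port 80 whose Host header matches no other server_name
    listen 80 default_server;
    server_name _;
    return 444;  # close the connection without a response
}

server {
    listen 80;
    server_name www.example.com;
    root html;
}
```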

server_name Directive

The name used to configure the virtual host. The syntax is:

Syntax: server_name name ...;
Default: 
server_name "";
Context: server

For name there can be a single name, or several names listed in parallel, separated by spaces. Each name is a domain name, made up of two or three segments separated by dots ".". For example:

server_name myserver.com www.myserver.com;

In this example, the name of this virtual host is set to myserver.com or www.myserver.com. The Nginx server specifies the first name as the primary name for this virtual host.

The wildcard "*" can be used in a name, but only in the first or last segment of a three-segment name, or in the last segment of a two-segment name, for example:

server_name myserver.* *.myserver.com;

In addition, name also supports regular expressions. I won’t go into details here.

Because the server_name directive supports two ways of configuring names: using wildcards and regular expressions, in a configuration file containing multiple virtual hosts, a name may be successfully matched by the server_name of multiple virtual hosts. So, which virtual host should handle the request from this name? The Nginx server makes the following provisions:

a. For different matching methods, virtual hosts are selected according to the following priority, and the requests in the front are processed first.

① Exact match of server_name
② Leading wildcard match of server_name
③ Trailing wildcard match of server_name
④ Regular expression match of server_name

b. In the above four matching methods, if server_name is matched successfully multiple times by the matching methods with the same priority, the virtual host that matches successfully for the first time will process the request.
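The priority rules above can be illustrated with a sketch (the domain names are illustrative):

```nginx
# A request with "Host: www.myserver.com" is handled by the first server below
# (exact match), even though the other three would also match it.
server { listen 80; server_name www.myserver.com; }      # ① exact match
server { listen 80; server_name *.myserver.com; }        # ② leading wildcard
server { listen 80; server_name www.myserver.*; }        # ③ trailing wildcard
server { listen 80; server_name ~^www\.myserver\.com$; } # ④ regular expression
```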

Sometimes we want to use a virtual host configuration based on IP addresses, for example, access to 192.168.1.31 is handled by virtual host 1, and access to 192.168.1.32 is handled by virtual host 2.

At this time, we need to bind the alias to the network card first. For example, the IP previously bound to the network card is 192.168.1.30. Now bind both IPs 192.168.1.31 and 192.168.1.32 to this network card, then requests to these two IPs will reach this machine.

After binding the alias, perform the following configuration:

http
{
 server
 {
  listen 80;
  server_name 192.168.1.31;
  ...
 }
 server
 {
  listen 80;
  server_name 192.168.1.32;
  ...
 }
}

Location Block

Each server block can contain multiple location blocks. The location block plays an important role in the entire Nginx configuration, and much of the Nginx server's flexibility comes from the configuration of the location directive.

Based on the request string received by the Nginx server (for example, server_name/uri-string), the main function of the location block is to match the part other than the virtual host name (which can also be an IP alias, explained in detail later), that is, the "/uri-string" part, and to process the matched request. Address redirection, data caching and response control are all implemented in this part, and many third-party modules also provide their functionality through the location block.

The grammatical structure of location defined in the official Nginx documentation is:

location [ = | ~ | ~* | ^~ ] uri { ... }

The uri variable is the request string to be matched, which can be a string without a regular expression, such as /myserver.php, or a string with a regular expression, such as .php$ (indicating a URL ending with .php). For the convenience of the following description, we agree that URIs without regular expressions are called "standard URIs" and URIs using regular expressions are called "regular URIs".

The part in square brackets is optional and is used to change the way the request string matches the URI. Before introducing the meaning of the four flags, we need to understand how the Nginx server searches for and uses the URI of the location block in the server block to match the request string when this option is not added.

When this option is not added, the Nginx server first searches multiple location blocks in the server block to see if there is a match between the standard URI and the request string. If there are multiple matches, the one with the highest match is recorded. The server then matches the request string with the regular URI in the location block. When the first regular URI matches successfully, the search ends and the location block is used to process the request. If all regular matches fail, the location block with the highest match just recorded is used to process the request.

Knowing the above content, we can explain the meaning of each flag in the optional options:

  • "=" is used before the standard URI, requiring the request string to strictly match the URI. If a match is found, stop searching and process the request immediately.
  • “^~” is used before the standard URI. It requires the Nginx server to find the location whose standard URI has the highest degree of match with the request string and use that location immediately, without matching the request string against the regular URIs in the location blocks.
  • “~” is used to indicate that the URI contains a regular expression and is case-sensitive.
  • “~*” is used to indicate that the URI contains a regular expression and is case-insensitive. Note that if the uri contains a regular expression, you must use the "~" or "~*" symbol.

We know that when the browser transmits a URI, some characters are URL-encoded, for example a space becomes "%20" and a question mark becomes "%3f". One feature of “~” is that it matches against the decoded form of these characters in the URI: if the request URI is "/html/%20/data", a location configured as "~ /html/ /data" (with a literal space) matches it successfully.
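Putting the four flags together, a sketch (the paths are illustrative):

```nginx
server {
    listen 80;

    # "=": exact match; only a request for "/" itself hits this block
    location = / {
        index index.html;
    }

    # "^~": if this is the best prefix match, use it immediately and skip the regex locations
    location ^~ /static/ {
        root html;
    }

    # "~*": case-insensitive regex; matches /img/a.GIF as well as /img/a.gif
    location ~* \.(gif|jpg|png)$ {
        root html;
    }

    # plain prefix location, used when nothing above matches better
    location / {
        root html;
    }
}
```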

root Directive

This directive is used to set the root directory for requesting resources. This directive can be configured in the http block, server block or location block. Since in most cases when using Nginx server, multiple location blocks need to be configured to handle different requests separately, this instruction is usually set in the location block.

root path;

The path variable can contain most of the variables preset by the Nginx server, except for $document_root and $realpath_root.
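For example, a sketch of how root maps a request URI onto the filesystem (the paths are illustrative):

```nginx
location /download/ {
    root /data/web;
    # A request for /download/file.zip is served from
    # /data/web/download/file.zip (root is prepended to the full URI)
}
```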

alias Directive
index Directive
error_page Directive

A few notes

The above lists some configuration directives for the global block, events block and http block, but Nginx has far more directives than these. The main purpose here is to explain the structure of the whole configuration file. For a more complete reference of directives and modules, see the official Nginx documentation.

An example of a configuration file

###### Detailed explanation of the Nginx configuration file nginx.conf ######

#Define the user and group that Nginx runs as
user www www;

#Number of nginx processes. It is recommended to set it equal to the total number of CPU cores.
worker_processes 8;
 
#Global error log definition type, [ debug | info | notice | warn | error | crit ]
error_log /usr/local/nginx/logs/error.log info;

#Process pid file
pid /usr/local/nginx/logs/nginx.pid;

#Specify the maximum number of file descriptors an nginx process can open.
#The theoretical value is the maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests evenly, so it is best to keep this consistent with the value of ulimit -n.
#Under the Linux 2.6 kernel the open file limit is 65535, so worker_rlimit_nofile is set to 65535 accordingly.
#Because nginx does not allocate requests to processes evenly, if you set 10240 and total concurrency reaches 30,000 to 40,000, some processes may exceed 10240 connections, and a 502 error will be returned.
worker_rlimit_nofile 65535;


events
{
 #Event model: use [ kqueue | rtsig | epoll | /dev/poll | select | poll ];
 #epoll is a high-performance network I/O model in Linux kernels 2.6 and above; on Linux, epoll is recommended. If running on FreeBSD, use the kqueue model.
 #Supplementary explanation:
 #Like Apache, nginx has different event models for different operating systems.
 #A) Standard event models: select and poll. If the current system has no more efficient method, nginx chooses select or poll.
 #B) Efficient event models:
 #Kqueue: used on FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0 and MacOS X. Using kqueue on dual-processor MacOS X systems may cause a kernel panic.
 #Epoll: used on Linux kernel 2.6 and later.
 #/dev/poll: used on Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+ and Tru64 UNIX 5.1A+.
 #Eventport: used on Solaris 10. To prevent kernel crashes, the relevant security patches must be installed.
 use epoll;

 #Maximum number of connections per worker process (theoretical total maximum connections = worker_connections × number of worker processes)
 #Adjust according to the hardware, in conjunction with the worker processes configured above; set it as large as possible without driving the CPU to 100%.
 worker_connections 65535;

}



#Set up the http server and use its reverse proxy function to provide load balancing support
http
{
 #This specifies the cache for open file descriptors. It is not enabled by default. max sets the number of cache entries (recommended to match the maximum number of open files); inactive is the time after which a cached file that has not been requested is removed.
 #Note: the open_file_cache* directives (like keepalive_timeout and client_header_buffer_size, which are set further down in this block) are valid only in the http, server and location contexts, not in the events block. client_header_buffer_size must be an integer multiple of the system page size, which can be obtained with the command getconf PAGESIZE.
 open_file_cache max=65535 inactive=60s;

 #How often to check the validity of the cached entries.
 #Syntax: open_file_cache_valid time; Default: open_file_cache_valid 60s; Contexts: http, server, location.
 open_file_cache_valid 80s;

 #The minimum number of times a file must be used within the inactive period of the open_file_cache directive for its descriptor to remain open in the cache. As in the example above, a file that is not used once within the inactive time is removed.
 #Syntax: open_file_cache_min_uses number; Default: open_file_cache_min_uses 1; Contexts: http, server, location.
 open_file_cache_min_uses 1;

 #Whether to log errors encountered when searching for a file.
 #Syntax: open_file_cache_errors on | off; Default: open_file_cache_errors off; Contexts: http, server, location.
 open_file_cache_errors on;

 #File extension and file type mapping table
 include mime.types;

 #Default file type
 default_type application/octet-stream;

 #Default encoding
 #charset utf-8;

 #Server name hash table size
 #The hash tables that store server names are controlled by the directives server_names_hash_max_size and server_names_hash_bucket_size. The hash bucket size is always a multiple of the processor cache line size, which speeds up key lookups by reducing memory accesses: if the bucket size equals one cache line, the worst-case number of memory accesses during a key search is two, one to determine the bucket address and one to find the key inside the bucket. Therefore, if Nginx reports that the hash max size or hash bucket size needs to be increased, first increase the former parameter.
 server_names_hash_bucket_size 128;
 server_names_hash_bucket_size 128;

 #The buffer size for client request headers. This can be set according to your system paging size. Generally, the header size of a request will not exceed 1k. However, since the system paging is generally larger than 1k, it is set to the paging size here. The paging size can be obtained using the command getconf PAGESIZE.
 client_header_buffer_size 32k;

 #Client request header buffer size. By default, nginx uses the client_header_buffer_size buffer to read the header value. If the header is too large, it will use large_client_header_buffers to read it.
 large_client_header_buffers 4 64k;

 #Maximum allowed size of a client request body, e.g. a file upload
 client_max_body_size 8m;

 #Turn on efficient file transfer mode. The sendfile directive specifies whether nginx calls the sendfile function (zero-copy mode) to output files. For common web applications set it to on; for disk-I/O-heavy uses such as file download services it can be set to off, to balance disk and network I/O processing speed and reduce system load. Note: if images do not display properly, change this to off.
 sendfile on;

 #Enable directory listing, suitable for download servers. Disabled by default.
 autoindex on;

 #tcp_nopush enables or disables the socket TCP_NOPUSH (FreeBSD) / TCP_CORK (Linux) option; it takes effect only when sendfile is on.
 tcp_nopush on;

 tcp_nodelay on;

 #Keep-alive connection timeout, in seconds
 keepalive_timeout 120;

 #FastCGI related parameters are designed to improve website performance: reduce resource usage and increase access speed. The following parameters can be understood literally.
 fastcgi_connect_timeout 300;
 fastcgi_send_timeout 300;
 fastcgi_read_timeout 300;
 fastcgi_buffer_size 64k;
 fastcgi_buffers 4 64k;
 fastcgi_busy_buffers_size 128k;
 fastcgi_temp_file_write_size 128k;

 #gzip module settings
 gzip on; #enable gzip compressed output
 gzip_min_length 1k; #minimum file size to compress
 gzip_buffers 4 16k; #compression buffers
 gzip_http_version 1.0; #HTTP version used for compression (default 1.1; use 1.0 if the front end is squid 2.5)
 gzip_comp_level 2; #compression level
 gzip_types text/plain application/x-javascript text/css application/xml; #types to compress. text/html is always compressed by default and does not need to be listed; listing it causes no problem but produces a warning.
 gzip_vary on;
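The reason for gzip_min_length is that compressing very small responses can make them larger, since the gzip header and trailer add roughly 20 bytes of overhead. A minimal sketch using Python's standard gzip module (exact sizes depend on the zlib version, so only the direction of the comparison matters):

```python
import gzip

# A tiny payload: gzip's fixed header/trailer overhead outweighs any savings.
tiny = b"ok"
tiny_gz = gzip.compress(tiny)

# A ~1 KB repetitive payload (typical HTML is also highly redundant) shrinks a lot.
page = b"<li>item</li>" * 80
page_gz = gzip.compress(page)

print(len(tiny), len(tiny_gz))   # compressed "ok" is larger than the original
print(len(page), len(page_gz))  # the 1 KB page compresses to a small fraction
```

This is why gzip_min_length 1k skips compression for responses below about one kilobyte.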

 #Needed when you enable limiting the number of connections per IP:
 #limit_zone crawler $binary_remote_addr 10m;

 

 #Load balancing configuration
 upstream jh.w3cschool.cn {
  
  #weight sets each server's share of the load and can be chosen according to machine capacity: the higher the weight, the greater the probability a request is assigned to that server.
  server 192.168.80.121:80 weight=3;
  server 192.168.80.122:80 weight=2;
  server 192.168.80.123:80 weight=3;

  #nginx's upstream currently supports five allocation strategies:
  #1. round robin (default)
  #Requests are assigned to the backend servers one by one in order; if a backend server goes down, it is removed automatically.
  #2. weight
  #Sets the polling probability; the weight is proportional to the share of requests. Use it when backend servers have uneven performance.
  #For example:
  #upstream bakend {
  # server 192.168.0.14 weight=10;
  # server 192.168.0.15 weight=10;
  #}
  #3. ip_hash
  #Each request is assigned according to the hash of the client IP, so each visitor always reaches the same backend server; this can solve the session-stickiness problem.
  #For example:
  #upstream bakend {
  # ip_hash;
  # server 192.168.0.14:88;
  # server 192.168.0.15:80;
  #}
  #4. fair (third party)
  #Requests are assigned according to the backend server's response time; servers with shorter response times are preferred.
  #upstream backend {
  # server server1;
  # server server2;
  # fair;
  #}
  #5. url_hash (third party)
  #Requests are distributed according to the hash of the requested URL, so each URL is directed to the same backend server; this is most effective when the backend servers cache responses.
  #Example: add a hash statement in the upstream block; other parameters such as weight must not appear in the server lines. hash_method selects the hash algorithm.
  #upstream backend {
  # server squid1:3128;
  # server squid2:3128;
  # hash $request_uri;
  # hash_method crc32;
  #}
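The default weighted round robin does not send a heavy server all its requests in a row; nginx spreads them out using a "smooth" weighted round-robin. A minimal Python sketch of that algorithm (server names and weights here are illustrative):

```python
# Smooth weighted round-robin, the idea behind nginx's default balancer:
# each turn, every peer's current_weight grows by its configured weight,
# the peer with the largest current_weight is picked, and that peer's
# current_weight is then reduced by the total of all weights.
def smooth_wrr(weights, n):
    current = {name: 0 for name in weights}
    total = sum(weights.values())
    picks = []
    for _ in range(n):
        for name, w in weights.items():
            current[name] += w
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks

# With weights 5:1:1, the heavy server's requests are interleaved:
print(smooth_wrr({"a": 5, "b": 1, "c": 1}, 7))
# → ['a', 'a', 'b', 'a', 'c', 'a', 'a']
```

Over any window of `total` picks, each server receives exactly its weight's share of requests.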

  #tips:
  #Define the IPs and status of the load-balanced backend servers:
  #upstream bakend {
  # ip_hash;
  # server 127.0.0.1:9090 down;
  # server 127.0.0.1:8080 weight=2;
  # server 127.0.0.1:6060;
  # server 127.0.0.1:7070 backup;
  #}
  #In the server block that needs load balancing, add: proxy_pass http://bakend/;

  #The status of each device is set to:
  #1. down: the current server temporarily does not take part in the load.
  #2. weight: the larger the weight, the larger the share of the load.
  #3. max_fails: the number of failed requests allowed, 1 by default. When it is exceeded, the error defined by the proxy_next_upstream module is returned.
  #4. fail_timeout: the pause time after max_fails failures.
  #5. backup: requests go to a backup machine only when all non-backup machines are down or busy, so these machines carry the lightest load.

  #nginx supports setting up multiple groups of load balancing at the same time for use by different servers.
  #client_body_in_file_only set to on records the data POSTed by the client into a file, for debugging.
  #client_body_temp_path sets the directory for those files; up to three levels of directories can be configured.
  #location matches URLs and can redirect or apply a new proxy/load-balancing rule.
 }
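Putting the pieces above together, a minimal wiring of an upstream group to a proxied location looks like this (names and addresses are illustrative, not part of the original configuration):

```nginx
upstream bakend {
    server 192.168.0.14:80 weight=2;
    server 192.168.0.15:80;
    server 192.168.0.16:80 backup;
}

server {
    listen 80;
    location / {
        #Requests for / are balanced across the bakend group
        proxy_pass http://bakend/;
    }
}
```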
  
  
  
 #Virtual host configuration
 server {
  #Listening port
  listen 80;

  #There can be multiple domain names, separated by spaces
  server_name www.w3cschool.cn w3cschool.cn;
  index index.html index.htm index.php;
  root /data/www/w3cschool;

  #Load balancing for ******
  location ~ .*\.(php|php5)?$
  {
   fastcgi_pass 127.0.0.1:9000;
   fastcgi_index index.php;
   include fastcgi.conf;
  }
   
  #Image cache time settings
  location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$
  {
   expires 10d;
  }
   
  #JS and CSS cache time settings
  location ~ .*\.(js|css)?$
  {
   expires 1h;
  }
   
  #Log format settings
  #$remote_addr and $http_x_forwarded_for: record the client's IP address;
  #$remote_user: used to record the client user name;
  #$time_local: used to record access time and time zone;
  #$request: used to record the requested URL and http protocol;
  #$status: records the request status; 200 means success;
  #$body_bytes_sent: records the size of the file body sent to the client;
  #$http_referer: used to record the page link from which the visit came;
  #$http_user_agent: records the relevant information of the client browser;
  #Usually the web server sits behind a reverse proxy and cannot obtain the client's real IP address: the address in $remote_addr is that of the reverse proxy server. The reverse proxy can add X-Forwarded-For information to the HTTP headers of the forwarded request to record the original client's IP address and the server address the client originally requested.
  log_format access '$remote_addr - $remote_user [$time_local] "$request" '
  '$status $body_bytes_sent "$http_referer" '
  '"$http_user_agent" $http_x_forwarded_for';
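To see how a line produced by this format breaks down into its fields, here is a short Python sketch that parses one such line with a regular expression (the sample line is fabricated for illustration):

```python
import re

# Field-by-field pattern for the 'access' log_format defined above:
# $remote_addr - $remote_user [$time_local] "$request" $status
# $body_bytes_sent "$http_referer" "$http_user_agent" $http_x_forwarded_for
LOG_RE = re.compile(
    r'(?P<remote_addr>\S+) - (?P<remote_user>\S+) \[(?P<time_local>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<body_bytes_sent>\d+) '
    r'"(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)" '
    r'(?P<http_x_forwarded_for>.*)'
)

sample = ('203.0.113.7 - - [10/Oct/2023:13:55:36 +0800] "GET /index.html HTTP/1.1" '
          '200 612 "-" "Mozilla/5.0" 198.51.100.1')
m = LOG_RE.match(sample)
print(m.group("remote_addr"), m.group("status"), m.group("http_x_forwarded_for"))
# → 203.0.113.7 200 198.51.100.1
```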
   
  #Define this virtual host's access logs
  access_log /usr/local/nginx/logs/host.access.log main;
  access_log /usr/local/nginx/logs/host.access.404.log log404;
   
  #Enable reverse proxy for "/"
  location / {
   proxy_pass http://127.0.0.1:88;
   proxy_redirect off;
   proxy_set_header X-Real-IP $remote_addr;
    
   #The backend Web server can obtain the user's real IP through X-Forwarded-For
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
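Because $proxy_add_x_forwarded_for appends each hop's address to the existing header, the original client is the first entry in the comma-separated list. A sketch of how a backend might recover it (the function name is illustrative; trusting this header is only safe when requests can reach the backend exclusively through your own proxies):

```python
def client_ip(x_forwarded_for: str, remote_addr: str) -> str:
    """Return the original client IP: the first entry of the
    X-Forwarded-For chain, falling back to the direct peer address."""
    if x_forwarded_for:
        return x_forwarded_for.split(",")[0].strip()
    return remote_addr

# Two proxies appended their peers' addresses after the real client:
print(client_ip("203.0.113.7, 10.0.0.2, 10.0.0.3", "10.0.0.3"))  # → 203.0.113.7
# No header present: use the TCP peer address.
print(client_ip("", "198.51.100.9"))  # → 198.51.100.9
```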
    
   #The following are some reverse proxy configurations, optional.
   proxy_set_header Host $host;

   #Maximum size of a single file the client is allowed to request
   client_max_body_size 10m;

   #Maximum number of bytes the proxy buffers for a client request body.
   #If you set it to a relatively large value such as 256k, then submitting any image smaller than 256k works, whether from Firefox or IE. If you comment out this directive and rely on the default client_body_buffer_size (twice the operating system page size, i.e. 8k or 16k), a problem appears:
   #with either Firefox 4.0 or IE 8.0, submitting a relatively large image of about 200k returns a 500 Internal Server Error.
   client_body_buffer_size 128k;

   #Make nginx intercept backend responses with status codes of 300 or higher and handle them with the error_page directive.
   proxy_intercept_errors on;

   #Timeout for nginx to connect to the backend server: initiate the handshake and wait for a response (proxy connect timeout)
   proxy_connect_timeout 90;

   #Backend server data transmission timeout (proxy send timeout):
   #the backend server must send all data within this time
   proxy_send_timeout 90;

   #Backend server response timeout after the connection succeeds (proxy read timeout):
   #the time spent waiting for the backend server to respond, i.e. the time the request spends queued and being processed by the backend
   proxy_read_timeout 90;

   #Size of the buffer the proxy server (nginx) uses to store the first part of the response read from the backend, which usually contains a small response header. By default this value equals the size of one buffer set by the proxy_buffers directive, but it can be made smaller.
   proxy_buffer_size 4k;

   #proxy_buffers: the number and size of buffers used to read the response from the proxied server; for an average web page, keep the total below 32k. The default size is one memory page, 4k or 8k depending on the operating system.
   proxy_buffers 4 32k;

   #Buffer size under high load (proxy_buffers*2)
   proxy_busy_buffers_size 64k;

    #Size of the data written to proxy_temp_path at a time, to prevent a worker process from blocking too long while spooling a file. Responses larger than the buffers are written to this temporary path while being relayed from the upstream server.
    proxy_temp_file_write_size 64k;
  }
   
   
  #Set the address for viewing Nginx status
  location /NginxStatus {
   stub_status on;
   access_log on;
   auth_basic "NginxStatus";
   auth_basic_user_file conf/htpasswd;
   #The contents of the htpasswd file can be generated with the htpasswd tool shipped with Apache.
  }
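The stub_status page returns a small fixed-format text body. A Python sketch parsing it into a dict (the sample counters are fabricated; the layout follows the format documented for ngx_http_stub_status_module):

```python
import re

# Typical body returned by the stub_status location:
sample = """Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
"""

def parse_stub_status(body: str) -> dict:
    lines = body.strip().splitlines()
    active = int(lines[0].split(":")[1])
    accepts, handled, requests = (int(n) for n in lines[2].split())
    reading, writing, waiting = (int(n) for n in re.findall(r"\d+", lines[3]))
    return {"active": active, "accepts": accepts, "handled": handled,
            "requests": requests, "reading": reading,
            "writing": writing, "waiting": waiting}

stats = parse_stub_status(sample)
print(stats["active"], stats["requests"])  # → 291 31070465
```

If accepts and handled diverge, connections are being dropped, usually for lack of worker_connections.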
   
  #Local dynamic/static separation reverse proxy configuration
  #All JSP pages are handled by Tomcat or Resin
  location ~ \.(jsp|jspx|do)?$ {
   proxy_set_header Host $host;
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_pass http://127.0.0.1:8080;
  }
   
  #All static files are read directly by nginx without passing through tomcat or resin
  location ~ .*\.(htm|html|gif|jpg|jpeg|png|bmp|swf|ico|rar|zip|txt|flv|mid|doc|ppt|pdf|xls|mp3|wma)$
  {
   expires 15d; 
  }
   
  location ~ .*\.(js|css)?$
  {
   expires 1h;
  }
 }
}

