Detailed explanation of Nginx static file service configuration and optimization

Root directory and index file

The root directive specifies the root directory that will be used to search for files. To obtain the path to the requested file, NGINX appends the request URI to the path specified by the root directive. This directive can be placed at any level within an http{}, server{} or location{} context. In the following example, a root directive is defined for a virtual server. It applies to all location {} blocks that do not contain a root directive to explicitly redefine the root:

server {
  root /www/data;

  location / {
  }

  location /images/ {
  }

  location ~ \.(mp3|mp4)$ {
    root /www/media;
  }
}

Here, NGINX searches for files in the /www/data/images/ directory on the file system for URIs beginning with /images/. If the URI ends with a .mp3 or .mp4 extension, NGINX searches for the file in the /www/media/ directory because it is defined in a matching location block.
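To make the mapping concrete, the configuration above resolves requests like this (the file names are hypothetical):

```nginx
server {
    root /www/data;

    location /images/ {
        # GET /images/logo.png  ->  /www/data/images/logo.png
    }

    location ~ \.(mp3|mp4)$ {
        root /www/media;
        # GET /music/song.mp3  ->  /www/media/music/song.mp3
        # Note: the location prefix is NOT stripped; the full request
        # URI is appended to the value of root.
    }
}
```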

If a request ends with /, NGINX treats it as a request for a directory and tries to find an index file there. The index directive defines the name of the index file (the default value is index.html). Continuing the example, if the request URI is /images/some/path/, NGINX returns /www/data/images/some/path/index.html if it exists; if not, NGINX returns an HTTP 404 (Not Found) error by default. To have NGINX return an automatically generated directory listing instead, include the autoindex directive with its on parameter:

location /images/ {
  autoindex on;
}
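The listing output can be tuned further with autoindex's companion directives; a small sketch, reusing the location from the example above:

```nginx
location /images/ {
    autoindex on;
    autoindex_exact_size off;  # show human-readable sizes (e.g. "1.2M") instead of exact bytes
    autoindex_localtime  on;   # show file times in the server's local time zone, not UTC
}
```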

You can list multiple file names in the index directive. NGINX searches for files in the order specified and returns the first file it finds.

location / {
  index index.$geo.html index.htm index.html;
}

The $geo variable used here is a custom variable set via the geo directive. The value of the variable depends on the client's IP address.
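As a hedged sketch of how such a $geo variable might be defined (the IP ranges and values here are purely illustrative; the geo block goes in the http {} context):

```nginx
http {
    # Map client addresses to a value used in the index file name.
    # With $geo = "uk", "index.$geo.html" becomes "index.uk.html";
    # the default "" yields "index..html", which normally does not
    # exist, so NGINX falls through to index.htm / index.html.
    geo $geo {
        default        "";
        192.168.1.0/24 uk;
        10.1.0.0/16    us;
    }
}
```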

To return the index file, NGINX checks whether it exists and then redirects internally to a new URI obtained by appending the name of the index file to the base URI. An internal redirect results in a new search for a location and may end up in another location, as shown in the following example:

location / {
  root /data;
  index index.html index.php;
}

location ~ \.php {
  fastcgi_pass localhost:8000;
  #...
}

Here, if the URI in the request is /path/ , and /data/path/index.html does not exist but /data/path/index.php does, then the internal redirect to /path/index.php will map to the second location. As a result, the request is proxied.

Trying several options

The try_files directive can be used to check whether a specified file or directory exists; if it does not, NGINX makes an internal redirect to the last parameter or, if the last parameter is a status code, returns that code. For example, to check whether the file corresponding to the request URI exists, use the try_files directive with the $uri variable as follows:

server {
  root /www/data;

  location /images/ {
    try_files $uri /images/default.gif;
  }
}

The file is specified as a URI, which is processed with the root or alias directives set in the context of the current location or virtual server. Here, if the file corresponding to the original URI does not exist, NGINX makes an internal redirect to the URI specified by the last parameter and returns /www/data/images/default.gif.

The last parameter can also be a status code (directly preceded by an equals sign) or the name of a location. In the following example, a 404 error is returned if none of the parameters to the try_files directive resolves to an existing file or directory:

location / {
  try_files $uri $uri/ $uri.html =404;
}

In the next example, if neither the original URI nor the URI with the additional trailing slash resolves to an existing file or directory, the request is redirected to the specified location and passed to the proxied server.

location / {
  try_files $uri $uri/ @backend;
}

location @backend {
  proxy_pass http://backend.example.com;
}

Optimize the performance of service content

Loading speed is a crucial factor in serving any content, and small optimizations to your NGINX configuration can noticeably improve efficiency and help achieve optimal performance.

Enable sendfile

By default, NGINX handles the file transfer itself, copying the file into a buffer before sending it. Enabling the sendfile directive eliminates the step of copying the data into the buffer and allows the data to be copied directly from one file descriptor to another. In addition, to prevent one fast connection from completely occupying the worker process, the sendfile_max_chunk directive can be used to limit the amount of data transferred in a single sendfile() call (1 MB in the example below):

location /mp3 {
  sendfile on;
  sendfile_max_chunk 1m;
  #...
}

Enable tcp_nopush

Use the tcp_nopush directive with the sendfile on; directive. This allows NGINX to send the HTTP response headers in a single packet immediately after sendfile() gets the chunk of data.

location /mp3 {
  sendfile on;
  tcp_nopush on;
  #...
}

Enable tcp_nodelay

The tcp_nodelay directive allows overriding Nagle's algorithm, which was originally designed to solve the problem of small packets in slow networks: it combines many small packets into one larger packet and sends it with a delay of up to 200 milliseconds. Today, when serving large static files, the data can be sent immediately regardless of packet size, and the delay also hurts interactive applications (ssh, online gaming, online trading, and so on). By default, the tcp_nodelay directive is set to on, which means that Nagle's algorithm is disabled. The directive applies only to keepalive connections:

location /mp3 {
  tcp_nodelay on;
  keepalive_timeout 65;
  #...
}
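Taken together, the three directives from this section are often enabled once at the http level so that they apply to every server; a combined sketch (the keepalive_timeout value shown is simply NGINX's default of 65 seconds):

```nginx
http {
    sendfile           on;   # kernel-level copy, no intermediate buffer
    sendfile_max_chunk 1m;   # cap a single sendfile() call at 1 MB
    tcp_nopush         on;   # send headers and the first data chunk together
    tcp_nodelay        on;   # disable Nagle's algorithm on keepalive connections
    keepalive_timeout  65;
    # ...
}
```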

Optimizing the backlog queue

One important factor is how quickly NGINX can handle incoming connections. When a connection is established, it is placed in the listen queue of the listening socket. Under normal load the queue is small or empty, but under high load it can grow dramatically, causing uneven performance, dropped connections, and increased latency.

Display the backlog queue

Use the command netstat -Lan to display the current listen queue. The output might look like the following, which shows 10 unaccepted connections in the listen queue on port 80, against a configured maximum of 128 queued connections. This situation is normal.

Current listen queue sizes (qlen/incqlen/maxqlen)
Listen Local Address     
0/0/128 *.12345      
10/0/128 *.80    
0/0/128 *.8080

In contrast, in the following output the number of unaccepted connections (192) exceeds the limit of 128. This is common when a website experiences heavy traffic. To achieve optimal performance, you need to increase the maximum number of connections that can be queued for acceptance by NGINX in both your operating system and the NGINX configuration.

Current listen queue sizes (qlen/incqlen/maxqlen)
Listen Local Address     
0/0/128 *.12345      
192/0/128 *.80    
0/0/128 *.8080

Adjusting the operating system

Increase the value of the somaxconn kernel parameter from its default (128) to a value high enough to accommodate the expected traffic. In this example, it is increased to 4096.

  • The command for FreeBSD is sudo sysctl kern.ipc.somaxconn=4096
  • On Linux: run sudo sysctl -w net.core.somaxconn=4096, and add net.core.somaxconn = 4096 to the /etc/sysctl.conf file so that the setting persists across reboots.

Tuning NGINX

If you set the somaxconn kernel parameter to a value greater than 512, increase the backlog parameter in the NGINX listen directive to match the modification:

server {
  listen 80 backlog=4096;
  # ...
}

© This article is translated from Nginx Serving Static Content, with some semantic adjustments made.

The above is the full content of this article. I hope it will be helpful for everyone’s study. I also hope that everyone will support 123WORDPRESS.COM.
