We all know that the performance of applications and websites is a critical factor in their success. However, the process of making your app or website perform better is not always clear. Code quality and infrastructure are of course critical, but in many cases you can greatly improve the end-user experience by focusing on some very basic application delivery techniques. One example is implementing and optimizing caching in the application stack. The techniques presented in this tutorial can help both novice and advanced users use the content caching features included in Nginx to achieve better performance.

Overview

A content cache sits between the client and the origin server (upstream) and keeps a copy of all the content it sees. If a client requests content that the cache has already stored, the cache returns the content directly without contacting the origin server. This improves performance because the content is served from a location closer to the client, and it makes more efficient use of application servers because they do not have to regenerate the page from scratch for every request. There may be multiple caches between the web browser and the application server: the client's browser cache, intermediate caches, content delivery networks (CDNs), and the load balancers or reverse proxies sitting in front of the application servers. Even at the reverse proxy/load balancer level alone, caching can greatly improve performance.

Here's an example. My site uses Next.js server-side rendering. The server's performance is fairly poor, but for a $5 server you can't expect much; the fact that it runs at all is already remarkable. It takes about 7 seconds to open a page, including network latency, and even a request made directly on the server (127.0.0.1) takes nearly 5 seconds. Excluding the time spent fetching data from the database, server-side rendering alone takes 4.5 seconds, which is far too slow. The fastest fix I could think of was caching, and judging from the timing of each step, adding the cache at the Nginx layer was the quickest way to solve the problem. Nginx is often deployed as a reverse proxy or load balancer in an application stack and has a full set of caching capabilities. Below we will discuss how to configure basic caching with Nginx.

How to set up and configure basic caching

Only two directives are needed to enable basic caching: proxy_cache_path and proxy_cache. The proxy_cache_path directive sets the path and configuration of the cache, and the proxy_cache directive activates it.

proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;

server {
    # ...
    location / {
        proxy_cache my_cache;
        proxy_pass http://my_upstream;
    }
}

The parameters of the proxy_cache_path directive define the following settings:

- /path/to/cache is the local disk directory where cached content is stored.
- levels=1:2 sets up a two-level directory hierarchy under /path/to/cache. Putting a large number of files in a single directory can slow down file access, so a two-level hierarchy is recommended for most deployments.
- keys_zone=my_cache:10m creates a shared memory zone named my_cache for storing the cache keys and metadata such as usage timers. A 10 MB zone can hold data for roughly 80,000 keys.
- max_size=10g sets an optional upper limit on the size of the cache. When the limit is exceeded, a cache manager process removes the least recently used items.
- inactive=60m specifies how long an item can remain in the cache without being accessed before the cache manager removes it, regardless of whether it has expired.
- use_temp_path=off instructs Nginx to write temporary files directly into the cache directory, avoiding an unnecessary copy of response data between file systems.

Finally, the proxy_cache directive activates caching for all content matching the URL of the parent location block (/ in the example). You can also include a proxy_cache directive in a server block; it then applies to all location blocks of that server that do not have a proxy_cache directive of their own.
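To illustrate that server-level form, here is a minimal sketch reusing the my_cache zone and my_upstream upstream from the example above; the /admin/ path is a hypothetical location that opts out of the inherited setting:

server {
    # ...
    proxy_cache my_cache;   # applies to every location below that has no proxy_cache of its own

    location / {
        proxy_pass http://my_upstream;   # cached via the inherited my_cache zone
    }

    location /admin/ {
        proxy_cache off;                 # hypothetical path; overrides the inherited directive
        proxy_pass http://my_upstream;
    }
}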
Serve cached content when the upstream server is down

A powerful feature of Nginx content caching is that Nginx can be configured to serve stale content from its cache when it cannot obtain fresh content from the origin servers. This can happen if all the origin servers for a cached resource are down or temporarily busy. Instead of passing the error on to the client, Nginx serves the stale version of the file from the cache. This provides additional fault tolerance for the servers Nginx is proxying and ensures uptime in the event of server failures or traffic spikes. To enable this feature, include the proxy_cache_use_stale directive:

location / {
    # ...
    proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
}

With this example configuration, if Nginx receives an error or timeout, or any of the specified 5xx errors, from the origin server and has a stale version of the requested file in its cache, it delivers the stale file instead of forwarding the error to the client.

How to improve cache performance

Nginx has a rich set of optional settings that can be used to fine-tune the performance of the cache. Here is an example that activates some of them:

proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;

server {
    # ...
    location / {
        proxy_cache my_cache;
        proxy_cache_revalidate on;
        proxy_cache_min_uses 3;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_background_update on;
        proxy_cache_lock on;
        proxy_pass http://my_upstream;
    }
}

These directives configure the following behaviors:

- proxy_cache_revalidate instructs Nginx to use conditional requests (with the If-Modified-Since and If-None-Match headers) when refreshing expired content from the origin server, saving bandwidth when the origin responds with 304 Not Modified.
- proxy_cache_min_uses sets the number of times an item must be requested before Nginx caches it (3 in the example; the default is 1). This ensures that only frequently accessed items are added to a cache that is constantly filling up.
- the updating parameter of proxy_cache_use_stale, combined with proxy_cache_background_update, tells Nginx to deliver stale content when clients request an item that has expired or is being refreshed, while fetching the fresh copy from the origin server in the background. This reduces latency for clients.
- with proxy_cache_lock enabled, if multiple clients request a file that is not current in the cache, only the first of those requests is forwarded to the origin server; the others wait for it to complete and then pull the file from the cache. Without proxy_cache_lock, every request that results in a cache miss goes straight to the origin server.
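Tying these options back to the slow server-side rendering example from the overview: the tuning directives above control how the cache refreshes and fills, but if the application sends no cache headers (or sends no-cache headers, as SSR frameworks often do), you still need an explicit freshness lifetime. Here is a minimal sketch under those assumptions; the ssr_cache zone, the one-minute lifetime, and the upstream address 127.0.0.1:3000 are all hypothetical, and the proxy_cache_valid and proxy_ignore_headers directives it relies on are covered in the Q&A sections below:

proxy_cache_path /var/cache/nginx/ssr levels=1:2 keys_zone=ssr_cache:10m max_size=1g inactive=60m use_temp_path=off;

server {
    # ...
    location / {
        proxy_cache ssr_cache;
        # Assumption: the SSR app sends no-cache headers; ignore them so pages can be cached.
        proxy_ignore_headers Cache-Control;
        proxy_cache_valid 200 1m;           # treat successful responses as fresh for one minute
        proxy_cache_use_stale error timeout updating;
        proxy_cache_background_update on;
        proxy_cache_lock on;
        proxy_pass http://127.0.0.1:3000;   # hypothetical Next.js server
    }
}

With a one-minute lifetime, the multi-second rendering cost is paid roughly once per minute per page; every other request in that window is served from the cache.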
Splitting the cache across multiple hard disks

If you have multiple hard drives, you can use Nginx to split the cache between them. The following example distributes requests evenly across two hard drives based on the request URI:

proxy_cache_path /path/to/hdd1 levels=1:2 keys_zone=my_cache_hdd1:10m max_size=10g inactive=60m use_temp_path=off;
proxy_cache_path /path/to/hdd2 levels=1:2 keys_zone=my_cache_hdd2:10m max_size=10g inactive=60m use_temp_path=off;

split_clients $request_uri $my_cache {
    50% "my_cache_hdd1";
    50% "my_cache_hdd2";
}

server {
    # ...
    location / {
        proxy_cache $my_cache;
        proxy_pass http://my_upstream;
    }
}

The two proxy_cache_path directives define two caches (my_cache_hdd1 and my_cache_hdd2) on two different hard drives. The split_clients configuration block specifies that half of the requests are cached in my_cache_hdd1 and the other half in my_cache_hdd2. Which cache to use is determined for each request from a hash of the $request_uri variable (the request URI), so requests for a given URI are always cached in the same cache.

Please note that this approach is not a replacement for a RAID setup. If a hard drive fails, it can cause unpredictable system behavior, including users seeing 500 response codes for requests that were directed to the failed drive. A proper RAID setup can handle drive failures.

How to detect whether a response came from the Nginx cache

You can add the $upstream_cache_status variable to a response header to check:

add_header X-Cache-Status $upstream_cache_status;

This example adds an X-Cache-Status HTTP header to responses sent to clients. The following are the possible values of $upstream_cache_status:

- MISS - the response was not found in the cache and was fetched from the origin server. The response may then have been cached.
- BYPASS - the response was fetched from the origin server instead of the cache because the request matched a proxy_cache_bypass directive. The response may then have been cached.
- EXPIRED - the entry in the cache has expired; the response contains fresh content from the origin server.
- STALE - the content in the cache is stale because the origin server is not responding correctly and proxy_cache_use_stale is configured.
- UPDATING - the content is stale because the entry is currently being updated in response to a previous request, and the updating parameter of proxy_cache_use_stale is configured.
- REVALIDATED - the proxy_cache_revalidate directive is enabled and Nginx verified that the cached content is still valid.
- HIT - the response was served directly from the cache.
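For placement, the add_header directive goes alongside the proxy_cache directive it reports on. A minimal sketch, reusing the my_cache zone and my_upstream upstream from the earlier examples:

server {
    # ...
    location / {
        proxy_cache my_cache;
        # Report the cache status (HIT, MISS, EXPIRED, ...) to the client.
        add_header X-Cache-Status $upstream_cache_status;
        proxy_pass http://my_upstream;
    }
}

Once you have verified cache behavior by inspecting the X-Cache-Status header of a few responses, you may prefer to remove the header in production rather than expose cache internals to clients.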
How Nginx determines whether to cache a response

By default, Nginx respects the Cache-Control header from the origin server. It does not cache responses whose Cache-Control header contains private, no-cache, or no-store, or that carry a Set-Cookie header. Nginx caches only GET and HEAD client requests. You can override these defaults as described in the answers below. Note that proxy_buffering must be set to on for caching to work; on is the default, so if it has been set to off, Nginx does not cache responses.

Can Nginx ignore Cache-Control?

Yes, use the proxy_ignore_headers directive:

location /images/ {
    proxy_cache my_cache;
    proxy_ignore_headers Cache-Control;
    proxy_cache_valid any 30m;
    # ...
}

With this configuration, Nginx ignores the Cache-Control header for everything under /images/. The proxy_cache_valid directive forces an expiration on the cached data; it is required when the header is ignored, because Nginx does not cache responses that have no expiration.

Can Nginx ignore Set-Cookie?

Yes, also with the proxy_ignore_headers directive, as shown in the sketch below.
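A minimal sketch of that, reusing the /images/ location from the Cache-Control example above; the proxy_hide_header line is an optional extra that keeps a cached Set-Cookie header from being sent on to other clients:

location /images/ {
    proxy_cache my_cache;
    # Ignore Set-Cookie so responses that set cookies can still be cached.
    proxy_ignore_headers Set-Cookie;
    # Optionally also strip the cookie from responses delivered to clients.
    proxy_hide_header Set-Cookie;
    # ...
}

Be careful with this: if the response content genuinely depends on the cookie, do not ignore it; instead include the cookie in the cache key, as described in the cache key section below.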
How to cache POST requests in Nginx

Use the proxy_cache_methods directive:

proxy_cache_methods GET HEAD POST;

This example enables caching for POST requests.

Can Nginx cache dynamic content?

Yes, as long as the Cache-Control header allows it. Caching dynamic content even for a short period of time can reduce the load on origin servers and databases, and it improves time to first byte because the page does not have to be regenerated for every request.

How to disable the Nginx cache for some requests

Use the proxy_cache_bypass directive:

location / {
    proxy_cache_bypass $cookie_nocache $arg_nocache;
    # ...
}

This directive defines the types of requests for which Nginx immediately requests the content from the origin server rather than first trying to find it in the cache. This is sometimes called "punching a hole" through the cache. In this example, Nginx bypasses the cache whenever the nocache cookie or query-string argument is present and non-empty.

What cache key does Nginx use?

The default key that Nginx generates is the MD5 hash of the following Nginx variables: $scheme$proxy_host$request_uri (the actual algorithm used is slightly more complicated).

proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;

server {
    # ...
    location / {
        proxy_cache my_cache;
        proxy_pass http://my_upstream;
    }
}

For this example configuration, the cache key for http://www.example.org/my_image.jpg is calculated as md5("http://my_upstream:80/my_image.jpg"). Note that the $proxy_host variable is used in the hashed value, not the actual hostname (www.example.org). $proxy_host is defined as the name and port of the proxied server as specified in the proxy_pass directive. To change the variables used as the basis for the key, use the proxy_cache_key directive.

Using a cookie as part of the cache key

The cache key can be configured from any value, for example:

proxy_cache_key $proxy_host$request_uri$cookie_jsessionid;

This example incorporates the value of the JSESSIONID cookie into the cache key. Items with the same URI but different JSESSIONID values are cached separately as unique items.

Does Nginx use the ETag header?

In Nginx 1.7.3 and later, the ETag header is fully supported along with If-None-Match.

How Nginx handles byte range requests

If the file is fresh in the cache, Nginx honors the byte range request and serves only the specified bytes of the item to the client. If the file is not cached, or if it is stale, Nginx downloads the entire file from the origin server. If the request is for a single byte range, Nginx sends that range to the client as soon as it encounters it in the download stream. If the request specifies multiple byte ranges within the same file, Nginx delivers the entire file to the client when the download completes. Once the download is complete, Nginx moves the entire resource into the cache, so that all future byte range requests, whether for a single range or multiple ranges, are satisfied immediately from the cache. Note that the upstream server must support byte range requests for Nginx to honor byte range requests to that upstream server.

How Nginx handles the Pragma header

The Pragma: no-cache header is added by clients to bypass all intermediate caches and request the content directly from the origin server. By default, Nginx does not honor the Pragma header, but you can configure that behavior with the proxy_cache_bypass directive:

location /images/ {
    proxy_cache my_cache;
    proxy_cache_bypass $http_pragma;
    # ...
}

Does Nginx support the stale-while-revalidate and stale-if-error extensions to Cache-Control?

Yes, in Nginx 1.11.10 and later. What these extensions do:

- The stale-while-revalidate extension of the Cache-Control HTTP header permits using a stale cached response if it is currently being updated.
- The stale-if-error extension of the Cache-Control HTTP header permits using a stale cached response when an error occurs.

These headers have lower priority than the proxy_cache_use_stale directive described above.

Does Nginx support the Vary header?

Yes, the Vary header is supported in Nginx 1.7.7 and later.

Conclusion

At this point, you should have a good understanding of how the Nginx proxy cache works and how to configure it properly. If you have any questions or feedback, feel free to leave a comment.