Detailed explanation of nginx forward proxy and reverse proxy

Forward Proxy

Assume there is an intranet with two machines, a and b. Only machine a can reach the Internet; b cannot, but a and b can reach each other over the intranet.

If b wants to access the external network, it can do so through a forward proxy running on a. The forward proxy stands in for the target server inside the intranet: it accepts requests from other intranet machines and forwards them to the real target server on the external network.

A reverse proxy works the other way around. Again there is an intranet with several machines, and only one of them is connected to the external network. However, the reverse proxy does not accept requests from the intranet machines; it accepts requests from the external network and forwards them to machines inside the intranet. The external user who sends the request does not know which machine the reverse proxy forwards it to.

To set up forward proxy functionality on a machine, edit an nginx configuration file as shown in the figure; the figure shows the contents of that configuration file.

If you configure a server as a forward proxy, its virtual host configuration must be the default virtual host, because every network request that reaches this machine should hit this virtual host first. So default_server must be set here, and the original default virtual host must be disabled by renaming its configuration file: as shown in the figure, rename default.conf (the original default virtual host configuration file) so that it no longer takes effect.

The resolver 119.29.29.29 line in the configuration file configures a DNS address. Because this is a forward proxy, after it accepts a request for a domain name from the intranet it must send the request on to the server the client actually wants to reach. The request from the intranet carries only a domain name, not an IP address, so the domain name has to be sent to a DNS server to be resolved into an IP address; only then can the request be forwarded to the target server. That is why a DNS address is configured here: every domain name accepted from the intranet is sent to this DNS server for resolution.

The location block below it can be set as shown in the figure. With this in place, after the forward proxy server accepts a request from an intranet machine, it resolves the domain name through the configured DNS, accesses the real server, and then returns the real server's response to the intranet machine that made the request.
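Since the referenced figure is not reproduced here, the following is a minimal sketch of what such a forward proxy virtual host might look like; the listen port and the proxy_pass form using $host and $request_uri are assumptions, not a copy of the figure:

    server {
        listen 80 default_server;          # must be the default virtual host
        resolver 119.29.29.29;             # DNS used to resolve the requested domain names
        location / {
            # assumed proxy_pass form: forward to whatever host the client asked for
            proxy_pass http://$host$request_uri;
        }
    }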

nginx reverse proxy

Let's work through a reverse proxy example. Create a test virtual host configuration file as shown in the figure: it listens on port 8080, the domain name is www.test.com, the root directory is /data/wwwroot/test.com, and the homepage file served when accessing the virtual host is index.html.
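A minimal sketch of such a test virtual host, assuming the figure contains roughly the following:

    server {
        listen 8080;
        server_name www.test.com;
        root /data/wwwroot/test.com;   # document root of the test virtual host
        index index.html;              # homepage file
    }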

As shown in the figure, create the virtual host's root directory /data/wwwroot/test.com, then run echo "test.com_8080" > !$/index.html to create a homepage file whose content is test.com_8080. This file ends up in the /data/wwwroot/test.com directory.

As shown in the figure, create a new virtual host configuration file for the reverse proxy: it listens on port 80 and the domain name is www.test.com. The location / block below contains the reverse proxy configuration: any request to this virtual host is sent on to 127.0.0.1:8080.
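A sketch of the reverse proxy virtual host, again assuming typical contents for the figure:

    server {
        listen 80;
        server_name www.test.com;
        location / {
            proxy_pass http://127.0.0.1:8080;   # forward requests to the backend virtual host
            proxy_set_header Host $host;        # discussed further below
        }
    }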

As shown in the figure, use curl to access the 127.0.0.1:8080 virtual host. The result is test.com_8080, which means this virtual host is reachable.

As shown in the figure, create another virtual host configuration file, similar to the previous test virtual host, but with no domain name set. Its location block simply returns the string 8080 default. Save and exit, then reload nginx. Also remove the default_server setting from the test virtual host; a sketch of this default virtual host follows below.
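A sketch of this default virtual host; the return directive is one plausible way to produce the 8080 default string (the original figure may use a different method, such as the third-party echo module or a separate root directory):

    server {
        listen 8080 default_server;        # now the default virtual host on this port
        # no server_name set
        location / {
            return 200 "8080 default\n";   # assumed way of returning the string
        }
    }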

So 127.0.0.1:8080 now corresponds to two virtual hosts: the test virtual host and the 8080 default virtual host. Their IP and port are exactly the same; the difference is that the test virtual host has a domain name and the 8080 default virtual host does not. Since 8080 default is now the default virtual host, a request to just 127.0.0.1:8080 is always answered by the 8080 default virtual host. To reach the test virtual host, you must include its domain name in the request.

As shown in the figure, curl 127.0.0.1:8080/ returns 8080 default, while curl -x127.0.0.1:8080 www.test.com, which includes the domain name, returns test.com_8080. In other words, to reach the test virtual host you have to bind the domain name to the IP and port.

As shown in the figure, curl against 127.0.0.1:80 with the domain name www.test.com returns test.com_8080, which shows the reverse proxy is working: we accessed port 80, but the content actually returned came from the port 8080 virtual host.

As shown in the figure, comment out the lines below the proxy_pass line in the reverse proxy virtual host, save and exit, and reload nginx. As shown in the figure, curl against 127.0.0.1:80 with the domain name www.test.com now returns 8080 default, even though what we wanted to reach was the test virtual host.

As shown in the figure, the line proxy_set_header Host $host; specifies the domain name to access. The proxy_pass above it sets 127.0.0.1:8080, so the reverse proxy always points at that IP and port. If Host is not set, only the bare 127.0.0.1:8080 virtual host (the default one) is reached; if Host is set, the request goes to 127.0.0.1:8080 bound to the specified host. Here $host is a built-in nginx variable; in this setup its value is effectively the server_name of the current virtual host, i.e. www.test.com (strictly, $host is taken from the request's Host header, falling back to the server_name). Setting Host here is equivalent to running curl -x127.0.0.1:8080 www.test.com; without it, the request only reaches 127.0.0.1:8080. This is how the domain name gets bound to the IP and port.

As shown in the figure, proxy_pass can also be given a domain name instead of an IP and port; here it is written as www.123.com:8080/. Written this way, nginx does not know where the domain name points, so the corresponding IP must also be bound in the system, for example by adding the domain name and IP to the /etc/hosts file. With that binding in place, nginx can resolve the proxy_pass domain name to an IP address and then access that IP and port.

The proxy_set_header Host line below it sets a domain name, which is bound to the IP and port above for the actual access. Writing a domain name instead of an IP above does not conflict with the domain name specified below: the domain name above is only used to resolve the IP, and the domain name below is then bound to the resolved IP and port. This example uses $host, an nginx built-in variable whose value corresponds to the current virtual host's server_name. Generally speaking, though, it is more convenient to write the IP and port directly: the line above specifies the IP and port, and the line below specifies the host domain name bound to them.
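A sketch of this variant, using the www.123.com name from the figure; the backend IP in the /etc/hosts comment is hypothetical:

    # /etc/hosts (example entry, IP is hypothetical):
    #   192.168.133.140  www.123.com

    server {
        listen 80;
        server_name www.test.com;
        location / {
            proxy_pass http://www.123.com:8080/;   # domain is only used to resolve the backend IP
            proxy_set_header Host $host;           # the Host actually sent to the backend
        }
    }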

nginx reverse proxy 02

As shown in the figure, the proxy_pass directive is followed by a URL, which can take three formats: transmission protocol + domain name + URI (access path), transmission protocol + IP and port + URI, or transmission protocol + socket. Here unix, http, and https are all transmission protocols, and domain name + URI, IP/port + URI, and socket are the access targets. A socket is usually a dedicated access point for a particular program; accessing the socket is accessing that specific program, so no path is needed.
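For reference, sketches of the three forms (the concrete names and paths are illustrative, not taken from the original figures):

    proxy_pass http://www.example.com/uri;            # protocol + domain name + URI
    proxy_pass http://192.168.1.10:8080/uri;          # protocol + IP:port + URI
    proxy_pass http://unix:/tmp/backend.socket:/uri;  # protocol + unix socket (+ optional URI)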

As shown in the figure, when writing proxy_pass, different forms give different results. Take location /aming/ as an example: whenever the requested path contains /aming/, this location matches and proxy_pass is executed. However, depending on how proxy_pass is written, the path actually requested from the backend differs; even though the location matched because the path contains the /aming/ directory, the path sent to the backend does not necessarily contain /aming/. In this example we request the file /aming/a.html on the virtual host, and the path actually accessed depends on how proxy_pass is written.

If there is no URI at all after the IP and port, the backend receives /aming/a.html, which is what we want. If the IP and port are followed by the root directory symbol /, the backend is asked for a.html in its root directory, which is obviously wrong. If the IP and port are followed by /linux/, then /linux/a.html is accessed. If they are followed by /linux with no trailing directory symbol /, then /linuxa.html is accessed.

So to access /aming/a.html correctly there are two ways to write it: either put nothing after the IP and port, or write it out in full as IP:port/aming/.

What happens in these examples is that when proxy_pass carries a URI, the part of the request path matched by the location (/aming/) is replaced by that URI, and the remainder (a.html) is appended directly after it; when proxy_pass has no URI at all, the original request path /aming/a.html is passed through unchanged. That is why IP:port followed by /linux yields /linuxa.html: a.html is appended directly after /linux with nothing in between. So whenever you do write a path after proxy_pass, it should end with the directory symbol /.
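A sketch of the variants, with comments giving the path the backend would see for a request to /aming/a.html (IP and port are illustrative):

    location /aming/ {
        proxy_pass http://192.168.1.10:8080;          # backend sees /aming/a.html (desired)
        # proxy_pass http://192.168.1.10:8080/;       # backend sees /a.html
        # proxy_pass http://192.168.1.10:8080/linux/; # backend sees /linux/a.html
        # proxy_pass http://192.168.1.10:8080/linux;  # backend sees /linuxa.html
        # proxy_pass http://192.168.1.10:8080/aming/; # backend sees /aming/a.html (desired)
    }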

Reverse Proxy 03

As shown in the figure, proxy_set_header is used to set the header information that the proxied server receives. Suppose there are three machines a, b, and c: a is the client we send requests from, b is the reverse proxy server that receives them, and c is the proxied server, the one we actually want to reach; b forwards our requests to c. If proxy_set_header is not set, b forwards the request to c without the corresponding header fields; if it is set, those headers are included when the request is forwarded.

$remote_addr and $proxy_add_x_forwarded_for are built-in nginx variables. $remote_addr holds the address of whoever connected directly, so what c sees in it is the reverse proxy server itself; $proxy_add_x_forwarded_for carries the client's IP address (it is the incoming X-Forwarded-For value with $remote_addr appended). If these headers are not set, server c has no way of knowing the real source address of the request; with them set, c can tell which IP the request originally came from.
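A sketch of the header lines on the reverse proxy side (these two lines are the ones the article later comments out and back in):

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;                       # address of the connecting client
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;   # chain of client IPs
    }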

As shown in the figure, edit the configuration file of the www.test.com virtual host; assume this virtual host is server c, the one we want to reach. Two echo directives are set in its location block to display the apparent source address of the request and the real source address: $remote_addr records the address of the reverse proxy server, and $proxy_add_x_forwarded_for records the real source of the request, i.e. the client address. With this in place, accessing the virtual host prints the values stored in these two variables.
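A sketch of the backend test block; note that the echo directive comes from the third-party ngx_http_echo_module, so this only works if nginx was built with that module, and the exact lines in the figure may differ:

    server {
        listen 8080;
        server_name www.test.com;
        location / {
            echo "remote_addr: $remote_addr";                      # what c sees as the peer address
            echo "x_forwarded_for: $proxy_add_x_forwarded_for";    # forwarded client chain
        }
    }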

Save and exit, then reload the configuration file. As shown in the figure, edit the configuration file of the reverse proxy virtual host: in its location block, the proxy_set_header X-Real-IP and proxy_set_header X-Forwarded-For lines are commented out. Do a test in this state first; save and exit and reload the configuration file.

As shown in the figure, use curl to send a request from 192.168.133.140 to port 80. 192.168.133.140 is effectively the client IP, because that is where the request is sent from. However, the test output shows two 127.0.0.1 loopback addresses and no 192.168.133.140.

In this test the reverse proxy server and the real server are both on the same machine, so the source IP of the request that the real server c receives is the local loopback address: reverse proxy b sends its request to real server c via 127.0.0.1, since communication between programs on the same machine generally goes through the loopback address. That is why c's $remote_addr is 127.0.0.1. And because reverse proxy b does not set the X-Forwarded-For header, the $proxy_add_x_forwarded_for value seen by real server c is just the IP the request arrived from, i.e. 127.0.0.1.

$proxy_add_x_forwarded_for records all the IP addresses the request has passed through so far, separated by commas. If the incoming request does not carry an X-Forwarded-For header, then on the receiving end this variable is just the last IP the request was sent from, the same as $remote_addr. For example, if the request goes from a to b to c and b sets X-Forwarded-For using $proxy_add_x_forwarded_for, the value has the form a_ip, b_ip: the IPs of a and b are recorded. If there are more proxy servers in the middle, their IPs are appended as well, separated by commas, provided each proxy sets the header with $proxy_add_x_forwarded_for; otherwise the next proxy's $proxy_add_x_forwarded_for will not contain the earlier IPs, only the IP of the server immediately before it. So in this test, because b did not set the header, the $proxy_add_x_forwarded_for value on server c equals $remote_addr.

As shown in the figure, for the second test edit the configuration file of reverse proxy server b, uncomment the X-Real-IP and X-Forwarded-For lines in the location block, save and exit, and reload the configuration file.

As shown, test again. In the returned result the value of $remote_addr on the first line is 127.0.0.1, which is the address of proxy server b, and the value of $proxy_add_x_forwarded_for on the second line is two IP addresses. The curl request was sent from 192.168.133.140, i.e. client a's IP is 192.168.133.140, and b's IP is 127.0.0.1. $proxy_add_x_forwarded_for records the IPs the request passed through on its way to c: the request went from a to b and then from b to c, so the variable records a's IP and b's IP, the two addresses the request passed through before reaching c.

So when you set up a reverse proxy in the future, these header lines should always be set, so that the real server behind the proxy can obtain the real source IP of the request.

Reverse Proxy 04

As shown in the figure, proxy_redirect is not needed in many scenarios, and it has three main forms. Its function is to modify the Location and Refresh header fields returned by the proxied server. The first form, proxy_redirect redirect replacement, rewrites the redirect value in the returned header to replacement. The second form, proxy_redirect default, is the default behaviour. The third, proxy_redirect off, turns the rewriting off.
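The three forms side by side (the first value is the one used later in this article, the rest are the standard keywords):

    proxy_redirect http://$host:8080/ /;   # redirect replacement: rewrite this prefix in Location headers
    proxy_redirect default;                # default rewriting derived from location and proxy_pass
    proxy_redirect off;                    # do not rewrite Location / Refresh headers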

As shown in the figure, do a test: edit the configuration file of the proxy server. Several conditions have to hold for the test to show the problem. First, the location may only be the root directory /, nothing else. Second, the URL after proxy_pass must not end with a / symbol; normally it would end with /, but not here. The directory being accessed must actually exist; if it does not, create it, and you can also create an index.html file inside it with some string content. Save and exit and reload the configuration file.

As shown in the figure, edit the configuration file of the proxy server in the simple format shown, then save and exit and reload the configuration file.

As shown in the figure, when testing with curl, if aming is followed by a /, the index.html file inside it is returned; but what we want to request is the directory itself, not a file inside it, so the address used with curl must not end with a / symbol. Requesting the aming directory this way, you can see the returned code is 301, a permanent redirect, and the Location field below it is an access path that carries port 8080.

As shown in the figure, edit the configuration file of the proxied (backend) server and add access_log /tmp/456.log; to enable its access log. Checking the access log gives a clearer picture of the access process. Save, exit, reload.

As shown in the figure, run the curl test again, this time with aming ending in a / symbol, and cat /tmp/456.log to view the access log. The log entries do not contain information such as the host and port. In that case you can modify the log format configuration in nginx.conf: as shown in the figure, the log_format main lines in the configuration file were originally commented out; remove the comments so they take effect. This is the format definition for the information written to the log. As shown in the figure, append the two nginx variables $host and $server_port at the end, then save, exit and reload, so that these two values are added to what the access log records.

As shown in the figure, edit the proxy server configuration file and add an access_log line as well, with the log path /tmp/proxy.log and main appended at the end, because the format defined in nginx.conf is named main; adding main here means log entries are written in the main format. As shown in the figure, the access_log added on the backend earlier likewise needs main appended so that it too logs in the main format. Save and exit and reload.
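A sketch of the logging pieces described above; the main format shown is the stock commented-out example from nginx.conf with the two extra variables appended (an assumption about the exact figure contents):

    # in nginx.conf, http { } block:
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" $host $server_port';

    # backend virtual host:
    access_log /tmp/456.log main;

    # proxy virtual host:
    access_log /tmp/proxy.log main;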

As shown in the figure, run curl again, this time with a / symbol at the end. The 456.log backend log shows the request arriving on port 8080, and the proxy.log proxy log shows it arriving on port 80; the HTTP status code is 200, which is normal. As shown in the picture, request aming again without the trailing / symbol: the response is 301, and proxy.log also records 301. As shown in the figure, test once more and check both logs; you can see the 301 followed by the 200. In short, we have confirmed that we accessed port 80 and were redirected to port 8080, but the client cannot access port 8080 directly.

As shown in the figure, proxy_redirect can be used to solve this problem. Here it is written as proxy_redirect http://$host:8080/ /;, which strips the 8080 port information from the Location header that would otherwise be returned. Save, exit, reload. As shown, retest: the returned status is still 301, but the address in the Location header no longer contains port 8080.
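A sketch of the proxy-side location after this fix, using the IP and port from earlier in the example:

    location / {
        proxy_pass http://127.0.0.1:8080;       # note: no trailing slash, per the test conditions above
        proxy_set_header Host $host;
        proxy_redirect http://$host:8080/ /;    # rewrite Location: http://www.test.com:8080/... -> /...
        access_log /tmp/proxy.log main;
    }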

Reverse Proxy 05

proxy_buffering means buffering: an area of memory is set aside and data is written into it, and only when a certain amount has accumulated is it written out to disk. Done this way, the frequency of disk reads and writes is greatly reduced; without buffering, the disk would have to be touched every time data arrives, which puts a heavy load on it.

Suppose there are three parties: client a, proxy server b, and proxied server c. a sends a request, b receives it and forwards it to c; c returns data to b, and b sends that data on to a. That is the normal flow, but if a makes many requests, or many clients are making requests, then both proxy server b and proxied server c have to work through this whole flow for every single request, which becomes a heavy burden.

proxy_buffering sets up one or more buffer areas in proxy server b's memory; when a buffer area is full, the data is forwarded to the corresponding client. This greatly reduces the number of forwarding operations b has to perform, and so reduces its load. When proxy_buffering is on, proxy_busy_buffers_size determines when data starts being sent to a. During this process, if the buffers fill up and data overflows, the excess is written to a temp_file, a temporary file stored on disk. If proxy_buffering is off, the data returned by c is forwarded directly from b to a, and nothing else happens.

As shown in the figure, whether proxy_buffering is on or off, the proxy_buffer_size option is always in effect. This parameter sets a buffer that stores the response header information fed back by the server; if it is not large enough to hold the header, a 502 error occurs, so it is recommended to set it to 4k.

As shown in the figure, proxy_buffers defines, per request, the number of buffers and the size of each buffer. Here 8 4k is defined: 8 buffers of 4k each, for a total buffer size of 8 * 4 = 32k. With 10,000 simultaneous requests there would be 8 * 10,000 = 80,000 buffers, because this setting applies to each request, not 8 buffers in total.

proxy_busy_buffers_size defines how much data must accumulate before it is sent on to the client. It is set to 16k here, so once the buffers belonging to a request on b have received 16k of data, that data is forwarded to a.

There are 8 buffers here, 32k in total, and a buffer is generally in one of two states: receiving data or sending data; it cannot do both at once. proxy_busy_buffers_size defines how much buffer space may be busy sending data to the client, so it must be smaller than the total buffer size. When the received data reaches the amount set by proxy_busy_buffers_size, those buffers switch to the sending state and the remaining buffers continue receiving. If the total amount of data for the request is less than proxy_busy_buffers_size, b simply forwards it to a as soon as it has been received; if it is greater, then each time the buffered data reaches the proxy_busy_buffers_size threshold, that portion is sent to a first.

As shown in the figure, proxy_temp_path defines the directory where temporary files are stored. For example, a sends a request and proxy server b allocates a total of 32k of buffer space for it, but server c responds with 100MB of data, far more than the buffers can hold; as b receives c's data, a lot of it overflows the buffers, and the overflow is saved into a temporary file on b's disk. proxy_temp_path defines both the path where the temporary files are stored and the subdirectory hierarchy. The path defined here is /usr/local/nginx/proxy_temp, a directory name; temporary files are stored in that directory. The numbers 1 and 2 after it set the subdirectory levels: the directory path itself is chosen by us, and the subdirectories are created automatically by the system, with the number of levels controlled by these trailing numbers.

For example, writing just 1 means there is a single level of subdirectories named 0-9; by definition proxy_temp_path supports up to three levels, i.e. up to three numbers. Writing 1 gives subdirectories named 0-9, 10 in total; writing 2 gives 00-99, 100 in total; writing 3 gives 000-999, 1000 in total. Writing 1 3 means two levels: the first level has 10 subdirectories 0-9, and the second level has 1000 subdirectories 000-999. It can also be written the other way round, 3 1, so that the first level has 1000 subdirectories and each of them has 10 second-level subdirectories.

proxy_max_temp_file_size defines the maximum size of the temporary files: set to 100M here, it means each temporary file can be at most 100M, and once a temporary file's data has been transferred, the file is deleted automatically. proxy_temp_file_write_size defines how much data is written to a temporary file at a time, for example 8k or 16k. If this value is too small the disk's performance is not fully used; if it is too large, the disk I/O load becomes too heavy. So pick a value that makes full use of the disk without overloading it.

As shown in the figure, here is an example of using proxy_buffering. First it is set to on, turning the buffering function on. The buffer that stores the response header is 4k; then there are 2 buffers of 4k each for the rest of the data; proxy_busy_buffers_size is 4k, so once a buffer has received 4k of data it is sent on. Then the temporary-file path is defined with two subdirectory levels, 1 and 2, meaning the first level has 10 subdirectories 0-9 and each of them has 100 second-level subdirectories 00-99; each temporary file is at most 20M, and the amount of data written to a temporary file at a time is 8k.
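A sketch of that example as nginx configuration, matching the values just described; the temp path is the one used earlier in the article:

    proxy_buffering on;
    proxy_buffer_size 4k;                                  # buffer for the response header
    proxy_buffers 2 4k;                                    # 2 buffers of 4k per request
    proxy_busy_buffers_size 4k;                            # start sending once 4k has accumulated
    proxy_temp_path /usr/local/nginx/proxy_temp 1 2;       # two subdirectory levels: 0-9 / 00-99
    proxy_max_temp_file_size 20m;                          # cap each temporary file at 20M
    proxy_temp_file_write_size 8k;                         # write to the temp file 8k at a time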

Reverse Proxy 06

As shown in the figure, to use proxy_cache the proxy_buffering function must be turned on first. proxy_cache is the caching function. When client a sends a request, if the data it asks for is already saved in proxy server b's cache, b sends it straight back to a without asking server c for it. Without caching, b asks c for data on every request from a, even if a requests the same data twice; with caching, the data fetched for the first request is saved in the cache, and when the same data is requested again b takes it from the cache instead of going to c, reducing the load on server c. In short, buffering reduces the load on proxy server b, and caching reduces the load on the proxied server c.

As shown in the figure, the proxy_cache function is turned on and off as follows: proxy_cache off disables caching, and proxy_cache zone enables it, where zone is the name of a cache zone. The zone name can be anything, zone, 123, and so on; writing a cache name here means using the cache defined under that name. Starting from nginx 0.7.66, once proxy_cache is enabled, nginx also inspects the Cache-Control and Expires header fields in the proxied server's HTTP response; if Cache-Control contains no-cache, the response to that request is not cached.

As shown in the figure, requesting a website with curl -I shows the returned header fields, including the Cache-Control value; the presence of no-cache means the data returned by that request will not be cached.

As shown in the figure, the proxy_cache_bypass parameter defines conditions under which the requested data is not taken from the cache but fetched directly from the backend server. The values after this parameter are usually nginx variables, for example proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;. With this setting, if any of these three variables is non-empty and not 0, the response is not taken from the cache but fetched from the backend. It is rarely used at the moment; just be aware of it.

As shown in the figure, proxy_no_cache is similar: it defines conditions under which the fetched data is not saved to the cache. Example: proxy_no_cache $cookie_nocache $arg_nocache $arg_comment; means that when any of these three variables is non-empty and not 0, the fetched data is not cached. As shown in the figure, the format of this parameter is similar to the one above; generally it does not need to be set, just keep the default.

As shown in the figure, proxy_cache_path is the parameter that configures the cache zone in detail. Besides space in memory, the cache can also be given space on disk. path specifies a directory to use as the cache path; cached data is stored there. levels=1:2 sets the directory hierarchy: the first number configures the first level and the second number the second level. 1 means the directory names are single characters drawn from 0-9 and a-f, 16 hexadecimal characters, so 16 directories; 2 means each directory name is two such characters, 00, 01, 04, 2f and so on, 256 combinations in total. In short, this parameter sets the subdirectory levels: the first number is the first level, the second number the second level.

keys_zone sets the name and size of the in-memory zone: keys_zone=my_zone:10m means the zone is named my_zone and is 10MB in size. inactive is the time after which unused cache entries are removed; for example the 300s in the figure means data that has not been accessed for 300 seconds is deleted from the cache. max_size is the maximum amount of data the disk cache may hold; here it is set to 5g, and the directory configured above is /data/nginx_cache/, so that directory can hold at most 5g of data. If it goes over, the system evicts the least-accessed data first to make room for new data.

The proxy_cache_path line cannot be written inside the server { } block of the configuration file; it must go inside the http { } block. So first edit the nginx.conf configuration file and, as shown in the figure, add the proxy_cache_path line outside the server block.
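A sketch of that http-level line, combining the values mentioned above:

    # inside the http { } block of nginx.conf:
    proxy_cache_path /data/nginx_cache/ levels=1:2 keys_zone=my_zone:10m inactive=300s max_size=5g;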

As shown in the figure, because the specified cache directory /data/nginx_cache/ does not exist yet, create it. Then, as shown in the figure, edit a virtual host configuration file and add proxy_cache my_zone; in its location block. This way, when this virtual host receives a request it uses the cache zone my_zone.
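A sketch of the virtual-host side, reusing the reverse proxy location from earlier in the article:

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_cache my_zone;    # use the cache zone defined by proxy_cache_path in nginx.conf
    }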

The my_zone cache zone itself is defined in nginx.conf, and the configuration in nginx.conf applies to all virtual hosts. So once my_zone is defined in nginx.conf, any virtual host configuration file can use proxy_cache my_zone;, and all of those virtual hosts share the my_zone cache zone. Then save and exit and reload the configuration file for it to take effect. For normal use, these two lines are all that is needed to configure caching.

As shown in the figure, one remaining problem is that the nginx worker processes run as the nobody user, while the directory just created was created as root. So change the owner and group of the cache directory to nobody; that way the nginx service has no permission problems when it operates on this directory. As shown in the figure, look at the contents of the /data/nginx_cache/ directory: you can see the first-level directories 0-9 and a-f, and entering the 0 directory you can see the second-level directories made up of two characters.

In summary, configuring a cache zone mainly means defining proxy_cache_path, which can be defined in nginx.conf so that any virtual host can use it. After proxy_cache_path is defined, configure proxy_cache zone_name in each virtual host server block that needs the cache, where zone_name is the cache zone name defined in proxy_cache_path; the corresponding virtual host can then use that cache zone.

This concludes the detailed explanation of nginx forward proxy and reverse proxy.
