Preface

This article covers only what Nginx can handle without loading third-party modules; there are far too many third-party modules to introduce them all. The article itself may not be exhaustive either — it reflects only what I have personally used and understood, so please bear with me, and feel free to leave a comment to discuss.

What Nginx can do

1. Reverse proxy
2. Load balancing
3. HTTP server (including static/dynamic separation)
4. Forward proxy

The above is what I know Nginx can do without relying on third-party modules. Below is a detailed look at each feature.

Reverse Proxy

Reverse proxying is probably the most common thing Nginx does. What is a reverse proxy? Here is how Baidu Encyclopedia defines it: a reverse proxy uses a proxy server to accept connection requests from the Internet, forwards those requests to a server on the internal network, and returns the results from that server to the client that made the request. To the outside world, the proxy server itself appears to be the server.

To put it simply: the real server cannot be reached directly from the external network, so a proxy server is needed. The proxy server is reachable from the external network and sits in the same network environment as the real server — it may even be the same machine on a different port. Here is a simple reverse proxy configuration:

```nginx
server {
    listen       80;
    server_name  localhost;
    client_max_body_size 1024M;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host:$server_port;
    }
}
```

After saving the configuration file and starting Nginx, visiting localhost is equivalent to visiting localhost:8080.

Load Balancing

Load balancing is another commonly used Nginx feature. It means distributing work across multiple units — web servers, FTP servers, key enterprise application servers, and other mission-critical servers — so that tasks are completed together.
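(A side note on the reverse proxy example above: in practice the backend application usually also needs the client's real address, so a couple of extra proxy_set_header lines are commonly added. A minimal sketch — localhost:8080 is just the same assumed demo backend:)

```nginx
server {
    listen       80;
    server_name  localhost;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host            $host:$server_port;
        # Pass the client's real IP and the full forwarding chain to the backend
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```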
Simply put, when there are two or more servers, requests are distributed among the designated servers according to a rule. Load balancing is usually configured together with a reverse proxy: the reverse proxy forwards requests to the load-balanced group. Nginx currently supports three built-in load balancing strategies, plus two commonly used third-party ones.

1. RR (round robin, the default)

Each request is assigned to a backend server in turn, in order. If a backend server goes down, it is removed automatically. A simple configuration:

```nginx
upstream test {
    server localhost:8080;
    server localhost:8081;
}

server {
    listen       81;
    server_name  localhost;
    client_max_body_size 1024M;

    location / {
        proxy_pass http://test;
        proxy_set_header Host $host:$server_port;
    }
}
```

The core of the load balancing configuration is:

```nginx
upstream test {
    server localhost:8080;
    server localhost:8081;
}
```

Two servers are configured here — in reality it is one machine with two different ports, and nothing is listening on 8081, so it cannot be reached. Yet visiting http://localhost works fine: requests go to http://localhost:8080, because Nginx automatically detects each server's state and will not forward to a server it cannot reach (one that is down). This also means a single server going down does not disrupt service. Since RR is Nginx's default strategy, no other settings are needed.

2. weight

Specifies the polling probability. The weight is proportional to the share of requests a server receives, and is useful when backend servers have uneven performance. For example:

```nginx
upstream test {
    server localhost:8080 weight=9;
    server localhost:8081 weight=1;
}
```

With this configuration, roughly 1 request in 10 goes to 8081 and 9 in 10 go to 8080.

3. ip_hash

The two methods above share a problem: the next request may be distributed to a different server.
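(A side note on the automatic failure detection mentioned above: it can be tuned with the standard server parameters max_fails and fail_timeout, and a spare machine can be marked as backup. A sketch, using the same assumed demo ports plus a hypothetical spare on 8082:)

```nginx
upstream test {
    # Take a server out of rotation for 30s after 3 failed attempts
    server localhost:8080 max_fails=3 fail_timeout=30s;
    server localhost:8081 max_fails=3 fail_timeout=30s;
    # Only receives traffic when all servers above are unavailable
    server localhost:8082 backup;
}
```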
When our program is not stateless (for example, it uses a session to store data), this becomes a real problem: if login state is kept in the session, a user who is moved to another server has to log in again. Often we need each client to stick to a single server, and that is what ip_hash is for. With ip_hash, each request is assigned according to a hash of the client's IP address, so each visitor always hits the same backend server, which solves the session problem.

```nginx
upstream test {
    ip_hash;
    server localhost:8080;
    server localhost:8081;
}
```

4. fair (third party)

Requests are distributed according to the backend servers' response times; servers with shorter response times are preferred.

```nginx
upstream backend {
    fair;
    server localhost:8080;
    server localhost:8081;
}
```

5. url_hash (third party)

Requests are distributed according to a hash of the requested URL, so each URL is always directed to the same backend server. This is most effective when the backend servers cache responses. Add the hash statement inside the upstream block; other parameters such as weight cannot then be written on the server lines. hash_method selects the hash algorithm to use.

```nginx
upstream backend {
    hash $request_uri;
    hash_method crc32;
    server localhost:8080;
    server localhost:8081;
}
```

These five load balancing strategies suit different situations, so choose the strategy that fits your actual needs. Note that fair and url_hash require third-party modules. Since this article focuses on what Nginx itself can do, installing third-party modules is not covered here.

HTTP Server

Nginx is also a capable static resource server in its own right. When a site has only static resources, Nginx can serve as the web server. Separating static from dynamic resources is also very popular nowadays, and Nginx makes this easy. First, let's look at Nginx as a static resource server.
```nginx
server {
    listen       80;
    server_name  localhost;
    client_max_body_size 1024M;

    location / {
        root  e:/wwwroot;
        index index.html;
    }
}
```

With this, visiting http://localhost serves index.html from the wwwroot directory on drive E by default. If a website consists only of static pages, it can be deployed this way.

Static/Dynamic Separation

Static/dynamic separation means splitting a dynamic website's resources, according to certain rules, into those that rarely change and those that change frequently. Once separated, the static resources can be cached according to their characteristics — this is the core idea behind static site optimization.

```nginx
upstream test {
    server localhost:8080;
    server localhost:8081;
}

server {
    listen       80;
    server_name  localhost;

    location / {
        root  e:/wwwroot;
        index index.html;
    }

    # All static requests are served by Nginx from the wwwroot directory
    location ~ \.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {
        root e:/wwwroot;
    }

    # All dynamic requests are forwarded to Tomcat for processing
    location ~ \.(jsp|do)$ {
        proxy_pass http://test;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root e:/wwwroot;
    }
}
```

Now HTML, images, CSS and JS can be placed under the wwwroot directory, while Tomcat handles only jsp and do requests. For example, when a request ends in .gif, Nginx serves the static image file for that request from wwwroot. Of course, here the static files live on the same server as Nginx; they could just as well sit on another server, reached via reverse proxy and load balancing. Once the basic flow is understood, many configurations become straightforward. Also note that location here is followed by a regular expression, which makes it very flexible.

Forward Proxy

A forward proxy is a server that sits between the client and the origin server.
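(Before moving on, one more touch on the static/dynamic separation above: since static resources rarely change, it usually pays to let browsers cache them. A sketch only — the expires value is illustrative, not from the original article:)

```nginx
# Let browsers cache static assets; dynamic requests stay uncached
location ~ \.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {
    root       e:/wwwroot;
    expires    7d;                  # illustrative value, tune per project
    add_header Cache-Control public;
}
```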
To obtain content from the origin server, the client sends a request to the proxy and names the target (the origin server); the proxy then forwards the request to the origin server and returns the content it obtains to the client. The client must be explicitly configured to use the forward proxy. When you need to use your own server as a proxy, Nginx can implement a forward proxy. One caveat: Nginx currently does not support proxying HTTPS this way. I have searched for and tried HTTPS forward proxy configurations, but found they still did not work as a proxy — possibly my configuration was wrong — so if any readers know the correct method, please leave a comment.

```nginx
resolver 114.114.114.114 8.8.8.8;

server {
    resolver_timeout 5s;
    listen 81;

    access_log e:/wwwroot/proxy.access.log;
    error_log  e:/wwwroot/proxy.error.log;

    location / {
        proxy_pass http://$host$request_uri;
    }
}
```

resolver configures the DNS servers the forward proxy uses, and listen sets the forward proxy's port. Once configured, you can point IE or another proxy plug-in at the server's IP and port.

Final Words

Nginx supports hot reloading: after modifying the configuration file, you can make the changes take effect without shutting Nginx down. I don't know how many people are aware of this — I certainly wasn't at first, which led me to repeatedly kill the Nginx process and start it again. The command for Nginx to re-read its configuration is `nginx -s reload`; on Windows it is `nginx.exe -s reload`, run from the Nginx directory.

Summary

The above is my introduction to what Nginx can do. I hope it is helpful to you. If you have any questions, please leave me a message and I will reply promptly. Thanks as well for everyone's support of the 123WORDPRESS.COM website!
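(One closing example: on reasonably recent Nginx versions — 1.7.2 and later — an effect similar to the third-party url_hash module described earlier is available built in, via the hash directive of the upstream module. A sketch with the same assumed demo backends:)

```nginx
upstream backend {
    # Built-in consistent hashing on the request URI, no third-party module needed
    hash $request_uri consistent;
    server localhost:8080;
    server localhost:8081;
}
```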