1. Introduction

Our real servers should not be directly exposed to the public Internet; otherwise it becomes easier to leak server information, and the servers are more vulnerable to attack. A common, pragmatic solution is to put Nginx in front of them as a reverse proxy. Today, let's talk about some of the capabilities an Nginx reverse proxy gives us. Nginx can implement many very useful API control functions, which is why I always recommend fronting our Spring Boot applications with Nginx.

2. What capabilities can Nginx provide?

Nginx hardly needs any praise; it has been widely recognized by the industry. Let's look at the specific functions it can perform.

2.1 Proxy capability

This is the most commonly used server-side function. An Nginx server with a public IP address can act as a proxy for a real server that it can reach on the intranet. Our real servers are then not directly exposed to the outside world, which improves their ability to resist attacks.

Suppose the Nginx server can reach the real server at 192.168.1.9 on the intranet. A minimal configuration looks like this:

```nginx
server {
    listen 80;
    server_name felord.cn;
    # ^~ means the uri starts with the given plain string; if it matches,
    # no further location matching is done. This is not a regex match.
    location ^~ /api/v1 {
        proxy_set_header Host $host;
        proxy_pass http://192.168.1.9:8080/;
    }
}
```

After the above configuration, the real interfaces of our server are reached externally through http://felord.cn/api/v1/.

Note that proxy_pass here ends with a slash. When proxy_pass carries a URI part (even just /), the portion of the request URI matching the location is replaced by that URI, so the /api/v1 prefix is stripped before the request is forwarded upstream. Without a URI part, the full original path, including /api/v1, is forwarded to the upstream server.

2.2 Rewrite function

Nginx also provides a rewrite capability that lets us inspect and act on the request before it is proxied. In the example of 2.1, if we want to return 405 when the request method is POST, we only need to change the configuration to:

```nginx
location ^~ /api/v1 {
    proxy_set_header Host $host;
    if ($request_method = POST) {
        return 405;
    }
    proxy_pass http://192.168.1.9:8080/;
}
```

You can use the global variables provided by Nginx (such as $request_method, $remote_addr, $http_user_agent) in such conditions to implement simple access control.

2.3 Configure HTTPS

Many students have asked in the group how to configure HTTPS in a Spring Boot project, and I always recommend using Nginx to do this. Nginx is much more convenient than configuring SSL in Spring Boot, and it does not affect our local development.
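Before moving on, the trailing-slash rule from section 2.1 can be made concrete. Below is an illustrative Python function, a rough sketch of the mapping rule rather than nginx's actual implementation; the name `upstream_uri` and the trailing-slash form of the location (`/api/v1/`, used here to keep the string arithmetic clean) are my own choices:

```python
def upstream_uri(request_uri, location, proxy_pass_uri=None):
    """Mimic nginx's proxy_pass URI mapping for a prefix location.

    proxy_pass_uri is the path part of the proxy_pass directive:
    "/" for `proxy_pass http://backend/;`, and None when proxy_pass
    has no URI part, e.g. `proxy_pass http://backend;`.
    """
    if proxy_pass_uri is None:
        # No URI part in proxy_pass: forward the original request URI untouched.
        return request_uri
    # proxy_pass has a URI part: the matched location prefix is replaced by it.
    return proxy_pass_uri + request_uri[len(location):]

# With the trailing slash (proxy_pass http://192.168.1.9:8080/):
print(upstream_uri("/api/v1/users/1", "/api/v1/", "/"))   # /users/1
# Without it (proxy_pass http://192.168.1.9:8080):
print(upstream_uri("/api/v1/users/1", "/api/v1/", None))  # /api/v1/users/1
```

The second call shows why forgetting the trailing slash means the upstream application must itself serve paths under /api/v1.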
The HTTPS configuration in Nginx looks like this:

```nginx
http {
    # Multiple server nodes can be added inside the http node
    server {
        # SSL needs to listen on port 443
        listen 443 ssl;
        # The domain name the CA certificate was issued for
        server_name felord.cn;
        # Absolute path of the server certificate
        ssl_certificate /etc/ssl/cert_felord.cn.crt;
        # Absolute path of the server certificate key
        ssl_certificate_key /etc/ssl/cert_felord.cn.key;
        ssl_session_timeout 5m;
        # Protocol versions
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        # Cipher suite list
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
        # Whether the server decides which cipher to use (on/off)
        ssl_prefer_server_ciphers on;

        location ^~ /api/v1 {
            proxy_set_header Host $host;
            proxy_pass http://192.168.1.9:8080/;
        }
    }

    # If the user accesses via http, rewrite and redirect straight to https.
    # This is a very necessary step.
    server {
        listen 80;
        server_name felord.cn;
        rewrite ^/(.*)$ https://felord.cn/$1 permanent;
    }
}
```

Rewrite is used here to improve the user experience.

2.4 Load Balancing

Generally, projects grow from small to large. When you start, deploying one server is enough. If your project gains more users, first of all, congratulations: it means your project is heading in the right direction. But with growth comes server pressure. You certainly don't want to suffer the losses caused by server downtime, so you need to improve the server's capacity quickly, or you want to perform maintenance without downtime to avoid interrupting the business. Both can be achieved with Nginx load balancing, and it is very simple. Suppose we deploy the same application on three nodes.

The simplest polling strategy

Requests are dispatched to each node in turn; this configuration is the simplest:

```nginx
http {
    upstream app {
        # Node 1
        server 192.168.1.9:8080;
        # Node 2
        server 192.168.1.10:8081;
        # Node 3
        server 192.168.1.11:8082;
    }

    server {
        listen 80;
        server_name felord.cn;
        # ^~ means the uri starts with the given plain string; if it matches,
        # no further location matching is done. This is not a regex match.
        location ^~ /api/v1 {
            proxy_set_header Host $host;
            # Load balancing
            proxy_pass http://app/;
        }
    }
}
```

Weighted round robin strategy

You can specify the polling probability with a weight:

```nginx
upstream app {
    # Node 1
    server 192.168.1.9:8080 weight=6;
    # Node 2
    server 192.168.1.10:8081 weight=3;
    # Node 3
    server 192.168.1.11:8082 weight=1;
}
```

Requests will be distributed among the three nodes in a 6:3:1 ratio. In fact, simple polling can be regarded as the special case where all weights are equal. With polling, a node that goes down is removed from rotation automatically.

IP hash

Hash the client IP address so that each client always reaches the same server. If that server goes down, it needs to be removed manually:

```nginx
upstream app {
    ip_hash;
    # Node 1
    server 192.168.1.9:8080 weight=6;
    # Node 2
    server 192.168.1.10:8081 weight=3;
    # Node 3
    server 192.168.1.11:8082 weight=1;
}
```

Least connections

Requests are forwarded to the server with the fewest active connections, making full use of server resources:

```nginx
upstream app {
    least_conn;
    # Node 1
    server 192.168.1.9:8080 weight=6;
    # Node 2
    server 192.168.1.10:8081 weight=3;
    # Node 3
    server 192.168.1.11:8082 weight=1;
}
```

Other methods

We can use plug-ins to implement other load balancing modes, for example nginx-upsync-module for dynamic load balancing. Could we use this to build a gray (canary) release feature?

2.5 Rate limiting

By configuring Nginx we can implement the leaky bucket and token bucket algorithms, limiting access speed by capping the number of requests per unit of time and the number of simultaneous connections. I haven't studied this area in depth, so I'll just mention it here; you can look up the relevant material for further research.

3. Conclusion

Nginx is very powerful, and I recommend using it to proxy our back-end applications. Many useful functions can be implemented through configuration alone, without writing non-business code.
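(A brief aside on the rate limiting mentioned in 2.5: the token bucket idea is easy to sketch. The following is a hypothetical, self-contained Python illustration of the algorithm itself, not of nginx's limit_req module; the class name `TokenBucket` and the `now` parameter for injecting timestamps are my own additions for testability.)

```python
import time

class TokenBucket:
    """Illustrative token bucket: allows bursts up to `capacity` requests,
    refilled at `rate` tokens per second."""

    def __init__(self, rate, capacity, now=None):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start full, so an initial burst is allowed
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 5 requests/second with a burst of 2: two calls at t=0 pass,
# the third is rejected, and refilling re-admits requests later.
bucket = TokenBucket(rate=5, capacity=2, now=0.0)
print(bucket.allow(now=0.0))  # True
print(bucket.allow(now=0.0))  # True
print(bucket.allow(now=0.0))  # False
print(bucket.allow(now=0.5))  # True (0.5 s * 5/s = 2.5 tokens refilled, capped at 2)
```

nginx's own limit_req is closer to a leaky bucket (smoothing at a fixed rate with an optional burst queue), but the configuration intent is the same: bound how fast any one client can hit the upstream.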
If you implement rate limiting and configure SSL inside Spring Boot itself, it is not only troublesome but also affects local development. Using Nginx lets us concentrate on the business. It is fair to say that Nginx plays the role of a small gateway here; in fact, many well-known gateways are built on Nginx, such as Kong, Orange, and Apache APISIX. If you are interested, you can also play with Nginx's advanced form, OpenResty.

This concludes this article on why I recommend Nginx as a backend server proxy.