Why I recommend Nginx as a backend server proxy (reason analysis)

1. Introduction

Our application servers should not be exposed directly to the public Internet; doing so leaks server details and makes them easier to attack. A more practical, everyday solution is to put Nginx in front of them as a reverse proxy. In this article, let's talk about what an Nginx reverse proxy can do for us: it provides a number of very effective API-control capabilities, which is also why I always recommend fronting a Spring Boot application with Nginx.

2. What capabilities can Nginx provide?

Nginx hardly needs any introduction; it is widely recognized across the industry. Let's look at what specific functions it can provide.

2.1 Proxy Capability

This is the most common server-side use. An Nginx server with a public IP can act as a proxy for an application server it can reach on the intranet, so that our servers are never exposed directly to the outside world, improving their resistance to attacks.

Suppose the Nginx server 192.168.1.8 can communicate with the application server 192.168.1.9 on the same intranet segment, the Nginx server has public network access, and its public address is bound to the domain name felord.cn . Then the corresponding Nginx proxy configuration ( nginx.conf ) is as follows:

 server {
  listen 80;
  server_name felord.cn;
  # ^~ means the URI is matched as a plain string prefix, not a regular
  # expression; if it matches, no further location matching is attempted
  location ^~ /api/v1 {
   proxy_set_header Host $host;
   proxy_pass http://192.168.1.9:8080/;
  }
 }

With the above configuration, the real interface at http://192.168.1.9:8080/foo/get can be accessed through http://felord.cn/api/v1/foo/get .

If proxy_pass ends with / , Nginx treats it as a root-path replacement: the part of the URI matched by location is stripped before the request is forwarded. If it does not end with / , the matched prefix is forwarded to the upstream as well.
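The difference can be sketched as follows (same upstream as above; the location is written with a trailing slash here so the mapping is exact, and only one of the two variants would appear in a real configuration):

```nginx
# Variant A - proxy_pass ends with "/": the matched prefix is stripped.
#   /api/v1/foo/get  ->  http://192.168.1.9:8080/foo/get
location ^~ /api/v1/ {
    proxy_pass http://192.168.1.9:8080/;
}

# Variant B - no trailing slash: the full request URI is forwarded.
#   /api/v1/foo/get  ->  http://192.168.1.9:8080/api/v1/foo/get
location ^~ /api/v1/ {
    proxy_pass http://192.168.1.9:8080;
}
```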

2.2 Rewrite function

Nginx also provides a rewrite capability that lets us modify the URI when a request reaches the server, somewhat like a Servlet Filter performing pre-processing on the request.

Building on the example in 2.1, if we want to return 405 Method Not Allowed for POST requests, we only need to change the configuration to:

location ^~ /api/v1 {
 proxy_set_header Host $host;
 if ($request_method = POST){
  return 405;
 }
 proxy_pass http://192.168.1.9:8080/;
}

You can use Nginx's built-in variables (such as $request_method in the configuration above) or variables you set yourself as conditions, combined with regular expressions and flags ( last , break , redirect , permanent ), to implement URI rewriting and redirection.
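As an illustrative sketch of a flag in action (the /old-api path is hypothetical, not part of the original setup), a rewrite with the permanent flag might look like:

```nginx
server {
    listen 80;
    server_name felord.cn;

    # permanent: issue a 301 redirect from the old path to the new prefix,
    # so clients and search engines update their stored URLs
    rewrite ^/old-api/(.*)$ /api/v1/$1 permanent;

    location ^~ /api/v1 {
        proxy_set_header Host $host;
        proxy_pass http://192.168.1.9:8080/;
    }
}
```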

2.3 Configure HTTPS

Many students have asked in the group how to configure HTTPS in a Spring Boot project, and I always recommend letting Nginx do it. It is much more convenient than configuring SSL in Spring Boot, and it does not affect local development. The HTTPS configuration in Nginx looks like this:

http {
 # multiple server blocks can be defined inside the http block
 server {
  # SSL listens on port 443 (the older standalone "ssl on;" directive is deprecated)
  listen 443 ssl;
  # the domain name the CA certificate was issued for
  server_name felord.cn;
  # absolute path to the server certificate
  ssl_certificate /etc/ssl/cert_felord.cn.crt;
  # absolute path to the server certificate key
  ssl_certificate_key /etc/ssl/cert_felord.cn.key;
  ssl_session_timeout 5m;
  # enabled protocol versions
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  # cipher suite list
  ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
  # whether the server decides which cipher to use (on/off)
  ssl_prefer_server_ciphers on;

  location ^~ /api/v1 {
   proxy_set_header Host $host;
   proxy_pass http://192.168.1.9:8080/;
  }
 }
 # if a user arrives over plain HTTP, redirect straight to HTTPS;
 # this step is well worth doing
 server {
  listen 80;
  server_name felord.cn;
  rewrite ^/(.*)$ https://felord.cn/$1 permanent;
 }

}

Here rewrite is used to improve the user experience: visitors who type the plain HTTP address are redirected to the HTTPS site automatically.

2.4 Load Balancing

Projects generally grow from small to large. At the start, deploying one server is enough. If your user base grows, first of all, congratulations: your project is heading in the right direction. But with growth comes server load, and you certainly don't want to suffer the losses caused by downtime. You need to improve the server's capacity quickly, or perform maintenance without stopping the service and interrupting the business. Both can be achieved with Nginx load balancing, and it is very simple. Suppose we deploy felord.cn across three nodes:

The simplest round-robin strategy

Requests are dispatched to each node in turn; this configuration is the simplest:

http {
 
 upstream app {
   # Node 1
   server 192.168.1.9:8080;
   # Node 2
   server 192.168.1.10:8081;
   # Node 3
   server 192.168.1.11:8082;
 }
 
 server {
  listen 80;
  server_name felord.cn;
  # ^~ means the URI is matched as a plain string prefix, not a regular
  # expression; if it matches, no further location matching is attempted
  location ^~ /api/v1 {
   proxy_set_header Host $host;
   # load-balance across the upstream group
   proxy_pass http://app/;
  }
 }
}

Weighted Round Robin Strategy

This specifies the polling probability: weight is proportional to the share of requests a node receives, and is useful when the backend servers have uneven performance:

upstream app {
  # Node 1
  server 192.168.1.9:8080 weight=6;
  # Node 2
  server 192.168.1.10:8081 weight=3;
  # Node 3
  server 192.168.1.11:8082 weight=1;
}

Requests will end up distributed in a 6:3:1 ratio. In fact, plain round robin can be seen as the special case where every weight is 1. Round robin automatically takes a node out of rotation when it goes down.
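That automatic removal can also be tuned per node. As a sketch (the max_fails, fail_timeout, and backup settings here are illustrative additions, not part of the original setup), each server line can state how many failures mark a node as down and for how long:

```nginx
upstream app {
    # after 3 failed attempts within 30s, take the node out of
    # rotation for 30s before trying it again
    server 192.168.1.9:8080 weight=6 max_fails=3 fail_timeout=30s;
    server 192.168.1.10:8081 weight=3 max_fails=3 fail_timeout=30s;
    # backup node: only receives traffic when the others are down
    server 192.168.1.11:8082 backup;
}
```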

IP Hash

Hashes the client's IP address so that each client always reaches the same server. Note that if a server goes down, it must be removed from the configuration manually.

upstream app {
  ip_hash;
  # Node 1
  server 192.168.1.9:8080 weight=6;
  # Node 2
  server 192.168.1.10:8081 weight=3;
  # Node 3
  server 192.168.1.11:8082 weight=1;
}

Least Connections

Requests are forwarded to the server with the fewest active connections, making full use of server resources:

upstream app {
  least_conn;
  # Node 1
  server 192.168.1.9:8080 weight=6;
  # Node 2
  server 192.168.1.10:8081 weight=3;
  # Node 3
  server 192.168.1.11:8082 weight=1;
}

Other methods

We can also use third-party modules to implement other load-balancing modes, for example nginx-upsync-module for dynamic upstream updates. Could we use that to build a gray (canary) release feature?

2.5 Rate Limiting

Through configuration, Nginx can apply the leaky-bucket and token-bucket ideas to limit access rates, by capping both the number of requests per unit of time and the number of simultaneous connections. I haven't studied this area in depth, so I will only mention it here; you can look up the relevant documentation for details.
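As a minimal sketch of what such a configuration can look like (the zone names and limits here are illustrative, not from the original setup), Nginx's stock ngx_http_limit_req_module and ngx_http_limit_conn_module cover both cases:

```nginx
http {
    # leaky-bucket style: at most 10 requests per second per client IP,
    # tracked in a 10 MB shared-memory zone
    limit_req_zone $binary_remote_addr zone=api_rate:10m rate=10r/s;
    # shared-memory zone for counting concurrent connections per client IP
    limit_conn_zone $binary_remote_addr zone=api_conn:10m;

    server {
        listen 80;
        server_name felord.cn;

        location ^~ /api/v1 {
            # allow short bursts of up to 20 extra requests without delay;
            # anything beyond that is rejected with 503 by default
            limit_req zone=api_rate burst=20 nodelay;
            # at most 20 concurrent connections per client IP
            limit_conn api_conn 20;
            proxy_set_header Host $host;
            proxy_pass http://192.168.1.9:8080/;
        }
    }
}
```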

3. Conclusion

Nginx is very powerful, and I recommend using it to proxy our backend applications. Through configuration alone it gives us many useful capabilities, without writing any non-business code. Implementing rate limiting or configuring SSL inside Spring Boot is not only more troublesome, it also gets in the way of local development; with Nginx in front, we can concentrate on the business. You could say Nginx plays the role of a small gateway here, and in fact many well-known gateways are built on Nginx, such as Kong, Orange, and Apache APISIX. If you are interested, try OpenResty, Nginx's more advanced incarnation.

This concludes this article on why I recommend Nginx as a backend server proxy. For more information on the topic, please search for previous articles on 123WORDPRESS.COM. I hope you will support 123WORDPRESS.COM in the future!
