How to maintain a long connection when using nginx reverse proxy

【Scenario】

Since HTTP/1.1, the HTTP protocol has supported persistent connections, also known as long connections. Their advantage is that multiple HTTP requests and responses can be transmitted over a single TCP connection, reducing the overhead and latency of repeatedly establishing and closing connections.

However, when nginx is used as a reverse proxy or load balancer, a long-connection request from the client is, by default, converted into a short connection before being forwarded to the upstream server.

To support long connections end to end, some configuration is needed on the nginx server.

【Requirements】

To achieve long connections when using nginx, the following two conditions must both hold:

  • The connection from the client to nginx is a long connection
  • The connection from nginx to the upstream server is a long connection

From the client's point of view, nginx plays the role of a server; conversely, from the upstream server's point of view, nginx is a client.

【Maintaining a long connection with the client】

To maintain a long connection between the client and nginx, two things are required:

  • The request sent by the client carries a "keep-alive" header.
  • nginx is configured to support keep-alive.

【HTTP configuration】

By default, nginx already has keep-alive support enabled for client connections. For special scenarios, the relevant parameters can be adjusted.

http {
    keepalive_timeout 120s;    # Timeout for keeping an idle client connection open.
                               # When set to 0, long connections are disabled.
    keepalive_requests 10000;  # Maximum number of requests that can be served over one long connection.
                               # When the maximum is reached and all existing requests have completed,
                               # the connection is closed. The default value is 100.
}

In most cases keepalive_requests = 100 is sufficient, but for high-QPS scenarios this parameter must be increased, both to avoid large numbers of connections being created and then discarded and to reduce TIME_WAIT sockets.

At QPS = 10,000, the client sends 10,000 requests per second (usually over multiple long connections), and each connection can serve at most 100 requests, which means that on average about 10,000 / 100 = 100 long connections will be closed by nginx every second.

To sustain that QPS, the client therefore has to re-establish about 100 connections per second.

So if you inspect the client machine with the netstat command, you will find a large number of sockets in the TIME_WAIT state (even though keep-alive is already in effect between the client and nginx).
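
To tie the numbers above back to the configuration, here is a minimal recap sketch (values taken from this scenario; tune them to your own traffic):

http {
    keepalive_timeout 120s;
    # At 10,000 QPS, the default keepalive_requests of 100 forces roughly
    # 10000 / 100 = 100 connections per second to be closed and re-opened,
    # each leaving a TIME_WAIT socket behind; raising the limit avoids this churn.
    keepalive_requests 10000;
}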

【Maintaining a long connection with the server】

To maintain a persistent connection between nginx and the upstream server, the simplest configuration is as follows:

http {
    upstream backend {
        server 192.168.0.1:8080 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.0.2:8080 weight=1 max_fails=2 fail_timeout=30s;
        keepalive 300;  # This is important!
    }

    server {
        listen 8080 default_server;
        server_name "";

        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;          # Use HTTP/1.1 towards the upstream
            proxy_set_header Connection "";  # Clear the Connection header
                                             # (nginx sends "Connection: close" by default)
        }
    }
}

【Upstream Configuration】

The upstream block contains a particularly important parameter: keepalive.

This parameter is different from the keepalive_timeout directive in the http block above.

It specifies the maximum number of idle keep-alive connections to upstream servers that are preserved in the cache of each worker process; when this number is exceeded, the least recently used connections are closed.

Not clear yet? Let's walk through an example:

Scenario:

An HTTP service acts as the upstream server and responds to each request in 100 milliseconds.

To achieve 10,000 QPS, roughly 1,000 concurrent HTTP connections need to be established between nginx and the upstream server (1,000 connections ÷ 0.1 s per request = 10,000 requests/s).

Best case:

Assume the requests arrive evenly and steadily, each request takes exactly 100 ms, and as soon as a response completes the connection is immediately returned to the pool and marked idle.

Using 0.1 s as the unit of time:

1. We set the keepalive value to 10, and 1,000 requests arrive every 0.1 s.

2. During the first 0.1 s we receive 1,000 requests and release all 1,000 connections.

3. During the next 0.1 s we receive another 1,000 requests, which are released at the end of that interval.

Requests and responses are evenly matched: the connections released every 0.1 s are exactly enough, no new connections need to be established, and there are never idle connections sitting in the pool.

First case:

The responses are stable, but the requests are not.

4. At 0.3 s we receive only 500 requests; the other 500 are delayed by network latency or other reasons.

At this point nginx detects 500 idle connections in the pool, and since keepalive is 10, it directly closes (500 - 10) = 490 of them.

5. At 0.4 s we receive 1,500 requests, but there are only (500 + 10) = 510 connections in the pool, so nginx has to establish (1500 - 510) = 990 new connections.

If those 490 connections had not been closed in step 4, only 500 new connections would have been needed.

Second case:

The requests are stable, but the responses are not.

4. At 0.3 s there are 1,500 requests in flight in total (500 responses from the previous interval are delayed and still occupying their connections).

But there are only 1,000 connections in the pool, so nginx creates another 500, for a total of 1,500 connections.

5. At 0.4 s, all the connections from 0.3 s are released at once, while only 500 new requests arrive.

nginx detects 1,000 idle connections in the pool and has to close (1000 - 10) = 990 of them.

The maximum number of idle connections set by keepalive is thus one of the reasons the connection count fluctuates back and forth.

Both situations above show how an unreasonable keepalive setting causes nginx to repeatedly close and re-create connections, wasting resources.

Set the keepalive parameter carefully, especially for scenarios with high QPS requirements or unstable network environments. In general, the number of long connections needed can be roughly estimated from the QPS and the average response time.

Then set the keepalive value to 10% to 30% of that number of persistent connections.
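
As a rough worked example using the numbers from the scenario above (a sketch only; substitute your own measured QPS and response time): 10,000 QPS × 0.1 s average response time ≈ 1,000 concurrent upstream connections, and 10% to 30% of that gives a keepalive value of roughly 100 to 300:

http {
    upstream backend {
        server 192.168.0.1:8080 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.0.2:8080 weight=1 max_fails=2 fail_timeout=30s;
        # ~10000 QPS * 0.1 s average response time ≈ 1000 concurrent connections;
        # 10%-30% of that suggests caching 100-300 idle connections.
        keepalive 300;
    }
}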

【location configuration】

http {
    server {
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;          # Use HTTP/1.1 towards the upstream
            proxy_set_header Connection "";  # Clear the Connection header
        }
    }
}

The HTTP protocol has only supported persistent connections since version 1.1, so the proxy_http_version directive should be set to 1.1.

HTTP/1.0 does not support the keep-alive feature by default. If HTTP/1.1 is not used, the backend service will typically just close the connection after each response.

Clearing the "Connection" header means that even if the connection between the client and nginx is a short one, a long connection can still be maintained between nginx and the upstream.

【A more advanced method】

http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    upstream backend {
        server 192.168.0.1:8080 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.0.2:8080 weight=1 max_fails=2 fail_timeout=30s;
        keepalive 300;
    }

    server {
        listen 8080 default_server;
        server_name "";

        location / {
            proxy_pass http://backend;

            proxy_connect_timeout 15;  # Timeout for establishing a connection to the upstream
                                       # (seconds when no unit is given; must not exceed 75s)
            proxy_read_timeout 60s;    # How long nginx waits for a response from the upstream
            proxy_send_timeout 12s;    # Timeout for sending a request to the upstream

            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}

The map block in the http context works as follows:

It makes the value of the "Connection" header forwarded to the proxied server depend on the value of the "Upgrade" field in the client's request headers.

If $http_upgrade matches no other pattern (the default branch), i.e. the client sent an Upgrade header, the "Connection" header is set to upgrade.

If $http_upgrade is the empty string (no Upgrade header was sent), the "Connection" header is set to close.

【Supplement】

nginx supports WebSocket.

The HTTP Upgrade mechanism is used to upgrade a connection from plain HTTP to a WebSocket connection; it relies on the Upgrade and Connection headers.

For nginx to forward a client's Upgrade request to the backend server, the Upgrade and Connection headers must be set explicitly, which is exactly what the configuration above does. This is a very common use case for that configuration.
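
For reference, here is a minimal sketch of a dedicated WebSocket location built on the same map block (the /ws/ path is a hypothetical endpoint chosen for illustration, not part of the original configuration):

http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        location /ws/ {  # hypothetical WebSocket endpoint
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_read_timeout 300s;  # WebSocket connections are long-lived; a larger
                                      # read timeout keeps idle sockets from being cut off
        }
    }
}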

【Notice】

In an nginx configuration file, if the current block contains no proxy_set_header directives of its own, the directives are inherited from the level above.

The inheritance order is: http, then server, then location.

However, as soon as a lower level sets any header with proxy_set_header, all previously inherited proxy_set_header directives for that block are discarded, so header values may change unexpectedly.

Therefore, try to declare all proxy_set_header directives at the same level; otherwise subtle problems may arise.
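
A minimal sketch of that pitfall (the /a/ and /b/ locations are hypothetical, for illustration only):

http {
    server {
        # These headers apply to every location below that declares
        # no proxy_set_header of its own.
        proxy_set_header Host $host;
        proxy_set_header Connection "";

        location /a/ {
            proxy_pass http://backend;  # inherits Host and Connection from the server level
        }

        location /b/ {
            # Declaring any proxy_set_header here discards ALL inherited ones:
            # Host and Connection from the server level no longer apply to /b/.
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://backend;
        }
    }
}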

【References】

nginx Chinese documentation: http://www.nginx.cn/doc/

Test reference: https://www.lijiaocn.com/问题/2019/05/08/nginx-ingress-keep-alive-not-work.html

Keep-alive reference: https://wglee.org/2018/12/02/nginx-keepalive/

The above covers the details of how to maintain a long connection when using nginx as a reverse proxy.
