Nginx stream configuration proxy (Nginx TCP/UDP load balancing)

Prelude

We all know that nginx is an excellent reverse proxy server. Anyone who has used nginx will also be familiar with the upstream block, which normally sits inside the http block and lists the backend servers that need to be load balanced.

The more common uses are as follows:

#user nobody nobody;
#worker_processes 2;
#pid /nginx/pid/nginx.pid;
error_log log/error.log debug;
events {
    …
}
http {
    …
    upstream testserver {   
      server 192.168.1.5:8080;
      server 192.168.1.6:8080;
      …
    }

    server {
        …
        location / {
           …     
           proxy_pass http://testserver;
        } 
    }
}

As the structure shows, this is the common setup: an upstream defined inside the http block, used for HTTP reverse proxying.

But what if we want to proxy raw TCP traffic to backend services such as MySQL or Redis, does nginx support that? The answer is yes: nginx can also load balance TCP/UDP traffic, and the stream module is what makes it possible. By configuring a stream block, such requirements can be met. That said, nginx is still most commonly used as an HTTP reverse proxy.

Main article

Nginx implements TCP/UDP load balancing through the stream proxy module (ngx_stream_proxy_module) and the stream upstream module (ngx_stream_upstream_module). Like LVS, nginx's TCP load balancing works at layer 4. The difference is that LVS runs inside the Linux kernel, while nginx runs in user space, which makes nginx-based TCP load balancing more flexible for managing and controlling client access.

Now that we have understood the basic concepts, let's take a look at a specific configuration example:

worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

stream {
    upstream my_servers {
        least_conn;
        # If 3 failures occur within 5 seconds, the server is marked unavailable for 5 seconds
        server <IP_SERVER_1>:3306 max_fails=3 fail_timeout=5s;
        server <IP_SERVER_2>:3306 max_fails=3 fail_timeout=5s;
        server <IP_SERVER_3>:3306 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 3306;
        proxy_connect_timeout 5s; # The timeout for establishing a connection with the proxied server is 5s
        proxy_timeout 10s; # The maximum timeout for getting the response from the proxied server is 10s
        proxy_next_upstream on; # When the proxied server returns an error or times out, pass the unanswered connection to the next server in the upstream
        proxy_next_upstream_tries 3; # Try forwarding at most 3 times
        proxy_next_upstream_timeout 10s; # The total timeout for all attempts is 10s
        proxy_socket_keepalive on; # Enable the SO_KEEPALIVE option for heartbeat detection
        proxy_pass my_servers;
    }
}
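
The stream block handles UDP traffic in much the same way; the main difference is the udp parameter on the listen directive. Below is a minimal, hedged sketch of UDP load balancing, assuming DNS backends on port 53 (the addresses, port, and upstream name are placeholders used purely for illustration):

stream {
    upstream dns_servers {
        server <IP_SERVER_1>:53;
        server <IP_SERVER_2>:53;
    }
    server {
        listen 53 udp;       # the udp flag makes this listener a UDP proxy
        proxy_timeout 20s;   # close the session if no datagrams flow for 20s
        proxy_responses 1;   # expect one response datagram per request (typical for DNS)
        proxy_pass dns_servers;
    }
}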

More explanation

  • When allocating connections, nginx's TCP/UDP load balancing supports passive health checking. If connections to a backend server fail max_fails times within the fail_timeout window, nginx marks the server unavailable and stops allocating connections to it for the duration of fail_timeout. Once fail_timeout expires, nginx attempts to allocate a connection to probe whether the server has recovered; if the connection can be established, the server is considered recovered.
  • The max_fails parameter is the number of failed connection attempts to a server that nginx accumulates within the fail_timeout window (10 seconds by default; 5 seconds in the example above); the counter is reset when each window ends.
  • The fail_timeout parameter serves two purposes: it is the window during which failures are counted, and it is how long the server stays in the failed state once it has been marked unavailable. After it expires, nginx will try allocating connections to the server again.
  • When proxy_connect_timeout or proxy_timeout expires, the proxy_next_upstream mechanism is triggered.
  • The proxy_next_upstream parameter is nginx's mechanism for improving the connection success rate: when a proxied server returns an error or times out, nginx tries to forward the connection to the next available proxied server.
  • The count set by proxy_next_upstream_tries includes the initial forwarding attempt.
  • A proxied TCP connection stays open until a connection-close notification is received, or until the interval between two successive successful read or write operations between nginx and the proxied server exceeds the time set by proxy_timeout, at which point the connection is closed. For long-lived TCP connections, proxy_timeout should be raised appropriately, and the kernel's SO_KEEPALIVE settings should be checked so that idle connections are not dropped prematurely (see the sketch after this list).
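
To illustrate the last point, here is a hedged sketch of a stream server block tuned for long-lived TCP connections such as a MySQL session (the port, upstream name, and timeout values are example placeholders, not recommendations; proxy_socket_keepalive requires a reasonably recent nginx):

stream {
    upstream long_lived_backend {
        server <IP_SERVER_1>:3306 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 3306;
        proxy_connect_timeout 5s;    # establishing the backend connection can stay short
        proxy_timeout 30m;           # allow long idle gaps between reads/writes before closing
        proxy_socket_keepalive on;   # enable kernel SO_KEEPALIVE probes on the upstream connection
        proxy_pass long_lived_backend;
    }
}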

Note: If your stream configuration is rejected, check whether the nginx build you are using includes the stream module (running nginx -V lists the configure arguments). If it is not built in, you need to add the --with-stream parameter when compiling nginx so that stream proxying is supported.

This is the end of this article about Nginx stream configuration proxy (Nginx TCP/UDP load balancing). For more relevant Nginx stream configuration content, please search 123WORDPRESS.COM's previous articles. I hope everyone will support 123WORDPRESS.COM in the future!
