What is Nginx load balancing and how to configure it

What is Load Balancing

Load balancing is achieved mainly through specialized hardware devices or through software algorithms. Hardware-based load balancing works well, is efficient, and performs stably, but the cost is relatively high. Software-based load balancing depends mainly on the choice of balancing algorithm and on the robustness of the program. There are many balancing algorithms, and they fall into two common categories: static and dynamic. Static load balancing algorithms are relatively simple to implement and achieve reasonably good results in ordinary network environments; they mainly include plain round robin, ratio-based weighted round robin, and priority-based weighted round robin. Dynamic load balancing algorithms are more adaptable and more effective in complex network environments; they mainly include the least-connections algorithm (based on task volume), the fastest-response algorithm (based on performance), prediction algorithms, and dynamic performance allocation algorithms.

The general principle of network load balancing is to use a distribution strategy to spread the network load evenly across the operating units of a cluster, so that a single heavy task can be shared among multiple units and processed in parallel, or a large volume of concurrent requests or data traffic can be split among multiple units and handled separately, thereby reducing the time users wait for a response.

Nginx server load balancing configuration

For load balancing, the Nginx server implements a static weighted round-robin algorithm. The main configuration items involved are the proxy_pass directive and the upstream directive. The directives themselves are easy to understand; the real point is that Nginx configuration is flexible and diverse, and the question is how to reasonably combine other features with the load balancing configuration to form a solution that meets actual needs.
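As a quick orientation before the examples (a minimal sketch, not taken from the examples below), the two directives work together like this: upstream defines a named group of backend servers inside the http block, and proxy_pass inside a location forwards matching requests to that group. Besides the default weighted round robin, the open-source upstream module also provides the least_conn and ip_hash directives if a least-connections or client-affinity strategy is needed:

upstream backend {              #define a named backend server group
    #least_conn;                #optional: switch to the least-connections strategy
    #ip_hash;                   #optional: pin each client IP to one backend server
    server 192.168.1.2:80;
    server 192.168.1.3:80;
}
server {
    listen 80;
    server_name www.myweb.name;
    location / {
        proxy_pass http://backend;   #forward matching requests to the group defined above
    }
}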

The following are some basic example snippets. They certainly cannot cover every configuration situation, but I hope they serve as a starting point for discussion; everyone will still need to summarize and accumulate experience in actual use. Points that require attention in the configuration are added as comments.

Configuration Example 1: Implementing General Round-Robin Load Balancing for All Requests

In the following example snippet, all servers in the backend server group are left at the default priority weight=1, so they receive request tasks in turn according to the plain round-robin strategy. This is the simplest configuration for Nginx server load balancing: all requests to www.myweb.name are load balanced across the backend server group. The example code is as follows:

...

upstream backend { #Configure the backend server group
    server 192.168.1.2:80;
    server 192.168.1.3:80;
    server 192.168.1.4:80; #Default weight=1
}
server
{
    listen 80;
    server_name www.myweb.name;
    index index.html index.htm;
    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
    ...
}
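As an optional extension of this example (a sketch, not part of the original snippet), the server directive inside upstream also accepts the max_fails, fail_timeout, and backup parameters, which are commonly combined with plain round robin so that a failing backend is temporarily skipped:

upstream backend {
    server 192.168.1.2:80 max_fails=3 fail_timeout=30s; #skip this server for 30s after 3 failed attempts
    server 192.168.1.3:80 max_fails=3 fail_timeout=30s;
    server 192.168.1.4:80 backup;                       #used only when the other servers are unavailable
}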

Configuration Example 2: Implementing Weighted Round Robin Load Balancing for All Requests

Compared with "Configuration Example 1", in this example fragment the servers in the backend server group are assigned different priorities, and the value of the weight variable is the "weight" in the round-robin strategy. Among them, 192.168.1.2:80 has the highest weight and is the server that receives and processes the most client requests; 192.168.1.4:80 has the lowest weight and is the server that receives and processes the fewest client requests; 192.168.1.3:80 sits between the two. Roughly speaking, out of every 5+2+1=8 requests, 5 go to 192.168.1.2:80, 2 go to 192.168.1.3:80, and 1 goes to 192.168.1.4:80. All requests to www.myweb.name are load balanced across the backend server group according to these weights. The example code is as follows:

...

upstream backend { #Configure the backend server group
    server 192.168.1.2:80 weight=5;
    server 192.168.1.3:80 weight=2;
    server 192.168.1.4:80; #Default weight=1
}
server
{
    listen 80;
    server_name www.myweb.name;
    index index.html index.htm;
    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
    ...
}
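If the backend applications keep per-client session state, one common variation on this example (shown here as a sketch, not part of the original snippet) is to use the ip_hash directive instead, which always routes requests from the same client address to the same backend server:

upstream backend {
    ip_hash;                    #requests from the same client IP always go to the same server
    server 192.168.1.2:80;
    server 192.168.1.3:80;
    server 192.168.1.4:80;
}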

Configuration Example 3: Load Balancing for Specific Resources

In this example snippet, we set up two groups of proxied servers: the one named "videobackend" is used to load balance client requests for video resources, and the other, named "filebackend", is used to load balance client requests for file resources. All requests to "http://www.myweb.name/video/*" are balanced across the videobackend server group, and all requests to "http://www.myweb.name/file/*" are balanced across the filebackend server group. This example shows the configuration for plain load balancing; for weighted load balancing, refer to "Configuration Example 2".

In the location /file/ {......} block, we fill the client's real information into the "Host", "X-Real-IP" and "X-Forwarded-For" header fields of the request, so that the requests received by the backend server group carry the client's real information instead of the Nginx server's information. The example code is as follows:

...

 upstream videobackend #Configure backend server group 1
{
    server 192.168.1.2:80;
    server 192.168.1.3:80;
    server 192.168.1.4:80;
}
upstream filebackend #Configure backend server group 2
{
    server 192.168.1.5:80;
    server 192.168.1.6:80;
    server 192.168.1.7:80;
}
server
{
    listen 80;
    server_name www.myweb.name;
    index index.html index.htm;
    location /video/ {
        proxy_pass http://videobackend; #Use backend server group 1
        proxy_set_header Host $host;
        ...
    }
    location /file/ {
        proxy_pass http://filebackend; #Use backend server group 2
        proxy_set_header Host $host; #Keep the real information of the client
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        ...
    }
}
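As a side note (a sketch assuming both locations should pass the same client information, which goes beyond the original example), proxy_set_header directives are inherited from the enclosing server block whenever a location defines none of its own, so the three headers can also be declared once at the server level:

server {
    listen 80;
    server_name www.myweb.name;
    proxy_set_header Host $host;                                  #inherited by both locations below
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    location /video/ {
        proxy_pass http://videobackend;
    }
    location /file/ {
        proxy_pass http://filebackend;
    }
}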

Configuration Example 4: Load Balancing for Different Domain Names

In this example snippet, we set up two virtual servers and two backend proxied server groups to receive requests for different domain names and load balance these requests. If the client requests the domain name "home.myweb.name", server server1 receives the request and forwards it to the homebackend server group for load balancing; if the client requests the domain name "bbs.myweb.name", server server2 receives it and forwards it to the bbsbackend server group for load balancing. This achieves load balancing for different domain names.

It should be noted that server 192.168.1.4:80 is shared by the two backend server groups. All resources under both domain names need to be deployed on this server to ensure that client requests do not run into problems. The example code is as follows:

...
upstream bbsbackend #Configure backend server group 1
{
    server 192.168.1.2:80 weight=2;
    server 192.168.1.3:80 weight=2;
    server 192.168.1.4:80;
}
upstream homebackend #Configure backend server group 2
{
    server 192.168.1.4:80;
    server 192.168.1.5:80;
    server 192.168.1.6:80;
}
#Start configuring server 1
server
{
    listen 80;
    server_name home.myweb.name;
    index index.html index.htm;
    location / {
        proxy_pass http://homebackend;
        proxy_set_header Host $host;
        ...
    }
    ...
}
#Start configuring server 2
server
{
    listen 80;
    server_name bbs.myweb.name;
    index index.html index.htm;
    location / {
        proxy_pass http://bbsbackend;
        proxy_set_header Host $host;
        ...
    }
    ...
}
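When several virtual servers listen on the same port like this, it can also be worth adding a catch-all default server so that requests whose Host header matches neither domain name are not silently picked up by the first server block. The following sketch (not part of the original example) simply closes such connections:

server {
    listen 80 default_server;   #handles requests whose Host matches no other server_name
    server_name _;
    return 444;                 #non-standard status: Nginx closes the connection without a response
}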

Configuration Example 5: Implementing Load Balancing with URL Rewriting

First, let's look at the specific source code, which is modified based on Example 1:

...
upstream backend { #Configure the backend server group
    server 192.168.1.2:80;
    server 192.168.1.3:80;
    server 192.168.1.4:80; #Default weight=1
}
server
{
    listen 80;
    server_name www.myweb.name;
    index index.html index.htm;

    location /file/ {
        rewrite ^(/file/.*)/media/(.*)\..*$ $1/mp3/$2.mp3 last;
    }

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
    ...
}

Compared with "Configuration Example 1", this example fragment adds URL rewriting for URIs containing "/file/". For example, when the URL requested by the client is "http://www.myweb.name/file/download/media/1.mp3", the virtual server first uses the location /file/ {......} block to rewrite it to "http://www.myweb.name/file/download/mp3/1.mp3", and the rewritten request is then forwarded to the backend server group to achieve load balancing. In this way, load balancing with URL rewriting can be implemented easily. In this configuration scheme, you must clearly understand the difference between the last flag and the break flag of the rewrite directive in order to achieve the desired effect.
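To make that distinction concrete: last re-runs location matching against the rewritten URI, while break stops rewrite processing and lets the current location finish handling the request. The following sketch (a variation on the snippet above, not the author's original) uses break so that the rewritten URI is proxied to the backend group directly from within location /file/:

location /file/ {
    rewrite ^(/file/.*)/media/(.*)\..*$ $1/mp3/$2.mp3 break; #break: stay in this location after rewriting
    proxy_pass http://backend;                               #the rewritten URI is passed to the backend group
}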

The above five configuration examples demonstrate the basic methods for configuring load balancing on the Nginx server under different circumstances. Since Nginx features can be layered on incrementally, we can continue to add more functionality on top of these configurations, such as web caching, Gzip compression, identity authentication, and permission management. At the same time, when using the upstream directive to configure a server group, you can make full use of each directive's parameters to build an Nginx server that meets your needs and is efficient, stable, and rich in features.
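As one illustration, a hedged sketch of layering Gzip compression and a simple proxy cache on top of "Configuration Example 1" might look like the following (the cache path and the zone name webcache are arbitrary choices for illustration):

...
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=webcache:10m max_size=1g; #cache storage and shared memory zone
upstream backend {
    server 192.168.1.2:80;
    server 192.168.1.3:80;
    server 192.168.1.4:80;
}
server {
    listen 80;
    server_name www.myweb.name;
    gzip on;                                    #compress responses sent to clients
    gzip_types text/plain text/css application/javascript;
    location / {
        proxy_pass http://backend;
        proxy_cache webcache;                   #cache responses from the backend group
        proxy_cache_valid 200 10m;              #keep successful responses for 10 minutes
        proxy_set_header Host $host;
    }
}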

