Detailed explanation of nginx server installation and load balancing configuration on Linux system

nginx (engine x) is a high-performance HTTP and reverse proxy server, mail proxy server, and generic TCP/UDP proxy server. It is lightweight (low system resource usage), stable, extensible (modular architecture), handles high concurrency well, and is simple to configure.

This article mainly introduces the basic load balancing function implemented by nginx in a test environment.

nginx can provide HTTP services, including processing static files, supporting SSL and TLS SNI, GZIP web page compression, virtual hosts, URL rewriting and other functions, and can be used with FastCGI, uwsgi and other programs to process dynamic requests.

In addition, nginx can act as a proxy, reverse proxy, load balancer, and cache server, reducing load and improving availability in a cluster environment.

1. Build a test environment

The test environment here is two Lubuntu 19.04 virtual machines installed through VirtualBox. The Linux system installation method is not described in detail.

To allow the two Linux virtual machines to reach each other, each virtual machine's network configuration uses VirtualBox's Internal Network mode in addition to the default NAT mode.

In addition, the network interfaces attached to the internal network in the two virtual machines must be bound to static IP addresses in the same subnet, so that the two hosts form a local area network and can access each other directly.

Network Configuration

Open VirtualBox, enter the Settings dialog of each virtual machine, and add a network adapter whose attachment type is Internal Network (the two virtual machines use the same configuration).

Log in to the virtual machine system and use the ip addr command to view the current network connection information:

$ ip addr
...
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
 link/ether 08:00:27:38:65:a8 brd ff:ff:ff:ff:ff:ff
 inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic noprefixroute enp0s3
  valid_lft 86390sec preferred_lft 86390sec
 inet6 fe80::9a49:54d3:2ea6:1b50/64 scope link noprefixroute
  valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
 link/ether 08:00:27:0d:0b:de brd ff:ff:ff:ff:ff:ff
 inet6 fe80::2329:85bd:937e:c484/64 scope link noprefixroute
  valid_lft forever preferred_lft forever

It can be seen that the enp0s8 network card has not yet been bound to an IPv4 address, and a static IP needs to be manually assigned to it.

Note that starting with Ubuntu 17.10, network configuration is handled by a new tool called netplan, and the traditional /etc/network/interfaces file no longer takes effect.

Therefore, to set a static IP for the interface, you need to modify the /etc/netplan/01-network-manager-all.yaml configuration file. An example follows:

network:
  version: 2
  renderer: NetworkManager
  ethernets:
    enp0s8:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.1.101/24]
      # gateway4: 192.168.1.101
      # nameservers:
      #   addresses: [192.168.1.101, 8.8.8.8]

Since the two hosts are in the same subnet, they can still reach each other even without a gateway or DNS server configured, so the corresponding configuration items are commented out for now (you can try setting up your own DNS server later).

After editing, run the sudo netplan apply command and the static IP configured previously will take effect.

$ ip addr
...
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
  link/ether 08:00:27:0d:0b:de brd ff:ff:ff:ff:ff:ff
  inet 192.168.1.101/24 brd 192.168.1.255 scope global noprefixroute enp0s8
    valid_lft forever preferred_lft forever
  inet6 fe80::a00:27ff:fe0d:bde/64 scope link
    valid_lft forever preferred_lft forever

Log in to another virtual machine and perform the same operation (note that the addresses item in the configuration file is changed to [192.168.1.102/24]). The network configuration of the two virtual machines is complete.

At this time, there is a Linux virtual machine server1 with an IP address of 192.168.1.101 and a Linux virtual machine server2 with an IP address of 192.168.1.102. The two hosts can access each other. The test is as follows:

starky@server1:~$ ping 192.168.1.102 -c 2
PING 192.168.1.102 (192.168.1.102) 56(84) bytes of data.
64 bytes from 192.168.1.102: icmp_seq=1 ttl=64 time=0.951 ms
64 bytes from 192.168.1.102: icmp_seq=2 ttl=64 time=0.330 ms
--- 192.168.1.102 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 0.330/0.640/0.951/0.311 ms

skitar@server2:~$ ping 192.168.1.101 -c 2
PING 192.168.1.101 (192.168.1.101) 56(84) bytes of data.
64 bytes from 192.168.1.101: icmp_seq=1 ttl=64 time=0.223 ms
64 bytes from 192.168.1.101: icmp_seq=2 ttl=64 time=0.249 ms
--- 192.168.1.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 29ms
rtt min/avg/max/mdev = 0.223/0.236/0.249/0.013 ms

2. Install nginx server

There are two main ways to install nginx:

  • Precompiled binary packages. This is the simplest and fastest installation method; all major distributions provide packages through their package managers (such as apt on Ubuntu), and these packages include most of the official modules.
  • Compiling from source. This method is more flexible: you can choose exactly which modules and third-party plugins to build in.

This example does not have any special requirements, so directly choose the first installation method. The command is as follows:

$ sudo apt-get update
$ sudo apt-get install nginx

After a successful installation, use the systemctl status nginx command to check the running status of the nginx service:

$ systemctl status nginx
● nginx.service - A high performance web server and a reverse proxy server
  Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: en
  Active: active (running) since Tue 2019-07-02 01:22:07 CST; 26s ago
   Docs: man:nginx(8)
 Main PID: 3748 (nginx)
  Tasks: 2 (limit: 1092)
  Memory: 4.9M
  CGroup: /system.slice/nginx.service
      ├─3748 nginx: master process /usr/sbin/nginx -g daemon on; master_pro
      └─3749 nginx: worker process

Use the curl -I 127.0.0.1 command to verify whether the web server can be accessed normally:

$ curl -I 127.0.0.1
HTTP/1.1 200 OK
Server: nginx/1.15.9 (Ubuntu)
...

3. Load Balancing Configuration

Load balancing distributes incoming load across multiple operating units according to certain rules, improving the availability and response speed of a service.


For example, when a website is deployed on a cluster of hosts, the load-balancing server sits between end users and the cluster. It receives incoming user traffic and distributes requests to the back-end hosts according to certain rules, improving response speed under high concurrency.

Load Balancing Server

Nginx can configure load balancing through the upstream option. Here, the virtual machine server1 is used as the load balancing server.

Modify the default site configuration file on server1 ( sudo vim /etc/nginx/sites-available/default ) to the following content:

upstream backend {
  server 192.168.1.102:8000;
  server 192.168.1.102;
}
server {
  listen 80;

  location / {
    proxy_pass http://backend;
  }
}

For testing purposes, there are currently only two virtual machines. Server1 (192.168.1.101) is already used as a load balancing server, so use server2 (192.168.1.102) as the application server.

Here, with the help of nginx's virtual host function, 192.168.1.102 and 192.168.1.102:8000 are "simulated" as two different application servers.

Application Server

Modify the default site configuration file on server2 ( sudo vim /etc/nginx/sites-available/default ) to the following content:

server {
    listen 80;

    root /var/www/html;

    index index.html index.htm index.nginx-debian.html;

    server_name 192.168.1.102;

    location / {
        try_files $uri $uri/ =404;
    }
}

Create an index.html file in the /var/www/html directory as the index page of the default site. The content is as follows:

<html>
  <head>
    <title>Index Page From Server1</title>
  </head>
  <body>
    <h1>This is Server1, Address 192.168.1.102.</h1>
  </body>
</html>

Run the sudo systemctl restart nginx command to restart the nginx service. Now visit http://192.168.1.102 to get the index.html page you just created:

$ curl 192.168.1.102
<html>
  <head>
    <title>Index Page From Server1</title>
  </head>
  <body>
    <h1>This is Server1, Address 192.168.1.102.</h1>
  </body>
</html>

Configure the site on "another host" and create the /etc/nginx/sites-available/server2 configuration file on server2 with the following content:

server {
    listen 8000;

    root /var/www/html;

    index index2.html index.htm index.nginx-debian.html;

    server_name 192.168.1.102;

    location / {
        try_files $uri $uri/ =404;
    }
}

Note the changes in the listening port and index page configuration. Create the index2.html file in the /var/www/html directory as the index page of the server2 site. The content is as follows:

<html>
  <head>
    <title>Index Page From Server2</title>
  </head>
  <body>
    <h1>This is Server2, Address 192.168.1.102:8000.</h1>
  </body>
</html>

PS: For testing purposes, the default site and server2 site are configured on the same host server2, and the pages are slightly different. In actual environments, these two sites are usually configured on different hosts with the same content.

Run the sudo ln -s /etc/nginx/sites-available/server2 /etc/nginx/sites-enabled/ command to enable the server2 site just created.

Restart the nginx service. Now visit http://192.168.1.102:8000 to get the index2.html page you just created:

$ curl 192.168.1.102:8000
<html>
  <head>
    <title>Index Page From Server2</title>
  </head>
  <body>
    <h1>This is Server2, Address 192.168.1.102:8000.</h1>
  </body>
</html>

Load balancing test

Return to the load-balancing server, virtual machine server1, whose configuration file sets the reverse proxy target http://backend.

Since no domain name resolution service has been configured, the name backend cannot be resolved and the URL http://backend is not yet reachable.

You can modify the /etc/hosts file on server1 and add the following record:

127.0.0.1 backend

This resolves the name backend to the local address, so requests to http://backend reach the local nginx load balancer.
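The mapping added above behaves like any other hosts-file entry. As a side illustration of the lookup, here is a small sketch that queries a hosts-format file the way the resolver consults /etc/hosts (a temporary copy is used so the real /etc/hosts is untouched; the awk filter is illustrative, not part of nginx):

```shell
# Write a hosts-format entry to a temporary file and look up the
# address recorded for the name "backend".
hosts_file=$(mktemp)
echo '127.0.0.1 backend' > "$hosts_file"
resolved=$(awk '$2 == "backend" {print $1}' "$hosts_file")
echo "backend resolves to $resolved"   # backend resolves to 127.0.0.1
rm -f "$hosts_file"
```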

Restart the nginx service and access http://backend on server1. The results are as follows:

$ curl http://backend
<html>
  <head>
    <title>Index Page From Server1</title>
  </head>
  <body>
    <h1>This is Server1, Address 192.168.1.102.</h1>
  </body>
</html>
$ curl http://backend
<html>
  <head>
    <title>Index Page From Server2</title>
  </head>
  <body>
    <h1>This is Server2, Address 192.168.1.102:8000.</h1>
  </body>
</html>
$ curl http://backend
<html>
  <head>
    <title>Index Page From Server1</title>
  </head>
  <body>
    <h1>This is Server1, Address 192.168.1.102.</h1>
  </body>
</html>
$ curl http://backend
<html>
  <head>
    <title>Index Page From Server2</title>
  </head>
  <body>
    <h1>This is Server2, Address 192.168.1.102:8000.</h1>
  </body>
</html>

The output shows that successive requests from server1 to http://backend alternate between the two web sites hosted on server2: the load balancer polls the back-end servers in turn.

4. Load Balancing Methods

The open source version of nginx provides four load balancing implementation methods, which are briefly introduced as follows.

1. Round Robin

User requests are distributed evenly across the back-end server cluster (relative weights can be set with the weight parameter). This is nginx's default load-balancing method:

upstream backend {
  server backend1.example.com weight=5;
  server backend2.example.com;
}

2. Least Connections

User requests will be forwarded to the server with the least number of active connections in the cluster. The weight option is also supported.

upstream backend {
  least_conn;
  server backend1.example.com;
  server backend2.example.com;
}
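The behavior of least_conn can be sketched with a toy example. The connection counts below are hypothetical (nginx tracks them internally; they are not exposed this way):

```shell
# Forward the next request to whichever upstream currently has
# fewer active connections (counts are made up for illustration).
conn1=12   # hypothetical active connections on backend1.example.com
conn2=7    # hypothetical active connections on backend2.example.com
if [ "$conn1" -le "$conn2" ]; then
  target=backend1.example.com
else
  target=backend2.example.com
fi
echo "forward next request to $target"
```

With these counts, the next request goes to backend2.example.com, the less loaded server.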

3. IP Hash

User requests are forwarded based on the client IP address, ensuring that a given client is consistently routed to the same back-end server.

upstream backend {
  ip_hash;
  server backend1.example.com;
  server backend2.example.com;
}
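The "sticky" property that ip_hash provides can be illustrated with a toy hash function (this is not nginx's actual algorithm, which for IPv4 hashes the first three octets of the client address):

```shell
# Map a client IP to one of two upstreams by summing its octets and
# taking the result modulo the server count. The same IP always maps
# to the same server, which is the property ip_hash guarantees.
pick_server() {
  sum=$(echo "$1" | tr '.' ' ' | awk '{print $1 + $2 + $3 + $4}')
  if [ $((sum % 2)) -eq 0 ]; then
    echo backend1.example.com
  else
    echo backend2.example.com
  fi
}
pick_server 192.168.1.50   # repeated calls return the same server
pick_server 192.168.1.50
```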

4. Generic Hash

Requests are forwarded based on a user-defined key, which can be a text string, a variable, or a combination of both (such as the source IP address and port). The consistent parameter in the example below enables consistent hashing, which minimizes remapping when servers are added or removed.

upstream backend {
  hash $request_uri consistent;
  server backend1.example.com;
  server backend2.example.com;
}

Weight

Refer to the following example configuration:

upstream backend {
  server backend1.example.com weight=5;
  server backend2.example.com;
  server 192.0.0.1 backup;
}

The default weight is 1. The backup server will only accept requests if all other servers are down.

In the example above, out of every 6 requests, 5 are forwarded to backend1.example.com and 1 to backend2.example.com. Only when both backend1 and backend2 are down will 192.0.0.1 receive and process requests.
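The 5:1 split can be checked with a small simulation. This is a toy model of weighted round robin (nginx actually uses a smooth weighted algorithm, but the long-run ratio is the same):

```shell
# Distribute 12 requests between two upstreams with weights 5 and 1:
# in each cycle of 6 slots, 5 go to backend1 and 1 goes to backend2.
w1=5; w2=1; total=$((w1 + w2))
n1=0; n2=0; i=0
while [ "$i" -lt 12 ]; do
  if [ $((i % total)) -lt "$w1" ]; then
    n1=$((n1 + 1))   # slot for backend1.example.com
  else
    n2=$((n2 + 1))   # slot for backend2.example.com
  fi
  i=$((i + 1))
done
echo "backend1=$n1 backend2=$n2"   # backend1=10 backend2=2
```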

References

HTTP Load Balancing

Summary

The above is a detailed explanation of nginx server installation and load balancing configuration on a Linux system, and I hope it is helpful. If you have any questions, please leave a message and I will reply in time. Thanks to everyone for their support of 123WORDPRESS.COM!

