Problems and solutions for deploying Nginx+KeepAlived cluster dual-active architecture in VMware

Preface

Nginx is used for load balancing at the front end or middle layer of the architecture. As traffic grows, the load balancer itself needs a high-availability setup, so keepalived is used to remove the single point of failure: once the active nginx node goes down, traffic quickly switches to the backup server.

Solutions to problems that may be encountered with VMware network configuration

  • Start the VMware DHCP Service and VMware NAT Service services (see the sketch below).
  • Enable network sharing on the host's network adapter, allow other networks to use the connection, save, and restart the virtual machine.
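
If the two services are stopped, they can be started from an elevated command prompt on the Windows host. A minimal sketch, assuming a default VMware Workstation installation where the services carry these display names:

net start "VMware DHCP Service"
net start "VMware NAT Service"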

Install

Node deployment

Node                     Address           Service
centos7_1                192.168.211.130   Keepalived + Nginx
centos7_2                192.168.211.131   Keepalived + Nginx
centos7_3                192.168.211.132   Redis Server
web1 (physical machine)  192.168.211.128   FastApi + Celery
web2 (physical machine)  192.168.211.129   FastApi + Celery

Web configuration

Start the Python HTTP server on web1

vim index.html

<html>
<body>
<h1>Web Svr 1</h1>
</body>
</html>

nohup python -m SimpleHTTPServer 8080 > running.log 2>&1 &

Start the Python HTTP server on web2

vim index.html

<html>
<body>
<h1>Web Svr 2</h1>
</body>
</html>

nohup python -m SimpleHTTPServer 8080 > running.log 2>&1 &
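
SimpleHTTPServer is the Python 2 module; if the web hosts run Python 3 instead, the equivalent command (an assumption, not part of the original setup) would be:

nohup python3 -m http.server 8080 > running.log 2>&1 &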

Turn off firewall

firewall-cmd --state
systemctl stop firewalld.service
systemctl disable firewalld.service

Browser access now works, and the two hosts show Web Svr 1 and Web Svr 2 respectively.
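
A quick command-line check against the two web hosts (a sketch, using the addresses and the port started above):

curl http://192.168.211.128:8080/
curl http://192.168.211.129:8080/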

Install Nginx on centos7_1 and centos7_2

First, configure the Alibaba Cloud source

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
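
Optionally, rebuild the yum cache so the new mirror takes effect (a standard follow-up step, not listed in the original):

yum clean all
yum makecache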

Install dependency packages

yum -y install gcc
yum install -y pcre pcre-devel
yum install -y zlib zlib-devel
yum install -y openssl openssl-devel

Download nginx and unzip it

wget http://nginx.org/download/nginx-1.8.0.tar.gz
tar -zxvf nginx-1.8.0.tar.gz

Install nginx

cd nginx-1.8.0
./configure --user=nobody --group=nobody --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_gzip_static_module --with-http_realip_module --with-http_sub_module --with-http_ssl_module
make
make install
cd /usr/local/nginx/sbin/
# Check the configuration file
./nginx -t
# Start nginx
./nginx

Open the nginx port in the firewall

firewall-cmd --zone=public --add-port=80/tcp --permanent
systemctl restart firewalld.service

At this point, visiting 192.168.211.130 and 192.168.211.131 shows the nginx welcome page.

Create nginx startup file

Create the nginx startup script in the init.d folder so that nginx is started automatically by the init process each time the server boots.

cd /etc/init.d/
vim nginx

#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig: - 85 15
# description: nginx is an HTTP(S) server, HTTP(S) reverse \
#              proxy and IMAP/POP3 proxy server
# processname: nginx
# config: /etc/nginx/nginx.conf
# pidfile: /var/run/nginx.pid
# user: nginx

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/local/nginx/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"

lockfile=/var/run/nginx.lock

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
  $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac

Register the script with chkconfig so that nginx starts on boot; enter the following commands in sequence

chkconfig --add nginx
chkconfig --level 345 nginx on
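
To confirm the registration, you can list the runlevels for the service (a quick check, not part of the original sequence):

chkconfig --list nginx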

Add execute permissions to this file

chmod +x nginx 
ls

functions netconsole network nginx README

Start Nginx service

service nginx start
service nginx status
service nginx reload

Nginx reverse proxy and load balancing (centos7_1)

Back up the nginx.conf configuration file and strip out the commented lines

cd /usr/local/nginx/conf/
mv nginx.conf nginx.conf.bak
egrep -v '^#' nginx.conf.bak
egrep -v '^#|^[ ]*#' nginx.conf.bak
egrep -v '^#|^[ ]*#|^$' nginx.conf.bak 
egrep -v '^#|^[ ]*#|^$' nginx.conf.bak >> nginx.conf
cat nginx.conf

The output is as follows

worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    server {
        listen 80;
        server_name localhost;
        location / {
            root html;
            index index.html index.htm;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

Reload nginx configuration

# Test whether the configuration file is valid
./sbin/nginx -t
# Reload the nginx configuration
./sbin/nginx -s reload

Configure nginx reverse proxy and load balancing

worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    
    # websvr server cluster (also called load balancing pool)	
    upstream websvr {
        server 192.168.211.128:8001 weight=1;
        server 192.168.211.129:8001 weight=2;
    }
	
    server {
        listen 80;
        # Specify the IP address or domain name; separate multiple values with spaces
        server_name 192.168.211.130;
        location / {
            # Forward all requests to the websvr upstream for processing
            proxy_pass http://websvr;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

Now reload nginx

sbin/nginx -s reload

The upstream name websvr is arbitrary; choose something that describes what the servers do. In other words, adding an upstream block and a proxy_pass directive pointing to it is all that is needed for load balancing.

Now when you visit 192.168.211.130, the page alternates between Web Svr 1 and Web Svr 2. The backend is selected according to its weight: the larger the weight, the more often that server is chosen, so refreshing repeatedly shows Web Svr 2 roughly twice for every appearance of Web Svr 1, as the sketch below illustrates.
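
A small sketch to observe the weighted distribution from the command line, assuming the upstream above is active:

# Send six requests through the load balancer and print which backend answered
for i in $(seq 1 6); do
    curl -s http://192.168.211.130/ | grep '<h1>'
done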

So far this is still not highly available. Load balancing handles single-point failures at the web tier, but if the nginx service itself fails the whole system becomes essentially inaccessible, so multiple Nginx servers are needed to guarantee availability.

Multiple Nginx instances working together: Nginx high availability (dual-machine master/backup mode)

Add an nginx service on the 131 server (centos7_2) exactly as before; only nginx.conf needs to be adjusted

worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    upstream websvr {
        server 192.168.211.128:8001 weight=1;
        server 192.168.211.129:8001 weight=2;
    }

    server {
        listen 80;
        server_name 192.168.211.131;
        location / {
            proxy_pass http://websvr;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

# Reload nginx
sbin/nginx -s reload

Visiting http://192.168.211.130/ and http://192.168.211.131/ now gives the same result.

The two Nginx servers have different IPs, so how do we make them work together as a single entry point? This is where keepalived comes in.

Install keepalived on both CentOS nodes

yum install keepalived pcre-devel -y

Configure keepalived

Back up the default configuration on both nodes

cp /etc/keepalived/keepalived.conf keepalived.conf.bak

centos_1 Keepalived-MASTER

[root@localhost keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
    script_user root
    enable_script_security
}

vrrp_script chk_nginx {
    # Monitoring script that checks whether the nginx service is running normally
    script "/etc/keepalived/chk_nginx.sh"
    # Run the check every 10 seconds
    interval 10
    # Priority change caused by the script result: if the check fails (non-zero return), lower the priority by 5
    # weight -5
    # A check counts as a real failure only after 2 consecutive failures
    # fall 2
    # One successful check marks the service healthy again; it does not change the priority
    # rise 1
}

vrrp_instance VI_1 {
    # Keepalived role: MASTER on the primary node, BACKUP on the standby
    state MASTER
    # Interface monitored for HA; on CentOS 7 use `ip addr` to find it (ens33 here)
    interface ens33
    # virtual_router_id must be identical on master and backup; any value between 1 and 255
    virtual_router_id 51
    # Priority: within the same vrrp_instance the MASTER must be higher than the BACKUP;
    # after the MASTER recovers it automatically takes over again
    priority 100
    # VRRP advertisement interval in seconds; if no advertisement is seen, the node is considered down and failover occurs
    advert_int 1
    # Authentication type and password; master and backup must match
    authentication {
        # VRRP authentication type: PASS or AH
        auth_type PASS
        # The password must be the same on both servers for them to communicate
        auth_pass 1111
    }
    track_script {
        # Reference the vrrp_script defined above; keepalived runs it periodically
        chk_nginx
    }
    virtual_ipaddress {
        # VRRP HA virtual address (VIP); add more lines here for multiple VIPs
        192.168.211.140
    }
}

Send the configuration file to node 131

scp /etc/keepalived/keepalived.conf 192.168.211.131:/etc/keepalived/keepalived.conf

On node 131, only these two lines need to change:

state BACKUP
priority 90

The keepalived monitoring script chk_nginx.sh

Create the script that keepalived executes

vi /etc/keepalived/chk_nginx.sh

#!/bin/bash
# Count the running nginx processes and store the result in counter
counter=$(ps -C nginx --no-header | wc -l)
# If there is no nginx process, the count is 0
if [ $counter -eq 0 ]; then
    # Try to start nginx
    echo "Keepalived Info: Try to start nginx" >> /var/log/messages
    /usr/local/nginx/sbin/nginx
    sleep 3
    if [ $(ps -C nginx --no-header | wc -l) -eq 0 ]; then
        # Log the failure to the system messages
        echo "Keepalived Info: Unable to start nginx" >> /var/log/messages
        # If nginx still cannot be started, stop keepalived so the VIP fails over
        # killall keepalived
        # Or: /etc/init.d/keepalived stop
        exit 1
    else
        echo "Keepalived Info: Nginx service has been restored" >> /var/log/messages
        exit 0
    fi
else
    # nginx is running normally
    echo "Keepalived Info: Nginx detection is normal" >> /var/log/messages
    exit 0
fi

Next, grant execution permissions and test

chmod +x chk_nginx.sh
./chk_nginx.sh

Restart keepalived on both nodes

systemctl restart keepalived
systemctl status keepalived
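
To see which node currently holds the VIP, check the interface addresses (assuming interface ens33 and VIP 192.168.211.140 as configured above):

ip addr show ens33 | grep 192.168.211.140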

At this point, accessing 192.168.211.140 also displays the page normally, which means the VIP is bound successfully. You can watch the script's output in /var/log/messages in real time with the following command:

tail -f /var/log/messages 

# When nginx is stopped:
Keepalived Info: Try to start nginx
Keepalived Info: Nginx service has been restored
# When nginx is running normally:
Keepalived Info: Nginx detection is normal

The check script returns 0 when nginx is healthy and 1 when it cannot be started. However, in this configuration keepalived does not use that return value to trigger the failover (the weight/fall/rise options are commented out); the VIP is released and moves to the other server only when the keepalived service itself stops, which is why the script ends keepalived when nginx cannot be restarted. A simple failover test is sketched below.
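
A minimal failover test, a sketch using the addresses and interface from the setup above:

# On the master (192.168.211.130): simulate a complete node failure
systemctl stop keepalived
# On the backup (192.168.211.131): the VIP should now be bound here
ip addr show ens33 | grep 192.168.211.140
# From any client: the site should still answer through the VIP
curl http://192.168.211.140/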

References

https://www.jianshu.com/p/7e8e61d34960
https://www.cnblogs.com/zhangxingeng/p/10721083.html

This concludes this article on deploying an Nginx + Keepalived dual-active cluster in VMware.
