Keepalived+Nginx+Tomcat to achieve a high availability Web cluster

1. Nginx installation process

1. Download the Nginx installation package and install the dependency packages

(1) Install the gcc compiler environment

```
yum -y install gcc
```

(2) Install PCRE

```
yum -y install pcre-devel
```

(3) Install zlib

```
yum -y install zlib-devel
```

(4) Install Nginx

Change to the directory where the Nginx source was unpacked and run the configure, compile and install commands:

```
[root@localhost nginx-1.12.2]# pwd
/usr/local/nginx/nginx-1.12.2
[root@localhost nginx-1.12.2]# ./configure && make && make install
```

(5) Start Nginx

After the installation is complete, locate the installation directory:

```
[root@localhost nginx-1.12.2]# whereis nginx
nginx: /usr/local/nginx
```

Enter the Nginx sbin subdirectory and start Nginx:

```
[root@localhost sbin]# ls
nginx
[root@localhost sbin]# ./nginx &
[1] 5768
```

Check whether Nginx is started by looking at its processes:

```
[root@localhost sbin]# ps -aux|grep nginx
root    5769 0.0 0.0  20484  608 ?     Ss 14:03 0:00 nginx: master process ./nginx
nobody  5770 0.0 0.0  23012 1620 ?     S  14:03 0:00 nginx: worker process
root    5796 0.0 0.0 112668  972 pts/0 R+ 14:07 0:00 grep --color=auto nginx
[1]+ Done ./nginx
```

At this point, Nginx is installed and started successfully.

(6) Nginx service script and boot configuration

Create the Nginx service script (pay attention to the Nginx installation path; adjust it to match your own installation):

```
[root@localhost init.d]# vim /etc/rc.d/init.d/nginx
```

```
#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description: Nginx is an HTTP(S) server, HTTP(S) reverse \
#              proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /etc/nginx/nginx.conf
# config:      /usr/local/nginx/conf/nginx.conf
# pidfile:     /usr/local/nginx/logs/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/local/nginx/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"

[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx

lockfile=/var/lock/subsys/nginx

make_dirs() {
    # make required directories
    user=`$nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
    if [ -z "`grep $user /etc/passwd`" ]; then
        useradd -M -s /bin/nologin $user
    fi
    options=`$nginx -V 2>&1 | grep 'configure arguments:'`
    for opt in $options; do
        if [ `echo $opt | grep '.*-temp-path'` ]; then
            value=`echo $opt | cut -d "=" -f 2`
            if [ ! -d "$value" ]; then
                # echo "creating" $value
                mkdir -p $value && chown -R $user $value
            fi
        fi
    done
}

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    #configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    #configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac
```

Make the script executable and register it to start at boot:

```
[root@localhost init.d]# chmod -R 777 /etc/rc.d/init.d/nginx
[root@localhost init.d]# chkconfig --add nginx
[root@localhost init.d]# chkconfig nginx on
```

Start Nginx with the script:

```
[root@localhost init.d]# ./nginx start
```

Add Nginx to the system environment variables:

```
[root@localhost init.d]# echo 'export PATH=$PATH:/usr/local/nginx/sbin' >> /etc/profile && source /etc/profile
```

Nginx can now be managed as a service [service nginx (start|stop|restart)]:

```
[root@localhost init.d]# service nginx start
Starting nginx (via systemctl):                            [  OK  ]
```

Tips: quick commands: service nginx (start|stop|restart)

2. Keepalived installation and configuration

1. Install the Keepalived dependencies

```
yum install -y popt-devel
yum install -y ipvsadm
yum install -y libnl*
yum install -y libnf*
yum install -y openssl-devel
```

2. Compile and install Keepalived

```
[root@localhost keepalived-1.3.9]# ./configure
[root@localhost keepalived-1.3.9]# make && make install
```

3. Install Keepalived as a system service

Create the configuration directory and manually copy the default configuration files to the default paths:

```
[root@localhost etc]# mkdir /etc/keepalived
[root@localhost etc]# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@localhost etc]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
```

Create a soft link for the keepalived binary:

```
[root@localhost sysconfig]# ln -s /usr/local/keepalived/sbin/keepalived /usr/sbin/
```

Set Keepalived to start automatically at boot:

```
[root@localhost sysconfig]# chkconfig keepalived on
NOTE: Forwarding request to 'systemctl enable keepalived.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service
```

Start the Keepalived service:

```
[root@localhost keepalived]# keepalived -D -f /etc/keepalived/keepalived.conf
```

Shut down the Keepalived service:

```
[root@localhost keepalived]# killall keepalived
```

3. Cluster Planning and Construction

Environment preparation:

- CentOS 7.2
- Keepalived Version 1.4.0 - December 29, 2017
- Nginx Version: nginx/1.12.2
- Tomcat Version: 8

Cluster planning checklist:
| Virtual Machine | IP | Description |
| --- | --- | --- |
| Keepalived+Nginx1 [Master] | 192.168.43.101 | Nginx Server 01 |
| Keepalived+Nginx2 [Backup] | 192.168.43.102 | Nginx Server 02 |
| Tomcat01 | 192.168.43.103 | Tomcat Web Server 01 |
| Tomcat02 | 192.168.43.104 | Tomcat Web Server 02 |
| VIP | 192.168.43.150 | Virtual (floating) IP |
1. Modify the Tomcat default welcome page so that it is easy to identify which Web node handled a request
Edit ROOT/index.jsp on the TomcatServer01 node so that it prints the Tomcat IP address and the X-NGINX request header set by Nginx; that is, modify node 192.168.43.103 as follows:
<div id="asf-box"> <h1>${pageContext.servletContext.serverInfo}(192.168.224.103)<%=request.getHeader("X-NGINX")%></h1> </div>
Edit ROOT/index.jsp on the TomcatServer02 node in the same way, printing the Tomcat IP address and the X-NGINX header; that is, modify node 192.168.43.104 as follows:
<div id="asf-box"> <h1>${pageContext.servletContext.serverInfo}(192.168.224.104)<%=request.getHeader("X-NGINX")%></h1> </div>
2. Start the Tomcat services and visit each node to check the IP shown on the page. At this point Nginx has not been started, so the X-NGINX request header is still empty.
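A quick command-line check of both Tomcat nodes might look like the following sketch. It assumes Tomcat was unpacked under /usr/local/tomcat8 (a hypothetical path, adjust to your own installation) and that curl is available.

```
# Hypothetical path: adjust /usr/local/tomcat8 to your actual Tomcat directory.
# Run on each Tomcat node (192.168.43.103 and 192.168.43.104).
/usr/local/tomcat8/bin/startup.sh

# From any machine in the network, fetch the welcome page of each node.
# The IP printed in the <h1> shows which Tomcat answered; the X-NGINX value
# is still empty because Nginx is not running yet.
curl -s http://192.168.43.103:8080/ | grep '<h1>'
curl -s http://192.168.43.104:8080/ | grep '<h1>'
```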
3. Configure Nginx proxy information
1. Configure the Master node [192.168.43.101] proxy information
upstream tomcat { server 192.168.43.103:8080 weight=1; server 192.168.43.104:8080 weight=1; } server{ location / { proxy_pass http://tomcat; proxy_set_header X-NGINX "NGINX-1"; } #......Others omitted}
2. Configure the Backup node [192.168.43.102] proxy information
upstream tomcat { server 192.168.43.103:8080 weight=1; server 192.168.43.104:8080 weight=1; } server{ location / { proxy_pass http://tomcat; proxy_set_header X-NGINX "NGINX-2"; } #......Others omitted}
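After editing the configuration on either node, the syntax can be verified before (re)starting Nginx. A minimal sketch, assuming the installation paths used above and that nginx is on the PATH thanks to the earlier profile change:

```
# Run on each Nginx node after editing /usr/local/nginx/conf/nginx.conf
nginx -t                  # syntax check of the configuration
service nginx restart     # or reload a running instance with: nginx -s reload
```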
3. Start the Master node Nginx service
```
[root@localhost init.d]# service nginx start
Starting nginx (via systemctl):                            [  OK  ]
```
Now, when you visit 192.168.43.101, the page alternates between the Tomcat nodes 103 and 104, indicating that Nginx is load-balancing requests across the two Tomcat servers.
4. Similarly, configure the Backup node [192.168.43.102] Nginx information. After starting its Nginx and visiting 192.168.43.102, you can see that the Backup node also balances the load across both Tomcat servers.
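The round-robin behaviour can also be verified from the shell. A minimal sketch, assuming curl is installed; it extracts the backend IP printed on the welcome page:

```
# Send several requests to each Nginx node; the backend IP should alternate
# between 192.168.43.103 and 192.168.43.104.
for i in 1 2 3 4; do
    curl -s http://192.168.43.101/ | grep -o '192\.168\.43\.10[34]'
done
for i in 1 2 3 4; do
    curl -s http://192.168.43.102/ | grep -o '192\.168\.43\.10[34]'
done
```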
4. Configure Keepalived script information
1. On both the Master and Backup nodes, add a check_nginx.sh script in the /etc/keepalived directory to detect whether Nginx is alive, and add the keepalived.conf configuration file.
The content of check_nginx.sh is as follows:
```
#!/bin/bash
# Time variable, used when writing the log
d=`date --date today +%Y%m%d_%H:%M:%S`
# Count the number of nginx processes
n=`ps -C nginx --no-heading|wc -l`
# If the count is 0, start nginx and count again.
# If it is still 0, nginx cannot be started, so stop keepalived
# to let the VIP fail over.
if [ $n -eq "0" ]; then
    /etc/rc.d/init.d/nginx start
    n2=`ps -C nginx --no-heading|wc -l`
    if [ $n2 -eq "0" ]; then
        echo "$d nginx down, keepalived will stop" >> /var/log/check_ng.log
        systemctl stop keepalived
    fi
fi
```
After adding it, grant execute permission to check_nginx.sh so that Keepalived can run the script.
```
[root@localhost keepalived]# chmod -R 777 /etc/keepalived/check_nginx.sh
```
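The script can be exercised by hand before wiring it into Keepalived. A small sketch (run as root, using the init script path from above):

```
# Stop nginx, run the check script, then confirm nginx was brought back up.
service nginx stop
bash /etc/keepalived/check_nginx.sh
ps -C nginx --no-heading | wc -l    # a non-zero count means the restart worked
```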
2. Add the keepalived.conf file in the /etc/keepalived directory of the Master node. The specific information is as follows:
```
vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"  # script that checks the nginx process
    interval 2                               # check interval in seconds
    weight -20                               # priority change when the check fails
}
global_defs {
    notification_email {
        # email reminders can be added here
    }
}
vrrp_instance VI_1 {
    state MASTER              # MASTER on this node, BACKUP on the backup machine
    interface ens33           # NIC the instance is bound to (check ip addr and adjust to your own NIC)
    virtual_router_id 51      # must be the same on all nodes of the same instance
    mcast_src_ip 192.168.43.101
    priority 250              # MASTER has a higher priority than BACKUP, e.g. BACKUP is 240
    advert_int 1              # interval in seconds between VRRP advertisements from MASTER to BACKUP
    nopreempt                 # non-preemptive mode
    authentication {          # authentication between master and backup
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_nginx             # must match the vrrp_script block name above
    }
    virtual_ipaddress {       # VIP; multiple virtual IPs can be listed, one per line
        192.168.43.150
    }
}
```
3. Add the keepalived.conf configuration file in the /etc/keepalived directory of the Backup node
The information is as follows:
```
vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"  # script that checks the nginx process
    interval 2                               # check interval in seconds
    weight -20                               # priority change when the check fails
}
global_defs {
    notification_email {
        # email reminders can be added here
    }
}
vrrp_instance VI_1 {
    state BACKUP              # BACKUP on this node, MASTER on the master machine
    interface ens33           # NIC the instance is bound to (check with ip addr)
    virtual_router_id 51      # must be the same on all nodes of the same instance
    mcast_src_ip 192.168.43.102
    priority 240              # lower than the MASTER priority of 250
    advert_int 1              # interval in seconds between VRRP advertisements from MASTER to BACKUP
    nopreempt                 # non-preemptive mode
    authentication {          # authentication between master and backup
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_nginx             # must match the vrrp_script block name above
    }
    virtual_ipaddress {       # VIP; multiple virtual IPs can be listed, one per line
        192.168.43.150
    }
}
```
Tips: in the two configuration files, virtual_router_id and auth_pass must be identical, interface must be the real NIC name shown by ip addr, and the Master's priority must be higher than the Backup's.
5. Cluster High Availability (HA) Verification
Step 1 Start the Keepalived and Nginx services on the Master machine
```
[root@localhost keepalived]# keepalived -D -f /etc/keepalived/keepalived.conf
[root@localhost keepalived]# service nginx start
```
Check the Nginx processes:
```
[root@localhost keepalived]# ps -aux|grep nginx
root    6390 0.0 0.0  20484  612 ?     Ss 19:13 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nobody  6392 0.0 0.0  23008 1628 ?     S  19:13 0:00 nginx: worker process
root    6978 0.0 0.0 112672  968 pts/0 S+ 20:08 0:00 grep --color=auto nginx
```
Check the Keepalived processes:
```
[root@localhost keepalived]# ps -aux|grep keepalived
root 6402 0.0 0.0  45920 1016 ?     Ss 19:13 0:00 keepalived -D -f /etc/keepalived/keepalived.conf
root 6403 0.0 0.0  48044 1468 ?     S  19:13 0:00 keepalived -D -f /etc/keepalived/keepalived.conf
root 6404 0.0 0.0  50128 1780 ?     S  19:13 0:00 keepalived -D -f /etc/keepalived/keepalived.conf
root 7004 0.0 0.0 112672  976 pts/0 S+ 20:10 0:00 grep --color=auto keepalived
```
Use ip addr to check the virtual IP binding. If 192.168.43.150 appears on ens33, the VIP is bound to the Master node.
```
[root@localhost keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:91:bf:59 brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.101/24 brd 192.168.43.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.43.150/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::9abb:4544:f6db:8255/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::b0b3:d0ca:7382:2779/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::314f:5fe7:4e4b:64ed/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff
```
Step 2 Start the Nginx and Keepalived services on the Backup node and check that they started. The virtual IP should not appear on the Backup node while the Master is healthy; if it does, there is a problem with the Keepalived configuration, a situation known as split-brain.
```
[root@localhost keepalived]# clear
[root@localhost keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:14:df:79 brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.102/24 brd 192.168.43.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::314f:5fe7:4e4b:64ed/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff
```
Step 3 Verify the service
Open http://192.168.43.150 in a browser and force-refresh it several times. The page alternates between 103 and 104 and shows NGINX-1, indicating that the Master node is forwarding the web requests.
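Instead of the browser, a small polling loop (a sketch, assuming curl is available) can show which Nginx instance is answering on the VIP; it is also handy for watching the failover in the next steps:

```
# Poll the VIP once per second and print the X-NGINX marker rendered in the page.
# Expect NGINX-1 while the Master holds the VIP, NGINX-2 after a failover.
while true; do
    curl -s --max-time 2 http://192.168.43.150/ | grep -o 'NGINX-[12]'
    sleep 1
done
```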
Step 4 Stop the Keepalived and Nginx services on the Master and access the Web service to observe the failover
```
[root@localhost keepalived]# killall keepalived
[root@localhost keepalived]# service nginx stop
```
Force-refresh 192.168.43.150 again: the page still alternates between 103 and 104 but now shows NGINX-2, and the VIP has moved to 192.168.43.102, proving that the service automatically switched to the Backup node.
Step 5 Start the Master Keepalived service and Nginx service
Verifying once more, the VIP has been reclaimed by the Master; the page still alternates between 103 and 104 and now shows NGINX-1 again.
4. Keepalived preemptive mode and non-preemptive mode
Keepalived HA can run in preemptive mode or non-preemptive mode. In preemptive mode, when the MASTER recovers from a failure it takes the VIP back from the BACKUP node. In non-preemptive mode, once the BACKUP has been promoted to MASTER, the recovered node does not take the VIP back.
Non-preemptive mode configuration:
1> Add the nopreempt directive to the vrrp_instance block on both nodes, so that neither node competes for the VIP.
2> Set the state of both nodes to BACKUP. After both Keepalived instances start, they are in the BACKUP state by default; once they exchange VRRP multicast advertisements, a MASTER is elected based on priority. Because nopreempt is configured on both, the original MASTER will not take the VIP back after recovering from a failure, which avoids the brief service interruption a VIP switch can cause. See the sketch below.
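A minimal sketch of the relevant keepalived.conf lines for non-preemptive mode, based on the instance used above; only the changed directives are shown, the rest of each configuration stays as before:

```
# On the node that previously used "state MASTER" (192.168.43.101):
vrrp_instance VI_1 {
    state BACKUP          # both nodes start as BACKUP in non-preemptive mode
    priority 250          # the higher priority still decides the initial MASTER
    nopreempt             # do not take the VIP back after recovering
    # ... interface, virtual_router_id, authentication, track_script, VIP unchanged
}

# On the other node (192.168.43.102):
vrrp_instance VI_1 {
    state BACKUP
    priority 240
    nopreempt
    # ... the rest unchanged
}
```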
The above is the full content of this article. I hope it will be helpful for everyone’s study. I also hope that everyone will support 123WORDPRESS.COM.