In the previous post we covered using Nginx and httpd to reverse-proxy a backend Tomcat service; for a review, see https://www.jb51.net/article/191277.htm. Today, let's talk about using Nginx and httpd to load-balance a Tomcat cluster, and the points that need attention.

The previous demonstrations were all configured against a single Tomcat instance. In production, however, a single Tomcat cannot sustain large-scale traffic. At that point we need to cluster multiple Tomcat instances to provide service, and a cluster needs a scheduler in front of it to dispatch client requests. Commonly used schedulers include nginx, httpd, haproxy, lvs, and so on. There is no essential difference between configuring these schedulers to load-balance Tomcat and load-balancing any other web server: we can simply treat Tomcat as a web server.

1. Environment preparation

Use docker to start two Tomcat containers as backend Tomcat servers, and map the web directories of the two containers to /tomcat/doc/tomcat1 and /tomcat/doc/tomcat2 on the host via storage volumes.

Tip: Two Tomcat containers are now running, tc1 and tc2, with each container's /usr/local/tomcat/webapps/myapp mapped to /tomcat/doc/tomcat1 and /tomcat/doc/tomcat2 on the host. This way we can drop web content into these host directories to deploy pages to each Tomcat's default virtual host.

Edit the homepage file of each container.

Tip: The above provides a test homepage for tomcat1 and tomcat2 respectively.

Now access tomcat1 and tomcat2 separately to verify that each serves its own homepage.

Tip: Both tomcat1 and tomcat2 are reachable, so the backend Tomcat environment is ready. Next we configure nginx to load-balance them.
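The container setup described above might be created with commands along these lines. This is a sketch under stated assumptions: the image tag, container names (tc1, tc2), and the exact volume flags are assumptions, since the original post showed them only in screenshots.

```shell
# Assumed commands; image tag and container names are illustrative only.
docker run -d --name tc1 \
  -v /tomcat/doc/tomcat1:/usr/local/tomcat/webapps/myapp \
  tomcat:8.5

docker run -d --name tc2 \
  -v /tomcat/doc/tomcat2:/usr/local/tomcat/webapps/myapp \
  tomcat:8.5

# One distinct test homepage per backend, so the scheduler's choice is visible:
echo '<h1>tomcat1</h1>' > /tomcat/doc/tomcat1/index.html
echo '<h1>tomcat2</h1>' > /tomcat/doc/tomcat2/index.html
```

Because each backend returns a different page, a plain curl against the proxy later makes the scheduling decision directly observable.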
2. Configure nginx to load-balance Tomcat

Tip: The above configuration places the two Tomcat containers into an upstream group named tcsevs, then reverse-proxies / to that group. With no algorithm specified, this configuration defaults to round-robin.

Verification: access port 80 on the host and check whether the homepages of both backend Tomcat containers are returned in turn. First check the syntax of the nginx configuration file, then start nginx and access port 80 on the host to see whether the pages provided by the backend Tomcats are reachable.

Tip: Accessing port 80 on the host reaches the backend Tomcat servers normally, and the default round-robin scheduling is visible. But there is a problem: when the same user accesses port 80 repeatedly, the session ID in the response differs each time, meaning nginx is not tracking the user's state. The reason is that HTTP requests are stateless. To let the service keep a user's state, we can schedule on nginx based on source IP. What does that mean? For the same source IP address, nginx dispatches the request to the same backend server, so the same user's requests, and therefore their session state, always land on the same backend.

Nginx session persistence based on source IP

Tip: Both ip_hash and hash $remote_addr hash the source IP address, then take the result modulo the total weight; the request is dispatched to the node the result falls on. What does this mean? As above, there are two backend servers, each with weight 1, so their weights sum to 2. The client's IP address is hashed (ip_hash uses the first three octets), and the resulting value is taken modulo the weight sum, so here the modulo result is either 0 or 1.
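The nginx configuration described above might look like the following sketch. The group name tcsevs is from the post; the backend addresses and ports are assumptions, since the original configuration appeared only in a screenshot.

```nginx
# Backend addresses are assumed; point them at your tc1/tc2 containers.
upstream tcsevs {
    # ip_hash;                        # uncomment to pin clients by source IP
    server 172.17.0.2:8080 weight=1;  # tomcat1
    server 172.17.0.3:8080 weight=1;  # tomcat2
}

server {
    listen 80;
    location / {
        proxy_pass http://tcsevs;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

With ip_hash commented out, requests round-robin between the two members; uncommenting it switches to the source-IP scheduling discussed next.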
If the modulo result is 1, nginx dispatches the request to one of the two backends (say tomcatB); if it is 0, to the other (tomcatA), according to its internal mapping.

Test: restart nginx and access port 80 on the host to see whether all requests are now dispatched to the same backend server.

Tip: Access to port 80 on the host is no longer round-robined; requests are always dispatched to tomcatA. But when we access port 80 via 127.0.0.1, the request goes to tomcatB. This is because ip_hash considers only the first 24 bits of an IPv4 address: if two clients share the same first three octets, nginx treats them as being in the same LAN and dispatches their requests to the same backend server. (Strictly speaking, ip_hash is the directive that uses only the first three octets; hash $remote_addr hashes the full address string, but the principle is the same.) Besides hashing the source address, we can hash other request headers on the same principle: the header value is hashed and taken modulo the weight sum, and requests with the same result are dispatched to the same backend server. It is on this principle that a client is bound to a backend server, achieving session binding.

3. Configure httpd to load-balance Tomcat

When using httpd as a load balancer, first confirm that proxy_module, proxy_http_module, and proxy_balancer_module are enabled. If you want to use AJP, also confirm that proxy_ajp_module is enabled, along with the scheduling-algorithm modules: lbmethod_byrequests_module, lbmethod_bytraffic_module, and lbmethod_bybusyness_module. For the scheduling algorithm, enabling any one of these modules is sufficient, and the same applies whether you proxy over HTTP or AJP: enable what you need; unused modules do no harm.
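The hash-and-modulo scheme described above can be illustrated with a short sketch. This is not nginx's actual hash function; it only demonstrates the idea that hashing the first three octets of the source IP, modulo the total weight, pins every client in a /24 to one backend.

```shell
# Illustrative only: mimic ip_hash-style scheduling for two
# equally weighted backends (total weight = 1 + 1 = 2).
pick_backend() {
    prefix=${1%.*}                                       # first three octets, e.g. 192.168.0
    h=$(printf '%s' "$prefix" | cksum | cut -d ' ' -f1)  # deterministic numeric hash
    slot=$(( h % 2 ))                                    # modulo the weight sum: 0 or 1
    if [ "$slot" -eq 0 ]; then echo tomcatA; else echo tomcatB; fi
}

pick_backend 192.168.0.10   # same /24 prefix as below ...
pick_backend 192.168.0.99   # ... so the same backend is chosen every time
```

Since only the prefix enters the hash, 192.168.0.10 and 192.168.0.99 always map to the same backend, which is exactly the behavior observed in the test above.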
Tip: All the modules we need are enabled.

Configure httpd to load-balance the backend Tomcat servers. (The configuration appeared as a screenshot in the original post and is not reproduced here.)

Stop nginx, check the syntax of the httpd configuration file, and start httpd if there is no problem. Access the service provided by httpd to see whether the backend Tomcat pages are reachable.

Tip: Round-robin works just as it did with nginx.

Configure httpd to use cookies for session stickiness to the backend Tomcat. (Again, the configuration appeared as a screenshot in the original post.)

Test: check the syntax of httpd's configuration file; if there is no problem, restart httpd, then access it and observe what changes. Use curl to simulate a first visit to the httpd server and inspect the response headers.

Tip: When accessing the httpd server, the response now carries an extra Set-Cookie header whose value is the key and value we configured earlier. The Set-Cookie header tells the browser to carry that value back in a Cookie header on its next request, so the server can determine which client sent the request and decide, from the cookie value in the client's request, which backend to dispatch it to.

Use a browser to access the site and check whether subsequent requests carry the Set-Cookie value from the first visit.

Tip: On the browser's first visit, the server adds a Set-Cookie header to the response; its value is ROUTEID, the route of the backend server that produced the current response.

Tip: The client carries the value from the earlier Set-Cookie header in the Cookie request header.
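The balancer and sticky-session configuration described above might look like the following sketch. The Set-Cookie line follows the pattern in Apache's mod_proxy_balancer documentation and requires mod_headers; the backend addresses and route names are assumptions, since the original configuration was shown only in screenshots.

```apache
# Backend addresses and route names are assumed; adjust to your containers.
# The Header line issues the ROUTEID cookie on a client's first visit.
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

<Proxy "balancer://tcsevs">
    BalancerMember "http://172.17.0.2:8080" route=TomcatA loadfactor=1
    BalancerMember "http://172.17.0.3:8080" route=TomcatB loadfactor=1
    ProxySet lbmethod=byrequests stickysession=ROUTEID
</Proxy>

ProxyPass        "/" "balancer://tcsevs/"
ProxyPassReverse "/" "balancer://tcsevs/"
```

For plain round-robin without stickiness, omit the Header line and the stickysession parameter; lbmethod=byrequests alone gives the round-robin behavior observed earlier.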
Now, when httpd receives a client request, it can determine which backend server to dispatch it to based on the key named by stickysession. In this way, as long as the client's cookie remains unchanged, every access to the server effectively tells it, via the Cookie header, which backend to use.

Use curl to simulate a client request that carries a cookie.

Tip: When we use curl to simulate a client that already carries a cookie, the response no longer includes the Set-Cookie header (meaning the one set by our server configuration), and when we carry cookies with different ROUTEID values, httpd dispatches us to different backend servers according to the ROUTEID we present. Proxying the backend Tomcat over AJP is configured the same way as over HTTP; the only differences are that the backend protocol changes from http to ajp and the backend port changes to the port the AJP connector listens on. By default, Tomcat's AJP connector listens on port 8009.

Configure the httpd balancer management page

Tip: The above configuration enables httpd's management page and binds it to the URI /manager-page. That URI is excluded from proxying, and access to it is allowed only from the host 192.168.0.232; all other hosts have no permission, including the server itself.

Verification: from a host other than 192.168.0.232, access 192.168.0.22/manager-page and see whether it loads.

Tip: Accessing from 192.168.0.22 itself returns a 403, no permission. Now access from 192.168.0.232 and see whether the management page loads.
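The management-page configuration described above might look like the following sketch; the balancer name tcsevs and the allowed IP come from the post, while the rest is a minimal sketch based on mod_proxy_balancer's balancer-manager handler.

```apache
# Bind the balancer-manager handler to /manager-page, allow one host only.
<Location "/manager-page">
    SetHandler balancer-manager
    Require ip 192.168.0.232
</Location>

# Exclude the management URI from the reverse proxy; this exclusion must
# appear before the general ProxyPass "/" rule to take effect.
ProxyPass "/manager-page" "!"
```

The "!" target tells httpd not to proxy this path, so requests for /manager-page are handled locally by the balancer-manager handler instead of being forwarded to a backend.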
Tip: The httpd management page loads normally in a browser on 192.168.0.232.

Dynamically modify the weight of tomcat1

Tip: Because this page can dynamically change the properties of the backend servers, access to it should normally be restricted.

This is the end of this article on configuring Nginx/httpd load balancing for Tomcat.