Reverse Proxy

A reverse proxy receives a user's access request on a proxy server, re-issues the request to an internal server on the user's behalf, and finally returns the internal server's response to the user. To the outside world, the proxy server appears to be the real server, while the internal server only ever sees the proxy rather than the real visitor.

Why use a reverse proxy
Reverse proxy example

Environment description: suppose there are two servers, A and B. Server A provides the web resources and is reachable only from the intranet. Server B has two network cards: one on the same intranet as server A, the other on the external network. User C cannot access server A directly, but C's requests can be relayed through server B.
Configuring virtual hosts

Edit the virtual host configuration file on the moli-04 machine. The content is as follows:

    [root@moli-04 extra]$ cat blog.syushin.org.conf
    server {
        listen 80;
        server_name blog.syushin.org;

        location / {
            proxy_pass http://192.168.30.7;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

Change the hosts file

On Windows, modify the hosts file and add the entry:

    192.168.93.129 blog.syushin.org

Browser testing

Visit 192.168.93.129 in the browser; if the page served by machine 05 appears, the configuration is successful.

Load Balancing

Load balancing function
When a load-balancing cluster is running, client access requests are generally distributed to a group of backend servers through one or more front-end load balancers.

Nginx load balancing

Strictly speaking, Nginx itself only acts as a reverse proxy (Nginx Proxy), but because this reverse proxying achieves the effect of a load-balancing cluster, Nginx load balancing can be regarded as a special kind of reverse proxy. The main components for implementing Nginx load balancing are:
Upstream module introduction

The ngx_http_upstream_module module supports proxy directives such as proxy_pass and fastcgi_pass; proxy_pass is the one used most. The upstream module lets nginx define one or more groups of node servers; when used, website requests are sent via proxy_pass to the corresponding node group.

Example: creating a node server pool

    upstream blog {
        server 192.168.30.5:80 weight=5;
        server 192.168.30.6:81 weight=10;
        server 192.168.30.7:82 weight=15;
    }

In addition to weight, status values such as down (take the node out of rotation) and backup (send traffic to the node only when the others are unavailable) can be set on a node server. Domain names can also be used in an upstream block:

    upstream blog2 {
        server www.syushin.com weight=5;
        server blog.syushin.org down;
        server blog.syushin.cc backup;
    }

Scheduling algorithms

rr, round-robin (the default, a static scheduling algorithm): distributes client requests to the different backend node servers one by one, in the order the requests arrive.

wrr, weighted round-robin (static scheduling algorithm): adds weights on top of rr. With this algorithm, the weight is proportional to the share of traffic: the larger the weight value, the more requests are forwarded.

    upstream pools {
        server 10.0.0.1 weight=1;
        server 10.0.0.2 weight=2;
    }

ip_hash (static scheduling algorithm): assigns each request according to the hash of the client IP. When a new request arrives, the client IP is hashed into a value; in subsequent requests, as long as the client IP hashes to the same value, the request is assigned to the same server.

    upstream blog_pool {
        ip_hash;
        server 192.168.30.5:80;
        server 192.168.30.6:8090;
    }

Note: when ip_hash is used, weight and backup are not allowed.

least_conn algorithm: distributes requests according to the number of connections to each backend server; the server with the fewest connections is assigned the next requests.
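To make the two main scheduling algorithms concrete, here is a minimal Python sketch of weighted round-robin and ip_hash selection. The pool, weights, and client IP are illustrative; note that nginx's real wrr uses a smoother interleaving than this naive expansion, and its ip_hash uses its own hash, so this only models the behavior (proportional distribution, sticky clients), not the exact order.

```python
import hashlib
from itertools import cycle

# Hypothetical pool mirroring the "pools" upstream example above.
POOL = {"10.0.0.1": 1, "10.0.0.2": 2}

def weighted_round_robin(pool):
    """Yield servers in proportion to their weights (naive wrr sketch)."""
    expanded = [server for server, weight in pool.items() for _ in range(weight)]
    return cycle(expanded)

def ip_hash(pool, client_ip):
    """Pin a client IP to one server via a stable hash (ip_hash sketch)."""
    servers = sorted(pool)
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

rr = weighted_round_robin(POOL)
print([next(rr) for _ in range(3)])  # → ['10.0.0.1', '10.0.0.2', '10.0.0.2']

# The same client IP always lands on the same server:
print(ip_hash(POOL, "203.0.113.9") == ip_hash(POOL, "203.0.113.9"))  # → True
```

With weights 1 and 2, server 10.0.0.2 receives twice as many requests per cycle, matching the "weight is proportional to user access" rule above.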
Besides the commonly used scheduling algorithms listed above there are many more, which are not listed here one by one.

The http_proxy_module module

http_proxy_module can forward requests to another server. In a reverse proxy, the location directive matches the specified URI, and requests matching that URI are handed via proxy_pass to the defined upstream node pool.

http_proxy module parameters
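The parameter list itself did not survive here; as a rough illustration, the sketch below ties the pieces together: a location proxying to the "blog" pool defined earlier, with a few commonly used http_proxy module directives. The timeout values are illustrative, not recommendations.

```nginx
location / {
    proxy_pass http://blog;                 # hand matching requests to the upstream pool
    proxy_set_header Host $host;            # pass the original Host header to the backend
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout 60s;              # max time to establish a backend connection
    proxy_read_timeout 60s;                 # max time between two reads from the backend
    proxy_send_timeout 60s;                 # max time between two writes to the backend
    proxy_next_upstream error timeout;      # retry the next node on error or timeout
}
```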
proxy_pass usage

Format: proxy_pass URL;

Here is an example:
The URL can be a domain name, an IP address, or a socket file. There are a few things to note about the proxy_pass configuration:
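The elided examples aside, the most important of these points is how a trailing URI on proxy_pass changes the forwarded path. The following illustrative snippet (not the original examples 1-4) shows the general rule from nginx's behavior: with no URI on proxy_pass, the backend receives the request URI unchanged; with a URI (even just "/"), the part matched by location is replaced by that URI.

```nginx
# For a request to /static/a.png:
location /static/ {
    proxy_pass http://192.168.30.7;    # no URI: backend receives /static/a.png
}

# For a request to /img/a.png:
location /img/ {
    proxy_pass http://192.168.30.7/;   # URI "/": /img/ is replaced, backend receives /a.png
}
```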
Example 2
Example 3
Example 4
If server_name is blog.syushin.com, then when requesting http://blog.syushin.com/uploa..., the request results of examples 1-4 above are:
Okay, that’s all for this article. I hope you will support 123WORDPRESS.COM in the future.