Background

As we all know, nginx is a high-performance web server, often used for load balancing and reverse proxying. The reverse proxy is the counterpart of the forward proxy. A forward proxy is a "proxy" in the everyday sense: for example, Google normally cannot be reached from inside China, so to access it we have to forward the request through a proxy layer. A forward proxy acts on behalf of the client (the user), while a reverse proxy acts on behalf of the server: after the user's request reaches nginx, nginx forwards it to the actual backend service and returns the result to the user. (Image from Wikipedia.)

Forward and reverse proxies are really defined from the user's perspective: a forward proxy stands in for the user when requesting a service, while a reverse proxy stands in for the service when answering the user. There is one very important difference between the two: with a forward proxy, the server does not know who the real requester is; with a reverse proxy, the requester does not know which real server handles the request.

The most common scenario for nginx as a reverse proxy is the familiar HTTP protocol. A reverse proxy rule is easy to define in nginx.conf:

```nginx
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    server {
        listen       80;
        server_name  localhost;

        location / {
            proxy_pass http://domain;
        }
    }
}
```

Since version 1.13.10, nginx also supports reverse proxying the gRPC protocol, and the configuration is similar:

```nginx
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    server {
        listen       81 http2;
        server_name  localhost;

        location / {
            grpc_pass grpc://ip:port;
        }
    }
}
```

However, once the requirements become more complex, it turns out that nginx's gRPC module has quite a few pitfalls. Its capabilities are far less complete than those of the HTTP module, and solutions that work for HTTP run into problems when applied to gRPC.

Scenario

At the beginning our scenario was very simple: a plain client/server architecture over gRPC. This kind of direct connection is not feasible in some situations, though. For example, if the client and the server sit in two network environments that cannot reach each other, the service cannot be accessed through a simple gRPC connection at all. One solution is to forward through an intermediate proxy server, using the nginx gRPC reverse proxy described above. The nginx proxy is deployed on a cluster reachable from both environments, which makes gRPC access across network environments possible.

The question that follows is how to configure the routing rule. Note that the original gRPC targets were explicit: the IP addresses of server1 and server2. Once an nginx proxy is inserted in the middle, every gRPC request the client initiates is addressed to the nginx proxy's IP. After the client establishes a connection with nginx, how does nginx know whether to forward the request to server1 or to server2? (server1 and server2 are not simply redundant deployments of the same service; which one should respond may depend on attributes of the request, such as the user ID, so load balancing that picks a backend at random is not an option.)
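To make the problem concrete, here is a rough sketch of the client side once the proxy is in place (the pb package, the EchoClient stub and the addresses are hypothetical placeholders, not taken from our actual service): the only address the client dials is the proxy, so nothing in the connection itself tells nginx which backend the request is meant for.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"

	pb "example.com/demo/proto" // hypothetical generated stubs
)

func main() {
	// Before: the client dialed server1 or server2 directly, e.g. "10.0.1.5:8200".
	// After: every client dials the same nginx address, so the intended backend
	// no longer appears anywhere in the connection itself.
	conn, err := grpc.Dial("nginx.proxy.example:80", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// nginx now has to decide between server1 and server2 from the request alone.
	if _, err := pb.NewEchoClient(conn).Ping(ctx, &pb.PingRequest{}); err != nil {
		log.Fatal(err)
	}
}
```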
Solution

If this were the HTTP protocol, there would be many ways to do it.

Differentiate by path

The request adds the server information to the path, for example /server1/service/method, and nginx restores the original path when forwarding the request:

```nginx
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    server {
        listen       80;
        server_name  localhost;

        location /server1/ {
            proxy_pass http://domain1/;
        }

        location /server2/ {
            proxy_pass http://domain2/;
        }
    }
}
```

Note the slash at the end of http://domain1/. With it, the matched prefix /server1/ is replaced by /, so the backend receives /service/method. Without it, the proxied path would remain /server1/service/method, while the backend can only respond to /service/method, and the request would end in a 404.

Differentiate by request parameters

The server information can also be carried in the request parameters:

```nginx
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    server {
        listen       80;
        server_name  localhost;

        location /service/method {
            if ($query_string ~ x_server=(.*)) {
                proxy_pass http://$1;
            }
        }
    }
}
```

But it is not that simple for gRPC.

First, grpc_pass does not support rewriting the URI: the request nginx forwards keeps its original path, and the path cannot be modified during forwarding. This rules out the first method above.

Second, gRPC runs on top of HTTP/2, where the request path is carried in the :path pseudo-header, for gRPC always in the form /service/method, and the gRPC protocol does not allow request parameters to be attached to it; :path cannot be written as /service/method?server=server1. This rules out the second method as well.

Since the :path pseudo-header is what specifies the request path, why not simply modify :path directly?

```nginx
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    server {
        listen       80 http2;
        server_name  localhost;

        location ~ ^/([^/]+)/service/(.*)$ {
            grpc_set_header :path /service/$2;
            grpc_pass grpc://$1;
        }
    }
}
```

However, actual verification shows that this is not feasible either: overriding the :path request header this way causes the server to report an error.
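Before looking at why, a short aside (a hand-written sketch rather than part of the original article; the echo.Echo service and its message types are hypothetical): the :path value is nothing more than the gRPC full method name that the generated stub hands to the transport, so there is no client-side knob for prepending a routing segment either.

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"

	pb "example.com/demo/proto" // hypothetical generated stubs
)

func callViaInvoke(ctx context.Context, conn *grpc.ClientConn) {
	in := &pb.PingRequest{}
	out := &pb.PingReply{}

	// The method string passed to Invoke becomes the HTTP/2 :path pseudo-header;
	// generated stubs do exactly this under the hood.
	if err := conn.Invoke(ctx, "/echo.Echo/Ping", in, out); err != nil {
		log.Println(err)
	}

	// A prefixed path would reach nginx as :path = /server1/echo.Echo/Ping, but a
	// standard gRPC server routes strictly on the registered full method name, so
	// without a rewrite it is typically rejected as an unknown service.
	if err := conn.Invoke(ctx, "/server1/echo.Echo/Ping", in, out); err != nil {
		log.Println(err)
	}
}
```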
After capturing the packets, it turned out that grpc_set_header does not overwrite :path but adds a second header of the same name, effectively leaving two :path entries in the request; this is most likely why the request failed with a 502.

Just when we were at our wits' end, we remembered gRPC's metadata feature. The client can store the server information in metadata, and nginx can then route to the corresponding backend based on that metadata, which achieves exactly what we need. In Go, setting metadata requires implementing the PerRPCCredentials interface and passing an instance of that implementation when establishing the connection:

```go
package main

import (
    "context"

    "google.golang.org/grpc"
)

type extraMetadata struct {
    Ip string
}

// GetRequestMetadata is called for every RPC; its return value is sent as
// request metadata (HTTP/2 headers), so nginx can read it as $http_x_ip.
func (c extraMetadata) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) {
    return map[string]string{
        "x-ip": c.Ip,
    }, nil
}

func (c extraMetadata) RequireTransportSecurity() bool {
    return false
}

func main() {
    ...
    // nginxProxy is the IP or domain name of the nginx proxy
    var nginxProxy string
    // serverIp is the IP of the backend service, derived from the request attributes
    var serverIp string
    con, err := grpc.Dial(nginxProxy, grpc.WithInsecure(),
        grpc.WithPerRPCCredentials(extraMetadata{Ip: serverIp}))
    ...
}
```

Then nginx forwards to the corresponding server based on this metadata:

```nginx
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    server {
        listen       80 http2;
        server_name  localhost;

        location ~ ^/service/.* {
            grpc_pass grpc://$http_x_ip:8200;
        }
    }
}
```

Note the $http_x_ip syntax, which references the x-ip metadata we passed: gRPC metadata travels as HTTP/2 headers, and nginx exposes a header named x-ip as the variable $http_x_ip. This approach was verified to work; the client can successfully reach the server's gRPC service through the nginx proxy.

Summary

The documentation for nginx's gRPC module is very thin. The official docs only describe the purpose of a handful of directives and say nothing about the metadata approach, and very little material online touches on this topic, which cost us two or three days of troubleshooting. The whole process is summarized here in the hope that it helps anyone who runs into the same problem.
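As a closing aside, here is a small sketch that was not part of the original troubleshooting (the interceptor name is arbitrary), showing how the backend can confirm that the x-ip metadata really survives the nginx hop, which is handy when debugging this setup:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/metadata"
)

// logXIP is a unary server interceptor that logs the x-ip metadata set by the
// client, so you can verify that nginx forwarded it along with the request.
func logXIP(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler) (interface{}, error) {

	if md, ok := metadata.FromIncomingContext(ctx); ok {
		log.Printf("method=%s x-ip=%v", info.FullMethod, md.Get("x-ip"))
	}
	return handler(ctx, req)
}

func main() {
	// Attach the interceptor to the gRPC server (listener and service
	// registration omitted).
	_ = grpc.NewServer(grpc.UnaryInterceptor(logXIP))
}
```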