Summary of pitfalls of using nginx as a reverse proxy for grpc

Background

As is well known, nginx is a high-performance web server, often used for load balancing and reverse proxying. A reverse proxy is the counterpart of a forward proxy. A forward proxy is what we usually mean by "proxy": for example, Google is normally unreachable from mainland China, so to access it we route the request through a proxy. That forward proxy acts on behalf of the client to reach the server (here, Google), while a reverse proxy acts on behalf of the server: when a user's request reaches nginx, nginx forwards it to the actual backend service and returns the result to the user.

(Image from Wikipedia)

Forward and reverse proxies are really defined from the user's perspective: a forward proxy sits in front of the client and forwards the client's requests to the service, while a reverse proxy sits in front of the service and receives requests on its behalf. There is an important difference between the two:

With a forward proxy, the server is unaware of the real requester; with a reverse proxy, the requester is unaware of the real server.
Think about the example above: when you access Google through a forward proxy, Google only sees the proxy server and cannot directly identify you (though you can still be tracked via cookies and the like); when nginx is used as a reverse proxy, you do not know which backend server your request is forwarded to.

The most common scenario for nginx as a reverse proxy is plain HTTP. A reverse-proxy rule is easy to define in the nginx.conf file:

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    server {
        listen 80;
        server_name localhost;

        
        location / {
            proxy_pass http://domain;
        }
    }
}

Nginx has supported reverse proxying of the gRPC protocol since version 1.13.10, and the configuration is similar:

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    server {
        listen 81 http2;
        server_name localhost;

        
        location / {
            grpc_pass grpc://ip;
        }
    }
}

However, once requirements get more complex, it turns out that nginx's gRPC module has quite a few pitfalls and is far less capable than its HTTP counterpart: approaches that work for HTTP start to break down.

Scenario

At first, our scenario was very simple: a plain client/server architecture over gRPC:

However, such a direct connection is not feasible in some scenarios, for example when the client and server sit in two network environments that cannot reach each other. In that case a plain gRPC connection cannot reach the service. One solution is to forward through an intermediate proxy server, using the nginx gRPC reverse proxy described above:

The nginx proxy is deployed on a cluster reachable from both environments, enabling gRPC access across them. The next question is how to configure the routing rule. Note that the original gRPC targets were explicit: the IP addresses of server1 and server2. With nginx in the middle, every gRPC request from the client targets nginx's address instead. Once the client has connected to nginx, how does nginx know whether to forward the request to server1 or server2? (Here server1 and server2 are not redundant deployments of the same service: which one should respond may depend on attributes of the request, such as the user ID, so plain load balancing that picks a backend at random will not do.)
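For illustration, the kind of client-side routing decision described here could look like the following sketch. The serverByUser helper, the hash-based choice, and the backend addresses are assumptions for the example, not part of the original setup:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// servers lists the backend addresses; the IPs here are placeholders.
var servers = []string{"10.0.0.1:8200", "10.0.0.2:8200"}

// serverByUser deterministically maps a user ID to one backend,
// so the same user always reaches the same server.
func serverByUser(userID string) string {
	h := fnv.New32a()
	h.Write([]byte(userID))
	return servers[int(h.Sum32())%len(servers)]
}

func main() {
	// The same user ID always yields the same backend address.
	fmt.Println(serverByUser("alice"))
}
```

The point is only that the choice of backend is a function of a request attribute, which is exactly what a random load balancer cannot express.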

Solution

With HTTP there are many ways to implement this:

Differentiate by path

The client adds the server information to the path, for example /server1/service/method, and nginx restores the original path when forwarding the request:

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    server {
        listen 80;
        server_name localhost;

        # prefix locations are used here: proxy_pass with a URI part
        # is not allowed inside regex locations
        location /server1/ {
            proxy_pass http://domain1/;
        }

        location /server2/ {
            proxy_pass http://domain2/;
        }
    }
}

Note the trailing slash in http://domain1/. With it, nginx replaces the matched /server1/ prefix, so the backend receives /service/method. Without it, the forwarded path stays /server1/service/method, and a server that only serves /service/method answers with a 404.
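The effect of the trailing slash can be sketched in Go; the proxiedPath helper below is a hypothetical illustration of nginx's prefix replacement, not nginx's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// proxiedPath mimics what nginx does when proxy_pass ends with a slash:
// the matched location prefix is replaced by "/".
func proxiedPath(requestPath, locationPrefix string) string {
	return "/" + strings.TrimPrefix(requestPath, locationPrefix)
}

func main() {
	// With "location /server1/" and "proxy_pass http://domain1/":
	fmt.Println(proxiedPath("/server1/service/method", "/server1/")) // prints "/service/method"
	// Without the trailing slash the backend would instead see the full
	// "/server1/service/method" and respond 404.
}
```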

Differentiate by request parameters

You can also put the server information in the request parameters:

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    server {
        listen 80;
        server_name localhost;

        location /service/method {
            # note: proxy_pass with a variable requires a "resolver"
            # directive if the captured value is a domain name
            if ($query_string ~ x_server=(.*)) {
                proxy_pass http://$1;
            }
        }
    }
}

But with gRPC it is not that simple. First, grpc_pass does not accept a URI part: nginx forwards the request with its original path and cannot rewrite it, so the first method above does not work. Second, gRPC runs on HTTP/2, and HTTP/2 has no query-string concept: the :path pseudo-header carries the request path, such as /service/method, and for gRPC this path cannot carry parameters, i.e. it cannot be written as /service/method?server=server1. So the second method does not work either.

Since the HTTP/2 :path pseudo-header specifies the request path, why not just modify :path directly?

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    server {
        listen 80 http2;
        server_name localhost;

        location ~ ^/([^/]+)(/service/.*)$ {
            grpc_set_header :path $2;
            grpc_pass grpc://$1;
        }
        }
    }
}

However, actual testing shows that this approach does not work: modifying the :path header directly makes the server report an error, for example:

rpc error: code = Unavailable desc = Bad Gateway: HTTP status code 502; transport: received the unexpected content-type "text/html"

Packet capture shows that grpc_set_header does not overwrite :path but appends a second one, so the request effectively carries two :path pseudo-headers, which is likely why the server answers with a 502.

Just as we were running out of ideas, we remembered gRPC's metadata feature: the client can store the server information in metadata, and during routing nginx can forward to the corresponding backend based on that metadata. In Go, attaching metadata to every call can be done by implementing the PerRPCCredentials interface and passing an instance of it when dialing:

import (
    "context"

    "google.golang.org/grpc"
)

type extraMetadata struct {
    Ip string
}

func (c extraMetadata) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) {
    return map[string]string{
        "x-ip": c.Ip,
    }, nil
}

func (c extraMetadata) RequireTransportSecurity() bool {
    return false
}

func main() {
    ...
    // nginxProxy is the IP or domain name of the nginx proxy
    var nginxProxy string
    // serverIp is the backend IP computed from the request attributes
    var serverIp string
    con, err := grpc.Dial(nginxProxy, grpc.WithInsecure(),
        grpc.WithPerRPCCredentials(extraMetadata{Ip: serverIp}))
}

Then, in the nginx configuration, forward to the corresponding server according to this metadata:

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    server {
        listen 80 http2;
        server_name localhost;

        location ~ ^/service/.* {
            grpc_pass grpc://$http_x_ip:8200;
        }
    }
}

Note the $http_x_ip syntax used here to reference the x-ip metadata we passed: nginx exposes request headers (gRPC metadata travels as HTTP/2 headers) through $http_* variables. This method is verified to work, and the client can successfully reach the server's gRPC service through the nginx proxy.
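The mapping from the metadata key to the nginx variable name follows nginx's usual header-variable convention: lowercase the name, turn dashes into underscores, and prefix it with $http_. The helper below is a hypothetical illustration of that rule:

```go
package main

import (
	"fmt"
	"strings"
)

// nginxHeaderVar derives the $http_* variable name nginx uses for a
// request header (or gRPC metadata key): lowercase, "-" becomes "_".
func nginxHeaderVar(header string) string {
	return "$http_" + strings.ReplaceAll(strings.ToLower(header), "-", "_")
}

func main() {
	fmt.Println(nginxHeaderVar("x-ip")) // prints "$http_x_ip"
}
```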

Summary

Documentation for nginx's gRPC module is scarce: the official docs only describe the purpose of a handful of directives and say nothing about the metadata approach, and little material online covers it either, which cost us two or three days of troubleshooting. I am summarizing the whole process here in the hope that it helps anyone who runs into the same problem.
