A few days ago, I exchanged some knowledge about epoll and request processing in Node.js with a friend. Today, let's briefly walk through how Node.js processes TCP requests, starting with the listen function.

```c
int uv_tcp_listen(uv_tcp_t* tcp, int backlog, uv_connection_cb cb) {
  // Decide the accept strategy, see the analysis below
  if (single_accept == -1) {
    const char* val = getenv("UV_TCP_SINGLE_ACCEPT");
    single_accept = (val != NULL && atoi(val) != 0);  /* Off by default. */
  }

  if (single_accept)
    tcp->flags |= UV_HANDLE_TCP_SINGLE_ACCEPT;

  // Execute bind or set the flag
  err = maybe_new_socket(tcp, AF_INET, flags);

  // Start listening
  if (listen(tcp->io_watcher.fd, backlog))
    return UV__ERR(errno);

  // Set the connection callback
  tcp->connection_cb = cb;
  tcp->flags |= UV_HANDLE_BOUND;

  // Set the io watcher's callback; it is executed when epoll reports a connection
  tcp->io_watcher.cb = uv__server_io;

  // Insert into the watcher queue. The fd is not added to epoll yet;
  // the poll io phase traverses the watcher queue and calls epoll_ctl
  uv__io_start(tcp->loop, &tcp->io_watcher, POLLIN);

  return 0;
}
```

We can see that when we call createServer, the Libuv layer follows the traditional network programming logic, and at this point our service is up. In the poll io phase, the listening file descriptor and its context (events of interest, callback, and so on) are registered with epoll. Normally the event loop then blocks in epoll. So what happens if a TCP connection comes in at this time? epoll first traverses the fds that triggered events, then executes the callback stored in each fd's context, which here is uv__server_io. Let's take a look at uv__server_io.

```c
void uv__server_io(uv_loop_t* loop, uv__io_t* w, unsigned int events) {
  int err;
  // Get the stream that owns this io watcher
  uv_stream_t* stream = container_of(w, uv_stream_t, io_watcher);

  // Loop; uv__stream_fd(stream) is the fd corresponding to the server
  while (uv__stream_fd(stream) != -1) {
    // accept() a fd for communicating with the client;
    // note that this fd is different from the server's fd
    err = uv__accept(uv__stream_fd(stream));

    // The server's fd is non-blocking. This error means there is no
    // connection left to accept, so return directly
    if (err < 0) {
      if (err == UV_EAGAIN || err == UV__ERR(EWOULDBLOCK))
        return;
    }

    // Record the accepted fd
    stream->accepted_fd = err;

    // Execute the connection callback
    stream->connection_cb(stream, 0);

    /* If stream->accepted_fd is -1, accepted_fd was consumed in the
       connection_cb callback. Otherwise, unregister the read event of the
       server's fd in epoll and register it again once the fd is consumed,
       i.e. stop processing requests for now */
    if (stream->accepted_fd != -1) {
      uv__io_stop(loop, &stream->io_watcher, POLLIN);
      return;
    }

    /* OK, accepted_fd has been consumed. Do we keep accepting new fds?
       If UV_HANDLE_TCP_SINGLE_ACCEPT is set, only one connection is handled
       per iteration, followed by a short sleep to give other processes a
       chance to accept (in a multi-process architecture). In a
       single-process architecture this setting only delays connection
       handling */
    if (stream->type == UV_TCP &&
        (stream->flags & UV_HANDLE_TCP_SINGLE_ACCEPT)) {
      struct timespec timeout = { 0, 1 };
      nanosleep(&timeout, NULL);
    }
  }
}
```

From uv__server_io we know that Libuv keeps accepting new fds in a loop and then executes the callback. Normally the callback consumes the fd, and the cycle continues until there are no connections left to process.
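Before looking at how the fd is consumed, here is a minimal sketch of the user-level code that drives this whole path, assuming only the standard net module; the port number and the reply text are illustrative choices, not part of the article's code. The UV_TCP_SINGLE_ACCEPT environment variable read at the top of uv_tcp_listen would be set in the process environment before startup.

```js
const net = require('net');

// createServer() only builds the JS/C++/libuv objects; uv_tcp_listen()
// runs when listen() is called, and the listening fd plus uv__server_io
// are registered with epoll in the next poll io phase.
const server = net.createServer((socket) => {
  // This callback fires after uv__server_io accepted a fd and handed it
  // up through OnConnection -> onconnection as a net.Socket.
  socket.end('hello from the accepted connection\n');
});

server.listen(8124, () => {
  console.log('listening; the event loop now blocks in epoll');
});
```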
Next, let's focus on how the fd is consumed in the callback, and whether a large number of loop iterations will take so much time that the Libuv event loop is blocked for a while. The TCP callback is OnConnection in the C++ layer.

```cpp
// Callback triggered when there is a connection
template <typename WrapType, typename UVType>
void ConnectionWrap<WrapType, UVType>::OnConnection(uv_stream_t* handle,
                                                    int status) {
  // Get the C++ layer object corresponding to the Libuv structure
  WrapType* wrap_data = static_cast<WrapType*>(handle->data);
  CHECK_EQ(&wrap_data->handle_, reinterpret_cast<UVType*>(handle));
  Environment* env = wrap_data->env();
  HandleScope handle_scope(env->isolate());
  Context::Scope context_scope(env->context());

  // Object for communicating with the client
  Local<Value> client_handle;

  if (status == 0) {
    // Instantiate the client javascript object and handle.
    Local<Object> client_obj;
    if (!WrapType::Instantiate(env, wrap_data, WrapType::SOCKET)
             .ToLocal(&client_obj))
      return;

    // Unwrap the client javascript object.
    WrapType* wrap;
    // Store the C++ layer object behind the JS object client_obj into wrap
    ASSIGN_OR_RETURN_UNWRAP(&wrap, client_obj);
    // Get the corresponding handle
    uv_stream_t* client = reinterpret_cast<uv_stream_t*>(&wrap->handle_);
    // Move the fd accepted on the server handle into client, so it can be
    // used to communicate with the client
    if (uv_accept(handle, client))
      return;
    client_handle = client_obj;
  } else {
    client_handle = Undefined(env->isolate());
  }

  // Call back into JS; client_handle is equivalent to the result of
  // new TCP in the JS layer
  Local<Value> argv[] = { Integer::New(env->isolate(), status), client_handle };
  wrap_data->MakeCallback(env->onconnection_string(), arraysize(argv), argv);
}
```

The code looks complicated, but we only need to focus on uv_accept. The first parameter of uv_accept is the handle corresponding to the server, and the second is the object representing the communication with the client.

```c
int uv_accept(uv_stream_t* server, uv_stream_t* client) {
  int err;
  switch (client->type) {
    case UV_NAMED_PIPE:
    case UV_TCP:
      // Assign the accepted fd to the client
      err = uv__stream_open(client,
                            server->accepted_fd,
                            UV_HANDLE_READABLE | UV_HANDLE_WRITABLE);
      break;
    // ...
  }

  client->flags |= UV_HANDLE_BOUND;
  // Mark the fd as consumed
  server->accepted_fd = -1;
  return err;
}
```

uv_accept has two main jobs: it assigns the fd used for communicating with the client to the client handle, and it marks the fd as consumed, which drives the while loop described above to continue. To the upper layers, the connection is simply an object: in the Libuv layer it is a structure, in the C++ layer it is a C++ object, and in the JS layer it is a JS object. The three are wrapped and linked layer by layer, and the core is the fd in the Libuv client structure, which is the underlying ticket for communicating with the client.

Finally, the JS layer is called back, which means executing onconnection in net.js. onconnection wraps a Socket object to represent the communication with the client. The Socket holds the C++ layer object, the C++ layer object holds the Libuv structure, and the Libuv structure holds the fd.

```js
const socket = new Socket({
  handle: clientHandle,
  allowHalfOpen: self.allowHalfOpen,
  pauseOnCreate: self.pauseOnConnect,
  readable: true,
  writable: true
});
```

This is the end of this article about the core process of Node.js handling a TCP connection.
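As a closing illustration of the pauseOnCreate option seen in the Socket constructor above, here is a minimal sketch using only the standard net module; the echo behaviour and the ephemeral port are illustrative choices, not part of the article's code.

```js
const net = require('net');

// pauseOnConnect maps to the pauseOnCreate flag passed to the Socket
// constructor: the client fd is still accepted and wrapped, but no data
// is read from it until we call resume() ourselves.
const server = net.createServer({ pauseOnConnect: true }, (socket) => {
  // socket is the JS object created in onconnection; it holds the C++
  // wrap, which holds the libuv stream, which holds the accepted fd.
  socket.resume();          // start reading from the underlying fd
  socket.pipe(socket);      // echo back whatever the client sends
});

server.listen(0, () => {
  console.log('echo server listening on port', server.address().port);
});
```

With pauseOnConnect, the accepted fd sits idle until the application decides to read from it, which is useful when the connection is meant to be handed off to another process without the original process consuming any data.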