Nginx's shared memory is one of the main reasons it achieves such high performance, and one of its most important uses is the file cache. This article first shows how shared memory is configured, and then explains how nginx manages it.

1. Usage Examples

The directive that declares a shared memory zone for the proxy cache is:

proxy_cache_path /Users/Mike/nginx-cache levels=1:2 keys_zone=one:10m max_size=10g inactive=60m use_temp_path=off;

This declares a shared memory zone named one. Note that the 10m in keys_zone is the size of the shared memory itself, which holds the cache keys and metadata, while max_size=10g limits the total size of the cached files on disk. The parameters have the following meanings:

- /Users/Mike/nginx-cache is the root directory in which cached files are stored;
- levels=1:2 places cache files in a two-level directory hierarchy, with 1-character and 2-character subdirectory names taken from the end of the key's hash;
- keys_zone=one:10m names the zone one and gives it 10 MB of shared memory;
- max_size=10g caps the disk space the cached files may occupy at 10 GB;
- inactive=60m makes an entry that has not been accessed for 60 minutes eligible for removal;
- use_temp_path=off writes responses directly into the cache directory instead of writing them to a temporary path first.
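To see the zone in action, the name given in keys_zone is what later proxy_cache directives refer to. A minimal, illustrative configuration (the backend upstream name and the cache validity times are assumptions, not taken from this article):

```nginx
proxy_cache_path /Users/Mike/nginx-cache levels=1:2 keys_zone=one:10m
                 max_size=10g inactive=60m use_temp_path=off;

server {
    listen 80;

    location / {
        # "backend" is a hypothetical upstream
        proxy_pass http://backend;
        # refer to the shared memory zone by the name given in keys_zone
        proxy_cache one;
        # cache successful responses for 10 minutes (illustrative)
        proxy_cache_valid 200 302 10m;
    }
}
```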
2. Working Principle

The management of shared memory is divided into the parts shown in the following figure: initialization, shared memory management, shared memory loading, and shared memory use. During initialization, the proxy_cache_path directive is parsed first, and then the cache manager and cache loader processes are started. The cache manager process is responsible for managing shared memory: it evicts expired entries with an LRU algorithm, and forcibly deletes unreferenced entries when resources are tight. The main task of the cache loader process is to read the files already present in the cache directory after nginx starts and load them into shared memory. The use of shared memory, that is, caching response data after a request has been processed, will be explained in a follow-up article; this article covers the working principles of the first three parts.

According to the division above, the management of shared memory can be split into three parts (the use of shared memory will be explained later). The following is a schematic diagram of their processing flow. As the flowchart shows, the main process parses the proxy_cache_path directive, then starts the cache manager process and the cache loader process. The work of the cache manager process is divided into two parts:

1. Check whether the element at the tail of the queue has expired. If it has expired and its reference count is 0, delete the element and the file corresponding to it.
2. Check whether the current shared memory resources are tight. If they are, delete every element whose reference count is 0, together with its file, regardless of whether it has expired.
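The two checks above can be condensed into a pair of predicates. The sketch below is illustrative (the entry_t type and the function names are assumptions, not nginx's own): the normal pass deletes the tail entry only when it is both expired and unreferenced, while the forced pass under memory pressure ignores expiry.

```c
#include <time.h>

/* Illustrative cache-entry view: only the fields the two checks need. */
typedef struct {
    time_t expire; /* inactivity deadline for this entry          */
    int    count;  /* number of requests currently referencing it */
} entry_t;

/* Normal pass: the queue-tail entry is removed only when it is
 * expired AND no request references it. */
int should_expire(const entry_t *e, time_t now)
{
    return e->expire <= now && e->count == 0;
}

/* Forced pass (shared memory tight): any unreferenced entry is
 * removed, whether or not it has expired. */
int should_force_delete(const entry_t *e)
{
    return e->count == 0;
}
```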
In the processing flow of the cache loader process, the directory where cache files are stored and its subdirectories are traversed recursively, and the files found there are loaded into shared memory. It should be noted that the cache manager process starts its next iteration after traversing all shared memory zones, whereas the cache loader process runs once, 60 seconds after nginx starts, and then exits.

3. Source code interpretation

3.1 proxy_cache_path directive analysis

For each nginx directive, an ngx_command_t structure is defined in the corresponding module. The set field of that structure specifies the method used to parse the directive. The following is the ngx_command_t definition corresponding to proxy_cache_path:

```c
static ngx_command_t ngx_http_proxy_commands[] = {

    { // the name of the current directive
      ngx_string("proxy_cache_path"),
      // where the directive may be used (the main http configuration)
      // and its argument count (two or more)
      NGX_HTTP_MAIN_CONF|NGX_CONF_2MORE,
      // the set() method used for parsing
      ngx_http_file_cache_set_slot,
      NGX_HTTP_MAIN_CONF_OFFSET,
      offsetof(ngx_http_proxy_main_conf_t, caches),
      &ngx_http_proxy_module
    }
}
```

As you can see, the parsing method used by this directive is ngx_http_file_cache_set_slot().
Here we directly read the source code of this method: char *ngx_http_file_cache_set_slot(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) { char *confp = conf; off_t max_size; u_char *last, *p; time_t inactive; ssize_t size; ngx_str_t s, name, *value; ngx_int_t loader_files, manager_files; ngx_msec_t loader_sleep, manager_sleep, loader_threshold, manager_threshold; ngx_uint_t i, n, use_temp_path; ngx_array_t *caches; ngx_http_file_cache_t *cache, **ce; cache = ngx_pcalloc(cf->pool, sizeof(ngx_http_file_cache_t)); if (cache == NULL) { return NGX_CONF_ERROR; } cache->path = ngx_pcalloc(cf->pool, sizeof(ngx_path_t)); if (cache->path == NULL) { return NGX_CONF_ERROR; } // Initialize the default values of each attribute use_temp_path = 1; inactive = 600; loader_files = 100; loader_sleep = 50; loader_threshold = 200; manager_files = 100; manager_sleep = 50; manager_threshold = 200; name.len = 0; size = 0; max_size = NGX_MAX_OFF_T_VALUE; // Example configuration: proxy_cache_path /Users/Mike/nginx-cache levels=1:2 keys_zone=one:10m max_size=10g inactive=60m use_temp_path=off; // Here, cf->args->elts stores the token items contained in the proxy_cache_path instruction when parsing it. 
// The so-called token item refers to the character fragments separated by spaces value = cf->args->elts; // value[1] is the first parameter of the configuration, which is the root path where the cache file will be saved cache->path->name = value[1]; if (cache->path->name.data[cache->path->name.len - 1] == '/') { cache->path->name.len--; } if (ngx_conf_full_name(cf->cycle, &cache->path->name, 0) != NGX_OK) { return NGX_CONF_ERROR; } // Start parsing from the third parameter for (i = 2; i < cf->args->nelts; i++) { // If the third parameter starts with "levels=", parse the levels subparameter if (ngx_strncmp(value[i].data, "levels=", 7) == 0) { p = value[i].data + 7; // Calculate the actual position to start parsing last = value[i].data + value[i].len; // Calculate the position of the last character // Start parsing 1:2 for (n = 0; n < NGX_MAX_PATH_LEVEL && p < last; n++) { if (*p > '0' && *p < '3') { // Get the current parameter value, such as 1 and 2 that need to be parsed cache->path->level[n] = *p++ - '0'; cache->path->len += cache->path->level[n] + 1; if (p == last) { break; } // If the current character is a colon, continue parsing the next character; // The NGX_MAX_PATH_LEVEL value here is 3, which means that there are at most 3 subdirectories after the levels parameter if (*p++ == ':' && n < NGX_MAX_PATH_LEVEL - 1 && p < last) { continue; } goto invalid_levels; } goto invalid_levels; } if (cache->path->len < 10 + NGX_MAX_PATH_LEVEL) { continue; } invalid_levels: ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid \"levels\" \"%V\"", &value[i]); return NGX_CONF_ERROR; } // If the current parameter starts with "use_temp_path=", parse the use_temp_path parameter, which can be on or off. // Indicates whether the current cache file is first stored in a temporary folder and then written to the target folder. 
If it is off, it is directly stored in the target folder if (ngx_strncmp(value[i].data, "use_temp_path=", 14) == 0) { // If on, mark use_temp_path as 1 if (ngx_strcmp(&value[i].data[14], "on") == 0) { use_temp_path = 1; // If it is off, mark use_temp_path as 0 } else if (ngx_strcmp(&value[i].data[14], "off") == 0) { use_temp_path = 0; // Any other value is invalid; report a configuration error } else { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid use_temp_path value \"%V\", " "it must be \"on\" or \"off\"", &value[i]); return NGX_CONF_ERROR; } continue; } // If the parameter starts with "keys_zone=", parse the keys_zone parameter. The format of this parameter is keys_zone=one:10m. // Here one is the zone name that proxy_cache later refers to, and 10m is the size of the shared memory // that stores the keys if (ngx_strncmp(value[i].data, "keys_zone=", 10) == 0) { name.data = value[i].data + 10; p = (u_char *) ngx_strchr(name.data, ':'); if (p) { // Calculate the length of name, which records the name of the current zone, that is, one here name.len = p - name.data; p++; // Parse the specified size s.len = value[i].data + value[i].len - p; s.data = p; // Parse the size and convert it into bytes. The number of bytes here must be greater than 8191 size = ngx_parse_size(&s); if (size > 8191) { continue; } } ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid keys zone size \"%V\"", &value[i]); return NGX_CONF_ERROR; } // If the parameter starts with "inactive=", parse the inactive parameter.
The parameter format is such as inactive=60m, // Indicates how long a cached file may remain unaccessed before it expires if (ngx_strncmp(value[i].data, "inactive=", 9) == 0) { s.len = value[i].len - 9; s.data = value[i].data + 9; // Parse the time and convert it into a length of time in seconds inactive = ngx_parse_time(&s, 1); if (inactive == (time_t) NGX_ERROR) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid inactive value \"%V\"", &value[i]); return NGX_CONF_ERROR; } continue; } // If the parameter starts with "max_size=", parse the max_size parameter. The parameter format is such as max_size=10g, // Indicates the maximum disk space the cached files may occupy (not the size of the shared memory zone) if (ngx_strncmp(value[i].data, "max_size=", 9) == 0) { s.len = value[i].len - 9; s.data = value[i].data + 9; // Convert the parsed value to a number of bytes max_size = ngx_parse_offset(&s); if (max_size < 0) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid max_size value \"%V\"", &value[i]); return NGX_CONF_ERROR; } continue; } // If the parameter starts with "loader_files=", parse the loader_files parameter. The parameter is in the form of loader_files=100. // Indicates at most how many files the cache loader loads in one iteration after nginx starts if (ngx_strncmp(value[i].data, "loader_files=", 13) == 0) { // Parse the value of the loader_files parameter loader_files = ngx_atoi(value[i].data + 13, value[i].len - 13); if (loader_files == NGX_ERROR) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid loader_files value \"%V\"", &value[i]); return NGX_CONF_ERROR; } continue; } // If the parameter starts with "loader_sleep=", parse the loader_sleep parameter. The parameter is in the form of loader_sleep=10s.
// Indicates how long the loader sleeps between iterations before loading the next batch of files if (ngx_strncmp(value[i].data, "loader_sleep=", 13) == 0) { s.len = value[i].len - 13; s.data = value[i].data + 13; // Convert the value of loader_sleep, here in milliseconds loader_sleep = ngx_parse_time(&s, 0); if (loader_sleep == (ngx_msec_t)NGX_ERROR) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid loader_sleep value \"%V\"", &value[i]); return NGX_CONF_ERROR; } continue; } // If the parameter starts with "loader_threshold=", parse the loader_threshold parameter, which is in the form of loader_threshold=10s. // Indicates the maximum duration of one loader iteration if (ngx_strncmp(value[i].data, "loader_threshold=", 17) == 0) { s.len = value[i].len - 17; s.data = value[i].data + 17; // Parse and convert the value of loader_threshold to milliseconds loader_threshold = ngx_parse_time(&s, 0); if (loader_threshold == (ngx_msec_t)NGX_ERROR) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid loader_threshold value \"%V\"", &value[i]); return NGX_CONF_ERROR; } continue; } // If the parameter starts with "manager_files=", parse the manager_files parameter, which is in the form of manager_files=100. // Indicates that when the cache space is exhausted, files are deleted with the LRU algorithm, but each iteration deletes at most the number of files specified by manager_files if (ngx_strncmp(value[i].data, "manager_files=", 14) == 0) { // Parse the manager_files parameter value manager_files = ngx_atoi(value[i].data + 14, value[i].len - 14); if (manager_files == NGX_ERROR) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid manager_files value \"%V\"", &value[i]); return NGX_CONF_ERROR; } continue; } // If the parameter starts with "manager_sleep=", parse the manager_sleep parameter, which is in the form of manager_sleep=1s.
// Indicates that each iteration will sleep for the duration specified by the manager_sleep parameter if (ngx_strncmp(value[i].data, "manager_sleep=", 14) == 0) { s.len = value[i].len - 14; s.data = value[i].data + 14; // Parse the value specified by manager_sleep manager_sleep = ngx_parse_time(&s, 0); if (manager_sleep == (ngx_msec_t) NGX_ERROR) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid manager_sleep value \"%V\"", &value[i]); return NGX_CONF_ERROR; } continue; } // If the parameter starts with "manager_threshold=", the manager_threshold parameter is parsed. The parameter is in the form of manager_threshold=2s. // Indicates that the longest time for each iteration of clearing files cannot exceed the value specified by this parameter if (ngx_strncmp(value[i].data, "manager_threshold=", 18) == 0) { s.len = value[i].len - 18; s.data = value[i].data + 18; // Parse the manager_threshold parameter value and convert it to a value in milliseconds manager_threshold = ngx_parse_time(&s, 0); if (manager_threshold == (ngx_msec_t)NGX_ERROR) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid manager_threshold value \"%V\"", &value[i]); return NGX_CONF_ERROR; } continue; } ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid parameter \"%V\"", &value[i]); return NGX_CONF_ERROR; } if (name.len == 0 || size == 0) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "\"%V\" must have \"keys_zone\" parameter", &cmd->name); return NGX_CONF_ERROR; } // The values of cache->path->manager and cache->path->loader here are two functions. It should be noted that, // After nginx is started, two separate processes are started, a cache manager and a cache loader. // Will continuously execute the method specified by cache->path->manager for each shared memory in a loop, // So as to clean up the cache. The other process, cache loader, will be executed only once 60 seconds after nginx is started. 
// The execution method is the method specified by cache->path->loader, // The main function of this method is to load existing file data into the current shared memory cache->path->manager = ngx_http_file_cache_manager; cache->path->loader = ngx_http_file_cache_loader; cache->path->data = cache; cache->path->conf_file = cf->conf_file->file.name.data; cache->path->line = cf->conf_file->line; cache->loader_files = loader_files; cache->loader_sleep = loader_sleep; cache->loader_threshold = loader_threshold; cache->manager_files = manager_files; cache->manager_sleep = manager_sleep; cache->manager_threshold = manager_threshold; // Add the current path to the cycle. These paths will be checked later. If the path does not exist, the corresponding path will be created if (ngx_add_path(cf, &cache->path) != NGX_OK) { return NGX_CONF_ERROR; } // Add the current shared memory to the shared memory list specified by cf->cycle->shared_memory cache->shm_zone = ngx_shared_memory_add(cf, &name, size, cmd->post); if (cache->shm_zone == NULL) { return NGX_CONF_ERROR; } if (cache->shm_zone->data) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "duplicate zone \"%V\"", &name); return NGX_CONF_ERROR; } // This specifies the initialization method for each shared memory, which will be executed when the master process starts cache->shm_zone->init = ngx_http_file_cache_init; cache->shm_zone->data = cache; cache->use_temp_path = use_temp_path; cache->inactive = inactive; cache->max_size = max_size; caches = (ngx_array_t *) (confp + cmd->offset); ce = ngx_array_push(caches); if (ce == NULL) { return NGX_CONF_ERROR; } *ce = cache; return NGX_CONF_OK; } As can be seen from the above code, in the proxy_cache_path method, a ngx_http_file_cache_t structure is mainly initialized. The various properties in this structure are obtained by parsing the various parameters of proxy_cache_path. 
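To make the levels parameter concrete: for levels=1:2, nginx derives the cache file's subdirectories from the trailing characters of the key's md5 hash, one group per level. Below is a simplified sketch of that mapping (build_cache_path is a made-up helper, not an nginx function, and it ignores the temp-path and locking details of the real implementation):

```c
#include <stdio.h>
#include <string.h>

/* Map a cache key's md5 hex string to its on-disk path, taking
 * level-sized groups of characters from the END of the hash.
 * For levels=1:2 and a hash ending in ...029c this yields
 * <root>/c/29/<hash>. */
void build_cache_path(char *out, const char *root,
                      const char *md5hex, const int *levels, int nlevels)
{
    size_t pos = strlen(md5hex);
    char *p = out + sprintf(out, "%s", root);

    for (int n = 0; n < nlevels; n++) {
        pos -= levels[n];                        /* step back one group   */
        p += sprintf(p, "/%.*s", levels[n], md5hex + pos);
    }
    sprintf(p, "/%s", md5hex);                   /* full hash as filename */
}
```

With root /Users/Mike/nginx-cache and the hash b7f54b2df7773722d382f4809d65029c, this produces /Users/Mike/nginx-cache/c/29/b7f54b2df7773722d382f4809d65029c, matching the on-disk layout that proxy_cache_path with levels=1:2 creates.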
3.2 Cache manager and cache loader process startup The entry method of the nginx program is the main() method of nginx.c. If the master-worker process mode is turned on, it will finally enter the ngx_master_process_cycle() method, which will first start the worker process to receive client requests; then start the cache manager and cache loader processes respectively; finally enter an infinite loop to process the instructions sent by the user to nginx in the command line. The following is the source code for starting the cache manager and cache loader processes: void ngx_master_process_cycle(ngx_cycle_t *cycle) { ... // Get the core module configuration ccf = (ngx_core_conf_t *) ngx_get_conf(cycle->conf_ctx, ngx_core_module); // Start each worker process ngx_start_worker_processes(cycle, ccf->worker_processes, NGX_PROCESS_RESPAWN); // Start cache process ngx_start_cache_manager_processes(cycle, 0); ... } As for the startup of cache manager and cache loader processes, it can be seen that it is mainly in the ngx_start_cache_manager_processes() method. The following is the source code of this method: static void ngx_start_cache_manager_processes(ngx_cycle_t *cycle, ngx_uint_t respawn) { ngx_uint_t i, manager, loader; ngx_path_t **path; ngx_channel_t ch; manager = 0; loader = 0; path = ngx_cycle->paths.elts; for (i = 0; i < ngx_cycle->paths.nelts; i++) { // Check if any path specifies manager as 1 if (path[i]->manager) { manager = 1; } // Check if any path specifies loader as 1 if (path[i]->loader) { loader = 1; } } // If no path's manager is specified as 1, return directly if (manager == 0) { return; } // Create a process to execute the cycle executed in the ngx_cache_manager_process_cycle() method. Note that // When calling back the ngx_cache_manager_process_cycle method, the second parameter passed in here is ngx_cache_manager_ctx ngx_spawn_process(cycle, ngx_cache_manager_process_cycle, &ngx_cache_manager_ctx, "cache manager process", respawn ? 
NGX_PROCESS_JUST_RESPAWN : NGX_PROCESS_RESPAWN); ngx_memzero(&ch, sizeof(ngx_channel_t)); // Create a ch structure to broadcast the creation message of the current process ch.command = NGX_CMD_OPEN_CHANNEL; ch.pid = ngx_processes[ngx_process_slot].pid; ch.slot = ngx_process_slot; ch.fd = ngx_processes[ngx_process_slot].channel[0]; // Broadcast the message that the cache manager process is created ngx_pass_open_channel(cycle, &ch); if (loader == 0) { return; } // Create a process that runs ngx_cache_manager_process_cycle(). It should be noted that // when the ngx_cache_manager_process_cycle method is called back, the second parameter passed in here is ngx_cache_loader_ctx ngx_spawn_process(cycle, ngx_cache_manager_process_cycle, &ngx_cache_loader_ctx, "cache loader process", respawn ? NGX_PROCESS_JUST_SPAWN : NGX_PROCESS_NORESPAWN); // Create a ch structure to broadcast the creation message of the current process ch.command = NGX_CMD_OPEN_CHANNEL; ch.pid = ngx_processes[ngx_process_slot].pid; ch.slot = ngx_process_slot; ch.fd = ngx_processes[ngx_process_slot].channel[0]; // Broadcast the message that the cache loader process is created ngx_pass_open_channel(cycle, &ch); } The above code is actually quite simple. First, it checks whether any path requires the cache manager or cache loader. If so, the corresponding processes are started; otherwise, the cache manager and cache loader processes are not created. The methods used to start these two processes are: // Start the cache manager process ngx_spawn_process(cycle, ngx_cache_manager_process_cycle, &ngx_cache_manager_ctx, "cache manager process", respawn ? NGX_PROCESS_JUST_RESPAWN : NGX_PROCESS_RESPAWN); // Start the cache loader process ngx_spawn_process(cycle, ngx_cache_manager_process_cycle, &ngx_cache_loader_ctx, "cache loader process", respawn ?
NGX_PROCESS_JUST_SPAWN : NGX_PROCESS_NORESPAWN); The main function of the ngx_spawn_process() method here is to create a new process. After the process is created, the method specified by the second parameter will be executed, and the parameter passed in when executing the method is the structure object specified by the third parameter here. Observe the two ways of starting the process above. The method executed after the new process is created is ngx_cache_manager_process_cycle(), but the parameters passed in when calling this method are different, one is ngx_cache_manager_ctx and the other is ngx_cache_loader_ctx. Here we first look at the definitions of these two structures: // The ngx_cache_manager_process_handler here specifies the method that the current cache manager process will execute. // cache manager process specifies the name of the process, and the last 0 indicates how long the current process will be executed after it is started. // ngx_cache_manager_process_handler() method is executed immediately here static ngx_cache_manager_ctx_t ngx_cache_manager_ctx = { ngx_cache_manager_process_handler, "cache manager process", 0 }; // The ngx_cache_loader_process_handler here specifies the method that the current cache loader process will execute. // It will execute the ngx_cache_loader_process_handler() method 60 seconds after the cache loader process is started static ngx_cache_manager_ctx_t ngx_cache_loader_ctx = { ngx_cache_loader_process_handler, "cache loader process", 60000 }; As you can see, these two structures mainly define the different behaviors of the cache manager and cache loader processes respectively. 
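The behavioral difference between the two contexts can be simulated with a small sketch (helper_ctx_t and run_helper are illustrative stand-ins, not nginx types): the manager's handler re-arms its timer after every run, while the loader's handler runs once and then the process exits.

```c
/* Illustrative stand-in for ngx_cache_manager_ctx_t. */
typedef struct {
    const char *name;     /* process title                             */
    long        delay_ms; /* delay before the first handler run        */
    int         rearm;    /* manager re-adds its timer, loader doesn't */
} helper_ctx_t;

const helper_ctx_t manager_ctx = { "cache manager process", 0,     1 };
const helper_ctx_t loader_ctx  = { "cache loader process",  60000, 0 };

/* Simulate `ticks` timer expirations and count handler runs:
 * the manager runs on every tick, the loader only on the first. */
int run_helper(const helper_ctx_t *ctx, int ticks)
{
    int runs = 0;
    for (int i = 0; i < ticks; i++) {
        runs++;              /* handler fires               */
        if (!ctx->rearm) {
            break;           /* loader exits after one run  */
        }
    }
    return runs;
}
```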
Let's take a look at how the ngx_cache_manager_process_cycle() method calls these two methods: static void ngx_cache_manager_process_cycle(ngx_cycle_t *cycle, void *data) { ngx_cache_manager_ctx_t *ctx = data; void *ident[4]; ngx_event_t ev; ngx_process = NGX_PROCESS_HELPER; // The current process is mainly used to handle the work of cache manager and cache loader, so it does not need to listen to the socket, so it needs to be closed here ngx_close_listening_sockets(cycle); /* Set a moderate number of connections for a helper process. */ cycle->connection_n = 512; // Initialize the current process, mainly setting some parameter attributes, and finally setting the event of listening to the channel[1] handle for the current process to receive the message of the master process ngx_worker_process_init(cycle, -1); ngx_memzero(&ev, sizeof(ngx_event_t)); // For cache manager, the handler here refers to the ngx_cache_manager_process_handler() method. // For cache loader, the handler here points to the ngx_cache_loader_process_handler() method ev.handler = ctx->handler; ev.data = ident; ev.log = cycle->log; ident[3] = (void *) -1; // The cache module does not need to use a shared lock ngx_use_accept_mutex = 0; ngx_setproctitle(ctx->name); // Add the current event to the event queue. The delay time of the event is ctx->delay. For cache manager, this value is 0. // For cache loader, the value is 60s. 
// It should be noted that in the current event processing method, if ngx_cache_manager_process_handler() has processed the current event, // The current event will be added to the event queue again, thus realizing the function of timed processing; and for // the ngx_cache_loader_process_handler() method, after processing once, it will not add the current event // to the event queue again, so it is equivalent to the current event will only be executed once, and then the cache loader process will exit ngx_add_timer(&ev, ctx->delay); for ( ;; ) { // If the master marks the current process as terminate or quit, exit the process if (ngx_terminate || ngx_quit) { ngx_log_error(NGX_LOG_NOTICE, cycle->log, 0, "exiting"); exit(0); } // If the master process sends a reopen message, reopen all cache files if (ngx_reopen) { ngx_reopen = 0; ngx_log_error(NGX_LOG_NOTICE, cycle->log, 0, "reopening logs"); ngx_reopen_files(cycle, -1); } //Execute events in the event queue ngx_process_events_and_timers(cycle); } } In the above code, first an event object is created, ev.handler = ctx->handler; the logic to be processed by the event is specified, that is, the method corresponding to the first parameter in the above two structures; then the event is added to the event queue, that is, ngx_add_timer(&ev, ctx->delay);, it should be noted that the second parameter here is the third parameter specified in the above two structures, that is to say, the execution time of the handler() method is controlled by the delay time of the event; finally, in an infinite for loop, the ngx_process_events_and_timers() method is used to continuously check the events in the event queue and process the events. 3.3 Cache manager process processing logic As for the cache manager processing process, it can be seen from the above explanation that it is carried out in the ngx_cache_manager_process_handler() method in the cache manager structure defined by it. 
The following is the source code of this method: static void ngx_cache_manager_process_handler(ngx_event_t *ev) { ngx_uint_t i; ngx_msec_t next, n; ngx_path_t **path; next = 60 * 60 * 1000; path = ngx_cycle->paths.elts; for (i = 0; i < ngx_cycle->paths.nelts; i++) { // The manager method here refers to the ngx_http_file_cache_manager() method if (path[i]->manager) { n = path[i]->manager(path[i]->data); next = (n <= next) ? n : next; ngx_time_update(); } } if (next == 0) { next = 1; } // After one processing is completed, the current event will be added to the event queue again for the next processing ngx_add_timer(ev, next); } Here, all path definitions are first obtained, and then their manager() methods are checked to see if they are empty. If not, the method is called. The actual method pointed to by the manager() method here is the one defined in the parsing of the proxy_cache_path directive in Section 3.1, that is, cache->path->manager = ngx_http_file_cache_manager;, which means that this method is the main method for managing the cache. After calling the management method, the current event will be added to the event queue for the next cache management cycle. The following is the source code of the ngx_http_file_cache_manager() method: static ngx_msec_t ngx_http_file_cache_manager(void *data) { // The ngx_http_file_cache_t structure here is obtained by parsing the proxy_cache_path configuration item ngx_http_file_cache_t *cache = data; off_t size; time_t wait; ngx_msec_t elapsed, next; ngx_uint_t count, watermark; cache->last = ngx_current_msec; cache->files = 0; // The ngx_http_file_cache_expire() method here is in an infinite loop, constantly checking whether there is expired // shared memory at the end of the cache queue. If it exists, it will delete it and its corresponding file next = (ngx_msec_t) ngx_http_file_cache_expire(cache) * 1000; // next is the return value of the ngx_http_file_cache_expire() method, which returns 0 only in two cases: // 1. 
When the number of deleted files exceeds the number of files specified by manager_files; // 2. When the total time taken to delete each file exceeds the total time specified by manager_threshold; // If next is 0, it means that a batch of cache cleanup work has been completed. At this time, it is necessary to sleep for a while before the next cleanup work. // The duration of this sleep is the value specified by manager_sleep. That is to say, the value of next here is actually the waiting time for the next // cache cleanup if (next == 0) { next = cache->manager_sleep; goto done; } for ( ;; ) { ngx_shmtx_lock(&cache->shpool->mutex); // size here refers to the total size used by the current cache // count specifies the number of files in the current cache // watermark indicates the water level, which is 7/8 of the total number of files that can be stored size = cache->sh->size; count = cache->sh->count; watermark = cache->sh->watermark; ngx_shmtx_unlock(&cache->shpool->mutex); ngx_log_debug3(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0, "http file cache size: %O c:%ui w:%i", size, count, (ngx_int_t) watermark); // If the memory size used by the current cache is less than the maximum size that can be used and the number of cached files is less than the watermark, // Indicates that cache files can continue to be stored, then exit the loop if (size < cache->max_size && count < watermark) { break; } // Going here means that the available shared memory resources are insufficient // Here is mainly to force the deletion of unreferenced files in the current queue, regardless of whether they are expired wait = ngx_http_file_cache_forced_expire(cache); // Calculate the next execution time if (wait > 0) { next = (ngx_msec_t) wait * 1000; break; } // If the current nginx has exited or terminated, jump out of the loop if (ngx_quit || ngx_terminate) { break; } // If the number of files currently being deleted exceeds the number specified by manager_files, then exit the loop. 
// And specify the sleep time required before the next cleanup if (++cache->files >= cache->manager_files) { next = cache->manager_sleep; break; } ngx_time_update(); elapsed = ngx_abs((ngx_msec_int_t) (ngx_current_msec - cache->last)); // If the current deletion action takes longer than the time specified by manager_threshold, then exit the loop. // And specify the sleep time required before the next cleanup if (elapsed >= cache->manager_threshold) { next = cache->manager_sleep; break; } } done: elapsed = ngx_abs((ngx_msec_int_t) (ngx_current_msec - cache->last)); ngx_log_debug3(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0, "http file cache manager: %ui e:%M n:%M", cache->files, elapsed, next); return next; } In the ngx_http_file_cache_manager() method, the ngx_http_file_cache_expire() method is first entered. The main function of this method is to check whether the element at the tail of the current shared memory queue is expired. If expired, it is determined whether the element and the disk file corresponding to the element need to be deleted based on the number of references and whether it is being deleted. After this check, it will enter an infinite for loop. The main purpose of the loop here is to check whether the current shared memory resources are tight, that is, whether the memory used exceeds the maximum memory defined by max_size, or the total number of files currently cached exceeds 7/8 of the total number of files. If one of these two conditions is met, an attempt will be made to forcefully clear the cache file. The so-called forced clearing is to delete all elements in the current shared memory with a reference count of 0 and their corresponding disk files. 
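The loop's exit condition can be isolated into a single predicate. This sketch follows the description above (under_pressure is an invented name; in nginx the inputs come from cache->sh, read under the shared-memory mutex): cleanup keeps running while either the used size reaches max_size or the file count reaches the 7/8 watermark.

```c
#include <stdint.h>

/* Forced cleanup continues while either resource limit is hit:
 * the bytes used by cached files, or the watermark of 7/8 of the
 * number of files the cache can hold. */
int under_pressure(int64_t size, int64_t max_size,
                   uint64_t count, uint64_t capacity)
{
    uint64_t watermark = capacity / 8 * 7;
    return size >= max_size || count >= watermark;
}
```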
Here we first read the ngx_http_file_cache_expire() method:

```c
static time_t
ngx_http_file_cache_expire(ngx_http_file_cache_t *cache)
{
    u_char                      *name, *p;
    size_t                       len;
    time_t                       now, wait;
    ngx_path_t                  *path;
    ngx_msec_t                   elapsed;
    ngx_queue_t                 *q;
    ngx_http_file_cache_node_t  *fcn;
    u_char                       key[2 * NGX_HTTP_CACHE_KEY_LEN];

    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0,
                   "http file cache expire");

    path = cache->path;
    len = path->name.len + 1 + path->len + 2 * NGX_HTTP_CACHE_KEY_LEN;

    name = ngx_alloc(len + 1, ngx_cycle->log);
    if (name == NULL) {
        return 10;
    }

    ngx_memcpy(name, path->name.data, path->name.len);

    now = ngx_time();

    ngx_shmtx_lock(&cache->shpool->mutex);

    for ( ;; ) {

        // If nginx is quitting or terminating, break out of the loop
        if (ngx_quit || ngx_terminate) {
            wait = 1;
            break;
        }

        // If the shared memory queue is empty, break out of the loop
        if (ngx_queue_empty(&cache->sh->queue)) {
            wait = 10;
            break;
        }

        // Get the last element of the queue
        q = ngx_queue_last(&cache->sh->queue);

        // Get the queue node
        fcn = ngx_queue_data(q, ngx_http_file_cache_node_t, queue);

        // Calculate how long until the node's expiration time
        wait = fcn->expire - now;

        // If the node has not expired yet, break out of the loop
        if (wait > 0) {
            wait = wait > 10 ? 10 : wait;
            break;
        }

        ngx_log_debug6(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0,
                       "http file cache expire: #%d %d %02xd%02xd%02xd%02xd",
                       fcn->count, fcn->exists,
                       fcn->key[0], fcn->key[1], fcn->key[2], fcn->key[3]);

        // count is the number of references to the current node.
        // If the reference count is 0, delete the node directly
        if (fcn->count == 0) {
            // Remove the node from the queue and delete the file behind it
            ngx_http_file_cache_delete(cache, q, name);
            goto next;
        }

        // If the node is already being deleted, this process need not handle it
        if (fcn->deleting) {
            wait = 1;
            break;
        }

        // Reaching this point means the node has expired, but its reference count
        // is greater than 0 and no process is deleting it.
        // Compute the hex file name that corresponds to the node
        p = ngx_hex_dump(key, (u_char *) &fcn->node.key,
                         sizeof(ngx_rbtree_key_t));
        len = NGX_HTTP_CACHE_KEY_LEN - sizeof(ngx_rbtree_key_t);
        (void) ngx_hex_dump(p, fcn->key, len);

        // The node has expired in time, but requests still reference it and no
        // process is deleting it, so it should be kept: remove it from the tail
        // of the queue, give it a new expiration time, and insert it at the head
        ngx_queue_remove(q);
        fcn->expire = ngx_time() + cache->inactive;
        ngx_queue_insert_head(&cache->sh->queue, &fcn->queue);

        ngx_log_error(NGX_LOG_ALERT, ngx_cycle->log, 0,
                      "ignore long locked inactive cache entry %*s, count:%d",
                      (size_t) 2 * NGX_HTTP_CACHE_KEY_LEN, key, fcn->count);

    next:

        // This point is reached only after the tail node and its file were deleted.
        // cache->files counts the nodes processed so far; manager_files caps how
        // many files one forced LRU cleanup pass may clear (default 100).
        // If cache->files reaches manager_files, break out of the loop
        if (++cache->files >= cache->manager_files) {
            wait = 0;
            break;
        }

        // Update nginx's cached time
        ngx_time_update();

        // elapsed is the total time taken by the current cleanup pass
        elapsed = ngx_abs((ngx_msec_int_t) (ngx_current_msec - cache->last));

        // If it exceeds the value specified by manager_threshold, break out
        if (elapsed >= cache->manager_threshold) {
            wait = 0;
            break;
        }
    }

    // Release the lock
    ngx_shmtx_unlock(&cache->shpool->mutex);

    ngx_free(name);

    return wait;
}
```

As can be seen, the processing logic first examines the element at the tail of the queue: under the LRU scheme, the tail element is the one most likely to have expired, so only it needs to be checked. If it has not expired, the method returns. Otherwise, if its reference count is 0, the element and its corresponding disk file are deleted directly. If the reference count is not 0, the method checks whether the element is already being deleted; note that a process performing a deletion sets the element's reference count to 1 to prevent other processes from attempting the same deletion. If the element is being deleted, the current process leaves it alone. Otherwise, the element is moved from the tail of the queue to the head: although it has expired, its reference count is not 0 and no process is deleting it, which means it is still an active element and should be kept. Next, let's look at how the cache manager forcibly removes elements when resources are tight.
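The four-way decision that the expire pass makes for the tail node can be condensed into a small helper. This is a sketch under stated assumptions: `expire_action` and the enum are hypothetical names, and real nginx interleaves this decision with the queue, hash, and file operations shown above.

```c
#include <time.h>

typedef enum {
    EXPIRE_KEEP,     /* not expired yet: stop scanning the queue */
    EXPIRE_DELETE,   /* expired and unreferenced: remove node and its file */
    EXPIRE_SKIP,     /* expired, but another process is already deleting it */
    EXPIRE_REQUEUE   /* expired yet still referenced: move to the queue head */
} expire_action_t;

/* Decide what the expire pass should do with the node at the queue tail. */
static expire_action_t
expire_action(time_t now, time_t expire, unsigned count, unsigned deleting)
{
    if (expire - now > 0) {
        return EXPIRE_KEEP;
    }
    if (count == 0) {
        return EXPIRE_DELETE;
    }
    if (deleting) {
        return EXPIRE_SKIP;
    }
    return EXPIRE_REQUEUE;
}
```

The order of the checks matters: the reference count is only consulted once the node is known to be expired, and the `deleting` flag only once the node is known to be referenced.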
The following is the source code of the ngx_http_file_cache_forced_expire() method:

```c
static time_t
ngx_http_file_cache_forced_expire(ngx_http_file_cache_t *cache)
{
    u_char                      *name;
    size_t                       len;
    time_t                       wait;
    ngx_uint_t                   tries;
    ngx_path_t                  *path;
    ngx_queue_t                 *q;
    ngx_http_file_cache_node_t  *fcn;

    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0,
                   "http file cache forced expire");

    path = cache->path;
    len = path->name.len + 1 + path->len + 2 * NGX_HTTP_CACHE_KEY_LEN;

    name = ngx_alloc(len + 1, ngx_cycle->log);
    if (name == NULL) {
        return 10;
    }

    ngx_memcpy(name, path->name.data, path->name.len);

    wait = 10;
    tries = 20;

    ngx_shmtx_lock(&cache->shpool->mutex);

    // Walk the queue from the tail towards the head
    for (q = ngx_queue_last(&cache->sh->queue);
         q != ngx_queue_sentinel(&cache->sh->queue);
         q = ngx_queue_prev(q))
    {
        // Get the data of the current node
        fcn = ngx_queue_data(q, ngx_http_file_cache_node_t, queue);

        ngx_log_debug6(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0,
                       "http file cache forced expire: #%d %d %02xd%02xd%02xd%02xd",
                       fcn->count, fcn->exists,
                       fcn->key[0], fcn->key[1], fcn->key[2], fcn->key[3]);

        // If the node's reference count is 0, delete it directly
        if (fcn->count == 0) {
            ngx_http_file_cache_delete(cache, q, name);
            wait = 0;

        } else {
            // Try the next node; after 20 consecutive nodes with a reference
            // count greater than 0, give up and exit the loop
            if (--tries) {
                continue;
            }

            wait = 1;
        }

        break;
    }

    ngx_shmtx_unlock(&cache->shpool->mutex);

    ngx_free(name);

    return wait;
}
```

The processing logic here is relatively simple: starting from the tail of the queue, the method looks for an element whose reference count is 0. The first such element it finds is deleted together with its disk file, after which the loop exits. If 20 elements in a row are still referenced, the method gives up and exits the loop without deleting anything.
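The scan-from-the-tail behavior can be modeled over a plain array of reference counts. This is an illustrative sketch, not nginx source: `pick_forced_expire` is a hypothetical name, and the array mimics the LRU queue from head (index 0) to tail (index n-1).

```c
/* Scan refcount[] from the tail, as nginx scans its LRU queue.
   Returns the index of the first unreferenced entry (the one nginx would
   delete along with its file), or -1 after seeing 20 referenced entries
   without finding one (mirroring tries = 20 in the real code). */
static int
pick_forced_expire(const unsigned *refcount, int n)
{
    int tries = 20;
    int i;

    for (i = n - 1; i >= 0; i--) {
        if (refcount[i] == 0) {
            return i;        /* delete this node and its disk file */
        }
        if (--tries == 0) {
            break;           /* 20 busy entries in a row: give up */
        }
    }

    return -1;
}
```

The bounded `tries` counter keeps the forced pass from holding the shared memory mutex for long on a cache where nearly everything is referenced.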
3.4 Cache loader process processing logic

As mentioned earlier, the main processing flow of the cache loader is in the ngx_cache_loader_process_handler() method. The following is the main logic of that method:

```c
static void
ngx_cache_loader_process_handler(ngx_event_t *ev)
{
    ngx_uint_t     i;
    ngx_path_t   **path;
    ngx_cycle_t   *cycle;

    cycle = (ngx_cycle_t *) ngx_cycle;

    path = cycle->paths.elts;
    for (i = 0; i < cycle->paths.nelts; i++) {

        if (ngx_terminate || ngx_quit) {
            break;
        }

        // The loader method here refers to the ngx_http_file_cache_loader() method
        if (path[i]->loader) {
            path[i]->loader(path[i]->data);
            ngx_time_update();
        }
    }

    // Exit the current process once loading is complete
    exit(0);
}
```

The main processing flow of the cache loader is very similar to that of the cache manager: data is loaded by calling the loader() method of each path. The concrete implementation of loader() is likewise assigned when the proxy_cache_path configuration item is parsed (see the last part of Section 3.1). Here we continue with the source code of the ngx_http_file_cache_loader() method:

```c
static void
ngx_http_file_cache_loader(void *data)
{
    ngx_http_file_cache_t  *cache = data;
    ngx_tree_ctx_t          tree;

    // If loading has already finished or is in progress, return directly
    if (!cache->sh->cold || cache->sh->loading) {
        return;
    }

    // Try to take the loading lock
    if (!ngx_atomic_cmp_set(&cache->sh->loading, 0, ngx_pid)) {
        return;
    }

    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0,
                   "http file cache loader");

    // tree is the context object that drives the load;
    // the loading process is performed recursively
    tree.init_handler = NULL;
    // Handles loading a single file
    tree.file_handler = ngx_http_file_cache_manage_file;
    // Runs before a directory is entered; mainly checks access permissions
    tree.pre_tree_handler = ngx_http_file_cache_manage_directory;
    // Runs after a directory has been processed; effectively a no-op
    tree.post_tree_handler = ngx_http_file_cache_noop;
    // Handles special files (neither regular files nor directories): delete them
    tree.spec_handler = ngx_http_file_cache_delete_file;
    tree.data = cache;
    tree.alloc = 0;
    tree.log = ngx_cycle->log;

    cache->last = ngx_current_msec;
    cache->files = 0;

    // Recursively walk every file under the cache directory and process each
    // with the handlers defined above, i.e. load them into shared memory
    if (ngx_walk_tree(&tree, &cache->path->name) == NGX_ABORT) {
        cache->sh->loading = 0;
        return;
    }

    // Mark the loading state as finished
    cache->sh->cold = 0;
    cache->sh->loading = 0;

    ngx_log_error(NGX_LOG_NOTICE, ngx_cycle->log, 0,
                  "http file cache: %V %.3fM, bsize: %uz",
                  &cache->path->name,
                  ((double) cache->sh->size * cache->bsize) / (1024 * 1024),
                  cache->bsize);
}
```

During loading, the target directory is first wrapped in an ngx_tree_ctx_t structure, which specifies the handlers used to load files. The actual traversal happens in the ngx_walk_tree() method, and the whole loading process is implemented recursively.
The following is the implementation of the ngx_walk_tree() method:

```c
ngx_int_t
ngx_walk_tree(ngx_tree_ctx_t *ctx, ngx_str_t *tree)
{
    void       *data, *prev;
    u_char     *p, *name;
    size_t      len;
    ngx_int_t   rc;
    ngx_err_t   err;
    ngx_str_t   file, buf;
    ngx_dir_t   dir;

    ngx_str_null(&buf);

    ngx_log_debug1(NGX_LOG_DEBUG_CORE, ctx->log, 0,
                   "walk tree \"%V\"", tree);

    // Open the target directory
    if (ngx_open_dir(tree, &dir) == NGX_ERROR) {
        ngx_log_error(NGX_LOG_CRIT, ctx->log, ngx_errno,
                      ngx_open_dir_n " \"%s\" failed", tree->data);
        return NGX_ERROR;
    }

    prev = ctx->data;

    // alloc was passed in as 0, so this branch is not taken
    if (ctx->alloc) {
        data = ngx_alloc(ctx->alloc, ctx->log);
        if (data == NULL) {
            goto failed;
        }

        if (ctx->init_handler(data, prev) == NGX_ABORT) {
            goto failed;
        }

        ctx->data = data;

    } else {
        data = NULL;
    }

    for ( ;; ) {

        ngx_set_errno(0);

        // Read the next entry of the current directory
        if (ngx_read_dir(&dir) == NGX_ERROR) {
            err = ngx_errno;

            if (err == NGX_ENOMOREFILES) {
                rc = NGX_OK;

            } else {
                ngx_log_error(NGX_LOG_CRIT, ctx->log, err,
                              ngx_read_dir_n " \"%s\" failed", tree->data);
                rc = NGX_ERROR;
            }

            goto done;
        }

        len = ngx_de_namelen(&dir);
        name = ngx_de_name(&dir);

        ngx_log_debug2(NGX_LOG_DEBUG_CORE, ctx->log, 0,
                       "tree name %uz:\"%s\"", len, name);

        // "." refers to the current directory: skip it
        if (len == 1 && name[0] == '.') {
            continue;
        }

        // ".." refers to the parent directory: skip it as well
        if (len == 2 && name[0] == '.' && name[1] == '.') {
            continue;
        }

        file.len = tree->len + 1 + len;

        // Grow the path buffer if needed
        if (file.len + NGX_DIR_MASK_LEN > buf.len) {

            if (buf.len) {
                ngx_free(buf.data);
            }

            buf.len = tree->len + 1 + len + NGX_DIR_MASK_LEN;

            buf.data = ngx_alloc(buf.len + 1, ctx->log);
            if (buf.data == NULL) {
                goto failed;
            }
        }

        p = ngx_cpymem(buf.data, tree->data, tree->len);
        *p++ = '/';
        ngx_memcpy(p, name, len + 1);

        file.data = buf.data;

        ngx_log_debug1(NGX_LOG_DEBUG_CORE, ctx->log, 0,
                       "tree path \"%s\"", file.data);

        if (!dir.valid_info) {
            if (ngx_de_info(file.data, &dir) == NGX_FILE_ERROR) {
                ngx_log_error(NGX_LOG_CRIT, ctx->log, ngx_errno,
                              ngx_de_info_n " \"%s\" failed", file.data);
                continue;
            }
        }

        // If the entry is a regular file, call ctx->file_handler() to load it
        if (ngx_de_is_file(&dir)) {

            ngx_log_debug1(NGX_LOG_DEBUG_CORE, ctx->log, 0,
                           "tree file \"%s\"", file.data);

            // Record the file's attributes
            ctx->size = ngx_de_size(&dir);
            ctx->fs_size = ngx_de_fs_size(&dir);
            ctx->access = ngx_de_access(&dir);
            ctx->mtime = ngx_de_mtime(&dir);

            if (ctx->file_handler(ctx, &file) == NGX_ABORT) {
                goto failed;
            }

        // If the entry is a directory, first call pre_tree_handler(), then
        // recurse into it with ngx_walk_tree(), and finally call
        // post_tree_handler()
        } else if (ngx_de_is_dir(&dir)) {

            ngx_log_debug1(NGX_LOG_DEBUG_CORE, ctx->log, 0,
                           "tree enter dir \"%s\"", file.data);

            ctx->access = ngx_de_access(&dir);
            ctx->mtime = ngx_de_mtime(&dir);

            // Apply the pre-directory logic
            rc = ctx->pre_tree_handler(ctx, &file);
            if (rc == NGX_ABORT) {
                goto failed;
            }

            if (rc == NGX_DECLINED) {
                ngx_log_debug1(NGX_LOG_DEBUG_CORE, ctx->log, 0,
                               "tree skip dir \"%s\"", file.data);
                continue;
            }

            // Recurse into the subdirectory
            if (ngx_walk_tree(ctx, &file) == NGX_ABORT) {
                goto failed;
            }

            ctx->access = ngx_de_access(&dir);
            ctx->mtime = ngx_de_mtime(&dir);

            // Apply the post-directory logic
            if (ctx->post_tree_handler(ctx, &file) == NGX_ABORT) {
                goto failed;
            }

        } else {

            ngx_log_debug1(NGX_LOG_DEBUG_CORE, ctx->log, 0,
                           "tree special \"%s\"", file.data);

            if (ctx->spec_handler(ctx, &file) == NGX_ABORT) {
                goto failed;
            }
        }
    }

failed:

    rc = NGX_ABORT;

done:

    if (buf.len) {
        ngx_free(buf.data);
    }

    if (data) {
        ngx_free(data);
        ctx->data = prev;
    }

    if (ngx_close_dir(&dir) == NGX_ERROR) {
        ngx_log_error(NGX_LOG_CRIT, ctx->log, ngx_errno,
                      ngx_close_dir_n " \"%s\" failed", tree->data);
    }

    return rc;
}
```

From the flow above, we can see that the real file-loading logic lives in the ngx_http_file_cache_manage_file() method. The following is its source code:

```c
static ngx_int_t
ngx_http_file_cache_manage_file(ngx_tree_ctx_t *ctx, ngx_str_t *path)
{
    ngx_msec_t              elapsed;
    ngx_http_file_cache_t  *cache;

    cache = ctx->data;

    // Add the file to the shared memory
    if (ngx_http_file_cache_add_file(ctx, path) != NGX_OK) {
        (void) ngx_http_file_cache_delete_file(ctx, path);
    }

    // If the number of loaded files exceeds loader_files, sleep for a while
    if (++cache->files >= cache->loader_files) {
        ngx_http_file_cache_loader_sleep(cache);

    } else {
        // Update nginx's cached time
        ngx_time_update();

        // Compute how long the current load pass has been running
        elapsed = ngx_abs((ngx_msec_int_t) (ngx_current_msec - cache->last));

        ngx_log_debug1(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0,
                       "http file cache loader time elapsed: %M", elapsed);

        // If the pass has run longer than loader_threshold, sleep as well
        if (elapsed >= cache->loader_threshold) {
            ngx_http_file_cache_loader_sleep(cache);
        }
    }

    return (ngx_quit || ngx_terminate) ? NGX_ABORT : NGX_OK;
}
```

The loading logic here is fairly simple overall: each file is loaded into shared memory, and the loader then checks whether the number of loaded files has exceeded the limit; if so, it sleeps for the specified period.
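The loader's throttling rule after each file can be reduced to a single predicate. This is an illustrative sketch, assuming the proxy_cache_path defaults of loader_files=100 and loader_threshold=200 ms; `loader_should_sleep` is a hypothetical name, not an nginx function.

```c
/* Returns 1 when the cache loader should pause (for loader_sleep) before
   continuing: either it has loaded loader_files entries in the current
   batch, or the batch has been running for at least loader_threshold ms. */
static int
loader_should_sleep(unsigned files, unsigned loader_files,
                    long elapsed_ms, long threshold_ms)
{
    return files >= loader_files || elapsed_ms >= threshold_ms;
}
```

Bounding both the file count and the elapsed time keeps the loader from starving worker processes of CPU and disk I/O while a large cache directory is read back in at startup.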
In addition, it checks whether the total time spent loading has exceeded loader_threshold; if so, it likewise sleeps for the specified period.

4. Summary

This article first explained how to use nginx shared memory and the meaning of each parameter, then explained the implementation principle of shared memory, focusing on its initialization and the working principles of the cache manager and cache loader processes.