PS: I've recently been reading the Nginx chapter of "High-Performance Linux Server Construction Practice", which covers Nginx in great detail. I've translated the frequently used Nginx configuration parameters and recorded my own demonstration of nginx load balancing for later review!

Detailed explanation of Nginx configuration parameters

#Define the user and group that Nginx runs as
user www www;

#Number of nginx worker processes; it is recommended to set this equal to the total number of CPU cores
worker_processes 8;

#Global error log definition; the level is one of [ debug | info | notice | warn | error | crit ]
error_log /var/log/nginx/error.log info;

#Process (PID) file
pid /var/run/nginx.pid;

#Maximum number of file descriptors a worker process may open. In theory this should be the system-wide maximum number of open files (ulimit -n) divided by the number of worker processes; however, nginx does not distribute requests perfectly evenly, so it is recommended to keep it equal to the value of ulimit -n.
worker_rlimit_nofile 65535;

#Working mode and connection limits
events {
    #Event model: [ kqueue | rtsig | epoll | /dev/poll | select | poll ]. epoll is the high-performance network I/O model on Linux kernels 2.6 and above; on FreeBSD, use the kqueue model.
    use epoll;

    #Maximum number of connections per worker process (total connections = worker_connections * worker_processes)
    worker_connections 65535;
}

#HTTP server settings
http {
    include mime.types;                    #File extension to MIME type mapping table
    default_type application/octet-stream; #Default MIME type
    #charset utf-8;                        #Default encoding

    server_names_hash_bucket_size 128;     #Server name hash table size
    client_header_buffer_size 32k;         #Buffer size for client request headers
    large_client_header_buffers 4 64k;     #Buffers for large request headers
    client_max_body_size 8m;               #Maximum request body size (upload limit)

    #Directory listing, suitable for download servers; off by default
    autoindex on;            #Show directory listings
    autoindex_exact_size on; #Show exact file sizes in bytes (default on); when off, sizes are shown approximately in kB, MB, or GB
    autoindex_localtime on;  #Show file times in server local time (default off, which shows GMT)

    sendfile on;     #Enable efficient file transfer. The sendfile directive specifies whether nginx calls the sendfile() function to output files. Set it to on for common applications; for download servers and other workloads with heavy disk I/O it can be set to off, to balance disk and network I/O and reduce system load. Note: if images display abnormally, change this to off.
    tcp_nopush on;   #Send response headers in one packet; helps prevent network congestion
    tcp_nodelay on;  #Disable Nagle's algorithm; helps prevent network congestion

    keepalive_timeout 120; #Timeout (in seconds) for keeping client connections alive; after this period the server closes the connection

    #FastCGI parameters to improve site performance: reduce resource usage and increase access speed. The parameter names below are largely self-explanatory.
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;

    #gzip module settings
    gzip on;               #Enable gzip compressed output
    gzip_min_length 1k;    #Minimum response size to compress, taken from the Content-Length header. The default is 0, which compresses pages regardless of size. A value above 1k is recommended: compressing responses smaller than 1k can actually make them larger.
    gzip_buffers 4 16k;    #Allocate 4 buffers of 16k each to hold the compressed output stream. By default nginx allocates the same amount of memory as the original data size for the gzip result.
    gzip_http_version 1.1; #Minimum HTTP version to compress (default 1.1; most browsers support gzip decompression. If the front end is squid 2.5, use 1.0)
    gzip_comp_level 2;     #Compression level, 1-9. Level 1 gives the smallest compression ratio but is fastest; level 9 gives the largest ratio but consumes the most CPU and is slowest, though because the payload is smallest it transfers fastest.
    gzip_types text/plain application/x-javascript text/css application/xml; #MIME types to compress. text/html is always included by default, so there is no need to list it; listing it works but produces a warning.
    gzip_vary on;          #Add "Vary: Accept-Encoding" so front-end cache servers can cache gzip-compressed pages,
    #for example, squid caching data that nginx has compressed.

    #Needed when limiting the number of connections per IP
    #limit_zone crawler $binary_remote_addr 10m;

    ##Upstream load balancing supports several scheduling algorithms (the main topic of the demonstration below)##

    #Virtual host configuration
    server {
        #Listening port
        listen 80;
        #There can be multiple domain names, separated by spaces
        server_name wangying.sinaapp.com;
        index index.html index.htm index.php;
        root /data/www/;

        location ~ .*\.(php|php5)?$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
        }

        #Image cache time
        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
            expires 10d;
        }

        #JS and CSS cache time
        location ~ .*\.(js|css)?$ {
            expires 1h;
        }

        #Log format (note: log_format is actually only valid in the http context; it is shown here as in the original)
        log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" $http_x_forwarded_for';
        #Access log for this virtual host
        access_log /var/log/nginx/access.log access;

        #Address for viewing Nginx status. The stub_status module reports Nginx's working state since the last startup. It is not a core module and must be enabled explicitly when Nginx is compiled and installed before it can be used.
        location /NginxStatus {
            stub_status on;
            access_log on;
            auth_basic "NginxStatus";
            auth_basic_user_file conf/htpasswd; #The htpasswd file can be generated with the htpasswd tool shipped with Apache.
        }
    }
}
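A note on the commented-out #limit_zone line above: limit_zone is the legacy syntax, superseded in nginx 1.1.8 by limit_conn_zone together with limit_conn. A minimal sketch of the modern equivalent (the zone name "perip" and the limit of 2 connections are illustrative choices, not from the original):

```nginx
# Sketch, assuming nginx >= 1.1.8 where limit_zone was replaced by
# limit_conn_zone; the zone name "perip" is illustrative.
http {
    # 10m of shared memory, keyed by the client's binary address
    limit_conn_zone $binary_remote_addr zone=perip:10m;

    server {
        location /download/ {
            # allow at most 2 simultaneous connections per client IP
            limit_conn perip 2;
        }
    }
}
```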
Nginx load balancing across multiple servers

Nginx load balancing server: 192.168.1.1
Web server list:
Web1: 192.168.1.2
Web2: 192.168.1.3
Goal: when users access the 192.168.1.1 server, Nginx load-balances their requests to the Web1 and Web2 servers.
http
{
##Upstream load balancing, four scheduling algorithms##
#Scheduling algorithm 1: round-robin (the default). Requests are assigned to the backend servers one by one, in turn. If a backend server goes down, the faulty system is automatically removed so that user access is unaffected.
upstream webhost {
server 192.168.1.2:80 ;
server 192.168.1.3:80 ;
}
#Scheduling algorithm 2: weight. Weights can be assigned based on machine configuration; the higher the weight, the greater the chance of being selected.
upstream webhost {
server 192.168.1.2:80 weight=2;
server 192.168.1.3:80 weight=3;
}
#Scheduling algorithm 3: ip_hash. Each request is assigned according to the hash of the client IP, so visitors from the same IP always reach the same backend server, which effectively solves the session-sharing problem of dynamic web pages.
upstream webhost {
ip_hash;
server 192.168.1.2:80 ;
server 192.168.1.3:80 ;
}
#Scheduling algorithm 4: url_hash. Requests are distributed according to the hash of the requested URL, so each URL is always directed to the same backend server, which improves the hit rate of backend cache servers. Older nginx releases did not support this natively and required a third-party hash module; since nginx 1.7.2 the hash directive shown below is built in.
upstream webhost {
server 192.168.1.2:80 ;
server 192.168.1.3:80 ;
hash $request_uri;
}
#Scheduling algorithm 5: fair (requires a third-party module). A smarter load-balancing algorithm than the ones above: it balances load based on page size and load time, i.e. it allocates requests according to each backend server's response time, preferring the servers that respond fastest. Nginx does not support fair natively; to use it you must download and compile in the upstream_fair module.#
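The source gives no configuration sample for fair. If the third-party upstream_fair module is compiled in, the upstream block would look roughly like this (a sketch, not verified against a particular module version):

```nginx
# Sketch: requires the third-party upstream_fair module to be compiled
# into nginx; stock nginx does not ship the "fair" directive.
upstream webhost {
    server 192.168.1.2:80;
    server 192.168.1.3:80;
    # requests go to the backend with the shortest response time
    fair;
}
```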
#Configuration of virtual host (using scheduling algorithm 3: ip_hash)
server
{
listen 80;
server_name wangying.sinaapp.com;
#Enable reverse proxying for "/"
location / {
proxy_pass http://webhost;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
#The backend Web server can obtain the user's real IP through X-Forwarded-For
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#The following are some reverse proxy configurations, optional.
proxy_set_header Host $host;
client_max_body_size 10m;     #Maximum size of a single file the client is allowed to request
client_body_buffer_size 128k; #Maximum number of bytes the proxy buffers for client request bodies
proxy_connect_timeout 90; #Nginx connection timeout with the backend server (proxy connection timeout)
proxy_send_timeout 90; #Backend server data return time (proxy send timeout)
proxy_read_timeout 90; #After the connection is successful, the backend server response time (proxy reception timeout)
proxy_buffer_size 4k;        #Buffer size on the proxy server (nginx) for the user header portion of the upstream response
proxy_buffers 4 32k;         #proxy_buffers; the average web page fits below 32k
proxy_busy_buffers_size 64k; #Buffer size under high load (proxy_buffers * 2)
proxy_temp_file_write_size 64k;
#Size of data written to a temporary file at one time when responses from the upstream server are buffered to disk
}
}

Test domain name: wangying.sinaapp.com, resolved to 192.168.1.1. When clients visit the site, Nginx load-balances them to the virtual host configurations on the Web1 and Web2 servers based on the ip_hash of the visiting IP.

A single local server: dynamic/static separation with a multi-port reverse proxy configuration

Nginx load balancing server: 192.168.1.1:80
Web server list (on the same machine):
Web1: 192.168.1.1:8080
Web2: 192.168.1.1:8081
Web3: 192.168.1.1:8082
Goal: users visit http://wangying.sinaapp.com and are load-balanced to ports 8080, 8081, and 8082 of the local server.
http
{
#Because load is balanced to local ports 8080, 8081, and 8082, server blocks listening on those ports must be set up for script parsing
server {
listen 8080;
server_name wangying.sinaapp.com;
root /mnt/hgfs/vmhtdocs/fastdfs/;
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
    #As the port-80 configuration below shows, ports 8080, 8081, and 8082 are only responsible for parsing PHP dynamic requests, so no static-file configuration is needed here
}
server {
listen 8081;
server_name wangying.sinaapp.com;
root /mnt/hgfs/vmhtdocs/fastdfs/;
index index.php index.html index.htm;
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
#The 8082 server can follow the server configuration above, changing only the listen directive#
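For completeness, the 8082 server block mentioned above would be identical to the 8081 block except for the listening port, roughly:

```nginx
# Sketch: same as the 8081 server block above, only the port changes.
server {
    listen 8082;
    server_name wangying.sinaapp.com;
    root /mnt/hgfs/vmhtdocs/fastdfs/;
    index index.php index.html index.htm;
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```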
#Local multi-port load balancing configuration#
#Because the backends are on the same machine, 127.0.0.1 can be used instead of the intranet IP
#The name after upstream is just an identifier; it can be a word or a domain name, and it must match the name used in proxy_pass http://webhost
upstream webhost {
server 127.0.0.1:8080;
server 127.0.0.1:8081;
server 127.0.0.1:8082;
}
#Local port 80 accepts requests as the load-balancing server
server
{
listen 80;
server_name wangying.sinaapp.com;
#Local dynamic/static separation reverse proxy configuration
#All PHP pages are handled by the local fastcgi backends
location ~ \.php$ {
proxy_pass http://webhost;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
#All static files are read directly by nginx
#Image cache time
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
expires 10d;
}
#JS and CSS cache time
location ~ .*\.(js|css)?$ {
expires 1h;
}
}

The following are additions from other netizens.

1. Main configuration section

1) Configuration necessary for normal operation

#Running user and group; the group can be omitted
user nginx nginx;

#PID file of the nginx daemon
pid path/to/nginx.pid;

#Maximum number of file handles all worker processes may open
worker_rlimit_nofile 100000;

2) Configuration related to performance optimization

#The number of worker processes should usually be slightly less than the number of physical CPU cores; "auto" detects it automatically
worker_processes auto;

#CPU affinity binding (CPU context switching still cannot be avoided entirely)
#Advantage: improves cache hit rate
#context switch: causes unnecessary CPU consumption
#http://blog.chinaunix.net/uid-20662363-id-2953741.html
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;

#Timer resolution. After a request reaches nginx and nginx responds, it must obtain the system time to record in the log; under high concurrency the time may be fetched many times per second.
#Lowering the resolution reduces the number of gettimeofday() system calls
timer_resolution 100ms;

#Nice value of the worker processes: the smaller the number, the higher the priority
#nice range: -20 to 19, corresponding to internal priorities 100 to 139
worker_priority number;

2. Event-related configuration

events {
    #Load-balancing mutex used when the master dispatches user requests to worker processes: "on" means the workers accept new requests in turn, in a serialized manner
    accept_mutex {off|on};
    #Delay before retrying the mutex; the default is 500ms
    accept_mutex_delay time;
    #Path of the lock file used by accept_mutex
    lock_file file;
    #Event model to use; it is recommended to let Nginx choose automatically
    use [epoll|rtsig|select|poll];
    #Maximum number of concurrent connections per worker process (total = worker_processes * worker_connections)
    worker_connections 2048;
    #Accept as many connections as possible after receiving a new-connection notification
    multi_accept on;
}

3. Directives for debugging and locating problems

#Whether to run nginx as a daemon; set to off for debugging
daemon {on|off};

#Whether to run in master/worker mode; can be set to off during debugging
master_process {on|off};

#Error log location and level; to use the debug level, nginx must be compiled with the --with-debug option
error_log file | stderr | syslog:server=address[,parameter=value] | memory:size [debug|info|notice|warn|error|crit|alert|emerg];

Summary: parameters that often need tuning: worker_processes, worker_connections, worker_cpu_affinity, worker_priority
Apply new configuration changes: nginx -s reload
The other signals stop, quit, and reopen can be viewed with nginx -h

4. Configuring nginx as a web server

http {}: configuration introduced by the ngx_http_core_module module. Framework:

http {
    upstream {
        ...
    }
    server {
        location URL {
            root "/path/to/somedir"
            ...
        } #Similar to <Location> in httpd; defines the mapping between a URL and the local file system
        location URL {
            if ... {
                ...
            }
        }
    } #Each server is similar to a <VirtualHost> in httpd
    server {
        ...
    }
}

Note: http-related directives can only be used in the http, server, location, upstream, and if contexts, and some directives apply only to a subset of these five contexts.

http {
    #Show or hide the nginx version number in error pages
    server_tokens on;
    #!server_tag on;
    #!server_info on;
    #Optimize disk I/O: specifies whether nginx calls sendfile() to output files. Set to on for common applications; set to off for disk-I/O-heavy applications such as downloads
    sendfile on;
    #Send all headers in one packet rather than one by one
    tcp_nopush on;
    #Do not buffer data; send it piece by piece
    #Timeout for keep-alive connections; the default is 75s
    keepalive_timeout 30;
    #Maximum number of requests that may be served over one keep-alive connection
    keepalive_requests 20;
    #Disable keep-alive for specific User-Agent types
    keepalive_disable [msie6|safari|none];
    #Whether to use the TCP_NODELAY option on keep-alive connections, i.e. not to combine multiple small packets for transmission
    tcp_nodelay on;
    #Timeout for reading the http request header
    client_header_timeout #;
    #Timeout for reading the http request body
    client_body_timeout #;
    #Timeout for sending the response
    send_timeout #;
    #Shared memory zone in which per-key state is kept; 5m means 5 megabytes
    limit_conn_zone $binary_remote_addr zone=addr:5m;
    #Maximum number of connections per key. The key here is addr and the value is 100, meaning each IP address may hold at most 100 simultaneous connections
    limit_conn addr 100;

    #include inlines the contents of another file into the current file
    include mime.types;
    #Default MIME type for files
    default_type text/html;
    #Default character set
    charset UTF-8;

    #Send data gzip-compressed to reduce the amount transferred, at the cost of extra request-processing and CPU time; the trade-off must be weighed
    gzip on;
    #Add "Vary: Accept-Encoding" for proxy servers: some browsers support compression and some do not, so whether to serve compressed data is decided from the client's HTTP headers
    gzip_vary on;
    #Before compressing a resource, check whether a pre-gzipped version already exists
    #!gzip_static on;
    #Disable gzip for the specified clients
    gzip_disable "MSIE[1-6]\.";
    #Allow or disallow compression of proxied responses; "any" compresses all requests
    gzip_proxied any;
    #Minimum response size to compress; requests smaller than 10240 bytes are not compressed, since compressing them would slow them down
    gzip_min_length 10240;
    #Compression level, 1-9; 9 is the slowest with the highest compression ratio
    gzip_comp_level 2;
    #Data formats to compress
    gzip_types text/plain text/css text/xml text/javascript application/json application/x-javascript application/xml application/xml+rss;

    #Enable the open-file cache and set the maximum number of cached entries; if a file is not requested within 20 seconds its cache entry is dropped
    open_file_cache max=100000 inactive=20s;
    #How often to validate cached entries
    open_file_cache_valid 60s;
    #Minimum number of accesses before a file is cached; only files accessed more than 5 times are cached
    open_file_cache_min_uses 5;
    #Whether to cache errors encountered when looking up a file
    open_file_cache_errors on;
    #Maximum request body size allowed from the client
    client_max_body_size 8m;
    #Buffer size for client request headers
    client_header_buffer_size 32k;
    #Include all configuration files under /etc/nginx/vhosts. If there are many host names, creating one file per host name makes management easier
    include /etc/nginx/vhosts/*;
}

5. Virtual host settings module

#Load-balancing server list (I usually put the upstream definition in the configuration file of the corresponding virtual host)
upstream fansik {
    #Backend server scheduling rule
    ip_hash;
    #The weight parameter sets the weight; the higher the weight, the greater the probability of being selected
    server 192.168.1.101:8081 weight=5;
    server 192.168.1.102:8081 max_fails=3 fail_timeout=10s;
}

server {
    #Listen on port 80
    listen 80;
    #Define the host name. There can be multiple host names, and names may use regular expressions (~) or wildcards. Matching order:
    #(1) exact match first
    #(2) left-side wildcard match: *.fansik.com
    #(3) right-side wildcard match: mail.*
    #(4) regular expression match: e.g. ~^.*\.fansik\.com$
    #(5) default_server
    server_name www.jb51.net;
    #Access log for this virtual host
    access_log logs/www.jb51.net.access.log;

    location [=|~|~*|^~] uri {...}
    Function: matches the defined locations against the URI requested by the user; when a match is found, the request is processed by the configuration in the corresponding location block.
    =: exact match
    ~: regular-expression match, case-sensitive
    ~*: regular-expression match, case-insensitive
    ^~: prefix match on the first part of the URI; regular expressions are not supported
    !~: case-sensitive non-matching regular expression
    !~*: case-insensitive non-matching regular expression
    /: universal match; any request will match

    location / {
        #Default website root directory for this server
        root html;
        #Names of the index files
        index index.html index.htm;
        #Include the reverse proxy configuration; the configuration file directory depends on the compile-time parameters.
        #If --conf-path=/etc/nginx/nginx.conf was given at compile time, put proxy.conf in /etc/nginx/
        #If no configuration file path was specified, put proxy.conf in nginx's conf directory
        include proxy.conf;
        #Backend load-balancing server group
        proxy_pass http://fansik;
    }

    The difference between "alias path" and "root path":
    location /images/ {
        root "/data/images";
    }
    //www.jb51.net/images/a.jpg <-- /data/images/images/a.jpg
    location /images/ {
        alias "/data/images/";
    }
    //www.jb51.net/images/a.jpg <-- /data/images/a.jpg
    #Error pages
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }

    #Address for viewing Nginx status
    #Can only be defined in a location
    #htpasswd -c -m /etc/nginx/.htpasswd fansik (the -c option is used only when creating the file for the first time)
    location /Status {
        stub_status on;
        allow all;
        #access_log off;
        #allow 192.168.1.0/24;
        #deny all;
        #auth_basic "Status";
        #auth_basic_user_file /etc/nginx/.htpasswd;
    }

    Example status output:
    Active connections: 1 (connections currently open)
    server accepts handled requests
    174 (connections accepted) 174 (connections handled) 492 (requests processed; in keep-alive mode there may be more requests than connections)
    Reading: 0 Writing: 1 Waiting: 0
    Reading: the number of connections currently reading requests.
    Writing: the number of connections that have finished reading the request and are processing it or sending the response.
    Waiting: the number of idle connections in keep-alive mode.
    #IP-based access control: allow IP/Netmask, deny IP/Netmask
    location ~ /\.ht {
        deny all;
    }
}

6. Reverse proxy configuration (usually placed in a separate file, proxy.conf, pulled in with include)
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
#The backend web server can obtain the user's real IP through X-Forwarded-For
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#Timeout for nginx connecting to the backend server (proxy connect timeout)
proxy_connect_timeout 60;
#Backend server response time after the connection succeeds (proxy read timeout)
proxy_read_timeout 120;
#Time for sending data to the backend server (proxy send timeout)
proxy_send_timeout 20;
#Buffer size on the proxy server (nginx) for the user header portion of the upstream response
proxy_buffer_size 32k;
#proxy_buffers; the average web page fits below 32k
proxy_buffers 4 128k;
#Buffer size under high load (proxy_buffers * 2)
proxy_busy_buffers_size 256k;
#Size of data written to a temporary file at one time when responses from the upstream server are buffered to disk
proxy_temp_file_write_size 256k;
#1G in-memory key zone; entries unused for 3 days are deleted; maximum disk cache of 2G
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=cache_one:1024m inactive=3d max_size=2g;

7. Configuring the https service
server {
    listen 443 ssl;
    server_name test.fansik.cn;
    ssl_certificate 100doc.cn.crt;
    ssl_certificate_key 100doc.cn.key;
    ssl_session_cache shared:SSL:1m;
    #(note: SSLv2 and SSLv3 are obsolete and insecure; modern deployments should prefer TLSv1.2 or later)
    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_session_timeout 5m;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers on;
    location / {
        root /data/app;
        index index.html index.htm;
    }
}
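The protocols and ciphers shown above reflect the era of the original article. A modernized sketch (server name and certificate paths are illustrative, carried over from the example above) might look like this:

```nginx
# Sketch: a more current TLS configuration; certificate paths are
# illustrative, taken from the example above.
server {
    listen 443 ssl;
    server_name test.fansik.cn;
    ssl_certificate 100doc.cn.crt;
    ssl_certificate_key 100doc.cn.key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    # only modern protocol versions
    ssl_protocols TLSv1.2 TLSv1.3;
    # exclude anonymous and broken cipher suites
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    location / {
        root /data/app;
        index index.html index.htm;
    }
}
```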
8. URL address rewriting

rewrite regex replacement flag

For example:
rewrite ^/images/(.*\.jpg)$ /imgs/$1 break; #$1 is the content captured by the parentheses
//www.jb51.net/images/a/1.jpg --> //www.jb51.net/imgs/a/1.jpg

flag:
last: once this rule rewrites the URI, the remaining rewrite rules are skipped and nginx restarts the search for a matching location with the rewritten URI, beginning a similar matching process from scratch.
break: once this rule rewrites the URI, rewrite processing stops and the request is served in the current location, without being checked against any further rewrite rules.
redirect: returns the new URL with a 302 response code (temporary redirect)
permanent: returns the new URL with a 301 response code (permanent redirect)

9. if conditions

Syntax: if (condition) {...}
Applicable contexts: server, location
condition:
(1) A variable name: false if the value is an empty string or starts with "0", otherwise true
(2) Comparison expressions with variables as operands, using operators such as = and !=
(3) Regular-expression pattern matching:
    ~: case-sensitive pattern match
    ~*: case-insensitive pattern match
    !~ and !~*: negations of the two tests above
(4) Test whether the path is a regular file: -f, !-f
(5) Test whether the path is a directory: -d, !-d
(6) Test whether the file exists: -e, !-e
(7) Test whether the file has execute permission: -x, !-x

For example:
if ($http_user_agent ~* MSIE) {
    rewrite ^(.*)$ /msie/$1 break;
}
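The difference between last and break is easiest to see with two locations. A sketch (the paths /old/, /new/, /download/, and /files/ are illustrative, not from the original):

```nginx
# Sketch: illustrative paths showing "last" vs "break".

# "last": nginx re-runs location matching with the rewritten URI,
# so a request for /old/x is eventually served by the /new/ location.
location /old/ {
    rewrite ^/old/(.*)$ /new/$1 last;
}
location /new/ {
    root /data/www;   # /new/x is served from /data/www/new/x
}

# "break": rewriting stops and the request is served in this same
# location, without another round of location matching.
location /download/ {
    rewrite ^/download/(.*)$ /files/$1 break;
    root /data/www;   # /download/x is served from /data/www/files/x
}
```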
10. Anti-hotlinking
location ~* \.(jpg|gif|jpeg|png)$ {
    valid_referers none blocked www.jb51.net;
    if ($invalid_referer) {
        rewrite ^/ //www.jb51.net/403.html;
    }
}
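An alternative sketch: instead of rewriting hotlinked requests to an error page, return a 403 directly, which avoids serving any substitute content:

```nginx
# Sketch: respond 403 to hotlinkers instead of rewriting to an error page.
location ~* \.(jpg|gif|jpeg|png)$ {
    valid_referers none blocked www.jb51.net;
    if ($invalid_referer) {
        return 403;
    }
}
```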
Summary: finally, I recommend a site for in-depth study of Nginx: http://tengine.taobao.org/book/index.html