1. Some problems encountered

I remember doing performance testing back in 2008. We had bought 7 new Lenovo servers (4 cores, 4 GB RAM) for the purpose. Resources were tight at the time, so all 7 machines were installed as dual-boot systems (Win2003/CentOS 5) and doubled as test machines (stress-testing agents) when idle. I ran a series of tests against Nginx on that batch, and the one that impressed me most was the stress test of the Nginx status page: for short connections the best QPS was about 40,000, and for long connections the highest QPS was about 130,000.

About 3 years later, nobody cared about that batch of Lenovo servers anymore and they were left as zombie machines. But a casual retest revealed something odd: no matter how powerful the server, the best short-connection QPS would not go much higher. The test machines were not exhausted, the server under test was not exhausted, and there was no network bottleneck. Server resource utilization stayed low, yet the responses simply were not fast enough.

Eventually we found that the bottleneck was the listening entry point: all workers were accepting connections through a single listening socket. Could the performance of that listen entry be improved? Could the port be shared? That is how we found SO_REUSEPORT. SO_REUSEPORT lets multiple processes or threads bind to the same port, each with its own listening socket, which improves the performance of the server program.
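To make the mechanism concrete, here is a minimal sketch, not taken from the original setup, of what SO_REUSEPORT does at the socket level: each process sets the option before bind(), binds its own socket to the same port, and the kernel spreads incoming connections across the listeners. The port 8080 and the toy HTTP response are placeholders for illustration only.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Minimal SO_REUSEPORT sketch: start several copies of this process and each
 * binds its own listening socket to port 8080 (Linux 3.9+); the kernel then
 * load-balances new connections across the listeners. */
int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;

    /* Must be set before bind(); every process sharing the port needs it. */
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) < 0) {
        perror("setsockopt(SO_REUSEPORT)");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    listen(fd, 511);

    static const char resp[] =
        "HTTP/1.0 200 OK\r\nContent-Length: 3\r\n\r\nok\n";
    for (;;) {
        int c = accept(fd, NULL, NULL); /* each process drains its own queue */
        if (c < 0)
            continue;
        if (write(c, resp, sizeof(resp) - 1) < 0)
            perror("write");
        close(c);
    }
}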
2. Solution

Test environment

View the compilation parameters. The Nginx configuration is as follows; note the reuse_port parameter:

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    use epoll;
    multi_accept on;
    reuse_port on;
    worker_connections 1048576;
}

dso { # dynamically loaded modules, from /usr/share/nginx/modules
    load ngx_http_memcached_module.so;
    load ngx_http_limit_conn_module.so;
    load ngx_http_empty_gif_module.so;
    load ngx_http_scgi_module.so;
    load ngx_http_upstream_session_sticky_module.so;
    load ngx_http_user_agent_module.so;
    load ngx_http_referer_module.so;
    load ngx_http_upstream_least_conn_module.so;
    load ngx_http_uwsgi_module.so;
    load ngx_http_reqstat_module.so;
    load ngx_http_browser_module.so;
    load ngx_http_limit_req_module.so;
    load ngx_http_split_clients_module.so;
    load ngx_http_upstream_ip_hash_module.so;
}

http {
    include /etc/nginx/mime.types;
    default_type text/plain;

    access_log off;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;
    keepalive_timeout 120;
    server_names_hash_bucket_size 512;
    server_name_in_redirect off;

    fastcgi_connect_timeout 3s;
    fastcgi_send_timeout 3s;
    fastcgi_read_timeout 3s;
    fastcgi_buffer_size 128k;
    fastcgi_buffers 8 128k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;
    variables_hash_max_size 1024;

    set_real_ip_from 10.0.0.0/8;
    set_real_ip_from 172.28.0.0/16;
    set_real_ip_from 192.168.0.0/16;
    real_ip_header X-Forwarded-For;

    gzip off;
    gzip_disable "msie6";
    gzip_min_length 1k;
    gzip_buffers 16 64k;
    gzip_http_version 1.1;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_vary on;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    server {
        listen 80 backlog=65535;
        charset utf-8;

        location / { # print the Tengine status page
            stub_status on;  # enable the status page; requires http_stub_status_module
            access_log off;  # do not log these requests
        }

        location ~ ^(.*)\/\.(svn|git|hg|bzr|cvs)\/ { # block version-control directories
            deny all;
            access_log off;
            log_not_found off;
        }

        location ~ /\. { # block directories or files starting with ".", e.g. .htaccess, .bash_history
            deny all;
            access_log off;
            log_not_found off;
        }

        location /do_not_delete.html {
            access_log off;
            empty_gif;
        }
    }
}

Stress test with reuse_port

Tengine already supports reuse_port. After enabling it, you will see many worker processes listening on port 80 at the same time. Once the load is applied, you will find that the server's performance really can be squeezed out. Comparing the results with and without reuse_port, everyone was shocked: the short-connection QPS exceeded 240,000! Now that the truth is out, what are you waiting for?

Digging deeper

During the test, as the load went up, the kernel logged a large number of "TCP: Possible SYN flooding on port 80." errors, so the concurrency was reduced to 60,000 and net.core.somaxconn was set to 65535 (the backlog/somaxconn relationship is sketched in the C snippet after this walkthrough). With reuse_port turned off, look at the output of perf top; then turn reuse_port on and compare the perf top results. Next, enlarge the backlog of the Nginx listen socket and watch the resource usage, and look at the listen queue at that point: it held more than 10,000 entries. Then we pushed on to 300,000 concurrent connections (MTT is the mean response time in ms). After a series of optimizations, the "TCP: Possible SYN flooding on port 80." problem no longer occurred in the same environment at the same concurrency.
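A side note on the backlog and somaxconn tuning above: on Linux, the backlog passed to listen() is silently capped at net.core.somaxconn, which is why backlog=65535 on the listen directive only takes full effect once the sysctl is raised as well. Below is a minimal C sketch of that interaction; the 65535 figure mirrors the article's settings, but the program itself is illustrative and not part of the original test.

#include <stdio.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Sketch: the backlog given to listen() is capped by net.core.somaxconn,
 * so "listen 80 backlog=65535" in nginx needs the sysctl raised as well. */
int main(void) {
    long somaxconn = 0;
    FILE *f = fopen("/proc/sys/net/core/somaxconn", "r");
    if (f) {
        if (fscanf(f, "%ld", &somaxconn) != 1)
            somaxconn = 0;
        fclose(f);
    }

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = 0;                  /* any free port; this is only a demo */
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    /* Ask for 65535; the kernel silently truncates it to somaxconn. */
    listen(fd, 65535);
    printf("requested backlog 65535, net.core.somaxconn = %ld, "
           "effective backlog = %ld\n",
           somaxconn,
           (somaxconn > 0 && somaxconn < 65535) ? somaxconn : 65535);
    return 0;
}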
However, a small number of connection timeouts still occurred.

The test is now complete. Turning on reuse_port really can improve performance by roughly 3x, so why not give it a try?

The above is the full content of this article. I hope it is helpful for everyone's study, and I hope everyone will continue to support 123WORDPRESS.COM.