How to improve your web performance 3x by turning on one parameter in Nginx

1. Some problems encountered

I remember that when we were doing performance testing back in 2008, we bought seven new Lenovo servers (4 cores and 4 GB of RAM each) for the job.

Resources were tight at the time, so all seven machines were dual-booted (Windows Server 2003 / CentOS 5) and doubled as stress-testing agents (load generators) whenever they were idle.

Back then I ran a series of tests against Nginx, and what impressed me most was stress testing the Nginx status page on this batch of machines.

With short connections the best QPS was about 40,000; with long (keep-alive) connections it peaked at about 130,000.

About three years later nobody cared about that batch of Lenovo servers any more, and they were left to idle as zombie machines.

However, a casual test revealed that no matter how powerful the server was, the peak short-connection QPS barely improved. The load generators were not saturated, the server under test was not saturated, and the network was not the bottleneck either.

Server resource utilization stayed low, yet responses were simply not fast enough.

In the end we found that the bottleneck was the listening entry point itself! Could the performance of that single listen socket be improved? Could the port be shared across processes? Eventually we found SO_REUSEPORT.

SO_REUSEPORT lets multiple processes or threads bind to the same port, with the kernel distributing incoming connections among them, which improves the performance of multi-process server programs.
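To make the mechanism concrete, here is a minimal C sketch (not taken from the Nginx/Tengine source; port 8080 and the backlog value are arbitrary examples). Every worker process can run the same code: each one sets SO_REUSEPORT on its own socket before bind(), so several processes end up listening on the same port and the kernel spreads new connections across them. SO_REUSEPORT requires Linux 3.9 or later.

#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    /* Each worker process runs the same code and binds the same port. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;

    /* SO_REUSEPORT must be set before bind(); without it the second
       process to call bind() would get EADDRINUSE. Requires Linux 3.9+. */
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) < 0) {
        perror("setsockopt(SO_REUSEPORT)");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);  /* example port, not the article's port 80 */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(fd, 511) < 0) {
        perror("bind/listen");
        return 1;
    }

    /* accept() loop omitted; the kernel load-balances new connections
       across all sockets that are bound to this port with SO_REUSEPORT. */
    return 0;
}

If you start two copies of this program, both binds succeed; remove the setsockopt() call and the second one fails with EADDRINUSE.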

2. Solution

Test environment

Dell PowerEdge M620, Intel(R) Xeon(R) CPU E5-[email protected]
Linux 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u3 (2015-08-04) x86_64 GNU/Linux
Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)

View compilation parameters

Nginx configuration is as follows:

Note the reuse_port parameter in the events block (this is Tengine's syntax; in mainline nginx 1.9.1 and later the equivalent feature is enabled with the reuseport flag on the listen directive):

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    use epoll;
    multi_accept on;
    reuse_port on;
    worker_connections 1048576;
}

dso {
    # Dynamically load modules from /usr/share/nginx/modules
    load ngx_http_memcached_module.so;
    load ngx_http_limit_conn_module.so;
    load ngx_http_empty_gif_module.so;
    load ngx_http_scgi_module.so;
    load ngx_http_upstream_session_sticky_module.so;
    load ngx_http_user_agent_module.so;
    load ngx_http_referer_module.so;
    load ngx_http_upstream_least_conn_module.so;
    load ngx_http_uwsgi_module.so;
    load ngx_http_reqstat_module.so;
    load ngx_http_browser_module.so;
    load ngx_http_limit_req_module.so;
    load ngx_http_split_clients_module.so;
    load ngx_http_upstream_ip_hash_module.so;
}

http {
    include /etc/nginx/mime.types;
    default_type text/plain;
    access_log off;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;
    keepalive_timeout 120;
    server_names_hash_bucket_size 512;
    server_name_in_redirect off;

    fastcgi_connect_timeout 3s;
    fastcgi_send_timeout 3s;
    fastcgi_read_timeout 3s;
    fastcgi_buffer_size 128k;
    fastcgi_buffers 8 128k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;

    variables_hash_max_size 1024;

    set_real_ip_from 10.0.0.0/8;
    set_real_ip_from 172.28.0.0/16;
    set_real_ip_from 192.168.0.0/16;
    real_ip_header X-Forwarded-For;

    gzip off;
    gzip_disable "msie6";
    gzip_min_length 1k;
    gzip_buffers 16 64k;
    gzip_http_version 1.1;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_vary on;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;  # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    server {
        listen 80 backlog=65535;
        charset utf-8;

        location / {
            # Tengine status page; requires the http_stub_status_module
            stub_status on;
            access_log off;  # do not log status-page requests
        }

        location ~ ^(.*)\/\.(svn|git|hg|bzr|cvs)\/ {
            # Block version-control directories
            deny all;
            access_log off;
            log_not_found off;
        }

        location ~ /\. {
            # Block directories or files starting with ".", e.g. .htaccess, .bash_history
            deny all;
            access_log off;
            log_not_found off;
        }

        location /do_not_delete.html {
            access_log off;
            empty_gif;
        }
    }
}

Stress testing reuse_port

Tengine already supports reuse_port. Once it is enabled, you will find many worker processes listening on port 80 at the same time.

After applying load, you will find that you can finally squeeze the performance out of the server.

Comparing the results with and without reuse_port, everyone was shocked: the short-connection QPS exceeded 240,000!

Now that the truth is out, what are you waiting for?

Getting to the bottom of it

During the test, as the concurrency was increased, the kernel logged a large number of "TCP: Possible SYN flooding on port 80." errors.

So concurrency was reduced to 60,000, and net.core.somaxconn was set to 65535.
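As a side note on why somaxconn matters here: the backlog an application requests with listen() (and therefore Nginx's backlog=65535 on the listen directive) is silently capped by the kernel at net.core.somaxconn, so raising one without the other has no effect. The tiny C program below is a hypothetical illustration, not part of the article's test tooling; it just reads the current cap and shows the effective backlog.

#include <stdio.h>

int main(void) {
    /* net.core.somaxconn caps the backlog requested via listen(fd, backlog). */
    FILE *f = fopen("/proc/sys/net/core/somaxconn", "r");
    int somaxconn = 0;
    if (f == NULL || fscanf(f, "%d", &somaxconn) != 1) {
        perror("read somaxconn");
        return 1;
    }
    fclose(f);

    int requested = 65535;  /* e.g. "listen 80 backlog=65535" in nginx.conf */
    int effective = requested < somaxconn ? requested : somaxconn;
    printf("net.core.somaxconn = %d, effective backlog = %d\n", somaxconn, effective);
    return 0;
}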

First, with reuse_port turned off, we looked at the output of perf top.

Then, with reuse_port turned on, we compared the perf top output.

Next, we increased the backlog of the Nginx listen socket and watched the resource usage.

At this point the listen queue held more than 10,000 entries.

Then we pushed the load to 300,000 concurrent connections (MTT is the mean response time in ms).

After this series of optimizations, the "TCP: Possible SYN flooding on port 80." messages no longer appeared in the same environment at the same concurrency, although a small number of connection timeouts still occurred.

That completes the test. Turning on reuse_port really can improve performance by about 3x, so why not give it a try?
