[Problem]

The INSERT statement is one of the most common SQL statements. Recently, a MySQL server had been raising concurrent-thread alerts from time to time. The logs showed a large number of slow INSERT statements that took tens of seconds to execute, were waiting for the log flush, and were in the "query end" state.

[Preliminary analysis]

Looking at the waits, most of the time was spent in the innodb_log_file stage, so a disk problem was suspected. Investigation found no hardware fault on the server itself. Automatic pstack collection was then enabled so that, whenever the thread count rose, we could capture where the MySQL threads were waiting.

[Analysis process]

After automatic pstack capture was deployed, there were six alerts for thread concurrency >= 50 (each alert was accompanied by a large number of slow queries), and three of them were caught in the act. When the number of concurrent threads rose, more than 50 threads were stuck at the same point in the stack. The SQL statement corresponding to thread 0x519c5940 had already been executing for 18 seconds.

This is the behavior of binlog group commit: writing the binlog is serialized behind a lock, so if the first thread in a group commits slowly, the threads behind it wait and the entire group of transactions cannot be committed. The process can be understood as follows:

Session A: COMMIT --> acquire lock --> write binlog --> commit completed
Session B: COMMIT --> wait for lock ----------------> acquire lock --> write binlog --> commit completed

Why was the first thread so slow? Analyzing the binlog files for the alarm period revealed two large transactions of 15 MB and 20 MB. Looking at the details, one of them was a large "delete from" transaction that removed an entire table of about 230,000 rows. Deleting 230,000 rows with the binlog in ROW format generates roughly 20 MB of log; flushing that much log to disk takes a long time and blocks the commit of the other transactions in the same group. The start time of that transaction coincides with the alarm time. The backlogged group of transactions was then flushed to disk all at once, which is visible in the disk metrics: the disk_write_kbytes metric rose significantly during the problem period. (A processlist query and a binlog inspection sketch illustrating these checks are included at the end of this article.)

[Optimization plan]

1. Developers should avoid using a single "delete from" statement to delete an entire table in one large transaction (see the batched-delete sketch at the end of this article).

[Other workarounds]

2. Binlog in ROW format generates a large volume of log for such statements; switching to MIXED format can, in theory, mitigate the problem (see the example at the end of this article).
3. Replace the disk with a higher-performance one.
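As a supplement to the analysis above: the sessions queued behind the slow commit show up in the processlist in the "query end" state. The following is a minimal sketch of how such sessions can be spotted live; the state list and the 5-second threshold are illustrative assumptions, not values taken from this incident.

```sql
-- Sessions stuck for more than a few seconds in a commit-related state;
-- these are typically the transactions queued behind the slow group leader.
-- ('waiting for handler commit' is the name used on newer MySQL versions.)
SELECT id, user, time, state, LEFT(info, 100) AS query_head
FROM information_schema.processlist
WHERE state IN ('query end', 'waiting for handler commit')
  AND time > 5
ORDER BY time DESC;
```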
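The 15 MB and 20 MB transactions were found by examining the binlog files for the alarm period. A similar check can be done from a client session as sketched below; the binlog file name is a placeholder, and a large transaction shows up as a big jump in End_log_pos between its BEGIN and COMMIT/Xid events.

```sql
-- List the available binlog files, then scan events in one of them.
-- 'mysql-bin.000123' is a placeholder; use a real file name from the first query.
SHOW BINARY LOGS;
SHOW BINLOG EVENTS IN 'mysql-bin.000123' LIMIT 0, 200;
```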
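For optimization 1, a common alternative to a single full-table "delete from" is to delete in small batches so that each transaction writes only a small amount of binlog. The sketch below uses a hypothetical table and column (order_history, created_at); if the whole table really can be emptied, TRUNCATE TABLE avoids row-based binlog events entirely.

```sql
-- Delete in batches of 5,000 rows instead of one 230,000-row transaction.
-- Run repeatedly (from a script or the application) until ROW_COUNT() is 0.
DELETE FROM order_history
WHERE created_at < '2022-01-01'
LIMIT 5000;

SELECT ROW_COUNT();  -- number of rows removed by the last batch

-- If the entire table can be emptied, TRUNCATE is logged as a single DDL statement.
TRUNCATE TABLE order_history;
```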
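For workaround 2, switching the binlog format from ROW to MIXED is usually done as follows. This is a sketch only: MIXED helps because eligible statements are logged once as a statement instead of one row event per deleted row, but replicas and any downstream consumers of the binlog must be able to tolerate the change.

```sql
-- Check the current binlog format and switch to MIXED at runtime.
-- Add binlog_format = MIXED to my.cnf as well so the change survives a restart.
SHOW GLOBAL VARIABLES LIKE 'binlog_format';
SET GLOBAL binlog_format = 'MIXED';  -- affects sessions started after this change
```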