【Question】

The INSERT statement is one of the most common SQL statements. Recently, a MySQL server has been raising concurrent-thread alerts from time to time. The logs show a large number of slow INSERT queries: they take dozens of seconds to execute, are waiting to flush logs, and are in the "query end" state.

【Preliminary analysis】

Looking at the resources being waited on, most of the time was spent in the innodb_log_file stage, so a disk problem was suspected. Investigation found no hardware fault on the server itself. We then enabled automatic pstack collection whenever the thread count rose, in order to locate exactly where the MySQL threads were waiting.

【Analysis process】

After automatic pstack capture was deployed, there were 6 alarms for thread concurrency >= 50 (each alarm was accompanied by a large number of slow queries), and 3 of them were captured on the spot. Whenever the number of concurrent threads rose, more than 50 threads were stuck at the same wait point; for example, the SQL statement corresponding to thread 0x519c5940 had already been executing for 18 seconds.

This is the behavior of binlog group commit: if the first thread in a group executes slowly, the subsequent threads wait, and the whole group of transactions cannot commit. The process can be illustrated as:

Session A: COMMIT --> acquire lock --> write binlog --> commit completed
Session B: COMMIT --> wait for lock -----------------> acquire lock --> write binlog --> commit completed

Why was the first thread executing so slowly? We analyzed the log files for the alarm period and found two large transactions of 15 MB and 20 MB. Checking the log details, one of them was a large "delete from" transaction covering about 230,000 records: deleting 230,000 records with ROW-format logging generates roughly 20 MB of log.
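Large transactions like the ones above can be located by inspecting the binary logs from the MySQL client. A minimal sketch (the binlog file name below is a placeholder, not the actual file from this incident):

```sql
-- List binlog files with their sizes; an unusually large file
-- covering the alarm window is the first hint.
SHOW BINARY LOGS;

-- Inspect events in a suspect file. A long run of Delete_rows
-- events between a single BEGIN and its Xid (commit) event
-- indicates one large transaction.
SHOW BINLOG EVENTS IN 'mysql-bin.000123' LIMIT 100;
```

The `mysqlbinlog` command-line utility can be used the same way when the server is not reachable from a client session.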
Flushing that much log to disk takes a long time, which blocks the commit of the other transactions in the same group. Two observations support this: the start time of the large transaction coincides with the alarm time, and the backlog of grouped transactions is then flushed to disk all at once, which shows up in the disk metrics — the disk_write_kbytes metric rises sharply during the problem period.

【Optimization plan】

1. Recommend that developers avoid using a single "delete from" statement to delete an entire table as one large transaction.

【Other workarounds】

2. Binlog in ROW format records every affected row and therefore generates a large volume of log; switching to MIXED format can, in theory, alleviate the problem.

3. Replace the disk with a higher-performance one.

【Summary】

The above is the full content of this article. I hope it has some reference value for your study or work. If you have any questions, feel free to leave a comment. Thank you for your support of 123WORDPRESS.COM.
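For recommendation 1, one common alternative is to split the deletion into small batches so that each commit writes only a small binlog chunk. A sketch, with a hypothetical table name and an illustrative batch size:

```sql
-- Instead of deleting ~230,000 rows in one transaction
-- (which produced ~20 MB of ROW-format binlog in one commit),
-- delete in small batches and repeat until 0 rows are affected.
-- Table name `t` and the filter column are placeholders.
DELETE FROM t
 WHERE created_at < '2023-01-01'
 LIMIT 5000;
```

Each batch commits independently, so no single group-commit flush has to absorb a 20 MB write; the application (or a scheduled job) re-runs the statement until it affects zero rows.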
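For workaround 2, the binlog format can be checked and changed at runtime; a sketch:

```sql
-- Check the current binlog format
SHOW VARIABLES LIKE 'binlog_format';

-- Switch to MIXED (requires sufficient privileges; affects new
-- sessions only, and should also be set in my.cnf as
-- binlog_format = MIXED to survive a restart)
SET GLOBAL binlog_format = 'MIXED';
```

Note that in MIXED mode the server still falls back to ROW logging for statements it deems unsafe, so this change reduces but does not always eliminate large ROW-format transactions.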