Detailed explanation of MySQL slow queries

Querying MySQL operation information

show status; -- Display all MySQL status information
show status like "com_insert%"; -- Get the number of insert operations
show status like "com_delete%"; -- Get the number of delete operations
show status like "com_select%"; -- Get the number of select operations
show status like "uptime"; -- Get the MySQL server uptime
show status like "connections"; -- Get the number of MySQL connections

The full syntax is show [session|global] status like '...'. If [session|global] is omitted, session is the default, which only counts what has been executed in the current connection. To see server-wide totals (from MySQL startup until now), use global.

By comparing these counters, you can work out MySQL's read-write ratio and tune the configuration accordingly;
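
For example, a rough read-write ratio can be derived from the global counters (a minimal sketch; the final division is done by hand or in a monitoring script):

show global status like "com_select"; -- the read counter
show global status like "com_insert"; -- the write counters follow
show global status like "com_update";
show global status like "com_delete";
-- read-write ratio = Com_select / (Com_insert + Com_update + Com_delete)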

Slow query

When MySQL's performance drops, enable the slow query log to find out which SQL statements cause the slow responses, then analyze them. Of course, enabling the slow query log costs some CPU and log-writing I/O, so it is best to turn it on intermittently to check on MySQL rather than leave it on all the time.

The slow query log records every SQL statement whose execution takes longer than long_query_time; it is used to find slow SQL statements so they can be optimized.
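
Because of that overhead, the log is usually toggled at runtime rather than left on permanently; a minimal sketch using the slow_query_log server variable shown later in this article:

set global slow_query_log = 'ON';  -- start collecting slow queries
-- ... let the server run for the period you want to sample ...
set global slow_query_log = 'OFF'; -- stop collecting once enough data has been gathered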

show variables like "%slow%";-- Whether to enable slow query;
show status like "%slow%"; -- Query the slow query SQL status;
show variables like "long_query_time"; -- Slow query time

Enabling the slow query log

mysql> show variables like 'long_query_time'; -- By default, MySQL treats 10 seconds as a slow query
+-----------------+-----------+
| Variable_name   | Value     |
+-----------------+-----------+
| long_query_time | 10.000000 |
+-----------------+-----------+

mysql> set long_query_time=1; -- Change the slow query threshold; valid only for the current session
mysql> set global slow_query_log='ON'; -- Enable the slow query log; global must be added here, otherwise an error is reported

You can also make the settings permanent in the MySQL configuration file (my.ini on Windows, my.cnf on Linux). Note that the lines must be added under the [mysqld] section:

slow_query_log = on -- Enable the slow query log
slow_query_log_file = /data/f/mysql_slow_cw.log -- File the log is written to; note: on Windows an absolute path must be used, e.g. D:/wamp/bin/mysql/mysql5.5.16/data/show-slow.log
long_query_time = 2 -- Threshold in seconds; queries running longer are logged
log-queries-not-using-indexes -- Also log queries that do not use indexes
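
After restarting the server, it is worth confirming that the settings took effect; a minimal check:

show variables like 'slow_query_log%'; -- slow_query_log and slow_query_log_file
show variables like 'long_query_time'; -- the configured threshold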

Using slow queries

Example 1:

mysql> select sleep(3);

mysql> show status like '%slow%';
+---------------------+-------+
| Variable_name       | Value |
+---------------------+-------+
| Slow_launch_threads | 0     |
| Slow_queries        | 1     |
+---------------------+-------+
-- Slow_queries: one slow query so far (sleep(3) exceeded the 1-second threshold set above)

Example 2:

Use a stored procedure to generate a large amount of test data;

Data preparation

CREATE TABLE dept(
deptno MEDIUMINT UNSIGNED NOT NULL DEFAULT 0 comment 'Number', 
dname VARCHAR(20) NOT NULL DEFAULT "" comment 'name', 
loc VARCHAR(13) NOT NULL DEFAULT "" comment 'Location'
) ENGINE=MyISAM DEFAULT CHARSET=utf8 comment 'Department table';

CREATE TABLE emp
(empno MEDIUMINT UNSIGNED NOT NULL DEFAULT 0, 
ename VARCHAR(20) NOT NULL DEFAULT "" comment 'Name', 
job VARCHAR(9) NOT NULL DEFAULT "" comment 'Job title',
mgr MEDIUMINT UNSIGNED NOT NULL DEFAULT 0 comment 'Manager number',
hiredate DATE NOT NULL comment 'Hire date',
sal DECIMAL(7,2) NOT NULL comment 'Salary',
comm DECIMAL(7,2) NOT NULL comment 'Bonus',
deptno MEDIUMINT UNSIGNED NOT NULL DEFAULT 0 comment 'Department number' 
)ENGINE=MyISAM DEFAULT CHARSET=utf8 comment 'Employee table';

CREATE TABLE salgrade(
grade MEDIUMINT UNSIGNED NOT NULL DEFAULT 0 comment 'Grade',
losal DECIMAL(17,2) NOT NULL comment 'Minimum wage',
hisal DECIMAL(17,2) NOT NULL comment 'Highest salary'
)ENGINE=MyISAM DEFAULT CHARSET=utf8 comment 'Salary level table';

INSERT INTO salgrade VALUES (1,700,1200);
INSERT INTO salgrade VALUES (2,1201,1400);
INSERT INTO salgrade VALUES (3,1401,2000);
INSERT INTO salgrade VALUES (4,2001,3000);
INSERT INTO salgrade VALUES (5,3001,9999);

delimiter $
create function rand_num() 
returns tinyint(6) READS SQL DATA 
begin 
 declare return_num tinyint(6) default 0;
 set return_num = floor(1+rand()*30);
 return return_num;
end $

delimiter $
create function rand_string(n INT) 
returns varchar(255) READS SQL DATA 
begin 
 declare chars_str varchar(100) default
 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
 declare return_str varchar(255) default '';
 declare i int default 0;
 while i < n do 
  set return_str =concat(return_str,substring(chars_str,floor(1+rand()*52),1));
  set i = i + 1;
 end while;
 return return_str;
end $
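
Before generating millions of rows, it can be worth sanity-checking the two helper functions (a quick test; the delimiter must be switched back to ; first, and the returned values are random):

delimiter ;
select rand_num(), rand_string(6); -- e.g. 23, 'aQxRbn'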

delimiter $
create procedure insert_emp(in start int(10),in max_num int(10))
begin
 declare i int default 0; 
 # Turn off autocommit so that all inserts are committed in one batch at the end
 set autocommit = 0; 
 repeat
  set i = i + 1;
  insert into emp values ((start+i),rand_string(6),'SALESMAN',0001,curdate(),2000,400,rand_num());
 until i = max_num
 end repeat;
 commit;
end $

call insert_emp(1,4000000);
SELECT * FROM `emp` where ename like '%mQspyv%'; -- 1.163s

# Time: 150530 15:30:58 -- The query occurred at 2015-05-30 15:30:58
# User@Host: root[root] @ localhost [127.0.0.1] -- Who ran the query, and from which host
# Query_time: 1.134065 Lock_time: 0.000000 Rows_sent: 8 Rows_examined: 4000000
-- Query_time: total time the query took; Lock_time: how long the table was locked during the query; Rows_sent: number of rows returned; Rows_examined: 4 million rows were scanned to produce the result
SET timestamp=1432971058; -- Unix timestamp at which the slow query occurred
SELECT * FROM `emp` where ename like '%mQspyv%';

Once slow query logging is enabled, the log may grow by several GB per day, and analyzing it by hand quickly becomes impractical.

Slow query analysis tools:

mysqldumpslow

mysqldumpslow is the slow query analysis tool that ships with MySQL; as long as MySQL is installed, this tool is available.

Usage: mysqldumpslow [ OPTS... ] [ LOGS... ] -- options followed by the absolute path of the log file;

 -s what to sort by (al, at, ar, c, l, r, t), 'at' is default 
        al: average lock time 
        ar: average rows sent
        at: average query time
        c: count 
        l: lock time
        r: rows sent
        t: query time

 -r reverse the sort order (largest last instead of first)
 -t NUM just show the top n queries
 -a don't abstract all numbers to N and strings to 'S'
 -n NUM abstract numbers with at least n digits within names
 -g PATTERN grep: only consider stmts that include this string
 -h HOSTNAME hostname of db server for *-slow.log filename (can be wildcard),
        The default is '*', ie match all
 -i NAME name of server instance (if using mysql.server startup script)
 -l don't subtract lock time from total time

Common usage

mysqldumpslow -s c -t 10 /var/run/mysqld/mysqld-slow.log                 # The 10 most frequent slow queries
mysqldumpslow -s t -t 3 /var/run/mysqld/mysqld-slow.log                  # The 3 slowest queries
mysqldumpslow -s t -t 10 -g "left join" /database/mysql/slow-log         # Top 10 queries containing a left join, sorted by query time
mysqldumpslow -s r -t 10 -g 'left join' /var/run/mysqld/mysqld-slow.log  # Top 10 queries containing a left join, sorted by rows sent

Note: the output of mysqldumpslow does not show the complete SQL statement, only the abstracted structure of the statement.

For example, if the log contains: SELECT * FROM sms_send WHERE service_id=10 GROUP BY content LIMIT 0, 1000;
mysqldumpslow will display:

Count: 1 Time=1.91s (1s) Lock=0.00s (0s) Rows=1000.0 (1000), vgos_dba[vgos_dba]@[10.130.229.196]
SELECT * FROM sms_send WHERE service_id=N GROUP BY content LIMIT N, N;
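
If you need the literal values instead of the N and 'S' placeholders, the -a option listed above disables this abstraction (the sort key and log path here are only for illustration):

mysqldumpslow -a -s t -t 10 /var/run/mysqld/mysqld-slow.log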

pt-query-digest

Description

pt-query-digest is a tool for analyzing MySQL slow queries. It can analyze the binlog, general log, and slow log, as well as MySQL protocol data captured via SHOW PROCESSLIST or tcpdump, and can write the analysis results to a file. The analysis works by first parameterizing the query conditions, then grouping and aggregating the parameterized queries, and computing each query class's execution time, count, share of total time, and so on. The results can be used to identify problems and optimize queries.
pt-query-digest is a Perl script; simply download it and make it executable.

Install

wget http://www.percona.com/get/pt-query-digest
chmod +x pt-query-digest
# Note: this is a standalone script; invoke it with an absolute or relative path

# Alternatively, download the whole Percona Toolkit as an RPM:
wget percona.com/get/percona-toolkit.rpm
rpm -ivh percona-toolkit-2.2.13-1.noarch.rpm

# Or build it from the tarball:
wget percona.com/get/percona-toolkit.tar.gz
tar -zxvf percona-toolkit-2.2.13.tar.gz
cd percona-toolkit-2.2.13
perl Makefile.PL
make && make install

Syntax and important options

pt-query-digest [OPTIONS] [FILES] [DSN]

--create-review-table  When --review is used to save results to a table, the table is created automatically if it does not exist.
--create-history-table When --history is used to save results to a table, the table is created automatically if it does not exist.
--filter   Match and filter the input slow queries against the specified string before analysis.
--limit    Limit the percentage or number of results output. The default is 20, i.e. the 20 slowest statements are output. With a value such as 50%, statements are sorted by their share of total response time, largest first, and output stops once the running total reaches 50%.
--host     MySQL server address
--user     MySQL user name
--password MySQL user password
--history  Save the analysis results to a table; the results are fairly detailed. The next time --history is used, a statement that already exists is recorded again if its query time range differs from the one in the history table, so the historical change of a given class of query can be tracked by querying the same CHECKSUM.
--review   Save the analysis results to a table. This only parameterizes the query conditions, one row per class of query, so it is fairly simple. The next time --review is used, a statement that has already been analyzed is not recorded again.
--output   Output format of the analysis results: report (the standard analysis report), slowlog (MySQL slow log format), json, or json-anon. report is generally used, as it is easiest to read.
--since    The time from which to start the analysis. The value is a string: either a point in time in "yyyy-mm-dd [hh:mm:ss]" format, or a relative value with a unit: s (seconds), h (hours), m (minutes), d (days); for example, 12h means statistics start from 12 hours ago.
--until    The end time; combined with --since, this restricts the analysis to slow queries within a given period.

Standard analysis report explanation

Part 1: Overall statistics:

Overall: the total number of queries; 266 in this example.
Time range: the time span over which the queries were executed.
Unique: the number of unique queries, i.e. how many distinct queries remain after the query conditions are parameterized; 4 in this example.
total: total  min: minimum  max: maximum  avg: average
95%: with all values sorted ascending, the value at the 95% position; this is usually the most useful reference number.
median: the median, i.e. the middle value when all values are sorted ascending.

Part 2: Query group statistics results:

This part parameterizes and groups the queries, and then analyzes the execution of each type of query. The results are sorted from largest to smallest by total execution time.
Response: Total response time.
time: The total time percentage of this query in this analysis.
calls: The number of executions, that is, how many query statements of this type are there in this analysis.
R/Call: Average response time per execution.
Item: The query object

Part 3: Detailed statistics for each query:

In the detailed statistics for query No. 1, the table at the top lists statistics for each metric, such as execution count, maximum, minimum, average, and 95th percentile.
Databases: the database name
Users: the number of executions per user (with percentage)
Query_time distribution: the distribution of query times; the bar length reflects each interval's share. In this example, no queries fall between 1s and 10s; they are all concentrated around 10s.
Tables: the tables involved in the query
Explain: a sample statement of this query class

Usage Examples

(1) Directly analyze the slow query file:

pt-query-digest slow.log > slow_report.log

(2) Analyze the queries in the last 12 hours:

pt-query-digest --since=12h slow.log > slow_report2.log

(3) Analyze queries within a specified time range:

pt-query-digest slow.log --since '2014-05-17 09:30:00' --until '2014-06-17 10:00:00' >> slow_report3.log

(4) Analyze only slow queries that are select statements

pt-query-digest --filter '$event->{fingerprint} =~ m/^select/i' slow.log > slow_report4.log

(5) Slow queries from a specific user

pt-query-digest --filter '($event->{user} || "") =~ m/^root/i' slow.log > slow_report5.log

(6) Query all slow queries involving full table scans or full joins

pt-query-digest --filter '(($event->{Full_scan} || "") eq "yes") || (($event->{Full_join} || "") eq "yes")' slow.log > slow_report6.log

(7) Save the query to the query_review table of the test database. If it does not exist, it will be created automatically.

pt-query-digest --user=root --password=abc123 --review h=localhost,D=test,t=query_review --create-review-table slow.log

(8) Save the query to the query_history table

pt-query-digest --user=root --password=abc123 --history h=localhost,D=test,t=query_history --create-history-table slow.log_20140401

(9) Capture MySQL's TCP protocol data through tcpdump and then analyze it

tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
pt-query-digest --type tcpdump mysql.tcp.txt > slow_report9.log

(10) Analyze binlog

mysqlbinlog mysql-bin.000093 > mysql-bin000093.sql
pt-query-digest --type=binlog mysql-bin000093.sql > slow_report10.log

(11) Analyze general log

pt-query-digest --type=genlog localhost.log > slow_report11.log
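
Note that the general query log must be enabled before there is anything to analyze; a minimal sketch (the log file name varies by host):

set global general_log = 'ON'; -- start writing the general query log
show variables like 'general_log_file'; -- where localhost.log (or its equivalent) is written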

The above is a detailed explanation of MySQL slow queries.
