How do you find slow SQL statements in MySQL? This is a problem that confuses many people. MySQL locates SQL statements with low execution efficiency through the slow query log: when mysqld is started with the --log-slow-queries[=file_name] option, it writes a log file containing every SQL statement whose execution time exceeds long_query_time seconds. By viewing this log file, you can locate the inefficient statements. Here is how to find slow SQL statements in MySQL.

1. MySQL configuration options that help capture inefficient SQL statements in a timely manner

1. slow_query_log
If this parameter is set to ON, SQL statements whose execution time exceeds a threshold are captured.
2. long_query_time
When a SQL statement's execution time exceeds this value, it is recorded in the log. It is recommended to set it to 1 second or shorter.
3. slow_query_log_file
The name of the log file.
4. log_queries_not_using_indexes
If this parameter is set to ON, all SQL statements that do not use indexes are captured, even if the statement itself executes very quickly.

2. Methods for detecting the efficiency of SQL statements in MySQL

1. By querying the log

(1) Enable the MySQL slow query log under Windows
The MySQL configuration file on Windows is generally my.ini. Find the [mysqld] section and add:
The code is as follows
log-slow-queries = F:/MySQL/log/mysqlslowquery.log
long_query_time = 2

(2) Enable the MySQL slow query log under Linux
The MySQL configuration file on Linux is generally my.cnf. Find the [mysqld] section and add:
The code is as follows
log-slow-queries = /data/mysqldata/slowquery.log
long_query_time = 2

Notes
log-slow-queries = F:/MySQL/log/mysqlslowquery.log is the location where the slow query log is stored. This directory must be writable by the account MySQL runs as; it is usually set to the MySQL data directory.
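Note that in MySQL 5.6 and later the old --log-slow-queries option was removed, and the same setup is expressed with the slow_query_log variables instead. A minimal [mysqld] fragment using the current names (the file path here is only an example) might look like:

```ini
[mysqld]
slow_query_log                = ON
slow_query_log_file           = /data/mysqldata/slowquery.log
long_query_time               = 2
log_queries_not_using_indexes = ON
```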
The 2 in long_query_time=2 means that a query is recorded only if it takes longer than two seconds.

2. The show processlist command

SHOW PROCESSLIST shows which threads are running. You can also obtain this information with the mysqladmin processlist command. The meaning and purpose of each column:

ID column: an identifier, useful when you want to kill a statement, e.g. with mysqladmin kill followed by the process number.
User column: the current user. If you are not the root user, this command only shows statements within your authority.
Host column: which IP and port the statement was sent from. Useful for tracking down the user who issued a problematic statement.
db column: which database this process is currently connected to.
Command column: the command being executed by the current connection, generally sleep, query, or connect.
Time column: the duration of the current state, in seconds.
State column: the state of the statement on the current connection. This is a very important column; all of the states are described later. Note that state is only one stage in the execution of a statement. A single query, for example, may need to pass through the states Copying to tmp table, Sorting result, and Sending data before it completes.
Info column: the SQL statement itself. Because of its limited length, long statements are not displayed in full, but it is an important basis for identifying problem statements.

The most important part of this command's output is the state column. The states MySQL lists are mainly the following:

Checking table: checking the data table (this happens automatically).
Closing tables: flushing the table's modified data to disk and closing tables that are no longer in use. This is a very quick operation; if it is not, check whether the disk is full or under heavy load.
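Put together, a typical inspection session might look like the following sketch (the process id 123 is hypothetical; take the real one from the ID column of the output):

```sql
SHOW FULL PROCESSLIST;  -- FULL keeps the Info column from being truncated
KILL 123;               -- terminate the offending thread by its ID
```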
Connect Out: a replication slave is connecting to its master.
Copying to tmp table on disk: the temporary result set is larger than tmp_table_size, so the temporary table is being converted from in-memory to on-disk storage to save memory.
Creating tmp table: creating a temporary table to hold partial query results.
deleting from main table: the server is executing the first part of a multi-table delete and has just deleted from the first table.
deleting from reference tables: the server is executing the second part of a multi-table delete and is deleting matching records from the other tables.
Flushing tables: FLUSH TABLES is being executed; the thread is waiting for other threads to close their tables.
Killed: a kill request has been sent to the thread; the thread checks the kill flag at the next opportunity and abandons its work. MySQL checks the flag in each main loop, but in some cases it may still take a while for the thread to die. If the thread is locked by another thread, the kill takes effect as soon as the lock is released.
Locked: blocked by a lock held by another query.
Sending data: the rows of a SELECT are being processed and the result is being sent to the client.
Sorting for group: sorting for a GROUP BY.
Sorting for order: sorting for an ORDER BY.
Opening tables: trying to open a table. This should be quick unless something interferes; for example, a table cannot be opened by other threads until an ALTER TABLE or LOCK TABLES statement on it completes.
Removing duplicates: a SELECT DISTINCT is being executed in a way that did not let MySQL optimize away the duplicate removal in an earlier stage, so MySQL has to remove the duplicate records before sending the result to the client.
Reopen table: the thread obtained a lock on the table, but only after the table structure had been modified. The lock has been released, the table closed, and the thread is trying to reopen it.
Repair by sorting: the repair operation is creating indexes using a sort.
Repair with keycache: the repair operation is creating indexes one by one through the key cache; this is slower than Repair by sorting.
Searching rows for update: finding the rows that match the conditions; this must happen before the UPDATE can modify them.
Sleeping: waiting for a new client request.
System lock: waiting to acquire an external system lock. If you are not running multiple mysqld servers against the same tables, you can disable external system locks with the --skip-external-locking option.
Upgrading lock: INSERT DELAYED is trying to acquire a table lock to insert new rows.
Updating: searching for matching records and modifying them.
User Lock: waiting for GET_LOCK().
Waiting for tables: the thread has been notified that the table structure has been modified and it needs to reopen the table to get the new structure. Before it can reopen the table, it must wait until all other threads have closed it. This notification is generated by FLUSH TABLES tbl_name, ALTER TABLE, RENAME TABLE, REPAIR TABLE, ANALYZE TABLE, or OPTIMIZE TABLE.
waiting for handler insert: INSERT DELAYED has processed all pending inserts and is waiting for new ones.

Most states correspond to very fast operations. If a thread stays in the same state for several seconds, there may be a problem that needs to be checked. There are other states not listed above, but most of them are only useful for spotting server errors. For example, as shown in the figure:
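The states above are also exposed through the information_schema.PROCESSLIST table (MySQL 5.1 and later), so long-running threads can be filtered with an ordinary query; a sketch, where the 5-second threshold is only an example:

```sql
SELECT id, user, host, db, time, state, info
FROM information_schema.processlist
WHERE command <> 'Sleep'
  AND time > 5
ORDER BY time DESC;
```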
3. Using EXPLAIN to understand SQL execution

EXPLAIN shows how MySQL uses indexes to process a SELECT statement and join tables. It can help you choose better indexes and write better-optimized queries. To use it, just add explain in front of the select statement. For example:
explain select surname, first_name from a, b where a.id = b.id
The results are shown in the figure.
Explanation of the EXPLAIN columns:

table: which table this row of output is about.
type: this is the important column; it shows which join type is used. The join types from best to worst are const, eq_ref, ref, range, index, and ALL.
possible_keys: the indexes that could be applied to this table. If empty, there is no usable index, and you should look at the relevant fields in the WHERE clause to see whether an index would help.
key: the index actually used. If NULL, no index was used. In rare cases MySQL chooses a suboptimal index; in that case you can use USE INDEX (indexname) in the SELECT statement to force an index, or IGNORE INDEX (indexname) to force MySQL to ignore one.
key_len: the length of the index used. The shorter the better, as long as accuracy is not lost.
ref: which column of the index is used, or a constant where possible.
rows: the number of rows MySQL believes it must examine to return the requested data.
Extra: additional information about how MySQL resolves the query. This will be discussed in Table 4.3; the bad values to watch for here are Using temporary and Using filesort, which mean MySQL cannot use an index effectively, resulting in slow retrieval.

The meaning of the values returned in the Extra column:

Distinct: once MySQL finds the first matching row for a row combination, it stops searching.
Not exists: MySQL optimized a LEFT JOIN; once it finds one row matching the LEFT JOIN criteria, it does not search further.
Range checked for each record (index map: #): no ideal index was found, so for each combination of rows from the preceding tables, MySQL checks which index to use and uses it to retrieve rows. This is one of the slowest joins that still uses an index.
Using filesort: when you see this, the query needs to be optimized. MySQL must perform an extra pass to work out how to return the rows in sorted order.
It sorts all rows according to the join type, storing the sort key and a row pointer for every row that matches the condition.
Using index: the column data is returned using only information from the index, without reading the actual row. This happens when all requested columns of the table are part of the same index.
Using temporary: when you see this, the query needs to be optimized. MySQL must create a temporary table to hold the result; this typically happens when ORDER BY is applied to a different set of columns than the GROUP BY.
Where used: a WHERE clause is used to restrict which rows are matched against the next table or returned to the client. This can appear when you do not intend to return all rows from the table and the join type is ALL or index, or when there is a problem with the query.

Explanation of the different join types, in order of efficiency:

const: at most one row of the table can match the query (the index is a primary key or unique index). Because there is only one row, the value is effectively a constant: MySQL reads it once and then treats it as a constant.
eq_ref: in a join, MySQL reads exactly one row from this table for each combination of rows from the preceding tables. It is used when the query uses a primary key or unique index.
ref: this join type occurs when the query uses a key that is not unique or a primary key, or only part of one of these (for example, a leftmost prefix). For each row combination from the preceding tables, all matching rows are read from this table. This type depends heavily on how many rows match the index: the fewer the better.
range: this join type uses an index to return a range of rows, for example when you use > or < to look something up.
index: this join type performs a full scan of the index for each row combination from the preceding tables (better than ALL, because an index is generally smaller than the table data).
ALL: this join type performs a full table scan for each row combination from the preceding tables; this is generally bad and should be avoided.

MySQL - Viewing slow SQL

1. Check whether MySQL has enabled the slow SQL log file

(1) Check whether the slow SQL log is enabled:

mysql> show variables like 'log_slow_queries';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| log_slow_queries | ON    |
+------------------+-------+
1 row in set (0.00 sec)

(2) Check the threshold in seconds beyond which a SQL statement is recorded in the log file:

mysql> show variables like 'long_query_time';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| long_query_time | 1     |
+-----------------+-------+
1 row in set (0.00 sec)

Here value=1 means 1 second.

2. Configure the my.ini file (named my.cnf on Linux), find the [mysqld] section, and add the log configuration, as in the following example:

[mysqld]
log="C:/temp/mysql.log"
log_slow_queries="C:/temp/mysql_slow.log"
long_query_time=1

log is the path of the general query log file; log_slow_queries is the path of the log that records long-running SQL; long_query_time is how many seconds an execution must take to be considered long. On Linux these configuration items usually already exist but are commented out; you can remove the comments, though adding the items directly also works.

After finding an inefficient SQL statement, you can use the EXPLAIN or DESC command to obtain information about how MySQL executes the SELECT statement, including how the tables are joined and in what order.
For example, suppose we want to calculate the 2006 sales of all companies; we need to join the sales table and the company table and sum the profit field. The corresponding SQL execution plan is as follows:

mysql> explain select sum(profit) from sales a,company b where a.company_id = b.id and a.year = 2006\G;
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: a
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 12
        Extra: Using where
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: b
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 12
        Extra: Using where
2 rows in set (0.00 sec)

The explanation of each column is as follows:

•select_type: the type of SELECT. Common values include SIMPLE (a simple table, i.e. no joins or subqueries), PRIMARY (the primary, i.e. outermost, query), UNION (the second or a later statement in a UNION), SUBQUERY (the first SELECT in a subquery), etc.
•table: the table the result row refers to.
•type: the join type of the table. From best to worst the join types are system (the table has only one row, i.e. a constant table), const (at most one matching row in a single table, e.g. via a primary key or unique index), eq_ref (for each row combination from the preceding tables, exactly one row is read from this table; in short, a primary key or unique index is used in a multi-table join), ref (like eq_ref, except an ordinary index is used instead of a primary key or unique index), ref_or_null (like ref, except the condition also includes a search for NULL), index_merge (the index merge optimization), unique_subquery (an IN followed by a subquery on a primary key field), index_subquery (like unique_subquery, except the subquery is on a non-unique index field), range (a range query on a single table), index (for each row combination from the preceding tables, data is obtained by scanning the index), and all (for each row combination from the preceding tables, data is obtained by scanning the whole table).
•possible_keys: the indexes that might be used for the query.
•key: the index actually used.
•key_len: the length of the index field used.
•rows: the number of rows to be scanned.
•Extra: notes and description of the execution.

In the example above, it is clear that the full table scan of table a leads to poor efficiency. So we create an index on the year field of table a:

mysql> create index idx_sales_year on sales(year);
Query OK, 12 rows affected (0.01 sec)
Records: 12 Duplicates: 0 Warnings: 0

After creating the index, the execution plan of the statement is as follows:

mysql> explain select sum(profit) from sales a,company b where a.company_id = b.id and a.year = 2006\G;
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: a
         type: ref
possible_keys: idx_sales_year
          key: idx_sales_year
      key_len: 4
          ref: const
         rows: 3
        Extra:
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: b
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 12
        Extra: Using where
2 rows in set (0.00 sec)

You can see that after the index is created, the number of rows that must be scanned for table a drops significantly (from a full table scan to 3 rows). Clearly, indexes can greatly improve database access speed, and the advantage is even more obvious when the table is very large. Using indexes to optimize SQL is a common, basic method for optimizing problematic SQL; the following chapters specifically introduce how to use indexes to optimize SQL.

This article mainly introduces the MySQL slow query analysis method. Some time ago, I configured a MySQL database to record SQL statements slower than 1 second. I remembered there were a few very specific setting methods but could not recall the parameter names, so I reorganized them here as notes for myself. When it comes to troubleshooting and finding performance bottlenecks, the easiest problems to find and solve are slow MySQL queries and queries without indexes. OK, let's start finding the SQL statements in MySQL that are not "pleasant" to execute.

MySQL slow query analysis method 1:
I am using this method; I prefer its immediacy. MySQL 5.0 and above can record SQL statements that execute slowly.

MySQL> show variables like 'long%';
Note: long_query_time defines how many seconds a query must take to be considered a "slow query".
+-----------------+-----------+
| Variable_name   | Value     |
+-----------------+-----------+
| long_query_time | 10.000000 |
+-----------------+-----------+
1 row in set (0.00 sec)

MySQL> set long_query_time=1;
Note: I set it to 1, meaning any query that takes more than 1 second is considered a slow query.
Query OK, 0 rows affected (0.00 sec)

MySQL> show variables like 'slow%';
+---------------------+---------------+
| Variable_name       | Value         |
+---------------------+---------------+
| slow_launch_time    | 2             |
| slow_query_log      | ON            |  Note: whether logging is enabled
| slow_query_log_file | /tmp/slow.log |  Note: where the log is written
+---------------------+---------------+
3 rows in set (0.00 sec)

MySQL> set global slow_query_log='ON'
Note: turn on logging

Once the slow_query_log variable is set to ON, MySQL starts logging immediately. The initial values of the above MySQL global variables can be set in /etc/my.cnf:

long_query_time=1
slow_query_log_file=/tmp/slow.log

MySQL slow query analysis method 2: the MySQLdumpslow command

/path/MySQLdumpslow -s c -t 10 /tmp/slow-log

This outputs the 10 SQL statements with the most occurrences, where:
-s indicates the sort order: c, t, l, and r sort by record count, time, query time, and number of rows returned, respectively; ac, at, al, and ar sort in the corresponding reverse order.
-t means top n, i.e. how many entries to return.
-g is followed by a regular expression pattern, matched case-insensitively.

For example:
/path/MySQLdumpslow -s r -t 10 /tmp/slow-log
Get the 10 queries that return the most rows.
/path/MySQLdumpslow -s t -t 10 -g "left join" /tmp/slow-log
Get the first 10 statements containing left joins, sorted by time.
A simpler way: open my.ini, find [mysqld], and add below it:

long_query_time = 2
log-slow-queries = D:/mysql/logs/slow.log  #Where to write the log. If left empty, the system uses a default file, host_name-slow.log on Linux
#log-slow-queries = /var/youpath/slow.log
log-queries-not-using-indexes

long_query_time is how long (in seconds) a SQL statement must run before it is recorded; here it is set to 2 seconds.

The following describes the common parameters of mysqldumpslow; for details, run mysqldumpslow -help.
-s indicates the sort order: c, t, l, and r sort by record count, time, query time, and number of rows returned (from largest to smallest); ac, at, al, and ar sort in reverse order.
-t means top n, i.e. how many entries to return.
-g is followed by a regular expression pattern, matched case-insensitively.

The next step is to analyze the log with mysqldumpslow, the slow query tool that ships with MySQL (in MySQL's bin directory). Here the log file is named host-slow.log.

List the 10 SQL statements with the most occurrences:
mysqldumpslow -s c -t 10 host-slow.log
List the 10 SQL statements that return the most rows:
mysqldumpslow -s r -t 10 host-slow.log
Return the first 10 statements containing left joins, sorted by time:
mysqldumpslow -s t -t 10 -g "left join" host-slow.log

With the mysqldumpslow command we can clearly extract the query statements we need, which is very helpful for monitoring, analyzing, and optimizing MySQL queries.
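When mysqldumpslow is not at hand (it is a Perl script shipped with the server), the slow log can also be post-processed with a short script. The sketch below assumes the standard `# Query_time: ... Lock_time: ... Rows_sent: ... Rows_examined: ...` header lines written by mysqld; the helper names are our own, and the field layout can vary slightly between versions:

```python
import re

# Matches the per-query header line written by mysqld, e.g.:
# # Query_time: 3.25  Lock_time: 0.00 Rows_sent: 1  Rows_examined: 30000
HEADER = re.compile(
    r"# Query_time: (?P<qt>[\d.]+)\s+Lock_time: (?P<lt>[\d.]+)"
    r"\s*Rows_sent: (?P<rs>\d+)\s+Rows_examined: (?P<rx>\d+)"
)

def parse_slow_log(text):
    """Return a list of {'query_time', 'rows_examined', 'sql'} dicts."""
    entries = []
    current = None
    for line in text.splitlines():
        m = HEADER.search(line)
        if m:
            current = {"query_time": float(m.group("qt")),
                       "rows_examined": int(m.group("rx")),
                       "sql": []}
            entries.append(current)
        elif current is not None and not line.startswith("#"):
            current["sql"].append(line)      # statement may span several lines
    for e in entries:
        e["sql"] = " ".join(e["sql"]).strip()
    return entries

def top_by_time(entries, n=10):
    """Rough equivalent of `mysqldumpslow -s t -t n`: slowest queries first."""
    return sorted(entries, key=lambda e: e["query_time"], reverse=True)[:n]
```

top_by_time(parse_slow_log(open("/tmp/slow.log").read())) then mirrors the `-s t -t 10` invocation shown above, and the dicts are easy to filter further, e.g. by rows_examined.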
In daily development, we often encounter pages that open extremely slowly and, after elimination, determine that the database is the cause. To quickly find the specific SQL, we can use MySQL's logging facilities.

-- Enable SQL execution logging
set global log_output='TABLE'; -- Write logs to tables
set global log=ON; -- Enable general_log for all command executions: all statements, successful or not
set global log_slow_queries=ON; -- Enable the slow query log slow_log: successfully executed slow statements and statements not using indexes
set global long_query_time=0.1; -- Slow query time limit (seconds)
set global log_queries_not_using_indexes=ON; -- Log SQL statements that do not use indexes

-- Query the SQL execution records
select * from mysql.slow_log order by 1; -- Successfully executed slow statements and statements not using indexes
select * from mysql.general_log order by 1; -- All statements, successful or not

-- Disable SQL execution logging
set global log=OFF;
set global log_slow_queries=OFF;

-- Notes on the long_query_time parameter
-- v4.0, 4.1, 5.0, and v5.1 up to 5.1.20 (inclusive): no millisecond-level slow query analysis (precision of 1-10 seconds);
-- 5.1.21 and later: millisecond-level slow query analysis supported, e.g. 0.1;
-- 6.0 to 6.0.3: no millisecond-level slow query analysis (precision of 1-10 seconds);
-- 6.0.4 and later: millisecond-level slow query analysis supported.

Through the SQL recorded in the log, you can quickly locate the specific file, optimize the SQL, and check whether the speed improves.
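Once logging to tables is on, the worst offenders can be pulled straight out of mysql.slow_log; a query sketch (in some versions sql_text is stored as a blob, hence the CONVERT, and the column layout can differ slightly between versions):

```sql
SELECT start_time, query_time, rows_examined,
       CONVERT(sql_text USING utf8) AS sql_text
FROM mysql.slow_log
ORDER BY query_time DESC
LIMIT 10;
```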
This article analyzes the problem of MySQL database server queries becoming slower and slower, and proposes corresponding solutions. The specific analysis and solutions are as follows:

Developers are often found running statements without indexes or without limit n. These statements have a great impact on the database: for example, a large table with tens of millions of records must be scanned completely, or a filesort is performed continuously, affecting the IO of the database and the server. That is the situation on the mirror library; on the online database, besides statements without indexes and without limit, there is another problem: too many MySQL connections.

Speaking of this, let's first look at our previous monitoring practices:
1. Deploy an open source distributed monitoring system such as zabbix to obtain daily database io, cpu, and connection counts
2. Deploy weekly performance statistics, including data growth, iostat, vmstat, and datasize
3. Collect the MySQL slowlog and list the top 10

I used to think this monitoring was complete, but after deploying per-node MySQL process monitoring, I found many shortcomings.
The shortcoming of the first approach: zabbix is too heavyweight, the monitoring is not done inside MySQL, much of the data is not very accurate, and it is generally used for reviewing historical data.
The shortcoming of the second approach: because it runs only once a week, many situations cannot be discovered or alerted on in time.
The shortcoming of the third approach: when a node has a lot of slowlog entries, the top 10 becomes meaningless; often it just hands you the scheduled-task statements that must run, so its reference value is not great.

So how do we find and solve these problems? For troubleshooting and finding performance bottlenecks, the easiest problems to find and solve are MySQL's slow queries and queries without indexes. OK, let's start finding the SQL statements in MySQL that are not "pleasant" to execute.
Method 1: I am currently using this method; I prefer its immediacy. MySQL 5.0 and above can record SQL statements that execute slowly.

mysql> show variables like 'long%';
Note: long_query_time defines how many seconds a query must take to be considered a "slow query".
+-----------------+-----------+
| Variable_name   | Value     |
+-----------------+-----------+
| long_query_time | 10.000000 |
+-----------------+-----------+
1 row in set (0.00 sec)

mysql> set long_query_time=1;
Note: I set it to 1, meaning any query that takes more than 1 second is considered a slow query.
Query OK, 0 rows affected (0.00 sec)

mysql> show variables like 'slow%';
+---------------------+---------------+
| Variable_name       | Value         |
+---------------------+---------------+
| slow_launch_time    | 2             |
| slow_query_log      | ON            |  Note: whether logging is enabled
| slow_query_log_file | /tmp/slow.log |  Note: where the log is written
+---------------------+---------------+
3 rows in set (0.00 sec)

mysql> set global slow_query_log='ON'
Note: turn on logging

Once the slow_query_log variable is set to ON, MySQL starts logging immediately. The initial values of the above MySQL global variables can be set in /etc/my.cnf:

long_query_time=1
slow_query_log_file=/tmp/slow.log

Method 2: the mysqldumpslow command

/path/mysqldumpslow -s c -t 10 /tmp/slow-log

This outputs the 10 SQL statements with the most occurrences, where:
-s indicates the sort order: c, t, l, and r sort by record count, time, query time, and number of rows returned, respectively; ac, at, al, and ar sort in the corresponding reverse order.
-t means top n, i.e. how many entries to return.
-g is followed by a regular expression pattern, matched case-insensitively.

For example:
/path/mysqldumpslow -s r -t 10 /tmp/slow-log
Get the 10 queries that return the most rows.
/path/mysqldumpslow -s t -t 10 -g "left join" /tmp/slow-log
Get the first 10 statements containing left joins, sorted by time.

Finally, let's summarize the benefits of node monitoring:
1. Lightweight monitoring, real-time, and customizable according to actual conditions
2. A filter is set up to exclude the statements that must run
3. It promptly discovers unused indexes and illegal queries; although processing those slow statements takes time, it is worth it to avoid a database crash
4. When there are too many connections to the database, the program automatically saves the current database's processlist, a great tool for DBAs to find the cause
5. When analyzing with mysqlbinlog, you can get a clear time window in which the database state was abnormal

Some people suggested tuning the MySQL configuration file. While adjusting tmp_table_size, I found some other parameters:

Qcache_queries_in_cache: the number of queries registered in the cache
Qcache_inserts: the number of queries added to the cache
Qcache_hits: the number of cache hits
Qcache_lowmem_prunes: the number of queries removed from the cache due to lack of memory
Qcache_not_cached: the number of queries not cached (either not cacheable, or because of query_cache_type)
Qcache_free_memory: the total amount of free memory in the query cache
Qcache_free_blocks: the number of free memory blocks in the query cache
Qcache_total_blocks: the total number of blocks in the query cache

With free query cache memory, commonly used queries can be cached: if a SQL statement is used frequently, its result is loaded into memory, which increases database access speed.
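The Qcache_* counters above can be read at any time with SHOW STATUS. (Note that the query cache was removed entirely in MySQL 8.0, so these variables only exist in older versions.)

```sql
SHOW GLOBAL STATUS LIKE 'Qcache%';
SHOW VARIABLES LIKE 'query_cache%';  -- the related configuration, e.g. query_cache_size
```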