Preface

In real-world applications, paging is a common requirement, and it is usually implemented with a LIMIT query. LIMIT is very efficient when the table is small or when only the first few pages are fetched. When the table is large or the offset is large, however, such as LIMIT 100000,20, performance is often unsatisfactory.

A common approach is to combine LIMIT with ORDER BY. If the ORDER BY column is indexed, efficiency is usually quite good. In that case, the simplest optimization is a covering-index query that selects only columns contained in the index. The effect is very good, as shown below:

mysql> SELECT * FROM student LIMIT 1000000,1;
+---------+------------+------------+------------+-------+---------------------+
| id      | first_name | last_name  | created_at | score | updated_at          |
+---------+------------+------------+------------+-------+---------------------+
| 1000001 | kF9DxBgnUi | yLXnPSHJpH | 2019-07-11 | 97    | 2019-07-11 14:29:59 |
+---------+------------+------------+------------+-------+---------------------+
1 row in set (0.31 sec)

You can see the time taken. Now look at the execution plan of the covering-index variant:

mysql> EXPLAIN SELECT score,first_name FROM student ORDER BY created_at LIMIT 1000000,20 \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: student
   partitions: NULL
         type: index
possible_keys: NULL
          key: time_source_name
      key_len: 69
          ref: NULL
         rows: 1000001
     filtered: 100.00
        Extra: Using index
1 row in set, 1 warning (0.00 sec)

Because the selected columns are all covered by the index, the number of rows that must be read from the table is greatly reduced, but the result is still not fully satisfactory. Worse, as soon as the query needs columns outside the index it becomes very slow. For example, let us also select the last_name column.
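The EXPLAIN output above reports key: time_source_name, so the table evidently has a secondary index on created_at that also covers score and first_name. The article never shows the actual DDL, so the following is only a plausible sketch; the column order is an assumption:

```sql
-- Hypothetical definition of the covering index reported as
-- `time_source_name` in the EXPLAIN output; the real column list
-- is not shown in the article.
ALTER TABLE student
  ADD INDEX time_source_name (created_at, score, first_name);
```

With an index like this, a query that selects only score and first_name and orders by created_at can be answered entirely from the index (hence "Using index" in the Extra column), without touching the table rows at all.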
The result is as follows:

mysql> SELECT score,first_name,last_name FROM student ORDER BY created_at LIMIT 1000000,1;
+-------+------------+------------+
| score | first_name | last_name  |
+-------+------------+------------+
| 86    | knKsV2g2fY | WB5qJeLZuk |
+-------+------------+------------+
1 row in set (4.81 sec)

This query takes a little over 4 seconds. EXPLAIN shows why: it can no longer use the index, so MySQL performs a full scan with a filesort.

mysql> EXPLAIN SELECT score,first_name,last_name FROM student ORDER BY created_at LIMIT 1000000,1\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: student
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 6489221
     filtered: 100.00
        Extra: Using filesort
1 row in set, 1 warning (0.00 sec)

Now we rewrite the query as follows:

mysql> SELECT student.score,student.first_name FROM student INNER JOIN (SELECT id FROM student ORDER BY created_at LIMIT 1000000,1) AS temp USING(id);
+-------+------------+
| score | first_name |
+-------+------------+
| 15    | 2QWZ       |
+-------+------------+
1 row in set (0.18 sec)

mysql> EXPLAIN SELECT student.score,student.first_name,last_name FROM student INNER JOIN (SELECT id FROM student ORDER BY created_at LIMIT 1000000,1) AS temp USING(id);
+----+-------------+------------+------------+--------+---------------+------------------+---------+---------+---------+----------+-------------+
| id | select_type | table      | partitions | type   | possible_keys | key              | key_len | ref     | rows    | filtered | Extra       |
+----+-------------+------------+------------+--------+---------------+------------------+---------+---------+---------+----------+-------------+
|  1 | PRIMARY     | <derived2> | NULL       | ALL    | NULL          | NULL             | NULL    | NULL    | 1000001 | 100.00   | NULL        |
|  1 | PRIMARY     | student    | NULL       | eq_ref | PRIMARY       | PRIMARY          | 4       | temp.id | 1       | 100.00   | NULL        |
|  2 | DERIVED     | student    | NULL       | index  | NULL          | time_source_name | 69      | NULL    | 1000001 | 100.00   | Using index |
+----+-------------+------------+------------+--------+---------------+------------------+---------+---------+---------+----------+-------------+
3 rows in set, 1 warning (0.00 sec)

From the execution plan we can see that only about 1,000,001 index entries are scanned this time, and the outer query then fetches each needed row by primary key. Why the change? This technique is called a deferred (delayed) join: first use a covering-index query to return only the required primary keys, then join back to the original table on those keys to fetch the remaining columns, thus reducing the number of full rows that must be read as much as possible.

In some specific situations there is yet another optimization. For example, to fetch the most recently inserted records, we can record the primary key of the last row returned by the previous query (last_id) and continue from there:

SELECT score,first_name,last_name,id FROM student WHERE id >= last_id ORDER BY id ASC LIMIT 1

For example, if last_id = 1000000, the scan starts directly at id 1000000 instead of counting off a million-row offset. (Use id > last_id if the last row of the previous page should not be repeated.) In such a scenario, performance stays good no matter how deep into the data you page.

Summary

The above is the full content of this article. I hope it provides some useful reference for your study or work.
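As a recap, here is a hedged sketch of how the last_id (keyset) approach pages through results in practice. The table and column names come from the article; the page size of 20 and the literal id values are illustrative assumptions:

```sql
-- Page 1: a plain LIMIT is fine for the first page.
SELECT id, score, first_name, last_name
FROM student
ORDER BY id ASC
LIMIT 20;

-- The application remembers the id of the last row returned
-- (say 1000020), then fetches the next page with no OFFSET:
SELECT id, score, first_name, last_name
FROM student
WHERE id > 1000020        -- last id seen on the previous page
ORDER BY id ASC
LIMIT 20;
```

Because the WHERE clause seeks directly into the primary key, each page costs roughly the same regardless of how deep it is; the trade-off is that you can only step page by page, not jump to an arbitrary page number.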