A common problem is an offset value that is too large. If the query looks like LIMIT 10000, 20, MySQL has to generate 10020 rows and discard the first 10000 of them, which is very expensive. Assuming all pages are accessed with the same frequency, such queries scan half the table on average. To optimize them, you can either limit the maximum number of pages a paginated view may reach, or make the large-offset queries themselves more efficient.

A simple trick to improve performance is to run the offset query against a covering index instead of against the full rows, then join the result back to the table to fetch the additional columns you need. This is more efficient. For example, take the following query:

SELECT film_id, description FROM sakila.film ORDER BY title LIMIT 50, 5;

If the table is large, it can be optimized like this:

SELECT film.film_id, film.description
FROM sakila.film
INNER JOIN (
    SELECT film_id FROM sakila.film ORDER BY title LIMIT 50, 5
) AS lim USING (film_id);

This "deferred join" works because it uses the index to reduce the number of rows the server has to examine. Once the required rows have been found, they are joined back to the table to retrieve the other columns of those rows.

Sometimes you can also convert the LIMIT into a query for a fixed position, which can be satisfied by an index range scan. For example, if you precompute a position column, you can rewrite the query as follows (a sketch of one way to build such a column appears below):

SELECT film_id, description FROM sakila.film WHERE position BETWEEN 50 AND 54 ORDER BY position;

Ranked data can be handled in a similar way, but it usually mixes in a GROUP BY operation; in most cases the rank value needs to be computed and stored in advance.

The real problem with LIMIT and OFFSET is the OFFSET, which makes the server generate and then discard many rows. If you instead keep an ordered bookmark recording the position of the last row retrieved, the next page can be fetched starting from that position. For example, if you need to paginate rental records, starting with the latest rental and working backwards, you can rely on the fact that the primary key is always increasing, so you can query the first page of data like this:

SELECT * FROM sakila.rental ORDER BY rental_id DESC LIMIT 20;

This query returns the rows with rental_id from 16049 down to 16030. The next query can start where the previous one ended:

SELECT * FROM sakila.rental WHERE rental_id < 16030 ORDER BY rental_id DESC LIMIT 20;

This trick works equally well no matter how far back you page.

Other techniques include using precomputed summary statistics, or joining against a redundant table that contains only the primary key and the sort column; both trade space for time to make the query more efficient. Two illustrative sketches follow.
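The article takes the position column as given. As a minimal, hypothetical sketch (position is not part of the stock sakila schema), the column could be added, indexed, and renumbered offline in title order; it is 0-based here so that BETWEEN 50 AND 54 returns the same rows as LIMIT 50, 5:

-- Hypothetical: add and index a 0-based position column on sakila.film.
ALTER TABLE sakila.film
    ADD COLUMN position INT UNSIGNED,
    ADD INDEX idx_position (position);

-- Renumber in title order; rerun this (e.g. from a batch job) whenever
-- films are inserted, deleted, or retitled.
SET @pos := -1;
UPDATE sakila.film
SET position = (@pos := @pos + 1)
ORDER BY title;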
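The redundant-column idea in the last paragraph can be sketched the same way, assuming a hypothetical helper table sakila.film_sort that duplicates only the primary key and the sort column; paging scans this narrow table and joins back to sakila.film for the wide columns:

-- Hypothetical narrow table: only the primary key and the sort column
-- are duplicated, so its index holds many more rows per page than the
-- wide film table does.
CREATE TABLE sakila.film_sort (
    film_id SMALLINT UNSIGNED NOT NULL,
    title   VARCHAR(128)      NOT NULL,
    PRIMARY KEY (film_id),
    KEY idx_title (title)
) ENGINE=InnoDB;

INSERT INTO sakila.film_sort (film_id, title)
SELECT film_id, title FROM sakila.film;

-- Page through the narrow table, then fetch the remaining columns.
SELECT f.film_id, f.description
FROM (
    SELECT film_id FROM sakila.film_sort ORDER BY title LIMIT 50, 5
) AS lim
INNER JOIN sakila.film AS f USING (film_id);

The helper table has to be kept in sync with film (for example via triggers or a periodic rebuild); that extra space and maintenance cost is exactly what is being traded for cheaper paging.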
The above covers the optimization techniques for MySQL paging queries in detail. For more information on optimizing MySQL paging queries, see the other related articles on 123WORDPRESS.COM!