Preface

In day-to-day development we usually implement paging in MySQL with the LIMIT clause. How you use it matters a great deal for performance, so let's summarize it here.

LIMIT syntax

LIMIT accepts two parameters: an offset and a row count. The first skips that many matching rows; the second caps the number of rows returned. For example:

## Return the first 10 rows that match the conditions
select * from user limit 10

## Skip the first 10 matching rows and return the next 20 (rows 11 through 30)
select * from user limit 10,20

As the first example shows, limit n is shorthand for limit 0,n.

Performance analysis

In practice you will notice that later pages load more and more slowly; that is, a statement such as

select * from user limit 1000000,10

executes slowly. Let's measure it. Each query below fetches 100 rows from a table of roughly 2 million rows, with a progressively larger offset:

select * from user limit 0,100        ## 0.03s
select * from user limit 10000,100    ## 0.05s
select * from user limit 100000,100   ## 0.13s
select * from user limit 500000,100   ## 0.23s
select * from user limit 1000000,100  ## 0.50s
select * from user limit 1800000,100  ## 0.98s

As the offset grows, performance gets steadily worse. Why? Because limit 10000,10 actually makes MySQL read the first 10010 matching rows and then discard the first 10000 of them. That discarded work grows linearly with the offset and is pure waste.

Optimization

Optimize with the id

Record the largest id on the previous page, then use the index on id to seek directly to the next page, for example:

select * from user where id > 1000000 order by id limit 100
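The difference between offset paging and id-based (keyset) paging can be sketched in a few lines. This is a minimal illustration using SQLite in place of MySQL; the table name `user` matches the article, while the helper names and page sizes are made up for the example. Both strategies return the same page when ids are dense and ordered, but the keyset version seeks via the primary-key index instead of reading and discarding `offset` rows.

```python
import sqlite3

# Build a small in-memory "user" table to illustrate the two pagination styles.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO user (id, name) VALUES (?, ?)",
    [(i, f"user{i}") for i in range(1, 1001)],
)

def page_by_offset(conn, offset, size):
    # LIMIT offset,size: the engine still reads offset+size rows, then discards offset of them.
    return conn.execute(
        "SELECT id, name FROM user ORDER BY id LIMIT ?, ?", (offset, size)
    ).fetchall()

def page_by_last_id(conn, last_id, size):
    # Keyset pagination: seek past the last seen id via the primary-key index; nothing is discarded.
    return conn.execute(
        "SELECT id, name FROM user WHERE id > ? ORDER BY id LIMIT ?", (last_id, size)
    ).fetchall()

# The page after id 500 is identical under both strategies.
assert page_by_offset(conn, 500, 10) == page_by_last_id(conn, 500, 10)
```

The trade-off is that keyset paging needs the caller to carry the last id forward, so it suits "next page" navigation rather than jumping to an arbitrary page number.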
Optimizing with a covering index

A query is said to use a covering index when every column it needs is present in the index itself. Such queries are very fast because MySQL can answer them from the index alone, without going back to the table to fetch the row data. We can exploit this by paging through just the ids first, which is covered by the primary-key index, and only then joining back to fetch the full rows:

select * from (select id from job limit 1000000,100) a left join job b on a.id = b.id;

The subquery still skips 1,000,000 entries, but it walks only the narrow index rather than reading full rows, and the table is touched only for the 100 ids that survive. This took 0.2 seconds on the same data set.

Summary

Paging over large data sets in MySQL is genuinely hard, but the optimizations above help; which one fits best requires testing against your own business scenario.
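The covering-index trick above (sometimes called a deferred join) can also be sketched end to end. Again this uses SQLite as a stand-in for MySQL; the `job` table matches the article's example, while the column names and page position are illustrative assumptions. The subquery pages through only the primary-key index, and the join then fetches the wide columns for just those 10 ids.

```python
import sqlite3

# A "job" table with a wide payload column, so full-row reads are expensive.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job (id INTEGER PRIMARY KEY, title TEXT, payload TEXT)")
conn.executemany(
    "INSERT INTO job VALUES (?, ?, ?)",
    [(i, f"title{i}", "x" * 100) for i in range(1, 1001)],
)

# Deferred join: page through the narrow id index first (subquery a),
# then join back to the table to fetch wide columns for only that page.
rows = conn.execute(
    """
    SELECT b.id, b.title
    FROM (SELECT id FROM job ORDER BY id LIMIT 500, 10) a
    JOIN job b ON a.id = b.id
    ORDER BY b.id
    """
).fetchall()

# The page after offset 500 is ids 501..510.
assert [r[0] for r in rows] == list(range(501, 511))
```

The offset scan is not eliminated here, only made cheaper, so for very deep pages the id-based approach from the previous section is still the stronger option when the access pattern allows it.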