MySQL's LIMIT clause reads table data in segments and is used mainly for paging. Although most sites we write hold only thousands of rows, where small optimizations make little difference, development should still be done thoroughly and pursue the best possible performance. Below are some LIMIT performance optimization methods.

LIMIT syntax:

SELECT * FROM table LIMIT [offset,] rows | rows OFFSET offset

The LIMIT clause can be used to force a SELECT statement to return a specified number of records. LIMIT accepts one or two numeric arguments, which must be integer constants. If two arguments are given, the first specifies the offset of the first returned row and the second specifies the maximum number of rows to return. The initial row has an offset of 0, not 1.

The limit #,# syntax is supported:

mysql> SELECT * FROM table LIMIT 5,10; // Retrieve rows 6-15

// To retrieve all rows from a given offset to the end of the result set, the MySQL manual suggests using a very large number as the second parameter (older articles show -1 here, but MySQL does not accept a negative argument)
mysql> SELECT * FROM table LIMIT 95,18446744073709551615; // Retrieve rows 96 to the last row

// If only one argument is given, it is the maximum number of rows to return; LIMIT n is equivalent to LIMIT 0,n
mysql> SELECT * FROM table LIMIT 5; // Retrieve the first 5 rows

LIMIT n,m skips the first n matching records and returns the next m. Most developers like to use this form to solve the classic Web paging problem, and for small data sets that is fine. For applications such as forums that may hold very large amounts of data, however, LIMIT n,m is very inefficient, because the skipped rows must still be read every time. Selecting the first five records is quick and easy; selecting five records starting around the 800,000th row of a million-row table still requires scanning up to that position. That is, LIMIT 10000,20 scans 10020 rows that meet the conditions, discards the first 10000, and returns the last 20. With LIMIT 100000,100, 100100 rows must be scanned. In a highly concurrent application where every query scans more than 100,000 rows, performance will certainly drop sharply.

Comparison of read times at different offsets:

1. When the offset is small:

select * from table limit 5,10

After multiple runs, the time stayed between 0.0004 and 0.0005 seconds.

Select * From table Where id >= (Select id From table Order By id limit 10,1) limit 10

After multiple runs, the time stayed between 0.0005 and 0.0006 seconds. So when the offset is small, it is more efficient to use LIMIT directly.

2. When the offset is large:

select * from table limit 10000,10

After multiple runs, the time stayed around 0.0187 seconds.

Select * From table Where id >= (Select id From table Order By id limit 10000,1) limit 10

After multiple runs, the time stayed around 0.061 seconds, roughly three times the direct form, so in this particular test the subquery form was actually slower at the large offset. These figures were measured with id as an ordinary index. If id is the table's primary key:

select id from table limit 10000,10

takes only about 0.04 seconds, because the query is satisfied entirely from the primary-key index.
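A common equivalent of the id >= subquery above is the deferred-join form. The following is a minimal sketch, not part of the original benchmark, assuming a hypothetical table posts with an auto-increment primary key id:

// The derived table walks only the id index to collect the 10 ids on the page;
// full rows are then fetched for just those 10 ids
mysql> SELECT p.* FROM posts AS p JOIN (SELECT id FROM posts ORDER BY id LIMIT 10000,10) AS tmp ON p.id = tmp.id ORDER BY p.id;

Compared with the WHERE id >= form, the join version extends more naturally to queries that carry additional WHERE filters or order by a secondary index.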
LIMIT performance optimization:

Select * From cyclopedia Where ID >= (Select Max(ID) From (Select ID From cyclopedia Order By ID limit 90001) As tmp) limit 100;

Select * From cyclopedia Where ID >= (Select Max(ID) From (Select ID From cyclopedia Order By ID limit 90000,1) As tmp) limit 100;

The second statement is faster when fetching 100 records starting at offset 90,000. The first statement materializes the first 90,001 IDs and takes the largest as the starting identifier before locating the next 100 records; the second statement reads only the single ID at that position and uses it as the starting identifier. The second statement can be shortened further:

Select * From cyclopedia Where ID >= (Select ID From (Select ID From cyclopedia Order By ID limit 90000,1) As tmp) limit 100;

The Max() operation is dropped, which is safe because IDs normally increase monotonically.

Paging performance optimization:

1. For tables with large amounts of data, build a separate index table containing the primary key and the indexed fields; query the index table for the matching primary keys first, then fetch the rows from the large table by primary key.
2. If there is a WHERE condition and you want LIMIT to use an index, you must design a composite index that puts the WHERE columns first and the primary key used for paging second, and select only the primary key. This greatly improves read speed (a sketch of this index layout appears at the end of this section).
3. Use IN: first obtain the matching primary-key values through the WHERE condition, then query the corresponding field values by those primary keys.

Paging with a cursor:

To get the best query performance out of MySQL, the paging query can be switched to cursor (keyset) style:

select * from table where id > last_id order by id ASC limit 20;

Note that ORDER BY must come before LIMIT, and the column compared against the cursor must be the same column the result is ordered by. Here last_id is the id of the last record on the current page, which implements the "next page" query; the "previous page" query can be implemented in the same way with id < first_id and a descending order. Cursor paging only suits sequentially ordered data and does not support jumping to an arbitrary page, so we can add an auto-increment ID or another ordered field to the table. For projects with large amounts of data, page jumps are of little use anyway, and filter conditions can achieve the same searching purpose.
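To make the cursor queries concrete, here is a minimal sketch, assuming a hypothetical table messages with an auto-increment primary key id, a page size of 20, and 12345 standing in for the current page's boundary ids (last_id and first_id respectively):

// Next page: start strictly after the last id of the current page
mysql> SELECT * FROM messages WHERE id > 12345 ORDER BY id ASC LIMIT 20;

// Previous page: take the 20 rows just before the first id of the current page,
// then flip them back into ascending order
mysql> SELECT * FROM (SELECT * FROM messages WHERE id < 12345 ORDER BY id DESC LIMIT 20) AS t ORDER BY id ASC;

Both queries touch only the 20 rows they return plus one index seek, no matter how deep into the data set the page lies, which is why cursor paging scales where OFFSET does not.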
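And returning to tips 2 and 3 above, a minimal sketch of the composite-index idea, assuming a hypothetical table articles filtered by category_id (MySQL does not allow LIMIT inside an IN subquery, which is why this is done in two steps):

// Composite index: WHERE column first, paging key second,
// so the id-only query below is covered entirely by the index
mysql> ALTER TABLE articles ADD INDEX idx_category_id (category_id, id);

// Step 1: page through primary keys only
mysql> SELECT id FROM articles WHERE category_id = 5 ORDER BY id LIMIT 10000,10;

// Step 2: supposing step 1 returned ids 88001-88010, fetch the full rows by primary key
mysql> SELECT * FROM articles WHERE id IN (88001,88002,88003,88004,88005,88006,88007,88008,88009,88010);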