A MySQL primary key can be auto-incremented. But after a power failure and restart, does a newly inserted row continue from the pre-failure auto-increment value? The default step is 1; can it be changed? Let's talk about MySQL's auto-increment behavior.

Features

Preservation strategy

1. If the storage engine is MyISAM, the auto-increment value is stored in the data file.
2. If the engine is InnoDB:
1) Before 8.0, the value is kept only in memory and is not persisted. After a restart, InnoDB finds the largest existing key value and continues from there. For example, if the largest id currently in a table is 10, then AUTO_INCREMENT=11. If we delete the row with id=10, AUTO_INCREMENT is still 11; but if the instance is restarted right away, the table's AUTO_INCREMENT becomes 10 again.
2) From 8.0 onward, the auto-increment value is persisted in the redo log, and the previously saved value is restored from the redo log after a restart.

Determination of the auto-increment value

1. If the id field is given as 0 or null, or is not specified at all, the row is filled with the table's current AUTO_INCREMENT value. The new auto-increment value is then computed using auto_increment_offset as the base and auto_increment_increment as the step: the first value in that sequence greater than the current auto-increment value becomes the new one.
2. If the inserted row specifies a concrete id, that value is used directly.

The defaults are not always appropriate. For example, when dual writes are needed in a dual-M master-master topology, we may set auto_increment_increment=2 with different offsets, so that one instance generates only odd auto-increment ids and the other only even ones, avoiding primary-key conflicts between the two databases.

Modification of the auto-increment value

Suppose the value to be inserted is X and the current auto-increment value is Y. Then:
1. If X < Y, the table's auto-increment value stays unchanged.
2. If X ≥ Y, the current auto-increment value is changed to the new auto-increment value.

Execution process

Assume table t has an auto-increment primary key id and a unique key on c, and the row (1,1,1) is already present. Inserting (null,1,1) proceeds as follows:
1. The executor calls the InnoDB engine interface to write a row, passing in (0,1,1).
2. InnoDB sees that no auto-increment id was specified and fetches the table's current auto-increment value, 2.
3. The incoming row becomes (2,1,1).
4. The table's auto-increment value is changed to 3.
5. The insert continues. Since a record with c=1 already exists, a duplicate-key error is reported and the statement returns.

The problems

Because of the behavior above, primary keys can be discontinuous in several scenarios.

Scenario 1: unique-key conflict on insert. In the process above, the id 2 that was allocated is discarded when the unique key on column c conflicts; the counter stays at 3, so the next successful insert gets id=3 and id=2 is never used.

Scenario 2: transaction rollback.

insert into t values(null,1,1);
begin;
insert into t values(null,2,2);
rollback;
insert into t values(null,2,2); // the inserted row is (3,2,2)

The id 2 allocated inside the rolled-back second statement is likewise discarded.

Scenario 3: the batch-insert optimization. "Batch insert" here means the insert ... select, replace ... select, and load data statements. Why can these statements leave gaps? This comes down to the auto-increment lock. The auto-increment lock exists to avoid conflicts between concurrent threads: if multiple threads fetched the auto-increment value at the same time, the same value could be assigned to multiple records, causing primary-key conflicts. Hence an auto-increment lock is needed.
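The rules above can be condensed into a toy Python sketch. This is not InnoDB code, just a model of the described behavior: the `next_value` helper picks the next counter value from the offset/increment sequence, explicit ids push the counter up, and a failed insert never pulls it back down.

```python
def next_value(current, offset=1, increment=1):
    # First value of the form offset + k*increment (k >= 0) that is
    # >= current: the rule MySQL uses to pick the next auto-increment value.
    if current <= offset:
        return offset
    k = -(-(current - offset) // increment)  # ceiling division
    return offset + k * increment

class ToyTable:
    """Toy model of the rules above (not InnoDB code): explicit ids can
    push the counter up, and failed inserts never pull it back down."""
    def __init__(self, offset=1, increment=1):
        self.offset, self.increment = offset, increment
        self.auto_inc = offset       # current AUTO_INCREMENT value
        self.unique_c = set()        # values seen on unique key c
        self.rows = {}               # id -> c

    def insert(self, id_, c):
        if id_ is None or id_ == 0:  # 0/null/unspecified: use the counter
            id_ = self.auto_inc
        if id_ >= self.auto_inc:     # X >= Y: move the counter past X
            self.auto_inc = next_value(id_ + 1, self.offset, self.increment)
        if c in self.unique_c:       # duplicate key: statement fails,
            return None              # but the counter bump is not undone
        self.rows[id_] = c
        self.unique_c.add(c)
        return id_

t = ToyTable()
t.insert(None, 1)          # row (1, 1); counter becomes 2
t.insert(None, 1)          # duplicate c=1: fails, counter becomes 3
print(t.insert(None, 2))   # 3, not 2: the gap from scenario 1

odd = ToyTable(offset=1, increment=2)            # one node of a dual-M pair
print([odd.insert(None, c) for c in (5, 6, 7)])  # [1, 3, 5]: odd ids only
```

The second table shows the dual-M trick: with offset=1 and increment=2 every id handed out is odd, so it can never collide with a peer configured with offset=2.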
Why these batch-insert statements cause discontinuous primary keys is explained in the auto-increment lock section below.

Question: before discussing auto-increment locks, consider this: why not make auto-increment values rollbackable in the first two scenarios? Wouldn't that avoid the gaps?

Answer: because making them rollbackable would hurt performance. Consider the following scenario.
1. Transaction A obtains id=2 and transaction B obtains id=3. The auto-increment value of table t is now 4, and both continue executing.
2. Transaction B commits successfully, but transaction A hits a unique-key conflict.
3. If transaction A were allowed to roll back the auto-increment id, that is, to change the table's current auto-increment value back to 2, the table would contain a row with id=3 while the current auto-increment value is 2.
4. Subsequent transactions would then be given id=2 and next id=3, and that insert would fail with a primary-key conflict.

To avoid this problem, one of two approaches would be needed.
Approach 1: before handing out an id, check whether it already exists in the table, and skip it if so. This is very costly: handing out an id is a very cheap operation, but it would now require a lookup on the primary-key index tree for every id.
Approach 2: widen the lock scope of the auto-increment id, so that the next transaction can only obtain an id after the previous one has finished and committed. The problem here is that the lock granularity is far too coarse, which greatly reduces the system's concurrency.
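The conflict in steps 3 and 4 can be traced with a few lines of Python. This is only an illustration of the argument above, not InnoDB internals: two transactions take ids, B commits, A rolls back and (hypothetically) restores the counter, and the reused ids then collide with B's committed row.

```python
# Toy illustration (not InnoDB internals) of why restoring the counter
# on rollback would be unsafe: B's committed row blocks the reused ids.
auto_inc = 2                     # current auto-increment value of t
rows = set()                     # committed primary keys

id_a = auto_inc; auto_inc += 1   # transaction A is given id=2
id_b = auto_inc; auto_inc += 1   # transaction B is given id=3

rows.add(id_b)                   # B commits its row with id=3
auto_inc = id_a                  # A rolls back AND restores the counter

nxt = auto_inc; auto_inc += 1    # the next transaction gets id=2: fine
rows.add(nxt)
nxt = auto_inc; auto_inc += 1    # the one after gets id=3...
print(nxt in rows)               # True: id=3 already exists, so this
                                 # insert would hit a primary-key conflict
```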
So, weighing these costs, InnoDB simply does not roll back auto-increment values.

Auto-increment lock

The purpose of the auto-increment lock is to prevent multiple threads from obtaining the same primary-key value in a multi-threaded environment, which would lead to primary-key conflicts.

Locking strategy

In version 5.0, the lock was statement-scoped: it was not released until the statement finished executing.
From 5.1.22 onward, the innodb_autoinc_lock_mode parameter selects the strategy. The default is 1.
1. If the parameter is 0, the old strategy is used: the lock is released when the statement finishes.
2. If the parameter is 1, then for an ordinary insert statement the auto-increment lock is released immediately after the value is obtained; for batch-insert statements such as insert ... select, the lock is held until the statement finishes, and the locking range covers the rows and gaps involved in the select.
3. If the parameter is 2, every request for an auto-increment value releases the lock immediately after the value is obtained.

Question: why do batch operations such as insert ... select use a statement-scoped lock by default? Why isn't the default 2?

Answer: because batch-insert statements like insert ... select could otherwise make the master and the slave inconsistent. Suppose that after session B executes "create table t2 like t", session A and session B operate on t2 at the same time. Without the statement-scoped lock, execution could interleave like this: session B first inserts two records, (1,1,1) and (2,2,2); then session A requests an auto-increment id, gets id=3, and inserts (3,5,5); after that, session B continues and inserts two more records, (4,3,3) and (5,4,4).
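The interleaving just described can be simulated in a few lines. This is a deliberately simplified Python model, not a replication implementation: rows are plain (c, d) pairs, `allocate` hands out consecutive ids, and the "slave" runs each statement as a whole, as a statement-format binlog replay would.

```python
def allocate(rows, start):
    """Assign consecutive ids starting at `start`; return (id -> row, next id)."""
    return {start + i: r for i, r in enumerate(rows)}, start + len(rows)

# Master: session B's 4-row batch insert into t2 is interleaved with
# session A's single insert. Rows are shown as (c, d) pairs.
master, nxt = {}, 1
m, nxt = allocate([(1, 1), (2, 2)], nxt); master.update(m)    # B: ids 1, 2
m, nxt = allocate([(5, 5)], nxt);         master.update(m)    # A: id 3
m, nxt = allocate([(3, 3), (4, 4)], nxt); master.update(m)    # B: ids 4, 5

# Slave replaying a statement-format binlog: each statement runs whole.
slave, nxt = {}, 1
s, nxt = allocate([(1, 1), (2, 2), (3, 3), (4, 4)], nxt); slave.update(s)  # B
s, nxt = allocate([(5, 5)], nxt); slave.update(s)                          # A

print(master == slave)   # False: e.g. id 3 is row (5,5) on the master
                         # but row (3,3) on the slave
```

The id-to-row mappings diverge, which is exactly the master-slave inconsistency the statement-scoped lock prevents.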
This looks harmless on the master itself, but in a cluster where the master executed this way and the binlog is in statement format, the slave replays each statement as a whole, so its execution order may differ from the master's, eventually making master and slave inconsistent. That is why the lock is held for the whole statement during batch inserts. If the parameter is set to 2 and the binlog format is not row, master and slave data can diverge in exactly this way.

So, to get both data consistency and good concurrency, there are two workable combinations:
Option 1: set the binlog format to statement and innodb_autoinc_lock_mode to 1.
Option 2: set the binlog format to row and innodb_autoinc_lock_mode to 2.
Since binlog is usually set to row anyway to ensure MySQL high availability, the second option is the common choice.

Batch insert optimization

For a batch insert, MySQL does not know in advance how many rows will be inserted. If tens of millions or even hundreds of millions of records each required a separate request for an auto-increment value, allocation would be very inefficient. MySQL therefore optimizes batch operations:
1. The first time a statement requests auto-increment ids during execution, it is granted 1 id.
2. When that is used up and the same statement requests ids a second time, it is granted 2.
3. When those are used up and the same statement requests ids a third time, it is granted 4.
4. In general, each subsequent request by the same statement is granted twice as many ids as the previous one.
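The doubling rule above can be sketched as a small Python function. This is a model of the allocation sizes as described, not MySQL source code:

```python
def batch_allocations(n_rows):
    """Sizes of the successive auto-increment requests (1, 2, 4, ...)
    that one batch-insert statement makes under the doubling rule."""
    sizes, granted, size = [], 0, 1
    while granted < n_rows:
        sizes.append(size)
        granted += size
        size *= 2
    return sizes

reqs = batch_allocations(4)   # a 4-row insert ... select
print(reqs)                   # [1, 2, 4]: three requests
print(sum(reqs) - 4)          # 3 ids reserved but never used
```

For a 4-row statement, 1 + 2 + 4 = 7 ids are reserved while only 4 are consumed, which is where the gap in the next example comes from.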
For example, execute the following code:

insert into t values(null, 1,1);
insert into t values(null, 2,2);
insert into t values(null, 3,3);
insert into t values(null, 4,4);
create table t2 like t;
insert into t2(c,d) select c,d from t;
insert into t2 values(null, 5,5);

The insert ... select inserts 4 rows into table t2, but those 4 rows required three requests for auto-increment ids: the first request granted id=1, the second granted id=2 and id=3, and the third granted id=4 through id=7. Since the statement actually uses only 4 ids, id=5 through id=7 are wasted. The subsequent insert into t2 values(null, 5,5) therefore actually inserts (8,5,5). This is the third source of discontinuous primary keys mentioned earlier.

insert ... select on the same table uses a temporary table

Assume the table structure:

CREATE TABLE `t` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `c` int(11) DEFAULT NULL,
  `d` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `c` (`c`)
) ENGINE=InnoDB;
insert into t values(null, 1,1);
insert into t values(null, 2,2);
insert into t values(null, 3,3);
insert into t values(null, 4,4);
create table t2 like t;

If the statement executed is:

insert into t2(c,d) (select c+1, d from t force index(c) order by c desc limit 1);

then the slow log shows Rows_examined=1: the record is located on t directly through the index and then inserted into t2. But change the statement to insert into the same table:

insert into t(c,d) (select c+1, d from t force index(c) order by c desc limit 1);

and the slow log now shows Rows_examined=5. Why? Even a full scan of t would examine only 4 rows. At this point we check the change in InnoDB's scanned-rows counter.
The counter shows a change of 4 rows before and after, so a temporary table must be involved, and the process can be reconstructed as follows:
1. Create a temporary table with two fields, c and d.
2. Scan table t by index c, taking c=4, 3, 2, 1 in turn; for each, go back to the primary key to read the values of c and d and write them into the temporary table. At this point Rows_examined=4.
3. Because of the limit 1 in the semantics, only the first row of the temporary table is read and inserted into table t. Rows_examined increases by 1, to 5.

As for why a temporary table is needed: it prevents the statement from reading the rows it has just inserted.

Optimization

Because the select returns only a few records, a memory temporary table can be used to optimize this:

create temporary table temp_t(c int, d int) engine=memory;
insert into temp_t (select c+1, d from t force index(c) order by c desc limit 1);
insert into t select * from temp_t;
drop table temp_t;

The total number of rows scanned is just 1 for the select plus 1 for reading the temporary table.

Finally

For unique-key conflicts, you can use insert into ... on duplicate key update to handle the conflict with an update instead of an error. Assume table t contains the records (1,1,1) and (2,2,2); if such a statement hits a conflict during insertion, the conflicting record is modified in place rather than a new row being inserted.

Summary

This concludes this look at auto-increment primary keys in MySQL. For more on the topic, see the related articles on 123WORDPRESS.COM.