The database, like the operating system, is a shared resource used by multiple users. When several users access the same data concurrently, the database runs multiple transactions against that data at the same time. If these concurrent operations are not controlled, incorrect data may be read or written, destroying the consistency of the database. Locking is a key technique for implementing database concurrency control, and in practice we often run into lock-related problems. When two transactions each require locks that the other already holds and neither can continue, a deadlock occurs, which seriously affects the normal execution of the application.

Databases provide two basic lock types: exclusive locks (X locks) and shared locks (S locks). When a data object is exclusively locked, other transactions can neither read nor modify it. A data object under a shared lock can be read by other transactions but cannot be modified. The database uses these two basic lock types to control the concurrency of its transactions.

The first deadlock situation

User A accesses table A (locking table A) and then accesses table B; meanwhile user B accesses table B (locking table B) and then attempts to access table A. Since user B has locked table B, user A must wait for user B to release table B before it can continue; likewise, user B must wait for user A to release table A. Neither can proceed, so a deadlock occurs.

Solution: This type of deadlock is quite common and is caused by a flaw in program logic, so there is no fix other than adjusting the program. Analyze the program's logic carefully; when operating on multiple tables, process them in the same order everywhere and avoid locking two resources at the same time where possible. For example, when operating on tables A and B, always process A first and then B. When two resources must be locked within the same transaction, make sure they are always locked in the same order, as in the sketch below.
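As an illustration only (the tables table_a and table_b and their columns are hypothetical), the following MySQL sketch shows the ordering rule: every transaction that touches both tables updates table_a before table_b, so two concurrent transactions may briefly block each other but can never wait on each other in a cycle.

```sql
-- Hypothetical tables used only for this sketch.
CREATE TABLE table_a (id INT PRIMARY KEY, val INT);
CREATE TABLE table_b (id INT PRIMARY KEY, val INT);

-- Transaction 1: always lock table_a first, then table_b.
START TRANSACTION;
UPDATE table_a SET val = val + 1 WHERE id = 1;  -- exclusive row lock on table_a
UPDATE table_b SET val = val + 1 WHERE id = 1;  -- exclusive row lock on table_b
COMMIT;

-- Transaction 2 (run concurrently in another session) follows the same
-- order, so at worst it waits for Transaction 1 to commit.
START TRANSACTION;
UPDATE table_a SET val = val + 1 WHERE id = 1;
UPDATE table_b SET val = val + 1 WHERE id = 1;
COMMIT;
```

If the second transaction instead updated table_b first and table_a second, each transaction could end up holding the lock the other needs, which is exactly the scenario described above.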
The second deadlock situation

User A queries a record, acquiring a shared lock, and then modifies it, which requires upgrading that shared lock to an exclusive lock. Meanwhile user B has also read the same record under a shared lock and likewise tries to modify it. A's upgrade to an exclusive lock must wait for B to release its shared lock, and B's upgrade must wait for A to release its shared lock; since neither transaction can release its shared lock until its own upgrade succeeds, a deadlock occurs. This kind of deadlock is more subtle, but it often appears in larger projects. For example, in one project a button on a page was not disabled immediately after being clicked, so a user could click the same button several times in quick succession. The same piece of code then ran several operations against the same database record at the same time, which easily leads to this kind of deadlock.

Solutions:

1. For controls such as buttons, disable them immediately after they are clicked, so that users cannot click them repeatedly and the same record is not operated on at the same time.

2. Use optimistic locking. Optimistic locking is usually implemented with a data versioning mechanism: a version identifier is added to the data, typically by adding a "version" column to the database table. The version number is read out together with the data, and the version is incremented when the data is later updated. The version submitted with the update is then compared with the current version of the corresponding record in the table: if the submitted version is newer than the stored one, the update is applied; otherwise the data is treated as stale. Because user A and user B hold no database locks while they work, optimistic locking avoids the locking overhead of long transactions and greatly improves overall performance under high concurrency. Hibernate has an optimistic locking implementation built into its data access engine. Note, however, that since the optimistic locking mechanism is implemented inside our own system, updates coming from external systems are not subject to it and may still write dirty data into the database. A sketch of a version-based update appears after this list.

3. Use pessimistic locking. In most cases pessimistic locking relies on the database's own locking mechanism, such as Oracle's SELECT ... FOR UPDATE statement, to guarantee exclusive access to the data being operated on. This comes with a heavy cost in database performance that is often unacceptable, especially for long transactions. In a financial system, for example, when an operator reads user data and modifies it based on what was read (such as changing an account balance), a pessimistic lock keeps the record locked for the entire operation, from the moment the operator reads the data, through editing, until the change is submitted, including any time the operator spends making coffee in between. Faced with hundreds or thousands of concurrent users, such a design would have catastrophic consequences, so think carefully before choosing pessimistic locking. A sketch of this pattern also follows the list.
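A minimal sketch of the version-column approach in MySQL follows. The account table and its columns are hypothetical, and the sketch uses the common variant that requires the stored version to still equal the version that was read, incrementing it atomically in the same statement.

```sql
-- Hypothetical table with a version column for optimistic locking.
CREATE TABLE account (
    id      INT PRIMARY KEY,
    balance DECIMAL(10,2) NOT NULL,
    version INT NOT NULL DEFAULT 0
);

-- Step 1: read the data together with its current version (no lock is held).
SELECT balance, version FROM account WHERE id = 1;
-- Suppose this returns balance = 100.00, version = 5.

-- Step 2: apply the update only if nobody has changed the row in the
-- meantime, bumping the version as part of the same statement.
UPDATE account
SET balance = 80.00,
    version = version + 1
WHERE id = 1
  AND version = 5;

-- 0 affected rows means another transaction already modified the record:
-- the data we read is stale, so re-read and retry or report a conflict.
```

Because no lock is held between the read and the update, a slow user interaction in between costs nothing in terms of database locking.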
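For comparison, here is a minimal sketch of the pessimistic pattern with SELECT ... FOR UPDATE, which MySQL's InnoDB supports as well as Oracle; the same hypothetical account table is assumed.

```sql
-- The row is exclusively locked from the SELECT until COMMIT or ROLLBACK,
-- so any other writer (or FOR UPDATE reader) of this row must wait.
START TRANSACTION;

SELECT balance
FROM account
WHERE id = 1
FOR UPDATE;              -- acquires an exclusive row lock

UPDATE account
SET balance = 80.00
WHERE id = 1;

COMMIT;                  -- the lock is released here
```

The longer the gap between the SELECT and the COMMIT (for example, while an operator edits the data on screen), the longer every other transaction touching that row is blocked, which is exactly the drawback described above.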
The third deadlock situation

If a transaction executes an update statement whose condition cannot be satisfied through an index, a full table scan is performed and the row-level lock is escalated to a table-level lock. Once several such transactions are running, deadlock and blocking are likely to occur. A similar situation arises when a table holds a very large amount of data but has too few or unsuitable indexes: full table scans become frequent, the application system grows slower and slower, and eventually blocking or deadlock results.

Solution: Do not write overly complex SQL statements that join many tables; use the execution plan to analyze SQL statements, and for statements that require a full table scan, create the appropriate indexes, as in the sketch below.
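As an illustrative sketch (the orders table, its columns, and the index name are hypothetical), MySQL's EXPLAIN can reveal a full table scan before the statement causes trouble, and an index on the filtered column limits both the scan and the locking to the matching rows.

```sql
-- Inspect the access path used by the update's WHERE clause.
EXPLAIN SELECT * FROM orders WHERE customer_no = '10086';
-- type = ALL in the output indicates a full table scan.

-- Create an index on the filtered column so only matching rows
-- are scanned and locked.
CREATE INDEX idx_orders_customer_no ON orders (customer_no);

-- With the index in place, the update no longer scans the whole table.
UPDATE orders SET status = 'PAID' WHERE customer_no = '10086';
```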
Summary

Generally speaking, problems such as memory overflow and table locking are caused by poorly written code, so improving the quality of the code is the most fundamental solution. Some people think they should implement the functionality first and correct any bugs during the testing phase. This idea is wrong: just as the quality of a product is determined during manufacturing rather than during quality inspection, the quality of software is determined during design and coding. Testing is only a verification of software quality, because it is impossible to find all the bugs in the software.