High CPU load caused by MySQL

This afternoon I ran into a high server load issue caused by MySQL. The background: a new MySQL instance had just been created on a new server. It was the only MySQL process on the machine, yet the CPU load stayed high. The output of the top command looked like this:

```
[dba_mysql@dba-mysql ~]$ top
top - 17:12:44 up 104 days, 20 min, 2 users, load average: 1.06, 1.02, 1.00
Tasks: 218 total,   1 running, 217 sleeping,   0 stopped,   0 zombie
Cpu0 :   0.3%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1 :   0.3%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2 :   0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3 :   0.3%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4 :   0.3%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5 :   0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6 : 100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7 :   0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  16318504k total,  7863412k used,  8455092k free,   322048k buffers
Swap:  5242876k total,        0k used,  5242876k free,  6226588k cached

   PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM     TIME+ COMMAND
 75373 mysql     20   0  845m 699m  29m S 100.0  4.4 112256:10 mysqld
 43285 root      20   0  174m  40m  19m S   0.7  0.3 750:40.75 consul
116553 root      20   0  518m  13m 4200 S   0.3  0.1   0:05.78 falcon-agent
116596 nobody    20   0  143m 6216 2784 S   0.3  0.0   0:00.81 python
124304 dba_mysq  20   0 15144 1420 1000 R   0.3  0.0   0:02.09 top
     1 root      20   0 21452 1560 1248 S   0.0  0.0   0:02.43 init
```

From this output we can see that of the 8 CPU cores, only one (Cpu6) is at 100% while the others sit near 0%, and sorting by CPU usage shows that mysqld is the process consuming it. I had never run into this before. My first thought was a business-level problem, for example slow queries constantly hogging CPU. So I logged into MySQL and ran show processlist to look at the current sessions, and apart from a few update operations no other SQL was running. I then checked the slow log: the statements in it all executed very quickly, and most were logged only because they did not use an index, scanning just a few hundred rows each. So there seemed to be nothing wrong at the business level.
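For completeness, the business-level checks described above boil down to a few standard MySQL statements. This is only a sketch of that kind of session, not output copied from that day; the variable names are stock MySQL ones:

```sql
-- Any long-running or stuck statements? That day it showed only a few updates.
SHOW FULL PROCESSLIST;

-- Where is the slow log and what gets written to it?
-- log_queries_not_using_indexes explains why quick statements that scan only
-- a few hundred rows without an index still show up in the slow log.
SHOW VARIABLES LIKE 'slow_query_log%';
SHOW VARIABLES LIKE 'long_query_time';
SHOW VARIABLES LIKE 'log_queries_not_using_indexes';
```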
Having ruled out business-level issues, let's turn to the database level. The first thing I checked was the buffer pool:

```
[email protected]:(none) 17:20:35>>show variables like '%pool%';
+-------------------------------------+----------------+
| Variable_name                       | Value          |
+-------------------------------------+----------------+
| innodb_buffer_pool_chunk_size       | 5242880        |
| innodb_buffer_pool_dump_at_shutdown | ON             |
| innodb_buffer_pool_dump_now         | OFF            |
| innodb_buffer_pool_dump_pct         | 25             |
| innodb_buffer_pool_filename         | ib_buffer_pool |
| innodb_buffer_pool_instances        | 1              |
| innodb_buffer_pool_load_abort       | OFF            |
| innodb_buffer_pool_load_at_startup  | ON             |
| innodb_buffer_pool_load_now         | OFF            |
| innodb_buffer_pool_size             | 5242880        |
| thread_pool_high_prio_mode          | transactions   |
| thread_pool_high_prio_tickets       | 4294967295     |
| thread_pool_idle_timeout            | 60             |
| thread_pool_max_threads             | 100000         |
| thread_pool_oversubscribe           | 3              |
| thread_pool_size                    | 8              |
| thread_pool_stall_limit             | 500            |
+-------------------------------------+----------------+
17 rows in set (0.01 sec)
```

This shows that the buffer pool is only 5 MB, which is clearly a problem; in a production environment the buffer pool is usually 1 GB or more. I checked the my.cnf configuration file and found that when this instance was started, innodb_buffer_pool_size had been set to 0M. Yes, you read that right: 0M.

Here another parameter comes into play. Notice that innodb_buffer_pool_size is exactly equal to innodb_buffer_pool_chunk_size. A chunk is a block of memory: the buffer pool is allocated in units of chunks, and a buffer pool consists of multiple chunks, so the buffer pool size must be an integer multiple of the chunk size. Since innodb_buffer_pool_chunk_size is 5 MB, a configured value of 0M is automatically rounded up to a multiple of 5 MB, which is how innodb_buffer_pool_size ended up at 5 MB.

Since the buffer pool was so small, I changed it to 1 GB to see whether the problem persisted:

```
[email protected]:(none) 17:20:41>>set global innodb_buffer_pool_size=1073741824;
Query OK, 0 rows affected, 1 warning (0.00 sec)

[email protected]:(none) 17:23:34>>show variables like '%pool%';
+-------------------------------------+----------------+
| Variable_name                       | Value          |
+-------------------------------------+----------------+
| innodb_buffer_pool_chunk_size       | 5242880        |
| innodb_buffer_pool_dump_at_shutdown | ON             |
| innodb_buffer_pool_dump_now         | OFF            |
| innodb_buffer_pool_dump_pct         | 25             |
| innodb_buffer_pool_filename         | ib_buffer_pool |
| innodb_buffer_pool_instances        | 1              |
| innodb_buffer_pool_load_abort       | OFF            |
| innodb_buffer_pool_load_at_startup  | ON             |
| innodb_buffer_pool_load_now         | OFF            |
| innodb_buffer_pool_size             | 1074790400     |
| thread_pool_high_prio_mode          | transactions   |
| thread_pool_high_prio_tickets       | 4294967295     |
| thread_pool_idle_timeout            | 60             |
| thread_pool_max_threads             | 100000         |
| thread_pool_oversubscribe           | 3              |
| thread_pool_size                    | 8              |
| thread_pool_stall_limit             | 500            |
+-------------------------------------+----------------+
17 rows in set (0.00 sec)
```

With that, the buffer pool is now roughly 1 GB. Note that the value we set is 1073741824, but the effective value becomes 1074790400. The reason is the chunk size discussed above: the buffer pool is rounded up to a whole number of chunks.
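The rounding can be reproduced with a little arithmetic. This is just a sketch of the calculation, assuming a single buffer pool instance as shown in the output above (InnoDB rounds the requested size up to a whole number of chunks of innodb_buffer_pool_chunk_size bytes):

```sql
-- Requested 1 GB with a 5 MB chunk size and innodb_buffer_pool_instances = 1:
-- 1073741824 / 5242880 = 204.8 chunks, rounded up to 205 chunks,
-- and 205 * 5242880 = 1074790400, exactly the value the server reports.
SELECT
    1073741824 AS requested_bytes,
    5242880    AS chunk_bytes,
    CEILING(1073741824 / 5242880)           AS chunks_allocated,
    CEILING(1073741824 / 5242880) * 5242880 AS effective_pool_bytes;
```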
At this point I ran top again to observe CPU usage:

```
[dba_mysql@dba-mysql ~]$ top
top - 22:19:09 up 104 days, 5:26, 2 users, load average: 0.45, 0.84, 0.86
Tasks: 218 total,   1 running, 217 sleeping,   0 stopped,   0 zombie
Cpu0 :  0.3%us,  0.3%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1 :  0.3%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2 :  1.0%us,  0.0%sy,  0.0%ni, 99.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3 :  1.0%us,  0.0%sy,  0.0%ni, 99.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4 :  0.3%us,  0.3%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5 :  0.3%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6 :  0.0%us,  0.3%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7 :  0.7%us,  0.0%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  16318504k total,  8008140k used,  8310364k free,   322048k buffers
Swap:  5242876k total,        0k used,  5242876k free,  6230600k cached

   PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM     TIME+ COMMAND
 43285 root      20   0  174m  40m  19m S   1.0  0.3 753:07.38 consul
116842 root      20   0  202m  17m 5160 S   1.0  0.1   0:21.30 python
 75373 mysql     20   0 1966m 834m  29m S   0.7  5.2 112313:36 mysqld
116553 root      20   0  670m  14m 4244 S   0.7  0.1   0:44.31 falcon-agent
116584 root      20   0  331m  11m 3544 S   0.7  0.1   0:37.92 python2.6
     1 root      20   0 21452 1560 1248 S   0.0  0.0   0:02.43 init
```

The CPU usage has come down. To rule out coincidence, I changed the buffer pool back to the original 5 MB, and the earlier problem immediately reappeared. In other words, enlarging the buffer pool really is the fix.

The problem is solved, but what lies behind it is worth thinking about. Why does a tiny buffer pool drive one CPU to 100%? The only reason I can think of is that a 5 MB buffer pool is far too small, so the business SQL has to go to disk constantly when reading data; disk is slow, so the I/O load rises and drives up the CPU load. As for why only one CPU is loaded while the others sit near 0%, that still needs further investigation. If anyone knows the answer, please let me know.
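As a postscript to the disk-read hypothesis above: a rough way to check it is to compare how many buffer pool read requests actually had to hit disk. This is a sketch, assuming MySQL 5.7 or later where the counters are exposed in performance_schema.global_status (on older versions, the two SHOW GLOBAL STATUS statements give the same numbers):

```sql
-- Logical reads vs. reads that missed the buffer pool and went to disk.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';

-- Approximate hit ratio; with a 5 MB pool this drops well below the ~99%
-- expected of a properly sized pool.
SELECT 1 - (
    (SELECT VARIABLE_VALUE FROM performance_schema.global_status
      WHERE VARIABLE_NAME = 'Innodb_buffer_pool_reads')
    /
    (SELECT VARIABLE_VALUE FROM performance_schema.global_status
      WHERE VARIABLE_NAME = 'Innodb_buffer_pool_read_requests')
) AS buffer_pool_hit_ratio;
```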