1. Query Optimization

1. MySQL tuning outline
2. Small table drives large table

MySQL implements joins as a "nested loop": it takes each row of the driving table and uses the join condition to look up matching records in the driven table. The driving table is scanned row by row (its index on the join column is not used for the join itself), while each probe into the driven table can use an index, so it is the driven table's index that matters.

# Assume table a has 10,000 rows and table b has 20 rows
select * from a join b on a.bid = b.id;

If b (the small table) drives a: 20 outer rows each trigger a B+ tree search in a on the condition a.bid = b.id, so the cost is roughly 20 + 20 * log(10000) row accesses.

If a (the large table) drives b: 10,000 outer rows each trigger a B+ tree search in b, so the cost is roughly 10000 + 10000 * log(20) row accesses.

Since 20 + 20 * log(10000) is far smaller than 10000 + 10000 * log(20), always let the small table drive the large one.
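The optimizer normally picks the smaller table as the driver on its own. To see, or force, the join order, something like the following can be used (a sketch reusing the hypothetical tables a and b from the example, and assuming a.bid is indexed):

explain select * from a join b on a.bid = b.id;
-- the first row of the explain output is the driving table

select * from b straight_join a on a.bid = b.id;
-- straight_join forces the left-hand table (b) to be the driver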
3. in and exists

# With in, the inner table B drives the outer table A
select * from A where id in (select id from B);

# With exists, the outer table A drives the inner table B
select * from A where exists (select 1 from B where B.id = A.id);

Conclusion: when table B's data set is smaller than table A's, use in; when table A's data set is smaller than table B's, use exists.

4. order by optimization

Create a test table:

create table tblA(
  #id int primary key not null auto_increment,
  age int,
  birth timestamp not null
);

insert into tblA(age, birth) values(22, now());
insert into tblA(age, birth) values(23, now());
insert into tblA(age, birth) values(24, now());

# Create a composite index
create index idx_A_ageBirth on tblA(age, birth);

order by hits the index when it follows the leftmost prefix of idx_A_ageBirth; otherwise it falls back to a filesort.
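A sketch of both cases with explain (standard behavior for a composite (age, birth) index; the exact key_len and Extra output depends on your MySQL version):

-- hits the index: leftmost prefix respected
explain select age, birth from tblA where age > 20 order by age;
explain select age, birth from tblA where age > 20 order by age, birth;

-- misses the index: Extra shows "Using filesort"
explain select age, birth from tblA where age > 20 order by birth;       -- skips the leading column
explain select age, birth from tblA order by birth, age;                 -- columns in the wrong order
explain select age, birth from tblA where age > 20 order by age desc, birth asc;  -- mixed directions (before 8.0)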
select * from user where name = 'zs' order by age;

# Two-pass (rowid) sort
1) From the name index, find the first primary key id satisfying name = 'zs'.
2) Fetch the whole row by primary key id, and put the sort field age and the primary key id into the sort_buffer.
3) From the name index, take the next primary key id satisfying name = 'zs'.
4) Repeat 2) and 3) until name = 'zs' is no longer satisfied.
5) Sort the (age, id) pairs in the sort_buffer by age.
6) Walk the sorted ids, go back to the table by id, fetch all the fields of each row, and return them to the client.

# Single-pass sort
1) From the name index, find the first primary key id satisfying name = 'zs'.
2) Fetch the whole row by primary key id, take out the values of all fields, and store them in the sort_buffer.
3) From the name index, find the next primary key id satisfying name = 'zs'.
4) Repeat 2) and 3) until name = 'zs' is no longer satisfied.
5) Sort the data in the sort_buffer by age and return the result to the client.

Problems and optimization of single-pass sorting

Problem: the single-pass algorithm is an improvement and generally beats the two-pass algorithm, but because it stores every selected field in the sort_buffer, it takes far more space per row. If the total data exceeds sort_buffer_size, MySQL can only sort one buffer-full at a time, writing each sorted chunk to a temporary file and merging the chunks afterwards, which causes extra disk I/O.

Optimization strategy:
- Increase sort_buffer_size.
- Increase max_length_for_sort_data (rows longer than this fall back to the two-pass algorithm).

Note: select * with order by is a big taboo; query only the fields you need. The more fields you select, the more data has to fit in memory, and the fewer rows each I/O can load.

5. group by optimization

group by essentially sorts first and then groups, so it follows the same rules as order by: respect the leftmost prefix of the index, and filter with where before grouping rather than with having afterwards, as sketched below.
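A minimal sketch of those two rules, reusing the tblA table and the idx_A_ageBirth index created above:

-- grouping by the leading column of the composite index can use idx_A_ageBirth
explain select age, count(*) from tblA group by age;

-- filter rows before grouping with WHERE instead of after with HAVING
select age, count(*) from tblA where age > 22 group by age;    -- preferred
select age, count(*) from tblA group by age having age > 22;   -- avoid when the filter could be a WHERE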
2. Slow query log

1. What is the slow query log?

MySQL's slow query log records SQL statements whose execution time exceeds the long_query_time threshold (10 seconds by default). It is the main tool for locating the statements that need tuning.
2. Enable slow query log

By default the MySQL slow query log is disabled. Unless it is needed for tuning, it is generally not recommended to enable it, because logging slow queries costs some performance. The slow query log supports writing records to a file.

a) Enable the slow query log

# Check whether the slow query log is enabled
show variables like 'slow_query_log%';

# Enable it on the running server (lost on restart; to make it permanent, configure it in my.cnf)
set global slow_query_log = 1;

b) Set the slow query threshold

# View the threshold; the default is 10 seconds
show variables like 'long_query_time%';

# Set it to 3s (lost on restart; to make it permanent, configure it in my.cnf)
set global long_query_time = 3;

# A value changed with set global is only visible to new sessions, so open a new window to check it
show variables like 'long_query_time%';

c) Make the slow query log and the threshold permanent

[mysqld]
# Persist the slow query log
slow_query_log=1
slow_query_log_file=/var/lib/mysql/hadoop102-slow.log
long_query_time=3
log_output=FILE

d) A slow query example

# A query that waits 4s
select sleep(4);

# On the Linux host, view the slow query log
cat /var/lib/mysql/hadoop102-slow.log

e) Check how many slow queries the server has recorded

show global status like '%Slow_queries%';

3. Log analysis command: mysqldumpslow

a) Parameter explanation

-s: how to sort the output: c (query count), t (query time), l (lock time), r (rows returned); prefix with a (at, al, ar) to sort by averages
-t NUM: show only the top NUM entries
-g PATTERN: keep only entries matching the grep-style pattern
b) Common usage

# Get the 10 SQL statements that return the most rows
mysqldumpslow -s r -t 10 /var/lib/mysql/hadoop102-slow.log

# Get the 10 most frequently executed SQL statements
mysqldumpslow -s c -t 10 /var/lib/mysql/hadoop102-slow.log

# Get the first 10 statements containing a left join, sorted by time
mysqldumpslow -s t -t 10 -g "left join" /var/lib/mysql/hadoop102-slow.log

# Combine these commands with | more to page through the output
mysqldumpslow -s r -t 10 /var/lib/mysql/hadoop102-slow.log | more

3. Batch data-writing script

1. Create the tables

CREATE TABLE dept (
  deptno int unsigned primary key auto_increment,
  dname varchar(20) not null default '',
  loc varchar(8) not null default ''
) ENGINE=INNODB DEFAULT CHARSET=utf8;

CREATE TABLE emp (
  id int unsigned primary key auto_increment,
  empno mediumint unsigned not null default 0,
  ename varchar(20) not null default '',
  job varchar(9) not null default '',
  mgr mediumint unsigned not null default 0,
  hiredate date not null,
  sal decimal(7,2) not null,
  comm decimal(7,2) not null,
  deptno mediumint unsigned not null default 0
) ENGINE=INNODB DEFAULT CHARSET=utf8;

2. Trust stored function creators

# Check the binlog status
show variables like 'log_bin%';

# Trust stored function creators (needed when the binlog is enabled)
set global log_bin_trust_function_creators = 1;

3. Create functions

A function that generates a random string:

# Use $$ as the statement terminator (replacing the usual ;)
delimiter $$
create function rand_string(n int) returns varchar(255)
begin
  declare chars_str varchar(100) default 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
  declare return_str varchar(255) default '';
  declare i int default 0;
  while i < n do
    set return_str = concat(return_str, substring(chars_str, floor(1 + rand() * 52), 1));
    set i = i + 1;
  end while;
  return return_str;
end $$

A function that generates a random department number:

delimiter $$
create function rand_num() returns int(5)
begin
  declare i int default 0;
  set i = floor(100 + rand() * 10);
  return i;
end $$

4. Create stored procedures

A stored procedure that inserts data into the emp table:

delimiter $$
create procedure insert_emp(in start int(10), in max_num int(10))
begin
  declare i int default 0;
  set autocommit = 0;
  repeat
    set i = i + 1;
    insert into emp(empno, ename, job, mgr, hiredate, sal, comm, deptno)
    values((start + i), rand_string(6), 'salesman', 0001, curdate(), 2000, 400, rand_num());
  until i = max_num end repeat;
  commit;
end $$

A stored procedure that inserts data into the dept table:

delimiter $$
create procedure insert_dept(in start int(10), in max_num int(10))
begin
  declare i int default 0;
  set autocommit = 0;
  repeat
    set i = i + 1;
    insert into dept(deptno, dname, loc) values((start + i), rand_string(10), rand_string(8));
  until i = max_num end repeat;
  commit;
end $$

5. Call the stored procedures to generate data

# Restore the default statement terminator
delimiter ;

# Insert 10 records into the department table
CALL insert_dept(100, 10);

# Insert 500,000 records into the employee table
CALL insert_emp(100001, 500000);
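The procedures switch off autocommit and commit once at the end, so the half-million rows go in as a single transaction rather than 500,000 separate ones. A quick sanity check after the calls (the expected counts follow from the arguments above):

select count(*) from dept;   -- 10
select count(*) from emp;    -- 500000
select * from emp limit 10;  -- eyeball a few generated rows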
4. show profiles

1. Introduction

show profile shows where a statement spent its time: for each recent query, MySQL keeps a stage-by-stage breakdown of resource usage (CPU, block I/O, and so on). It has been deprecated since MySQL 5.6.7 in favor of the Performance Schema, but it remains handy for quick diagnostics.

2. Enable it

# Check whether Show Profile is enabled
show variables like 'profiling%';

# Enable Show Profile
set profiling = on;

3. Use show profiles

Create some test queries:

select * from emp group by id%10 limit 150000;
select * from emp group by id%10 limit 150000;
select * from emp group by id%10 order by 5;
select * from emp;
select * from dept;
select * from emp left join dept on emp.deptno = dept.deptno;

Run show profiles to list the recorded statements, then drill into one of them:

show profile cpu, block io for query Query_ID;

Type parameters

all: all the information
block io: block input and output counts
context switches: voluntary and involuntary context switches
cpu: user and system CPU time
ipc: messages sent and received
memory: memory usage
page faults: major and minor page faults
source: source function, file, and line
swaps: swap counts
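Any of these types can be combined in one command; for instance (the Query_ID of 6 here is hypothetical, take the real one from the show profiles output):

show profile all for query 6;
show profile cpu, memory, block io for query 6;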
Returned results

Each row of the output is one execution stage (Status) with its duration. Watch out for dangerous statuses such as converting HEAP to MyISAM (the result set is too large for memory and spills to disk), Creating tmp table, Copying to tmp table on disk, and locked.
5. Global Query Log

Do not enable this feature in a production environment.

Configure it in my.cnf:

[mysqld]
# Enable the general log
general_log=1
# Path of the log file
general_log_file=/path/logfile
# Output format
log_output=FILE

Or enable it with SQL:

set global general_log = 1;
set global log_output = 'TABLE';

With TABLE output, every statement is recorded in the general_log table of the mysql database:

select * from mysql.general_log;

Summary

This concludes this article on MySQL query interception: query optimization, the slow query log, batch data-writing scripts, show profiles, and the global query log.