SQL: finding all duplicate records in a table

(1) Basic queries and deletes

1. The table xi has two fields, id and username. Find all rows whose username value is duplicated:

select * from xi a where a.username in (select username from xi group by username having count(*) > 1)

2. Group the duplicates and list how many times each value repeats:

select count(username) as 'Number of repetitions', username from xi group by username having count(*) > 1 order by username desc

3. Looking at what others have posted, here are fuller methods for querying and deleting duplicate records. Note that rowid below is the Oracle pseudo-column; a MySQL variant is sketched at the end of this article.

3.1 Find redundant duplicate records, where duplicates are determined by a single field (peopleId):

select * from people where peopleId in (select peopleId from people group by peopleId having count(peopleId) > 1)

3.2 Delete redundant duplicate records determined by a single field (peopleId), keeping only the record with the smallest rowid:

delete from people
where peopleId in (select peopleId from people group by peopleId having count(peopleId) > 1)
and rowid not in (select min(rowid) from people group by peopleId having count(peopleId) > 1)

3.3 Find redundant duplicate records determined by multiple fields:

select * from vitae a
where (a.peopleId, a.seq) in (select peopleId, seq from vitae group by peopleId, seq having count(*) > 1)

3.4 Delete redundant duplicate records determined by multiple fields, keeping only the record with the smallest rowid:

delete from vitae a
where (a.peopleId, a.seq) in (select peopleId, seq from vitae group by peopleId, seq having count(*) > 1)
and rowid not in (select min(rowid) from vitae group by peopleId, seq having count(*) > 1)

3.5 Find redundant duplicate records determined by multiple fields, excluding the record with the smallest rowid:

select * from vitae a
where (a.peopleId, a.seq) in (select peopleId, seq from vitae group by peopleId, seq having count(*) > 1)
and rowid not in (select min(rowid) from vitae group by peopleId, seq having count(*) > 1)

(2) Finding duplicates with GROUP BY / HAVING

Table A has a field "name", and different records may share the same "name" value. To find the records whose "name" value is duplicated:

Select Name, Count(*) From A Group By Name Having Count(*) > 1

If the sex must also match, group by both columns:

Select Name, sex, Count(*) From A Group By Name, sex Having Count(*) > 1

(3) Deleting duplicates

Method 1: use a cursor to delete all but one row for each duplicated key value (SQL Server T-SQL; "table name" and "primary field" are placeholders to replace with real identifiers):

declare @max integer, @id integer
declare cur_rows cursor local for select primary field, count(*) from table name group by primary field having count(*) > 1
open cur_rows
fetch cur_rows into @id, @max
while @@fetch_status = 0
begin
    select @max = @max - 1
    set rowcount @max
    delete from table name where primary field = @id
    fetch cur_rows into @id, @max
end
close cur_rows
set rowcount 0
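For concreteness, here is the same cursor instantiated for the people/peopleId example from section (1). This is only a minimal sketch, assuming peopleId is an integer column:

-- Minimal sketch (SQL Server T-SQL): Method 1 instantiated for the hypothetical people/peopleId table.
declare @max integer, @id integer
declare cur_rows cursor local for
    select peopleId, count(*) from people group by peopleId having count(*) > 1
open cur_rows
fetch cur_rows into @id, @max
while @@fetch_status = 0
begin
    -- keep one row per key: limit the delete to (count - 1) rows
    select @max = @max - 1
    set rowcount @max
    delete from people where peopleId = @id
    fetch cur_rows into @id, @max
end
close cur_rows
deallocate cur_rows
-- reset so later statements are not row-limited
set rowcount 0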
Method 2: duplicate records come in two kinds: rows that are complete duplicates (every field repeated), and rows that are duplicated only on certain key fields (for example the Name field repeats while the other fields may or may not repeat, or can be ignored).

1. The first kind is easy to handle:

select distinct * from tableName

returns a result set without duplicate rows. If the duplicates have to be deleted from the table itself (keeping one copy of each), this can be done through a temporary table (SQL Server syntax):

select distinct * into #Tmp from tableName
drop table tableName
select * into tableName from #Tmp
drop table #Tmp

This kind of duplication comes from poor table design and is best prevented by adding a unique index column.

2. The second kind usually requires keeping the first of the duplicate records. Suppose the duplicated fields are Name and Address and a result set unique on these two fields is wanted:

select identity(int,1,1) as autoID, * into #Tmp from tableName
select min(autoID) as autoID into #Tmp2 from #Tmp group by Name, Address
select * from #Tmp where autoID in (select autoID from #Tmp2)

The last select returns a result set that is unique on Name and Address (plus an extra autoID column, which can be left out of the select list in practice).

(4) Duplicate queries

select * from tablename where id in (select id from tablename group by id having count(id) > 1)

Find duplicate records for a single field, here based on the sample_code field:

SELECT * FROM tb_table WHERE sample_code IN (
    SELECT sample_code FROM tb_table GROUP BY sample_code HAVING COUNT(sample_code) > 1
);

Find duplicate records for multiple fields (two here as an example), based on the name and code fields:

SELECT * FROM (SELECT *, CONCAT(name, code) AS nameAndCode FROM tb_table) t
WHERE t.nameAndCode IN (
    SELECT nameAndCode FROM (SELECT CONCAT(name, code) AS nameAndCode FROM tb_table) tt
    GROUP BY nameAndCode
    HAVING COUNT(nameAndCode) > 1
)

Summary

The above are the SQL statements for finding duplicate data in MySQL based on one or more fields. I hope it is helpful to you. If you have any questions, please leave me a message and I will reply in time. Thank you all for your support of 123WORDPRESS.COM!
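One closing note, since the article targets MySQL: the delete statements in section (1) rely on rowid, which is an Oracle pseudo-column and does not exist in MySQL. Below is a minimal MySQL sketch of the same single-field cleanup, assuming the people table has an auto-increment primary key named id; the extra derived tables work around MySQL error 1093, which forbids selecting from the table being deleted from in the same statement.

-- Minimal MySQL sketch, assuming people has an auto-increment primary key `id`:
-- keep the row with the smallest id for each duplicated peopleId and delete the rest.
delete from people
where peopleId in (
    -- peopleId values that occur more than once
    select peopleId from (
        select peopleId from people group by peopleId having count(*) > 1
    ) dup
)
and id not in (
    -- the single row to keep for each duplicated peopleId
    select keep_id from (
        select min(id) as keep_id from people group by peopleId having count(*) > 1
    ) keepers
);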