This week has been hectic, like fighting a war. I feel like I'm being dragged along by others: out early, back late, with endless work every day. Sometimes I feel that being a DBA is not a good job; it demands strong stress tolerance and psychological endurance. At lunch today I really felt like I was about to collapse, but after dinner I still decided to go home on time. Walking on the street, watching the crowds of people getting off work, I thought to myself: isn't this just a normal quitting time? Why do I still feel guilty for "leaving early"? Maybe I've been thoroughly brainwashed. This week's official account update was delayed by two days. On Tuesday I was so tired that I simply rested. Last night I left around nine o'clock. Exhausted after a full day's work, my brain stopped cooperating and I caused an incident: I deleted the account-permission table of a production environment, then discovered that the environment was new and had never been backed up. I was devastated; it felt like the sky was falling. Fortunately, I have the habit of keeping a record of my changes, and I found the account permissions I had previously granted to the business side in a txt file of my own. It took me two hours to repair everything, including testing whether the service was available and whether replication was catching up in time. By the time I got back and looked at the clock, it was already half past eleven. I hurried to write for the official account, but by the time I finished it was past midnight, so I simply skipped that update. The feeling is really awful. I don't remember where I saw this sentence: "The best way to ruin an IT person is to keep him so busy that he has no time to grow."
I feel like I'm in that vicious cycle now, and I remember what a friend once said to me: "When you're busy working for the company, look up from time to time to see the road ahead." I hope I can adjust quickly. I believe people who have drifted to Beijing share some of these feelings. Everyone has their own definition of busy, and on this point I think some readers of this account will resonate with me ^_^. Enough rambling; complaining alone solves nothing. Let me focus on the present and write something useful. I hope it helps, and it also serves as a summary for myself.

An approach to rebuilding master-slave replication for a large MySQL database

This morning when I got to the company, I ran into a problem. The alert information showed that some master-slave relationships in a distributed cluster were down, that is, the slave databases had disconnected. I checked the cause and found that the business side and another colleague had been importing data into the master database at the same time. Their operations were interdependent and involved a large number of transactions. Due to conflicts, some transactions were rolled back, and the slave found that the data to be rolled back no longer existed, so replication broke. Seeing this problem, I first tried to repair the slave. Since replication was built with GTID, I tried the "set next GTID" method (the specific steps are in my earlier GTID article; the article index is at the bottom of the official account): set gtid_next to the failing transaction, run an empty BEGIN; COMMIT;, then set gtid_next back to automatic. That fixed the problem, but after about five minutes it broke again. Clearly this method is not a long-term solution, and a lot of the business side's data had still not flowed in.
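The empty-transaction repair mentioned above looks roughly like this. This is a sketch only: the UUID and transaction number in GTID_NEXT are placeholders, which in practice come from the failing transaction shown in Last_SQL_Error / the Executed_Gtid_Set of SHOW SLAVE STATUS.

```sql
-- On the broken slave: stop the SQL thread first.
STOP SLAVE;

-- Placeholder GTID: substitute the GTID of the failing transaction.
SET GTID_NEXT = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:12345';

-- An empty transaction "occupies" that GTID, so replication skips it.
BEGIN;
COMMIT;

-- Return GTID assignment to automatic and resume replication.
SET GTID_NEXT = 'AUTOMATIC';
START SLAVE;
```

As the text notes, this only skips one conflicting transaction at a time; when the underlying data has diverged, it must be repeated for every conflict, which is why it could not hold for more than a few minutes here.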
After weighing the options, we finally had to rebuild the slave. I checked the data volume of the master database: about 100G. The two obvious methods I thought of were:

1. Back up directly on the server itself.
2. Back up to the remote NFS-mounted backup machine.

Looking at the first: the server itself does not have that much free space left. A forced backup might be possible, but it would trigger disk alerts, which is definitely not a good approach. Moreover, with the xtrabackup method, the apply-log and copy-back steps would take a long time. As for the remote NFS backup machine: it has plenty of capacity, which solves the disk problem, but the bandwidth required for remote transfer is not there. If the backups run in parallel, the bandwidth is certainly insufficient and the concurrent backup processes would be even slower; a conservative estimate for five master-slave sets was about 8 hours. So what to do? We took a fairly blunt approach: we talked directly with the business side and temporarily stopped the service, set up SSH mutual trust between the two machines, and used scp to copy the data files directly to the slave as a raw physical copy. No compression was used, because compressing and decompressing a 100G file costs a lot of time. The benefits of this approach:

First: each copy is decoupled and unaffected by the other environments.
Second: copying the master's native files to the slave over the inter-machine bandwidth guarantees complete data consistency.
Third: it is faster.

So I did it. A rough measurement showed that copying a 100G file with scp took only about 17 minutes, which solved the problem of long backup times. With five windows running in parallel without affecting each other, the data for all five environments was transferred in about 30 minutes.
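The physical copy step might look roughly like the following. This is a sketch under assumptions: the host name and data directory path are placeholders, SSH key-based mutual trust is assumed to be configured already, and MySQL is assumed to be stopped during the copy (the text says the service was temporarily stopped, which is what makes a raw datadir copy consistent).

```shell
# 1. Stop MySQL cleanly so the data files on disk are consistent.
systemctl stop mysqld

# 2. Copy the raw data directory to the slave. No compression:
#    compressing/decompressing ~100G costs more time than it saves here.
scp -r /data/mysql/ slave-host:/data/mysql/

# 3. Repeat in a separate terminal window for each of the five
#    master-slave pairs; the copies run in parallel independently.
```

Running one scp per window keeps each environment's copy isolated, so a slow or failed transfer in one pair does not delay the others.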
Now the master and slave data are completely consistent, and we can start rebuilding the slave. There are several things to do:

1. Replace the my.cnf copied over from the master with the slave's original my.cnf; otherwise the server_id will be duplicated and the master-slave setup will fail.
2. Copy the slave's original slave-relay-log.index file into the new data directory; otherwise the slave setup will complain that the file cannot be found.
3. Change the slave's UUID. GTID replication needs this: the UUIDs in a master-slave environment must not repeat, or the service will be unavailable. The UUID is generally stored in the auto.cnf file, which holds the current instance's UUID value.
4. Run RESET SLAVE ALL on the slave, then rebuild replication using auto_position=1. After the rebuild, verify that master and slave data are consistent.
5. Set the read-only option on the slave to forbid direct DML operations against it.

That is the whole idea for rebuilding a master-slave relationship for a large MySQL database.
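The rebuild steps above might be sketched like this on the slave. This is a sketch under assumptions: the paths, master host name, and replication credentials are placeholders, and the UUID is changed by deleting auto.cnf so that MySQL regenerates a fresh one on startup.

```shell
# 1. Restore the slave's own my.cnf (the copied one carries the
#    master's server_id, which must not be duplicated).
cp /backup/my.cnf.slave /etc/my.cnf

# 2. Put the slave's original relay log index back in the datadir.
cp /backup/slave-relay-log.index /data/mysql/

# 3. Force a new server UUID: remove auto.cnf and MySQL will
#    generate a fresh one at startup, avoiding a GTID UUID clash.
rm /data/mysql/auto.cnf

systemctl start mysqld

# 4. Rebuild replication with GTID auto-positioning, then
# 5. make the slave read-only to block direct DML.
mysql -e "
  RESET SLAVE ALL;
  CHANGE MASTER TO
    MASTER_HOST='master-host',
    MASTER_USER='repl',
    MASTER_PASSWORD='***',
    MASTER_AUTO_POSITION=1;
  START SLAVE;
  SET GLOBAL read_only = ON;
"
```

Note that plain read_only still allows writes from users with the SUPER privilege; on versions that support it, super_read_only closes that gap as well.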