Background

As the business develops, the company's scale keeps expanding, and the website has accumulated a large amount of user information and data. For an Internet company, user and business data are the foundation; if that data is corrupted or lost, it is a disaster. To prevent data loss caused by operator error or system failure, the company requires strengthening the reliability of user data: comprehensive data-level backups, and the ability to restore as quickly as possible when a failure occurs.

Forms of data backup

File backup: use Linux backup commands to package files and save them locally or on a remote server. When a restore is needed, these archives are used to restore the files to the specified location.

Database backup: in industries with high data-reliability requirements, such as banking, securities, and telecommunications, unexpected downtime or data loss leads to very heavy losses. Database administrators should therefore design detailed backup and disaster-recovery strategies based on the specific business requirements, and rigorously test them against every failure scenario they can simulate; only then can high data availability be ensured. Backup is a long-running, routine process, while recovery is performed only after an accident occurs. Recovery can be seen as the reverse process of backup, and how far data can be recovered depends largely on how it was backed up. In addition, whether the steps the administrator takes during recovery are correct also directly affects the final result.

Types of data backup

By business, backups can be divided into full, incremental, and differential backups.

1. Full backup: back up the data and data structure of the entire database.
Advantages: intuitive and easy to understand; when a data-loss disaster occurs, the backup file made before the disaster is enough to restore the lost data.
Disadvantages: because the whole system is backed up each time, a large amount of the backup data is duplicated, which takes up a lot of space and raises costs; and because the amount of data to back up is large, the backup takes a long time. For sites with busy business and a limited backup window, relying on this strategy alone is unwise.

2. Incremental backup: each backup contains only the data added or modified since the last backup of any kind.
Advantages: no duplicate backup data, which saves space and shortens backup time.
Disadvantages: recovering data after a disaster is troublesome, and a problem with any one backup leads to data loss. For example, if the system fails on Thursday morning and must be restored to Wednesday night's state, the administrator first restores Monday's full backup, then applies Tuesday's incremental backup, then Wednesday's. This is clearly much more troublesome than the first strategy.
In addition, this kind of backup has poor reliability: the individual backups form a chain, one link after another, and if any one of them has a problem, the whole chain breaks.

3. Differential backup: each backup contains the data added or modified since the last full backup.
For example, the administrator performs a full backup on Monday; on each of the following days, he backs up everything that differs from Monday (new or modified data). On Tuesday, suppose only an asset list has been added to the system, so only that asset list is backed up; on Wednesday a product catalog appears, so the administrator backs up both the catalog and Tuesday's asset list; on Thursday a payroll is added, so Thursday's backup contains payroll + product catalog + asset list.

From this we can see that a full backup takes the longest to make but the shortest to restore, and is the most convenient to operate; when the amount of data in the system is not large, full backup is the most reliable. Differential backup avoids the defects of the other two strategies, and the different backup types can also be used in combination.

Examples of combining backup types

Full + differential backups: a full backup is performed on Monday, and differential backups on Tuesday through Friday. If the data is corrupted on Friday, you only need to restore Monday's full backup and Thursday's differential backup.
This strategy takes more time to back up data, but less time to restore it.

Full + incremental backups: a full backup is performed on Monday, and incremental backups on Tuesday through Friday. If the data is corrupted on Friday, you need to restore Monday's full backup and all the incremental backups from Tuesday through Friday. This strategy takes less time to back up data, but more time to restore it.

Classification by mode: hot backup, warm backup, cold backup.

Hot backup: the database is backed up directly while running, with no impact on ongoing operations.
Cold backup: performed while the database is stopped. This is the simplest kind; generally it only requires copying the relevant physical database files.
Warm backup: also performed while the database is running, but it affects current operations, for example by taking a global read lock to guarantee the consistency of the backup data. (When backing up a table, lock it first so that no one can insert, query, delete, or modify its data; the table then cannot change during the backup, which guarantees a consistent copy.)
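To make the two combination strategies just described concrete, here is a small Python sketch that computes which backup sets a restore needs; the day names and backup labels are invented for the example, assuming Monday's full backup and Thursday's backup being the last good one:

```python
# Illustrative only: which backup sets must be restored under the two
# combination strategies, given a Monday full backup.

DAYS = ("Mon", "Tue", "Wed", "Thu", "Fri")

def restore_chain(strategy, last_backup_day):
    """Return the ordered list of backup sets needed to recover."""
    idx = DAYS.index(last_backup_day)
    if strategy == "full+incremental":
        # Monday's full backup plus every incremental made since.
        return [("full", "Mon")] + [("incr", d) for d in DAYS[1:idx + 1]]
    if strategy == "full+differential":
        # Monday's full backup plus only the latest differential.
        chain = [("full", "Mon")]
        if idx > 0:
            chain.append(("diff", last_backup_day))
        return chain
    raise ValueError("unknown strategy: %s" % strategy)

# Failure on Friday, last good backup made Thursday night:
print(restore_chain("full+incremental", "Thu"))   # full + Tue, Wed, Thu incrementals
print(restore_chain("full+differential", "Thu"))  # full + Thu differential only
```

The longer incremental chain is exactly why that strategy restores more slowly: every link since the full backup must be replayed, in order.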
Physical backup: made by directly copying the data files (the copied files are in binary format).
Advantages: no additional tools are required, just copy; restoring is likewise a direct copy of the backup files.
Disadvantages: tied to the storage engine, with weak cross-platform portability.

Logical backup: made by "exporting" data from the database and saving it separately (the exported SQL statements go into a text file, which is larger than the binary files).
Advantages: can be processed with a text editor, easy to restore, can be restored over the network, and helps avoid data corruption.
Disadvantages: the backup file is large and the backup is slow; the precision of floating-point numbers cannot be guaranteed; and after restoring a logical backup, the indexes must be rebuilt manually, which consumes a lot of CPU.

Backup flowchart

Introduction to MySQL logs

MySQL's logs mainly include the error log, query log, slow query log, transaction log, binary log, and so on. Logs are an important part of the MySQL database: the log files record the changes that occur while MySQL runs, that is, client connections, the execution of SQL statements, and error messages. When the database is accidentally damaged, you can find the cause of the error in the log files and use them to restore the data.

MySQL error log

In MySQL, the error log is enabled by default, and it cannot be disabled. By default the error log is stored in MySQL's data directory and is usually named hostname.err, where hostname is the server's host name. The error log is configurable: what it records is defined by log-error and log-warnings.
log-error defines whether the error log function is enabled and where the error log is stored, and log-warnings defines whether warning information is written into the error log. By default the error log records the following: information during server startup and shutdown (not necessarily errors, for example how MySQL opens the InnoDB tablespace files or initializes its storage engines), error information during server operation, information generated when the event scheduler runs an event, and information generated when a server process is started on the server.

mysql -uroot -p
show global variables like '%log%';

You can modify log_error through the configuration file:

vim /etc/my.cnf
log-error=/var/log/mariadb/mariadb.err

(Here the error log path has been changed to /var/log/mariadb/mariadb.err.) Then restart the database service, connect to the database, and view the global variables again to confirm the modification succeeded; you can then view the contents of the error log.

Temporary modification: log_error can be set directly to a file path or to ON|OFF; log_warnings can only be toggled with 1|0.

Permanent modification: to change the error log location, set log_error in the configuration file, in the following format:

[root@stu18 data]# vim /etc/my.cnf
[mysqld]
log_error=DIR/[filename]

The DIR parameter specifies the path of the error log, and the filename parameter its name; if filename is not specified, the host name is used by default. Modify the configuration file and restart the MySQL server for the change to take effect.

Note, before MySQL 5.5.7: database administrators could delete old error logs to preserve disk space on the MySQL server, and could use the mysqladmin command to open a new error log.
The syntax of the mysqladmin command is as follows:

mysqladmin -u root -p flush-logs

You can also log in to the mysql database and use the FLUSH LOGS statement to open a new error log.

MySQL query log

By default, the query log is turned off. Because the query log records all user operations, including inserts, deletes, queries, and updates, it generates a huge amount of information under heavy concurrency, causing unnecessary disk I/O that hurts MySQL's performance. Unless you are debugging the database, it is recommended not to enable the query log.

mysql> show global variables like '%log%';

MySQL slow query log

The slow query log records query statements whose execution time exceeds a specified threshold. Through it you can find out which queries execute inefficiently (statements that take a long time to run and need to be found and dealt with to improve server performance) so that you can optimize them. Enabling it is strongly recommended: it has little impact on server performance, records the statements that run for a long time on the MySQL server, and helps locate performance problems.

Enabling and configuring the slow query log:

1. It can be enabled through the configuration file my.cnf, in the following form:

vim /etc/my.cnf
[mysqld]
slow-query-log = ON
slow-query-log-file = /var/log/mariadb/slow.log
long-query-time = 0.01

The DIR parameter specifies the storage path of the slow query log, and the filename parameter the file name of the log; the complete name of the generated log file is filename-slow.log. If no storage path is specified, the slow query log is stored in MySQL's data directory by default; if no file name is specified, the default is hostname-slow.log.

2. It can also be set directly from a MySQL session. You first need global privileges; then execute:

mysql> set global slow_query_log=1;

(This takes effect temporarily, for the running server only.) What is the default time threshold for the slow query log? It is set through the long_query_time option, in seconds, and can be accurate to microseconds. If a query takes longer than this value (the default is 10 seconds), the statement is recorded in the slow query log. To view the server's current value:

mysql> show global variables like 'long_query_time';

Note: the slow query time does not mean only that the statement's own execution exceeded the threshold; it also includes time the query spent blocked waiting for contended resources or for other reasons. The recorded duration covers all the time from the start of the query to its end, whatever the cause. You can then view the contents of the slow query log.

MySQL transaction log

A transaction is a collection of operations that must be committed together; only after the commit can the series of operations be called a transaction (either all of the operations are performed, or none of them are). Transaction logs (InnoDB-specific logs) help improve transaction efficiency: with a transaction log, when modifying a table's data the storage engine only needs to modify its in-memory copy and record the modification in the transaction log persisted on disk, instead of persisting the modified data pages themselves on every change. The transaction log is append-only, so writing it is sequential I/O within a small area of the disk; unlike random I/O, which moves the disk head between many locations on the disk, this is comparatively fast.
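This write-ahead idea can be illustrated with a toy sketch: the engine changes its in-memory copy and appends the intent to a sequential log, so a crash before the data pages are flushed can be repaired by replaying the log. The class and names below are invented for illustration and model the idea only, not InnoDB's actual format:

```python
# Toy model of write-ahead logging; illustrates the idea only,
# not InnoDB's actual on-disk structures.

class ToyEngine:
    def __init__(self):
        self.txlog = []     # stands in for the sequential transaction log on disk
        self.disk = {}      # stands in for the data files
        self.memory = {}    # in-memory copy of the data

    def write(self, key, value):
        self.txlog.append((key, value))  # 1. persist the intent (fast sequential append)
        self.memory[key] = value         # 2. modify only the in-memory copy

    def flush(self):
        self.disk = dict(self.memory)    # background: data pages written back later

    def crash_and_recover(self):
        self.memory = dict(self.disk)    # unflushed in-memory changes are lost...
        for key, value in self.txlog:    # ...but replaying the persisted log repairs them
            self.memory[key] = value

engine = ToyEngine()
engine.write("id=1", "fuming")
engine.crash_and_recover()               # crash before any flush happened
print(engine.memory)                     # → {'id=1': 'fuming'}: the logged change survives
```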
After the transaction log has been persisted, the modified data in memory can be flushed back to disk slowly in the background. Most storage engines currently work this way, which is usually called write-ahead logging; modifying data thus involves writing to disk twice. A transaction-based operation in MySQL changes the corresponding data in memory directly; after the change you can see that it has taken effect even though it has not yet been written to the data files, because it is first written to the transaction log and flushed to disk later (the transaction log is written to disk sequentially, in append mode, which greatly improves transaction efficiency). If a modification has been recorded in the transaction log and persisted, but the data itself has not yet been written back to disk when the system crashes, the storage engine can automatically recover the modification when it restarts. The available recovery methods depend on the storage engine; InnoDB is an engine that supports transactions.

View the settings related to the transaction log:

show global variables like '%log%';

MySQL binary log

The binary log, also called the change log, mainly records MySQL statements that modify data or may cause data to change, together with each statement's timestamp, execution time, affected data, and so on. The binary log can therefore be used to find out what changes have been made in the MySQL database. A single binary log file usually has a maximum size of 1G.

show global variables like '%log%';

sql_log_bin = {ON|OFF}
# Controls the binary logging function at the session level (a statement executed over a connection to mysql is session level; directly importing a file into mysql, for example, is not). The default is ON, meaning logging is enabled. A user can change this variable at the session level, but must have the SUPER privilege.
binlog_cache_size = 32768
# Default value 32768. The binlog cache is used when binary logging is enabled: it is a memory area MySQL designed to improve the efficiency of binlog recording, used to temporarily cache binlog data for a short period. Generally, if the database has no large transactions and writes are not especially frequent, 2MB to 4MB is a suitable choice; if there are many large transactions and a large volume of writes, binlog_cache_size can be increased appropriately. The status counters binlog_cache_use and binlog_cache_disk_use show whether the configured binlog_cache_size is sufficient, that is, whether a large number of transactions had to fall back to temporary files on disk (binlog_cache_disk_use) because the cache was too small.

log_bin = mysql-bin
# Specifies the location and base name of the binlog; it is in the data directory by default.

binlog-format = {ROW|STATEMENT|MIXED}
# Specifies the binary log format; MIXED is recommended. If the format is set but binary logging is not enabled, MySQL generates warning log information at startup and records it in the error log.
# row: does not record the context of each SQL statement, only how each row of data was modified.
# statement: records every SQL statement that modifies data.
# mixed: a mixture of the two.

sync_binlog = 10
# Sets how often the binary log is synchronized to the disk file: 0 means no forced synchronization, and any positive value means a synchronization after that many write operations to the binary log. When autocommit is 1, every statement's execution triggers binary log synchronization; otherwise each transaction commit does.
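Pulling these variables together, a my.cnf fragment enabling the binary log might look like the sketch below; all values here are examples to adapt, not settings from the original environment:

```ini
[mysqld]
log-bin           = mysql-bin   # files become mysql-bin.000001, mysql-bin.000002, ...
binlog-format     = MIXED       # ROW | STATEMENT | MIXED
binlog_cache_size = 4M          # raise if large transactions spill to temporary files
sync_binlog       = 10          # sync to disk every 10 binlog writes (0 = no forced sync)
```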
Binary logging can be enabled by editing the log-bin option in my.cnf, in the following format (the DIR parameter specifies the storage path of the binary log files, and the filename parameter their base name):

[mysqld]
log-bin = DIR/filename

The files are named filename.number, where number takes the form 000001, 000002, and so on. Each time the MySQL service is restarted, or mysql> flush logs; is run, a new binary log file is generated, and the numbers of these log files keep increasing. Besides these files, a file named filename.index is also generated; it stores the list of all binary log files and is known as the binary log index.

Viewing the binary log: the binary log is written in a binary format; this format can store more information and makes writing the log more efficient, but you cannot open and view the binary log directly with an ordinary viewing command.

A small aside: the position recorded for a binary log event is normally the position where the previous event's execution ended. Each log file carries its own metadata, so for the MySQL version used here, the starting position in a binary log file is usually 107. Connect to MySQL and enter a few SQL statements that modify data to generate binary log entries, then view the specified binary log's information.

Viewing the binary log on the command line: since cat and similar tools cannot open the binary log directly, you must use the mysqlbinlog command. It is not recommended to open the binary log file currently in use while MySQL is performing reads and writes; if you must open it, run flush logs first so that a new file is started.
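To see how event positions drive a restore, here is a toy model of selecting events by a position range, roughly mirroring the semantics of mysqlbinlog's --start-position/--stop-position options; all positions and statements below are invented for the example:

```python
# Toy model of selecting binary log events by position range, roughly
# mirroring mysqlbinlog --start-position/--stop-position. Positions and
# statements are invented for illustration.

EVENTS = [
    (107, "create database magedu"),                   # 107: a typical first position
    (245, "create table m26 (id int, name char(20))"),
    (467, "insert into m26 values (1,'fuming')"),
    (677, "drop database magedu"),                     # a misoperation we must skip
]

def select_events(events, start, stop):
    """Events whose position p satisfies start <= p < stop."""
    return [sql for pos, sql in events if start <= pos < stop]

# Replay everything up to, but not including, the accidental DROP:
print(select_events(EVENTS, 107, 677))
```

Picking the stop position just before a destructive statement is the essence of point-in-time recovery: replay the history, stop short of the mistake.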
How to use the mysqlbinlog command. Export the log's contents to a file:

[root@stu18 data]# mysqlbinlog mysql-bin.000017 > /tmp/a.sql

Import that information back into the database:

[root@stu18 data]# mysql < /tmp/a.sql

Deleting binary log information: the binary logs record a large amount of information (including some that is useless), and if they are never cleaned up they waste a lot of disk space. However, careless deletion may leave the database unable to recover after a crash, so before deleting binary logs you must first back up the database. Only the binary logs from before that backup may be deleted; the newly generated log records must not be deleted (they make point-in-time restore possible). Do not delete the files directly after shutting down the MySQL server, because this may cause errors in the database. If you must delete the binary logs, proceed as follows: export a backup of the database, compress and archive the binary log files for storage, and then delete the binary files with the RESET MASTER statement, which deletes all binary logs.
The statement has the following form:

mysql> reset master;
Query OK, 0 rows affected (0.17 sec)
mysql> show binary logs;

MySQL backup tools

mysqldump: a logical backup tool applicable to all storage engines; can be used for warm backups and supports full and partial backups; for the InnoDB storage engine it supports hot backups.
File system tools such as cp and tar: physical backup tools applicable to all storage engines; used for cold backups, supporting full and partial backups.
lvm2 snapshot: an almost-hot backup; the physical backup itself is made with file system tools.
mysqlhotcopy: an almost-cold backup; only applicable to the MyISAM storage engine.

MySQL backup solutions

① mysqldump + binlog (recommended): full backups with mysqldump, incremental backups by backing up the binary logs.
② xtrabackup: for InnoDB, hot backup with support for full and incremental backups; for MyISAM, warm backup with full backups only.
③ lvm2 snapshot + binlog: almost hot, physical backup.

Syntax of the mysqldump command

mysqldump [OPTIONS] database [tables]: back up a single database, or one or more tables specified within it
mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3 ...]: back up one or more databases
mysqldump [OPTIONS] --all-databases [OPTIONS]: back up all databases

Other options:
-x, --lock-all-tables: lock all tables
-l, --lock-tables: lock the tables being backed up
--single-transaction: perform the backup inside a single large transaction
-C, --compress: compress the transmission
-E, --events: back up the event scheduler of the specified databases
-R, --routines: back up stored procedures and stored functions
--triggers: back up triggers
--master-data={0|1|2}
  0: do not record the binlog position
  1: record a CHANGE MASTER TO statement, not commented out
  2: record it as a commented statement
-F, --flush-logs: execute the flush logs command after locking the tables

mysqldump + binlog backup and recovery

1. Modify the MySQL configuration file to enable the binary log, then restart MySQL and enter mysql to check that a binary log has been generated:

vim /etc/my.cnf
log-bin = master-log

systemctl restart mariadb

2. Prepare the backup directory.

3. Prepare the database and table to back up:

mysql> create database magedu;
mysql> use magedu;
mysql> create table m26 (id int, name char(20));

4. Make a full backup:

mysqldump --all-databases --lock-all-tables --flush-logs --master-data=2 > /backup/`date +%F_%T`-all.sql

5. Insert data into the table:

mysql> use magedu;
mysql> show master status;
mysql> insert into m26 (id,name) values (1,'fuming'),(2,'zhangmeng');

6. Perform an incremental backup by backing up the binary log:

mysqlbinlog --start-position=245 --stop-position=479 /var/lib/mysql/master-log.000002 > /backup/binlog/binlog-`date +%F_%T`.sql

To determine the start and stop positions:

show master logs;
show binlog events in 'master-log.000002';

The end position must include the commit.

7. Continue to insert data, then delete the database without backing up, simulating a misoperation.

8. Data recovery. Since we deleted the database without a backup, first protect the last binary log; if it is lost, the data cannot be recovered. Check the position value just before the delete operation:

mysqlbinlog /var/lib/mysql/master-log.000002

9. Back up the binary log of the last operations:

mysqlbinlog --start-position=467 --stop-position=677 /var/lib/mysql/master-log.000002 > /backup/binlog/binlog-`date +%F_%T`.sql
ls /backup/binlog/

10. Import all the previous backups in order:

mysql < /backup/2017-12-07_20\:20\:04-all.sql                  # import the full backup
mysql < /backup/binlog/binlog-2017-12-07_20\:45\:17.sql        # import the incremental backup
mysql < /backup/binlog/binlog-2017-12-07_21\:05\:42.sql        # import the incremental backup made before the database was deleted

11. Check the databases and data.

xtrabackup

Xtrabackup is a MySQL database backup tool provided by Percona. According to the official introduction, it is an open-source tool that can perform hot backups of InnoDB and XtraDB databases.
Features:
(1) The backup process is fast and reliable.
(2) The backup process does not interrupt ongoing transactions.
(3) It can save disk space and network traffic through compression and other functions.
(4) Backups are verified automatically.
(5) Recovery is fast.

Experimental steps:

(1) Install xtrabackup:

yum install percona-xtrabackup -y

(2) Make a full backup:

innobackupex --user=root /backup

(3) Add some data:

mysql -uroot
create database magedu;
use magedu;
create table m26 (id int, name char(20));
insert into m26 values (007,'fuming'),(008,'zhangmeng');

(4) Make an incremental backup:

innobackupex --incremental /backup/ --incremental-basedir=/backup/2017-11-16_16-53-4

(5) Prepare the data for recovery.
1. First, apply the logs to the full backup:
innobackupex --apply-log --redo-only BASE-DIR
(BASE-DIR is the directory of the full backup.)
2. Then execute, for each incremental backup:
innobackupex --apply-log --redo-only BASE-DIR --incremental-dir=INCREMENTAL-DIR-1
(INCREMENTAL-DIR-1 is the incremental backup directory.) For example:
innobackupex --apply-log --redo-only /backup/2017-11-16_16-53-43 --incremental-dir=/backup/2017-11-16_17-17-52/

(6) Recovery phase: restore the data.

mv /var/lib/mysql /var/lib/mysql.bak    # simulate deleting the database
mkdir /var/lib/mysql
cd /var/lib/mysql
innobackupex --copy-back /backup/2017-11-16_16-53-43    # restore the full backup

lvm2 snapshot + binlog

Before the experiment, a quick review of LVM snapshots. In simple terms, an LVM snapshot saves the metadata of all files in the snapshot's source partition at a point in time. If a source file has not changed, accessing the corresponding file through the snapshot volume points directly to the source file in the source partition; if the source file has changed, the corresponding file in the snapshot volume does not change. Snapshot volumes are mainly used to assist in backing up files.

Experimental steps:

1. Add a hard disk and make the kernel rescan for it:
echo '- - -' > /sys/class/scsi_host/host2/scan
2. Partition the disk, setting the partition type to 8e (Linux LVM), then run partx -a /dev/sdb so the kernel recognizes the new partitions.
3. pvcreate /dev/sdb1: create a physical volume.
4. vgcreate myvg /dev/sdb1: create a volume group.
5. lvcreate -n mydata -L 5G myvg: create a logical volume.
6. mkfs.ext4 /dev/mapper/myvg-mydata: format the logical volume.
7. mount /dev/mapper/myvg-mydata /lvm_data: mount it for use.
8. Modify the MySQL configuration so the data files are on the logical volume: datadir=/lvm_data
9. service mysqld restart: start the MySQL service.
10. Create a database and perform some operations.
11. mysql> FLUSH TABLES WITH READ LOCK; #lock the tables
12. lvcreate -L 1G -n mydata-snap -p r -s /dev/mapper/myvg-mydata #create the snapshot volume (Logical volume "mydata-snap" created.)
13. mysql> UNLOCK TABLES; #unlock all tables
14. mount /dev/myvg/mydata-snap /lvm_snap/ #mount the snapshot
15. cd /lvm_snap && tar cvf /tmp/mysqlback.tar ./* #package the physical backup
16. umount /lvm_snap/ #unmount the snapshot
17. lvremove myvg/mydata-snap #remove the snapshot
18. Delete the MySQL data: rm -rf /lvm_data/*
19. Unpack the archive to restore the deleted data: cd /lvm_data && tar xvf /tmp/mysqlback.tar
20. Verify that the database data has been restored correctly.
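The locking and snapshot steps can be collected into a script. The sketch below is a dry run: each command is printed instead of executed, since the real sequence needs root, LVM, and a running MySQL. The volume and path names (myvg/mydata, /lvm_snap) follow the example above:

```shell
#!/bin/sh
# Dry-run sketch of the LVM snapshot backup sequence. run() prints each
# step instead of executing it; replace run() with direct execution to
# use this for real (requires root, LVM, and a running MySQL).
run() { echo "+ $*"; }

run mysql -e 'FLUSH TABLES WITH READ LOCK;'       # freeze writes for a consistent view
run lvcreate -L 1G -n mydata-snap -p r -s /dev/mapper/myvg-mydata
run mysql -e 'UNLOCK TABLES;'                     # lock is held only while snapshotting
run mount /dev/myvg/mydata-snap /lvm_snap/
run tar cvf /tmp/mysqlback.tar -C /lvm_snap .     # physical backup from the frozen view
run umount /lvm_snap/
run lvremove -f myvg/mydata-snap
```

The key design point is that the global read lock is released as soon as the snapshot exists; the slow tar step reads from the frozen snapshot, not from the live volume.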
Summary

Okay, that's all for today; see you next time. The above practical tutorial on implementing enterprise-level log management, backup, and recovery with MySQL is everything I wanted to share. I hope it can serve as a reference, and I hope you will continue to support 123WORDPRESS.COM.