Background: As a DBA, I do most DDL changes on large tables with Percona's pt-online-schema-change. This article explains the use of another tool, gh-ost. It does not rely on triggers: it poses as a replica, obtains incremental changes from the ROW-format binlog, and applies them to the ghost table asynchronously. Before using gh-ost, you can first read the article "GitHub's open source MySQL online schema change tool" or the official documentation to understand its features and principles. This article only covers usage.

1) Download and install: https://github.com/github/gh-ost/tags

2) Parameter description: gh-ost --help

Usage of gh-ost:

--aliyun-rds: Set to true when running against an Alibaba Cloud (Aliyun RDS) database.
--allow-master-master: Allow gh-ost to run in a master-master replication topology. Generally used together with the --assume-master-host parameter.
--allow-nullable-unique-key: Allow the unique key the data migration relies on to contain NULL values. By default, NULLable unique keys are not allowed; if the chosen key does contain NULLs, the migrated data may be incorrect. Use with caution.
--allow-on-master: Allow gh-ost to run directly on the master. By default, gh-ost connects to a replica.
--alter string: The DDL statement (the part after ALTER TABLE).
--approve-renamed-columns ALTER: If the DDL renames a column, gh-ost detects this and refuses to proceed unless you confirm the rename with --approve-renamed-columns.
--ask-pass: Prompt for the MySQL password.
--assume-master-host string: Tell gh-ost which host is the master, in "ip:port" or "hostname:port" format. Useful in master-master topologies, or when gh-ost cannot detect the master on its own.
--assume-rbr: Assert that binlog_format=ROW is already set on the instance gh-ost connects to. This avoids running stop slave/start slave on the replica, so the gh-ost user does not need the SUPER privilege.
--check-flag
--chunk-size int: Number of rows processed per iteration (allowed range: 100-100000). Default: 1000.
--concurrent-rowcount: If true (the default), gh-ost estimates the row count (using explain select count(*)) concurrently with row-copying and adjusts the ETA as it goes; otherwise it estimates the row count first and starts row-copying afterwards.
--conf string: Path to a gh-ost configuration file.
--critical-load string: Comma-delimited list of status-name=threshold pairs, same format as --max-load. When a MySQL status variable exceeds its threshold, gh-ost panics and exits. For example, --critical-load Threads_connected=20,Connections=1500 makes gh-ost stop and exit when Threads_connected>20 or Connections>1500.
--critical-load-hibernate-seconds int: When the critical load is reached, hibernate for the given number of seconds instead of exiting; during hibernation gh-ost does not read or write anything on any server.
--critical-load-interval-millis int: When 0, gh-ost exits immediately on reaching the critical load. When non-zero, gh-ost re-checks after the given number of milliseconds and exits only if the critical load is still exceeded.
--cut-over string: Cut-over type: atomic/two-step. atomic (the default) uses the GitHub algorithm; two-step uses the facebook-OSC algorithm.
--cut-over-exponential-backoff
--cut-over-lock-timeout-seconds int: Maximum lock wait during gh-ost's cut-over phase; on timeout, gh-ost retries the cut-over. Default: 3.
--database string: Database name.
--debug: Debug mode.
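The --max-load/--critical-load value format described above (comma-delimited status-name=threshold pairs, compared against SHOW GLOBAL STATUS) is simple enough to sketch in a few lines of Python. This is an illustration only, not gh-ost's actual code; the function names are invented:

```python
def parse_load_thresholds(spec):
    """Parse a gh-ost style load spec such as 'Threads_running=100,Threads_connected=500'."""
    thresholds = {}
    for pair in filter(None, (p.strip() for p in spec.split(","))):
        name, _, value = pair.partition("=")
        thresholds[name] = int(value)
    return thresholds

def exceeds_load(status, thresholds):
    """Return True if any status variable exceeds its threshold
    (gh-ost would then throttle, for --max-load, or panic, for --critical-load)."""
    return any(status.get(name, 0) > limit for name, limit in thresholds.items())
```

With the --critical-load example from the text, `parse_load_thresholds('Threads_connected=20,Connections=1500')` yields the two thresholds, and a status snapshot with Threads_connected=25 exceeds them.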
--default-retries int: Number of times various operations are retried before panicking. Default: 60.
--discard-foreign-keys: For tables with foreign keys: when gh-ost creates the ghost table, it does not create the foreign keys on it. Useful when you deliberately want to drop foreign keys; otherwise use with caution.
--dml-batch-size int: Number of DML events applied in a single transaction (range 1-100). Default: 10.
--exact-rowcount: Count the table rows exactly (using select count(*)) for a more accurate time estimate.
--execute: Actually run the alter and migrate the table. The default is noop: gh-ost only tests and exits. For the ALTER TABLE statement to really be applied, you must explicitly pass --execute.
--exponential-backoff-max-interval int
--force-named-cut-over: If true, the 'unpostpone | cut-over' interactive command must name the migrated table.
--force-table-names string: Prefix to use for the temporary table names.
--heartbeat-interval-millis int: gh-ost heartbeat interval. Default: 500.
--help
--hooks-hint string: Arbitrary message injected into hooks via GH_OST_HOOKS_HINT.
--hooks-path string: Directory containing hook files (default empty, i.e. hooks are disabled). gh-ost looks in this directory for hook files matching its naming convention and executes them.
--host string: MySQL IP/hostname.
--initially-drop-ghost-table: Check for and drop an existing ghost table before starting. Not recommended; clean up leftover ghost tables manually. By default this is off, and gh-ost exits if a ghost table already exists.
--initially-drop-old-table: Check for and drop an existing old (_del) table before starting. Not recommended; clean up leftover tables manually. By default this is off, and gh-ost exits if the old table already exists.
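The --default-retries behaviour (retry an operation some number of times, then panic) can be pictured as a plain retry loop. A minimal sketch, with invented names and not taken from gh-ost's source:

```python
def with_retries(operation, max_retries=60):
    """Retry an operation up to max_retries times; if it never succeeds, give up
    (at this point gh-ost would panic and exit)."""
    last_error = None
    for _ in range(max_retries):
        try:
            return operation()
        except Exception as exc:  # gh-ost retries on transient errors such as lock wait timeouts
            last_error = exc
    raise RuntimeError("operation failed after %d retries" % max_retries) from last_error
```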
--initially-drop-socket-file: Forcibly delete an existing socket file before starting. Not recommended: it may remove the socket of a running gh-ost process and cause that DDL to fail.
--master-password string: MySQL master password.
--master-user string: MySQL master account.
--max-lag-millis int: Maximum tolerated replication lag. When replica lag exceeds this value, gh-ost throttles. Default: 1500 milliseconds.
--max-load string: Comma-delimited status-name=threshold pairs, e.g. 'Threads_running=100,Threads_connected=500'. When a status variable exceeds its threshold, gh-ost throttles writes.
--migrate-on-replica: Run the migration on the replica rather than the master.
--nice-ratio float: Ratio of extra sleep time to copy time per chunk, in the range [0.0..100.0]. 0: no sleep between chunks, one chunk is copied right after another; 1: sleep one additional millisecond for every millisecond of row-copying; 0.7: after a row-copy that took 10 milliseconds, sleep an additional 7 milliseconds.
--ok-to-drop-table: Drop the old table when the gh-ost operation completes. By default the old table is kept and remains as _tablename_del.
--panic-flag-file string: When this file is created, gh-ost exits immediately.
--password string: MySQL password.
--port int: MySQL port. Preferably connect to a replica.
--postpone-cut-over-flag-file string: While this file exists, gh-ost's cut-over phase is postponed and data keeps being replicated, until the file is deleted.
--quiet: Quiet mode.
--replica-server-id uint: The server_id gh-ost uses when posing as a replica.
--replication-lag-query string: Deprecated.
--serve-socket-file string: Absolute path of gh-ost's listening socket file.
--serve-tcp-port int: TCP port for gh-ost to listen on. Disabled by default.
--skip-foreign-key-checks: Set to true when you are certain the table has no foreign keys and want to skip gh-ost's validation.
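The --nice-ratio arithmetic described above is just "extra sleep = copy time × ratio". A one-function sketch (illustrative; the function name is made up):

```python
def chunk_sleep_millis(copy_millis, nice_ratio):
    """Extra sleep after copying one chunk: nice-ratio times the time the copy took.
    nice_ratio=0 means no sleep; 1 doubles the wall time; 0.7 adds 7ms per 10ms of copying."""
    if not 0.0 <= nice_ratio <= 100.0:
        raise ValueError("nice-ratio must be in [0.0, 100.0]")
    return copy_millis * nice_ratio
```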
--skip-renamed-columns ALTER: If the DDL renames a column (e.g. change column), gh-ost detects this and by default refuses to proceed. This flag tells gh-ost to skip migrating the renamed column, treating it as an unrelated column. This is dangerous: you will lose all values in that column.
--stack: Add error stack traces.
--switch-to-rbr: Tell gh-ost to automatically switch the replica's binlog_format to ROW.
--table string: Table name.
--test-on-replica: Test gh-ost on a replica, including the data migration. When the migration completes, replication is stopped, the original table and the ghost table are swapped and then immediately swapped back. Replication remains stopped so that you can compare the two tables.
--test-on-replica-skip-replica-stop: With --test-on-replica, do not stop replication during the process.
--throttle-additional-flag-file string: When this file is created, gh-ost throttles immediately. Intended for controlling several concurrent gh-ost operations at once: create the file to pause them all, delete it to resume them all.
--throttle-control-replicas string: List of replicas whose replication lag gh-ost should check.
--throttle-flag-file string: When this file is created, gh-ost throttles immediately. Suitable for controlling a single gh-ost operation; --throttle-additional-flag-file is suitable for controlling multiple operations.
--throttle-http string
--throttle-query string: Throttle query, executed once per second. A return value of 0 means no throttling is needed; a value > 0 means gh-ost must throttle. The query runs on the migrated server, so make sure it is lightweight.
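Several of the flags above feed one throttling decision: replication lag vs --max-lag-millis, status variables vs --max-load, the --throttle-query result, and the throttle flag files. How these signals might combine can be sketched as follows (an illustration under assumed semantics, not gh-ost's actual code; all names are invented):

```python
import os

def throttle_reasons(lag_millis, max_lag_millis, status, load_thresholds,
                     throttle_query_value, flag_files):
    """Collect the reasons a gh-ost-like tool would currently throttle row-copying."""
    reasons = []
    if lag_millis > max_lag_millis:  # --max-lag-millis
        reasons.append("replication lag %dms > max-lag-millis %dms" % (lag_millis, max_lag_millis))
    for name, limit in load_thresholds.items():  # --max-load pairs
        if status.get(name, 0) > limit:
            reasons.append("max-load: %s > %d" % (name, limit))
    if throttle_query_value and throttle_query_value > 0:  # --throttle-query result
        reasons.append("throttle-query returned %d" % throttle_query_value)
    for path in flag_files:  # --throttle-flag-file / --throttle-additional-flag-file
        if os.path.exists(path):
            reasons.append("flag file present: %s" % path)
    return reasons
```

An empty list means copying may proceed; any entry means pause and re-check later.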
--timestamp-old-table: Use a timestamp in the old table's name. This makes the old table names of successive migrations unique and conflict-free.
--tungsten: Tell gh-ost you are running a Tungsten replication topology.
--user string: MySQL user.
--verbose
--version

3) Instructions for use:

The prerequisite is that the binlog format of the MySQL instance gh-ost operates on is ROW. If you test on a replica, the replica must also use ROW and have log_slave_updates enabled. Adjust the parameters as required according to the descriptions above.

Environment: master: 192.168.163.131; replica: 192.168.163.130

DDL process:
① Check for foreign keys and triggers.
② Check the table's primary key information.
③ Check whether the instance is a master or a replica, whether log_slave_updates is enabled, and the binlog settings.
④ Check whether temporary tables ending in gho or del already exist.
⑤ Create the table ending in ghc that stores migration progress, binlog positions, etc. --- the steps above are the verification stage
⑥ Initialize the binlog stream connection and register the binlog listener --- the steps below are the migration stage
⑦ Create the temporary table ending in gho and apply the DDL to it.
⑧ In transactions, copy the source table's data into the gho table in primary-key order, commit, and apply the binlog events. --- then comes the cut-over stage
⑨ Lock the source table and rename: the source table to the source_del table, and the gho table to the source table's name.
⑩ Clean up the ghc table.

1. DDL on a single instance: A single instance is effectively a master, so the --allow-on-master parameter is required and binlog_format must be ROW.

gh-ost --user="root" --password="root" --host=192.168.163.131 --database="test" --table="t1" --alter="ADD COLUMN cc2 varchar(10),add column cc3 int not null default 0 comment 'test' " --allow-on-master --execute

2. DDL on a master-replica setup: There are two options. One is to run directly on the master as in step 1 and let the change replicate to the replica. The other is to connect to the replica and have gh-ost migrate the data on the master (only the replica's binlog needs to be ROW, not the master's):

gh-ost --user="root" --password="root" --host=192.168.163.130 --database="test" --table="t" --initially-drop-old-table --alter="ADD COLUMN y1 varchar(10),add column y2 int not null default 0 comment 'test' " --execute

The operation at this time is roughly as follows:
When executing the DDL, gh-ost runs stop slave/start slave on the replica. If you are sure the replica's binlog is already ROW, you can add the parameter --assume-rbr. If the replica's binlog is not ROW, the parameter --switch-to-rbr converts it to ROW; note that after the run completes, binlog_format is not switched back to its original value. --assume-rbr and --switch-to-rbr cannot be used together.

3. DDL test on a replica:

gh-ost --user="root" --password="root" --host=192.168.163.130 --database="test" --table="t" --alter="ADD COLUMN abc1 varchar(10),add column abc2 int not null default 0 comment 'test' " --test-on-replica --switch-to-rbr --execute

The --test-on-replica parameter tests gh-ost on the replica, including the data migration. When the migration completes, replication is stopped, the original table and the ghost table are swapped and then immediately swapped back. Replication remains stopped so that you can compare the two tables. If you do not want replication stopped, also add --test-on-replica-skip-replica-stop.

The above are gh-ost's three modes of operation. In all of them, the temporary tables are not cleaned up at the end and must be removed manually; if they still exist before the next run, that run fails. They can be dropped via these parameters:

--initially-drop-ghost-table: Check for and drop an existing ghost table before starting. Not recommended; clean up leftover ghost tables manually. By default this is off, and gh-ost exits if a ghost table already exists.
--initially-drop-old-table: Check for and drop an existing old (_del) table before starting. Not recommended; clean up leftover tables manually. By default this is off, and gh-ost exits if the old table already exists.
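The three modes above differ mainly in which flags are passed. A hypothetical helper that assembles the argument list per mode (the flag combinations follow the examples in this article; the function itself is invented for illustration, and --ask-pass stands in for real credential handling):

```python
def build_gh_ost_cmd(host, database, table, alter, mode, execute=False):
    """Assemble a gh-ost argument list for one of the three modes described above."""
    args = ["gh-ost",
            "--user=root", "--ask-pass",
            "--host=%s" % host,
            "--database=%s" % database,
            "--table=%s" % table,
            "--alter=%s" % alter]
    if mode == "master":            # mode 1: run directly on the master / single instance
        args.append("--allow-on-master")
    elif mode == "replica-test":    # mode 3: dry-run the migration on a replica
        args += ["--test-on-replica", "--switch-to-rbr"]
    elif mode != "replica":         # mode 2 ("replica"): connect to the replica, no extra flag
        raise ValueError("unknown mode: %s" % mode)
    if execute:                     # without --execute, gh-ost only performs a noop test
        args.append("--execute")
    return args
```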
--initially-drop-socket-file: Forcibly delete an existing socket file before starting. Not recommended: it may remove the socket of a running gh-ost process and cause that DDL to fail.
--ok-to-drop-table: Drop the old table when the gh-ost operation completes. By default the old table is kept and remains as _tablename_del.

There are other useful parameters, such as --exact-rowcount, --max-lag-millis, --max-load, etc.; see the descriptions above. A typical invocation with the most commonly used parameters looks like:

gh-ost --user= --password= --host= --database= --table= --max-load=Threads_running=30 --chunk-size=1000 --serve-socket-file=/tmp/gh-ost.test.sock --exact-rowcount --allow-on-master/--test-on-replica --initially-drop-ghost-table/--initially-drop-old-table/--initially-drop-socket-file --max-lag-millis= --max-load='Threads_running=100,Threads_connected=500' --ok-to-drop-table

4) Additional instructions: terminating, postponing, throttling

gh-ost --user="root" --password="root" --host=192.168.163.131 --database="test" --table="t1" --alter="ADD COLUMN o2 varchar(10),add column o1 int not null default 0 comment 'test' " --exact-rowcount --serve-socket-file=/tmp/gh-ost.t1.sock --panic-flag-file=/tmp/gh-ost.panic.t1.flag --postpone-cut-over-flag-file=/tmp/ghost.postpone.t1.flag --allow-on-master --execute

① Flag file that terminates the operation: --panic-flag-file
Create the file to terminate the run. In this example, creating /tmp/gh-ost.panic.t1.flag terminates the running gh-ost. The temporary tables must then be cleaned up manually.

② Flag file that forbids cut-over, i.e. postpones the table-name switch while data replication continues normally: --postpone-cut-over-flag-file
Create the file to delay the cut-over. In this example, while /tmp/ghost.postpone.t1.flag exists, gh-ost completes the row copy but does not switch tables; it keeps synchronizing data changes from the original table to the temporary table.

③ Listening on a socket so the operator can change parameters while the command runs: --serve-socket-file, --serve-tcp-port (disabled by default)
gh-ost creates a socket file and accepts parameter adjustments through it. Without this, if load or lag rose during an operation, you would have to terminate it, re-tune chunk-size and similar parameters, and rerun the command; with the socket interface, adjustments can be made on the fly. For example:

Pause and resume:
# pause
echo throttle | socat - /tmp/gh-ost.test.t1.sock
# resume
echo no-throttle | socat - /tmp/gh-ost.test.t1.sock

Change the throttling parameters:
echo chunk-size=100 | socat - /tmp/gh-ost.t1.sock
echo max-lag-millis=200 | socat - /tmp/gh-ost.t1.sock
echo max-load=Thread_running=3 | socat - /tmp/gh-ost.t1.sock

5) Comparison test with pt-online-schema-change:

1. With no writes to the table and default parameters, the DDL takes about the same time with both tools; after all, both are row-copy operations.
2. Under heavy writes to the table (sysbench), pt-osc applies changes in multiple threads and finishes quickly, whereas gh-ost applies binlog events in a single thread, simulating a replica; in extreme cases the DDL can be very hard to complete.

Conclusion: Although gh-ost does not require triggers and puts much less pressure on the master, its DDL efficiency in high-concurrency write scenarios is still lower than pt-osc's, so it should still be run during off-peak business hours. For related tests, see the performance comparison between gh-ost and pt-osc.
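The socat commands above simply write one line to gh-ost's unix socket and read back the reply; the same can be done from Python. A sketch mimicking `echo <command> | socat - <socket_path>` (the function name is invented):

```python
import socket

def send_gh_ost_command(socket_path, command):
    """Write one interactive command (e.g. 'throttle', 'chunk-size=100') to a gh-ost
    unix socket and return the reply, like `echo <command> | socat - <socket_path>`."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(socket_path)
        sock.sendall(command.encode() + b"\n")
        sock.shutdown(socket.SHUT_WR)  # signal end of input, as socat does on EOF
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode()
```

For example, `send_gh_ost_command("/tmp/gh-ost.test.t1.sock", "throttle")` would pause the copy just like the socat one-liner.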
6) Wrapper script:

Environment: M: 192.168.163.131 (ROW), S: 192.168.163.130/132

Wrapper script: gh-ost.py

#!/bin/env python
# -*- encoding: utf-8 -*-
#----------------------------------------------
# Purpose: gh-ost
# Created: 2018-06-16
#----------------------------------------------

import MySQLdb
import re
import sys
import time
import subprocess
import os
from optparse import OptionParser

def calc_time(func):
    def _deco(*args, **kwargs):
        begin_time = time.time()
        func(*args, **kwargs)
        cost_time = time.time() - begin_time
        print 'cost time: %ss' % round(cost_time, 2)
    return _deco

def get_table_count(conn, dbname, tbname):
    query = ''' SELECT count(*) FROM %s.%s ''' % (dbname, tbname)
    cursor = conn.cursor()
    cursor.execute(query)
    row_nums = cursor.fetchone()
    cursor.close()
    conn.close()
    return row_nums

def online_ddl(conn, ddl_cmd):
    cursor = conn.cursor()
    cursor.execute(ddl_cmd)
    conn.commit()
    cursor.close()
    conn.close()

#@calc_time
def run_cmd(cmd):
    p = subprocess.Popen(cmd, shell=True)
    return p, p.pid

def drop_ghost_table(conn, ghost_name_list):
    try:
        cursor = conn.cursor()
        query = ''' DROP TABLE IF EXISTS %s; ''' % (ghost_name_list)
        cursor.execute(query)
        conn.commit()
        cursor.close()
        conn.close()
    except Exception, e:
        print e

if __name__ == "__main__":
    parser = OptionParser()
    parser.add_option("-P", "--Port", help="Port for search", dest="port")
    parser.add_option("-D", "--Dbname", help="the Dbname to use", dest="dbname")
    parser.add_option("-T", "--Table", help="the Table to use", dest="tablename")
    (options, args) = parser.parse_args()

    if not options.port:
        print 'params port need to apply'
        exit()
    if not options.dbname:
        print 'params dbname need to apply'
        exit()
    if not options.tablename:
        print 'params tablename need to apply'
        exit()

    gh_ost_socket = '/tmp/gh-ost.%s.%s.sock' % (options.dbname, options.tablename)
    # Termination flag
    panic_flag = '/tmp/gh-ost.panic.%s.%s.flag' % (options.dbname, options.tablename)
    # postpone_flag = '/tmp/gh-ost.postpone.%s.%s.flag' % (options.dbname, options.tablename)
    # Throttle flag
    throttle_flag = '/tmp/gh-ost.throttle.%s.%s' % (options.dbname, options.tablename)
    # socket = '/data/%s/tmp/mysql.sock' % (options.port)
    socket = '/var/run/mysqld/mysqld.sock'

    get_conn = MySQLdb.connect(host='192.168.163.131', port=int(options.port), user='root', passwd='root', db=options.dbname, unix_socket=socket, charset='utf8')
    conn = MySQLdb.connect(host='192.168.163.131', port=int(options.port), user='root', passwd='root', db=options.dbname, unix_socket=socket, charset='utf8')

    (table_count,) = get_table_count(get_conn, options.dbname, options.tablename)
    print ("\033[0;32m%s\033[0m" % "Number of tables: %s" % table_count)

    DDL_CMD = raw_input('Enter DDL CMD : ').replace('`', '')
    gh_command_list = re.split('[ ]+', DDL_CMD)
    if gh_command_list[0].upper() == 'CHANGE' and gh_command_list[1] != gh_command_list[2]:
        print ("\033[0;31m%s\033[0m" % "renamed columns' data will be lost, gh-ost exit...")
        exit()

    if table_count <= 10000:
        ddl = ''' ALTER TABLE %s %s ''' % (options.tablename, DDL_CMD)
        print ("\033[0;36m%s\033[0m" % ddl)
        print ("\033[0;32m%s\033[0m" % "online ddl ...")
        online_ddl(conn, ddl)
        print ("\033[0;32m%s\033[0m" % "Execution completed...")
        exit()
    else:
        MAX_LOAD = raw_input('Enter Max Threads_running【25】 : ')
        if not MAX_LOAD:
            Threads_running = 25
        else:
            try:
                Threads_running = int(MAX_LOAD)
            except ValueError:
                print ("\033[0;31m%s\033[0m" % "Input type error, exit...")
                exit()
        CHUNK_SIZE = raw_input('Enter Max chunk-size【1000】 : ')
        if not CHUNK_SIZE:
            chunk_size = 1000
        else:
            try:
                chunk_size = int(CHUNK_SIZE)
            except ValueError:
                print ("\033[0;31m%s\033[0m" % "Input type error, exit...")
                exit()
        print ("\033[0;32m%s\033[0m" % "gh-ost ddl ...")
        # --postpone-cut-over-flag-file=%s
        gh_command = '''/usr/bin/gh-ost --user="root" --password="root" --host=192.168.163.131 --port=%s --database="%s" --table="%s" --allow-on-master --max-load='Threads_running=%d' --chunk-size=%d --serve-socket-file=%s --panic-flag-file=%s --throttle-additional-flag-file=%s --alter="%s" --execute ''' % (options.port, options.dbname, options.tablename, Threads_running, chunk_size, gh_ost_socket, panic_flag, throttle_flag, DDL_CMD)
        print ("\033[0;36m%s\033[0m" % gh_command)

        child, pid = run_cmd(gh_command)
        print ("\033[0;31mgh-ost's PID: %s\033[0m" % pid)
        print ("\033[0;33mCreate: [touch %s] file, suspend DDL ...\033[0m" % throttle_flag)
        try:
            child.wait()
        except:
            child.terminate()
            # clean
            ghost_name_list = '_%s_ghc,_%s_gho' % (options.tablename, options.tablename)
            drop_ghost_table(conn, ghost_name_list)
            if os.path.exists(gh_ost_socket):
                os.system('rm -r %s' % gh_ost_socket)
                print ("\033[0;32m%s\033[0m" % "Cleaning completed...")
                exit()
            print ("\033[0;32m%s\033[0m" % "Cleaning completed...")
            exit()
        finally:
            pass

Run:

root@test2:~# python gh-ost.py -P3306 -Dtest -Tzjy
Number of tables: 1310720
Enter DDL CMD : ADD COLUMN q1 varchar(10), ADD COLUMN q2 varchar(10)
Enter Max Threads_running【25】 : 10
Enter Max chunk-size【1000】 : 200
gh-ost ddl ...
/usr/bin/gh-ost --user="root" --password="root" --host=192.168.163.131 --port=3306 --database="test" --table="zjy" --allow-on-master --max-load='Threads_running=10' --chunk-size=200 --serve-socket-file=/tmp/gh-ost.test.zjy.sock --panic-flag-file=/tmp/gh-ost.panic.test.zjy.flag --throttle-additional-flag-file=/tmp/gh-ost.throttle.test.zjy --alter="ADD COLUMN q1 varchar(10),ADD COLUMN q2 varchar(10)" --execute
gh-ost's PID: 2105
Create: [touch /tmp/gh-ost.throttle.test.zjy] file, suspend DDL ...
2018/06/17 14:37:37 binlogsyncer.go:79: [info] create BinlogSyncer with config {99999 mysql 192.168.163.131 3306 root false false <nil>}
2018/06/17 14:37:37 binlogsyncer.go:246: [info] begin to sync binlog from position (mysql-bin.000013, 31197930)
2018/06/17 14:37:37 binlogsyncer.go:139: [info] register slave for master server 192.168.163.131:3306
2018/06/17 14:37:37 binlogsyncer.go:573: [info] rotate to (mysql-bin.000013, 31197930)
# Migrating `test`.`zjy`; Ghost table is `test`.`_zjy_gho`
# Migrating test2:3306; inspecting test2:3306; executing on test2
# Migration started at Sun Jun 17 14:37:37 +0800 2018
# chunk-size: 200; max-lag-millis: 1500ms; dml-batch-size: 10; max-load: Threads_running=10; critical-load: ; nice-ratio: 0.000000
# throttle-additional-flag-file: /tmp/gh-ost.throttle.test.zjy
# panic-flag-file: /tmp/gh-ost.panic.test.zjy.flag
# Serving on unix socket: /tmp/gh-ost.test.zjy.sock
Copy: 0/1305600 0.0%; Applied: 0; Backlog: 0/1000; Time: 0s(total), 0s(copy); streamer: mysql-bin.000013:31199542; State: migrating; ETA: N/A
Copy: 0/1305600 0.0%; Applied: 0; Backlog: 0/1000; Time: 1s(total), 1s(copy); streamer: mysql-bin.000013:31202866; State: migrating; ETA: N/A
Copy: 44400/1305600 3.4%; Applied: 0; Backlog: 0/1000; Time: 2s(total), 2s(copy); streamer: mysql-bin.000013:33352548; State: migrating; ETA: 56s
Copy: 91200/1305600 7.0%; Applied: 0; Backlog: 0/1000; Time: 3s(total), 3s(copy); streamer: mysql-bin.000013:35598132; State: migrating; ETA: 39s
Copy: 135200/1305600 10.4%; Applied: 0; Backlog: 0/1000; Time: 4s(total), 4s(copy); streamer: mysql-bin.000013:37727925; State: migrating; ETA: 34s
Copy: 174000/1305600 13.3%; Applied: 0; Backlog: 0/1000; Time: 5s(total), 5s(copy); streamer: mysql-bin.000013:39588956; State: migrating; ETA: 32s
Copy: 212200/1305600 16.3%; Applied: 0; Backlog: 0/1000; Time: 6s(total), 6s(copy); streamer: mysql-bin.000013:41430090; State: migrating; ETA: 30s
Copy: 254800/1305600 19.5%; Applied: 0; Backlog: 0/1000; Time: 7s(total), 7s(copy); streamer: mysql-bin.000013:43483555; State: migrating; ETA: 28s
Copy: 303600/1305600 23.3%; Applied: 0; Backlog: 0/1000; Time: 8s(total), 8s(copy); streamer: mysql-bin.000013:45834978; State: migrating; ETA: 26s
Copy: 351200/1305600 26.9%; Applied: 0; Backlog: 0/1000; Time: 9s(total), 9s(copy); streamer: mysql-bin.000013:48128675; State: migrating; ETA: 24s
Copy: 401400/1305600 30.7%; Applied: 0; Backlog: 0/1000; Time: 10s(total), 10s(copy); streamer: mysql-bin.000013:50547454; State: migrating; ETA: 22s
Copy: 451200/1305600 34.6%; Applied: 0; Backlog: 0/1000; Time: 11s(total), 11s(copy); streamer: mysql-bin.000013:52946991; State: migrating; ETA: 20s
Copy: 490000/1305600 37.5%; Applied: 0; Backlog: 0/1000; Time: 12s(total), 12s(copy); streamer: mysql-bin.000013:54817320; State: migrating; ETA: 19s
Copy: 529600/1305600 40.6%; Applied: 0; Backlog: 0/1000; Time: 13s(total), 13s(copy); streamer: mysql-bin.000013:56735431; State: migrating; ETA: 19s
Copy: 589200/1305600 45.1%; Applied: 0; Backlog: 0/1000; Time: 14s(total), 14s(copy); streamer: mysql-bin.000013:59606450; State: migrating; ETA: 17s
Copy: 639400/1305600 49.0%; Applied: 0; Backlog: 0/1000; Time: 15s(total), 15s(copy); streamer: mysql-bin.000013:62025561; State: migrating; ETA: 15s
Copy: 695200/1305600 53.2%; Applied: 0; Backlog: 0/1000; Time: 16s(total), 16s(copy); streamer: mysql-bin.000013:64704138; State: migrating; ETA: 14s
Copy: 751200/1305600 57.5%; Applied: 0; Backlog: 0/1000; Time: 17s(total), 17s(copy); streamer: mysql-bin.000013:67401961; State: migrating; ETA: 12s
Copy: 803800/1305600 61.6%; Applied: 0; Backlog: 0/1000; Time: 18s(total), 18s(copy); streamer: mysql-bin.000013:69935884; State: migrating; ETA: 11s
Copy: 856400/1305600 65.6%; Applied: 0; Backlog: 0/1000; Time: 19s(total), 19s(copy); streamer: mysql-bin.000013:72470455; State: migrating; ETA: 9s
Copy: 907400/1305600 69.5%; Applied: 0; Backlog: 0/1000; Time: 20s(total), 20s(copy); streamer: mysql-bin.000013:74927401; State: migrating; ETA: 8s
Copy: 958800/1305600 73.4%; Applied: 0; Backlog: 0/1000; Time: 21s(total), 21s(copy); streamer: mysql-bin.000013:77404243; State: migrating; ETA: 7s
Copy: 999200/1305600 76.5%; Applied: 0; Backlog: 0/1000; Time: 22s(total), 22s(copy); streamer: mysql-bin.000013:79351223; State: migrating; ETA: 6s
Copy: 1009600/1305600 77.3%; Applied: 0; Backlog: 0/1000; Time: 23s(total), 23s(copy); streamer: mysql-bin.000013:79855229; State: migrating; ETA: 6s
Copy: 1059600/1305600 81.2%; Applied: 0; Backlog: 0/1000; Time: 24s(total), 24s(copy); streamer: mysql-bin.000013:82264712; State: migrating; ETA: 5s
Copy: 1107200/1305600 84.8%; Applied: 0; Backlog: 0/1000; Time: 25s(total), 25s(copy); streamer: mysql-bin.000013:84558411; State: migrating; ETA: 4s
Copy: 1147000/1305600 87.9%; Applied: 0; Backlog: 0/1000; Time: 26s(total), 26s(copy); streamer: mysql-bin.000013:86486148; State: migrating; ETA: 3s
Copy: 1198000/1305600 91.8%; Applied: 0; Backlog: 0/1000; Time: 27s(total), 27s(copy); streamer: mysql-bin.000013:88943747; State: migrating; ETA: 2s
Copy: 1245400/1305600 95.4%; Applied: 0; Backlog: 0/1000; Time: 28s(total), 28s(copy); streamer: mysql-bin.000013:91218202; State: migrating; ETA: 1s
Copy: 1286600/1305600 98.5%; Applied: 0; Backlog: 0/1000; Time: 29s(total), 29s(copy); streamer: mysql-bin.000013:93203991; State: migrating; ETA: 0s
Copy: 1310720/1310720 100.0%; Applied: 0; Backlog: 0/1000; Time: 29s(total), 29s(copy); streamer: mysql-bin.000013:94366846; State: migrating; ETA: due
Copy: 1310720/1310720 100.0%; Applied: 0; Backlog: 1/1000; Time: 30s(total), 29s(copy); streamer: mysql-bin.000013:94369042; State: migrating; ETA: due
# Migrating `test`.`zjy`; Ghost table is `test`.`_zjy_gho`
# Migrating test2:3306; inspecting test2:3306; executing on test2
# Migration started at Sun Jun 17 14:37:37 +0800 2018
# chunk-size: 200; max-lag-millis: 1500ms; dml-batch-size: 10; max-load: Threads_running=10; critical-load: ; nice-ratio: 0.000000
# throttle-additional-flag-file: /tmp/gh-ost.throttle.test.zjy
# panic-flag-file: /tmp/gh-ost.panic.test.zjy.flag
# Serving on unix socket: /tmp/gh-ost.test.zjy.sock
Copy: 1310720/1310720 100.0%; Applied: 0; Backlog: 0/1000; Time: 30s(total), 29s(copy); streamer: mysql-bin.000013:94371928; State: migrating; ETA: due
2018/06/17 14:38:08 binlogsyncer.go:107: [info] syncer is closing...
2018/06/17 14:38:08 binlogstreamer.go:47: [error] close sync with err: sync is been closing... (this error is harmless: the syncer is simply closed twice; a fix is pending upstream)
2018/06/17 14:38:08 binlogsyncer.go:122: [info] syncer is closed
# Done

Summary:

gh-ost abandons triggers and uses the binlog for synchronization. Posing as a replica, gh-ost can pull binlogs from the master or a replica, filter them, and reapply them to the master. In effect, the incremental operations on the master are applied back to the master itself through the binlog, only to the ghost table. gh-ost first connects to the master and creates the ghost table from the alter statement, then connects to one of the real replicas as a "slave"; it copies the existing master data into the ghost table while pulling the binlog of incremental changes from the replica and continuously applying it back to the master. The cut-over in the figure is the last step: lock the source table on the master, wait for the binlog to be fully applied, then swap the ghost table in for the source table. During execution, gh-ost injects hint and heartbeat records into the binlog stream to control the progress of the whole process, detect its state, and so on. This architecture brings many benefits.
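Each progress line in the run output above has a fixed shape, so it is easy to parse for monitoring. A small illustrative parser (the regex is derived from the sample output in this article, not from gh-ost's source; treat field names as assumptions):

```python
import re

# Matches gh-ost progress lines such as:
# Copy: 44400/1305600 3.4%; Applied: 0; Backlog: 0/1000; Time: 2s(total), 2s(copy);
#   streamer: mysql-bin.000013:33352548; State: migrating; ETA: 56s
STATUS_RE = re.compile(
    r"Copy: (?P<copied>\d+)/(?P<total>\d+) (?P<pct>[\d.]+)%; "
    r"Applied: (?P<applied>\d+); Backlog: (?P<backlog>\d+)/\d+; "
    r"Time: (?P<total_time>\w+)\(total\), (?P<copy_time>\w+)\(copy\); "
    r"streamer: (?P<binlog>[\w.-]+):(?P<pos>\d+); "
    r"State: (?P<state>[\w ]+); ETA: (?P<eta>.+)")

def parse_status_line(line):
    """Return a dict of the fields in one gh-ost progress line, or None if it doesn't match."""
    m = STATUS_RE.match(line.strip())
    if not m:
        return None
    d = m.groupdict()
    for key in ("copied", "total", "applied", "backlog", "pos"):
        d[key] = int(d[key])
    d["pct"] = float(d["pct"])
    return d
```

Feeding gh-ost's stdout through such a parser is one way to chart copy progress or alert on a growing backlog.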