Incremental Migration Tool gs_replicate

Availability

This feature is available since openGauss 5.0.0.

Introduction

gs_replicate migrates incremental data generated during MySQL data migration (including full and incremental migration) to openGauss.

Benefits

You can use gs_replicate to migrate incremental data from the MySQL database to the openGauss database.

Description

The source end of the Debezium MySQL connector monitors the binlogs of the MySQL database and writes DDL and DML operations to Kafka in AVRO format. The sink end reads the DDL and DML operations in AVRO format from Kafka, assembles the data into transactions, and replays the transactions in parallel on openGauss at the transaction granularity, thereby migrating DDL and DML operations from MySQL to openGauss online.
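The sink end's transaction assembly step can be illustrated with a minimal sketch. This is not the connector's actual code: the event dicts, the `txid` field, and the `assemble_transactions` helper are hypothetical stand-ins for the AVRO records the sink end reads from Kafka.

```python
from itertools import groupby

def assemble_transactions(events):
    """Group an ordered change-event stream into per-transaction statement lists.

    Events arrive in strict order (single Kafka partition), so consecutive
    events sharing a transaction id belong to the same transaction.
    """
    return [
        {"txid": txid, "statements": [e["sql"] for e in group]}
        for txid, group in groupby(events, key=lambda e: e["txid"])
    ]

# Hypothetical ordered event stream read from the Kafka topic.
events = [
    {"txid": 1, "sql": "INSERT INTO t1 VALUES (1)"},
    {"txid": 1, "sql": "UPDATE t1 SET a = 2 WHERE a = 1"},
    {"txid": 2, "sql": "DELETE FROM t1 WHERE a = 2"},
]

txns = assemble_transactions(events)
print(txns)
```

Once events are grouped this way, independent transactions can be handed to separate worker sessions for parallel replay, while the statements inside each transaction keep their original order.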

This solution strictly preserves the transaction order. Therefore, DDL and DML operations are routed to a single Kafka topic, and the number of partitions of that topic must be 1 (num.partitions=1). This guarantees that the data pushed from the source end to Kafka and the data read by the sink end from Kafka are in strict order.
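For example, the single-partition requirement can be enforced in the Kafka broker configuration (server.properties), so that auto-created topics default to one partition; this is a sketch of the relevant setting, not the tool's full Kafka configuration:

```properties
# server.properties: topics created automatically get exactly one partition,
# preserving the strict ordering the migration relies on.
num.partitions=1
```

Alternatively, the topic can be created explicitly with a partition count of 1 before the migration starts.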

Enhancements

None.

Constraints

  • Currently, incremental data generated by MySQL INSERT, UPDATE, and DELETE (IUD) operations can be migrated to openGauss.

  • MySQL DDL statements compatible with the openGauss database can be migrated. For incompatible DDL statements, an error will be reported during the migration. (openGauss is improving its compatibility with DDL statements.)

  • To ensure the order and consistency of transactions, settings such as skip_event, limit_table, and skip_table are not supported.

  • MySQL 5.7 or later is required.

  • The following MySQL parameters must be set: log_bin=ON, binlog_format=ROW, binlog_row_image=FULL, and gtid_mode=ON. If gtid_mode is set to OFF, the sink end replays data serially in transaction order, which degrades online migration performance.
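    As an illustration, these parameters could be set in the MySQL server configuration file (my.cnf); the binlog base name `mysql-bin` here is an arbitrary example, and enabling gtid_mode on MySQL also requires enforce_gtid_consistency:

    ```ini
    [mysqld]
    # Enables binary logging (log_bin then reports ON); the value is the binlog base name.
    log-bin                  = mysql-bin
    binlog_format            = ROW
    binlog_row_image         = FULL
    gtid_mode                = ON
    # Required by MySQL when gtid_mode is ON.
    enforce_gtid_consistency = ON
    ```

    After restarting MySQL, the settings can be verified with `SHOW VARIABLES LIKE 'gtid_mode';` and similar queries.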

  • Full migration must be performed by using gs_mysync before incremental migration.

  • Kafka stores data in AVRO format. The AVRO naming rules are as follows:

    - Names must start with a character in [A-Za-z_].
    - Subsequent characters may only be in [A-Za-z0-9_].

    Therefore, naming identifiers in MySQL, including table names and column names, must comply with the preceding naming rules. Otherwise, an error will be reported during online migration.
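The naming rule above can be checked up front so that offending identifiers are found before the migration starts. This is an illustrative helper, not part of gs_replicate; the function name `is_avro_safe` is an assumption:

```python
import re

# AVRO name rule from the constraints above: the first character must be in
# [A-Za-z_], and every subsequent character must be in [A-Za-z0-9_].
AVRO_NAME = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

def is_avro_safe(identifier: str) -> bool:
    """Return True if a MySQL table or column name satisfies the AVRO naming rule."""
    return AVRO_NAME.fullmatch(identifier) is not None

print(is_avro_safe("customer_orders"))  # True
print(is_avro_safe("2024_sales"))       # False: starts with a digit
print(is_avro_safe("订单表"))            # False: non-ASCII characters
```

Running such a check against information_schema on the source MySQL instance would flag every table or column name that would otherwise cause an error during online migration.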

Dependencies

gs_replicate depends on the MySQL migration tool gs_rep_portal.
