A brief discussion of synchronization solutions between MySQL and the Redis cache

This article introduces two solutions for synchronizing MySQL and the Redis cache:

  • Solution 1: Refresh Redis automatically on MySQL writes, implemented with a MySQL trigger plus a UDF (user-defined function)
  • Solution 2: Parse MySQL's binlog and synchronize the data in the database to Redis

1. Solution 1 (UDF)

Scenario analysis: whenever we modify data in the MySQL database, the corresponding data is synchronized to Redis at the same time; once it is in Redis, queries are served from Redis.
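
The read/write split described above can be sketched in a few lines, with plain dictionaries standing in for MySQL and Redis (the names `mysql`, `redis`, `write`, and `read` are invented for this illustration, not part of any real API):

```python
# Sketch of the scenario: writes go to MySQL and are synced to the
# cache; reads are served from the cache. Dicts stand in for both stores.

mysql = {}   # stand-in for the MySQL table (primary store)
redis = {}   # stand-in for the Redis cache

def write(key, value):
    """Write goes to MySQL; the sync pipeline mirrors it into Redis."""
    mysql[key] = value
    redis[key] = value  # in the real setup this step is done by the trigger/binlog pipeline

def read(key):
    """Reads are served from Redis once synchronization has happened."""
    return redis.get(key)

write("user:1", {"name": "alice"})
print(read("user:1"))  # {'name': 'alice'}
```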

The process is roughly as follows:

  • Set a trigger in MySQL on the table to be synchronized, monitoring data operations.
  • When the client (NodeServer) writes data to MySQL, the trigger fires and in turn calls a MySQL UDF (user-defined function).
  • The UDF writes the data into Redis, achieving the synchronization.
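A minimal simulation of this trigger + UDF flow, with an after-write hook playing the role of the MySQL trigger and a callback playing the role of the UDF that pushes the row into Redis (all class and function names here are hypothetical):

```python
# Toy model of trigger -> UDF -> Redis. Dicts stand in for the stores;
# nothing here reflects the real MySQL UDF API.

redis = {}  # stand-in for Redis

class Table:
    def __init__(self, name, after_write_hook):
        self.name = name
        self.rows = {}
        self.after_write_hook = after_write_hook  # plays the role of the trigger

    def insert(self, pk, row):
        self.rows[pk] = row
        self.after_write_hook(self.name, pk, row)  # trigger fires -> calls the "UDF"

def redis_set_udf(table, pk, row):
    """Stand-in for the UDF: write the row into Redis under table:pk."""
    redis[f"{table}:{pk}"] = row

users = Table("users", redis_set_udf)
users.insert(1, {"name": "alice"})
print(redis)  # {'users:1': {'name': 'alice'}}
```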

Solution Analysis:

  • This solution is suitable for scenarios with far more reads than writes and no concurrent writes.
  • Because MySQL triggers themselves reduce performance, this solution is not suitable for tables that are written frequently.

Demo Case

Below is the MySQL table

Below is the implementation code of the UDF

Define the corresponding trigger

2. Solution 2 (parsing binlog)

Before introducing Solution 2, let's first review the principle of MySQL replication, as shown in the figure below:

  • The master server performs the data operation and writes the change to its binlog.
  • The slave server's I/O thread reads the master's binlog and writes it into its own relay log; its SQL thread then parses the events in the relay log and applies them to its own database.
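
The replication flow above can be modeled as a toy pipeline (lists and dicts below are simplified stand-ins for the real binlog, relay log, and replication threads):

```python
# Toy model of MySQL replication: the master appends changes to its
# binlog; the slave's "I/O thread" copies binlog entries into a relay
# log, and its "SQL thread" replays relay-log entries into its own data.

master_binlog = []
relay_log = []
slave_data = {}

def master_write(binlog, key, value):
    binlog.append(("set", key, value))  # master records the change

def io_thread(binlog, relay):
    relay.extend(binlog[len(relay):])   # copy new binlog events into the relay log

def sql_thread(relay, data):
    for op, key, value in relay:
        if op == "set":
            data[key] = value           # replay events into the slave's database

master_write(master_binlog, "k1", "v1")
io_thread(master_binlog, relay_log)
sql_thread(relay_log, slave_data)
print(slave_data)  # {'k1': 'v1'}
```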

Solution 2 works as follows:

  • The entire MySQL replication process above can be summarized in one sentence: the slave reads the master's binlog and synchronizes the data into its own database.
  • Solution 2 does the same thing conceptually: keep MySQL as the master and let Redis play the role of the slave (as shown in the figure below). When data is written to MySQL, we parse MySQL's binlog and write the parsed data into Redis to achieve synchronization.
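
Assuming a binlog parser has already produced row events, the core of Solution 2 is translating each event into a cache operation. The event shapes below are invented for illustration; a real implementation would receive them from a parser such as Canal:

```python
# Sketch of "Redis as the slave": parsed binlog row events are
# translated into cache operations. Event dicts are hypothetical.

redis = {}

def apply_row_event(event):
    """Translate one parsed binlog row event into a Redis operation."""
    key = f"{event['table']}:{event['pk']}"
    if event["type"] in ("insert", "update"):
        redis[key] = event["row"]       # SET
    elif event["type"] == "delete":
        redis.pop(key, None)            # DEL

events = [
    {"type": "insert", "table": "users", "pk": 1, "row": {"name": "alice"}},
    {"type": "update", "table": "users", "pk": 1, "row": {"name": "bob"}},
    {"type": "delete", "table": "users", "pk": 1, "row": None},
]
for e in events:
    apply_row_event(e)
print(redis)  # {} -- the row was inserted, updated, then deleted
```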

For example, the following setup uses a cloud database instance:

The cloud database and the local database are in a master-slave relationship. The cloud database is the primary database that mainly provides write services, and the local database is the secondary database that reads data from the primary database.

After the local database reads the data, it parses the Bin log and then writes the data to Redis for synchronization. The client then reads the data from Redis.

The difficulty of this solution lies in parsing the MySQL binlog, which requires a very deep understanding of the binlog format and of MySQL itself. Moreover, since the binlog exists in several formats (Statement/Row/Mixed), the workload of parsing it to implement synchronization is very large.

Canal Open Source Technology

Canal is an open-source project from Alibaba, developed in pure Java. Based on incremental database log parsing, it provides incremental data subscription and consumption. Currently it mainly supports MySQL (MariaDB is also supported).

The open source reference address is: https://github.com/liukelin/canal_mysql_nosql_sync

How it works (it mimics MySQL replication):

  • Canal simulates the MySQL slave's interaction protocol: it pretends to be a MySQL slave and sends a dump request to the MySQL master.
  • The MySQL master receives the dump request and starts pushing its binary log to the slave (Canal).
  • Canal parses the binary log (which arrives as a byte stream) into event objects.

Architecture:

A server represents one running Canal instance and corresponds to one JVM.

An instance corresponds to one data queue (one server hosts 1..n instances).

The instance module consists of:

  • eventParser (data-source access; simulates the slave protocol to interact with the master; parses the protocol)
  • eventSink (connector between parser and store; filters, processes, and distributes data)
  • eventStore (data storage)
  • metaManager (manages incremental subscription and consumption metadata)

The general parsing flow is as follows:

  • The parser parses the MySQL binlog and puts the data into the sink.
  • The sink filters, processes, and distributes the data.
  • The store reads the parsed data from the sink and stores it.
  • Our own code then synchronizes the data in the store to Redis.
  • The parser and sink are encapsulated by the framework; what we implement ourselves is the step that reads data out of the store.
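
The parse → sink → store flow can be sketched roughly as follows. Canal supplies the real parser/sink/store in Java; the functions below are simplified Python stand-ins, and only the final loop (draining the store into Redis) corresponds to code we would actually write:

```python
# Simplified parse -> sink -> store pipeline. All shapes are invented
# stand-ins for Canal's eventParser/eventSink/eventStore.
from collections import deque

def parse(raw_entries):
    # stand-in for eventParser: turn raw entries into event dicts
    return [{"table": t, "pk": pk, "row": row} for t, pk, row in raw_entries]

def sink(events, wanted_tables):
    # stand-in for eventSink: filter and route events
    return [e for e in events if e["table"] in wanted_tables]

store = deque()          # stand-in for eventStore
redis = {}               # stand-in for Redis

raw = [("users", 1, {"name": "alice"}), ("audit", 9, {"x": 1})]
store.extend(sink(parse(raw), wanted_tables={"users"}))

while store:             # our code: drain the store into Redis
    e = store.popleft()
    redis[f"{e['table']}:{e['pk']}"] = e["row"]
print(redis)  # {'users:1': {'name': 'alice'}}
```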

For more information about Canal, please refer to its official documentation.

The following is the operation topology diagram

The synchronization of MySQL tables adopts the chain-of-responsibility pattern: each table corresponds to a Filter. For example, the class design used in zvsync is as follows:

The following are the concrete classes used in zvsync. Whenever a table is added or removed, the corresponding class can simply be added or removed.
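
The chain-of-responsibility design can be sketched as follows; the class and method names are illustrative and not taken from zvsync:

```python
# Chain of responsibility: each table has its own Filter; an event is
# passed along the chain until the matching table's filter handles it.

redis = {}  # stand-in for Redis

class TableFilter:
    def __init__(self, table, successor=None):
        self.table = table
        self.successor = successor  # next handler in the chain

    def handle(self, event):
        if event["table"] == self.table:
            redis[f"{self.table}:{event['pk']}"] = event["row"]
        elif self.successor:
            self.successor.handle(event)  # pass the event along the chain

# Build the chain; adding/removing a table means adding/removing a link.
chain = TableFilter("users", TableFilter("orders"))
chain.handle({"table": "orders", "pk": 7, "row": {"total": 42}})
print(redis)  # {'orders:7': {'total': 42}}
```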

3. Additional

This article has introduced synchronization from MySQL to the cache. In actual development, however, some people may use the opposite approach:

  • When the client receives data, it first saves it into Redis and then synchronizes it to MySQL.
  • This approach is inherently unsafe and unreliable: if Redis goes down or fails even briefly before the data reaches MySQL, the data is lost.
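
The loss window can be seen in a toy model: the write is acknowledged once it lands in the cache, but persistence to MySQL happens later, so a cache failure in between loses the data (all names here are invented for the illustration):

```python
# Why the Redis-first pattern is fragile: writes sit in the cache and a
# pending queue until an asynchronous flush persists them to MySQL.

redis = {}
mysql = {}
pending = []  # writes accepted in Redis but not yet flushed to MySQL

def client_write(key, value):
    redis[key] = value
    pending.append((key, value))  # to be flushed asynchronously later

def flush_to_mysql():
    while pending:
        k, v = pending.pop(0)
        mysql[k] = v

client_write("k1", "v1")
# Simulate Redis going down before the flush runs:
redis.clear()
pending.clear()          # the un-flushed write is gone with the cache
flush_to_mysql()
print(mysql)  # {} -- "k1" was lost
```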

This concludes this article on synchronization solutions for MySQL and the Redis cache.

