Flink CDC Streaming in Java: Hands-On Code


1. Notes on using Flink CDC

  1) Import the dependency

<dependency>
    <groupId>com.alibaba.ververica</groupId>
    <artifactId>flink-connector-mysql-cdc</artifactId>
    <version>1.1.0</version>
</dependency>
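The example code below also uses fastjson for its JSON records, so if your project does not already pull it in you will need that artifact as well (the version here is illustrative, not from the original post):

<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <!-- illustrative version; use whatever your project standardizes on -->
    <version>1.2.75</version>
</dependency>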

  SQL example:
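A minimal sketch of what the DDL looks like for the mysql-cdc connector, assuming the cdc_test.test table used later in this post (host, port, and credentials are placeholders):

CREATE TABLE test (
    id STRING,
    name STRING
) WITH (
    'connector' = 'mysql-cdc',
    'hostname' = '192.168.x.xx',
    'port' = '3306',
    'username' = 'root',
    'password' = 'xxxxxx',
    'database-name' = 'cdc_test',
    'table-name' = 'test'
    -- optionally add 'debezium.snapshot.locking.mode' = 'none' here; see the note below
);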

 

  2) Things to understand and watch out for

    

The global read lock Debezium takes at the start of the snapshot can be removed, and even if you keep it, it is very lightweight: it is not held until the snapshot finishes, but is released as soon as the current binlog offset has been captured. If the table schema never changes, you can disable the lock entirely by adding 'debezium.snapshot.locking.mode' = 'none' to the WITH clause of the DDL (as sketched above).

 Question: does using CDC affect MySQL's performance or its normal operation?

      No. The lock is released almost immediately, so there is no real performance bottleneck, but you must make sure the user account that reads the binlog has the RELOAD privilege.

  Search for "MySQL RELOAD privilege" if you want the details.
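For reference, a minimal sketch of the grants Debezium's MySQL connector documentation asks for (the user name flink_cdc is a placeholder, not from the original post):

-- placeholder user; RELOAD is the privilege called out above, the rest
-- are the other privileges Debezium's MySQL connector docs require
GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT
ON *.* TO 'flink_cdc'@'%';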

 

2. Sample code from the official docs

package cdc;

import com.alibaba.fastjson.JSONObject;
import com.alibaba.ververica.cdc.connectors.mysql.MySQLSource;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

/**
 * @program: flink-neiwang-dev
 * @description: Read MySQL data through the CDC connector
 * @author: Mr.Wang
 * @create: 2020-10-21 15:29
 **/
public class MySqlBinlogSourceExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        SourceFunction<JSONObject> sourceFunction = MySQLSource.<JSONObject>builder()
                .hostname("192.168.x.xx")
                .port(3306)
                .databaseList("cdc_test") // monitor all tables under the cdc_test database
                .username("root")
                .password("xxxxxx")
                .deserializer(new CdcDwdDeserializationSchema()) // converts SourceRecord to JSONObject
                .build();

        DataStreamSource<JSONObject> stringDataStreamSource = env.addSource(sourceFunction);
        stringDataStreamSource.print("===>");

        try {
            env.execute("mysql-cdc-test");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

3. The deserializer passed to the builder above, CdcDwdDeserializationSchema, is a custom class: it parses the raw CDC record and reassembles it into the format we need.

package cdc;

import com.alibaba.fastjson.JSONObject;
import com.alibaba.ververica.cdc.debezium.DebeziumDeserializationSchema;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.util.Collector;
import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.source.SourceRecord;

import java.util.List;

/**
 * @program: flink-neiwang-dev
 * @description: Deserializes CDC records straight into the DWD-layer JSON format
 * @author: Mr.Wang
 * @create: 2020-10-21 16:04
 **/
public class CdcDwdDeserializationSchema implements DebeziumDeserializationSchema<JSONObject> {

    private static final long serialVersionUID = -3168848963265670603L;

    public CdcDwdDeserializationSchema() {
    }

    @Override
    public void deserialize(SourceRecord record, Collector<JSONObject> out) throws Exception {
        Struct dataRecord = (Struct) record.value();
        Struct afterStruct = dataRecord.getStruct("after");
        Struct beforeStruct = dataRecord.getStruct("before");

        /*
         * 1. Both beforeStruct and afterStruct present -> update
         * 2. Only beforeStruct present                 -> delete
         * 3. Only afterStruct present                  -> insert
         */
        JSONObject logJson = new JSONObject();
        String canal_type = "";
        if (afterStruct != null && beforeStruct != null) {
            System.out.println("This is an update");
            canal_type = "update";
            putFields(afterStruct, logJson);
        } else if (afterStruct != null) {
            System.out.println("This is an insert");
            canal_type = "insert";
            putFields(afterStruct, logJson);
        } else if (beforeStruct != null) {
            System.out.println("This is a delete");
            canal_type = "delete";
            putFields(beforeStruct, logJson);
        } else {
            System.out.println("Unexpected record: neither before nor after is present");
        }

        // Database / table / timestamp metadata from the source struct
        Struct source = dataRecord.getStruct("source");
        logJson.put("canal_database", source.get("db"));
        logJson.put("canal_table", source.get("table"));
        logJson.put("canal_ts", source.get("ts_ms"));
        logJson.put("canal_type", canal_type);

        // Topic, e.g. mysql-binlog-source.cdc_test.test
        String topic = record.topic();
        System.out.println("topic = " + topic);

        // Hash the primary-key fields to pick one of 3 partitions
        Struct pk = (Struct) record.key();
        List<Field> pkFieldList = pk.schema().fields();
        int partitionerNum = 0;
        for (Field field : pkFieldList) {
            Object pkValue = pk.get(field.name());
            partitionerNum += pkValue.hashCode();
        }
        int hash = Math.abs(partitionerNum) % 3;
        logJson.put("pk_hashcode", hash);

        out.collect(logJson);
    }

    // Copy every field name/value pair of the given struct into the JSON object
    private void putFields(Struct struct, JSONObject logJson) {
        for (Field field : struct.schema().fields()) {
            logJson.put(field.name(), struct.get(field.name()));
        }
    }

    @Override
    public TypeInformation<JSONObject> getProducedType() {
        return BasicTypeInfo.of(JSONObject.class);
    }
}

 

Setting a breakpoint on the record argument of deserialize() shows the raw SourceRecord for each kind of change:

 

Insert record:

SourceRecord{sourcePartition={server=mysql-binlog-source}, sourceOffset={file=mysql-bin.000002, pos=391425550, row=1, snapshot=true}} ConnectRecord{topic='mysql-binlog-source.cdc_test.test', kafkaPartition=null, key=Struct{id=1}, keySchema=Schema{mysql_binlog_source.cdc_test.test.Key:STRUCT}, value=Struct{after=Struct{id=1,name=第一行数据},source=Struct{version=1.2.0.Final,connector=mysql,name=mysql-binlog-source,ts_ms=0,snapshot=true,db=cdc_test,table=test,server_id=0,file=mysql-bin.000002,pos=391425550,row=0},op=c,ts_ms=1603365697093}, valueSchema=Schema{mysql_binlog_source.cdc_test.test.Envelope:STRUCT}, timestamp=null, headers=ConnectHeaders(headers=)}

Update record:

SourceRecord{sourcePartition={server=mysql-binlog-source}, sourceOffset={ts_sec=1603363386, file=mysql-bin.000002, pos=391422962, row=1, server_id=1, event=2}} ConnectRecord{topic='mysql-binlog-source.cdc_test.test', kafkaPartition=null, key=Struct{id=get}, keySchema=Schema{mysql_binlog_source.cdc_test.test.Key:STRUCT}, value=Struct{before=Struct{id=get,name=get222},after=Struct{id=get,name=修改数据}, source=Struct{version=1.2.0.Final,connector=mysql,name=mysql-binlog-source,ts_ms=1603363386000,db=cdc_test,table=test,server_id=1,file=mysql-bin.000002,pos=391423094,row=0,thread=29}, op=u,ts_ms=1603363386700}, valueSchema=Schema{mysql_binlog_source.cdc_test.test.Envelope:STRUCT}, timestamp=null, headers=ConnectHeaders(headers=)}

Delete record:

SourceRecord{sourcePartition={server=mysql-binlog-source}, sourceOffset={ts_sec=1603363545, file=mysql-bin.000002, pos=391423260, row=1, server_id=1, event=2}} ConnectRecord{topic='mysql-binlog-source.cdc_test.test', kafkaPartition=null, key=Struct{id=get}, keySchema=Schema{mysql_binlog_source.cdc_test.test.Key:STRUCT}, value=Struct{before=Struct{id=get,name=修改数据},source=Struct{version=1.2.0.Final,connector=mysql,name=mysql-binlog-source,ts_ms=1603363545000,db=cdc_test,table=test,server_id=1,file=mysql-bin.000002,pos=391423392,row=0,thread=29}, op=d,ts_ms=1603363545295}, valueSchema=Schema{mysql_binlog_source.cdc_test.test.Envelope:STRUCT}, timestamp=null, headers=ConnectHeaders(headers=)}
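Note the op field in each dump (c = create, u = update, d = delete). Debezium exposes it directly in the envelope, so an alternative to null-checking the before/after structs is to branch on it. A minimal sketch, not part of the original post:

import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.source.SourceRecord;

public class OpField {
    // Derive the change type from Debezium's "op" field instead of
    // null-checking the before/after structs.
    public static String changeType(SourceRecord record) {
        Struct value = (Struct) record.value();
        switch (value.getString("op")) {
            case "c": return "insert"; // create
            case "r": return "insert"; // snapshot read in some Debezium versions (the dump above shows op=c during snapshot)
            case "u": return "update";
            case "d": return "delete";
            default:  return "unknown";
        }
    }
}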

 

Demo with a test table

Console output:

 Initial full-snapshot load:

===>> {"canal_type":"insert","name":"第一行数据","id":"1","canal_ts":0,"canal_database":"cdc_test","canal_table":"test","pk_hashcode":1}
This is an insert
topic = mysql-binlog-source.cdc_test.test
===>> {"canal_type":"insert","name":"爱迪生所多","id":"2","canal_ts":0,"canal_database":"cdc_test","canal_table":"test","pk_hashcode":2}
This is an insert
topic = mysql-binlog-source.cdc_test.test
19:16:40,830 INFO  com.alibaba.ververica.cdc.debezium.internal.DebeziumChangeConsumer  - Received record from streaming binlog phase, released checkpoint lock.
===>> {"canal_type":"insert","name":"所得税的方式","id":"3","canal_ts":0,"canal_database":"cdc_test","canal_table":"test","pk_hashcode":0}

Update:

This is an update
topic = mysql-binlog-source.cdc_test.test
===>> {"canal_type":"update","name":"修改内容","id":"3","canal_ts":1603365535000,"canal_database":"cdc_test","canal_table":"test","pk_hashcode":0}

Delete:

This is a delete
topic = mysql-binlog-source.cdc_test.test
===>> {"canal_type":"delete","name":"修改内容","id":"3","canal_ts":1603365585000,"canal_database":"cdc_test","canal_table":"test","pk_hashcode":0}

References:

https://github.com/ververica/flink-cdc-connectors/wiki/MySQL-CDC-Connector

https://mp.weixin.qq.com/s/Mfn-fFegb5wzI8BIHhNGvQ

If this post helped you, please give it a like! It means a lot to me~
