I'm using Kafka and Kafka Connect to replicate an MS SQL Server database to MySQL, using the Debezium SQL Server CDC source connector and the Confluent JDBC sink connector. "auto.create" is set to true and the sink connector does create the tables, but some of the data types don't match. In SQL Server I have:
CREATE TABLE employees (
  id INTEGER IDENTITY(1001,1) NOT NULL PRIMARY KEY,
  first_name VARCHAR(255) NOT NULL,
  last_name VARCHAR(255) NOT NULL,
  email VARCHAR(255) NOT NULL UNIQUE,
  start_date DATE,
  salary INT,
  secret FLOAT,
  create_time TIME
);
But in MySQL it created the following:
mysql> desc employees;
+-------------+-------------+------+-----+---------+-------+
| Field       | Type        | Null | Key | Default | Extra |
+-------------+-------------+------+-----+---------+-------+
| id          | int         | NO   | PRI | NULL    |       |
| first_name  | text        | NO   |     | NULL    |       |
| last_name   | text        | NO   |     | NULL    |       |
| email       | text        | NO   |     | NULL    |       |
| start_date  | int         | YES  |     | NULL    |       |
| salary      | int         | YES  |     | NULL    |       |
| secret      | double      | YES  |     | NULL    |       |
| create_time | bigint      | YES  |     | NULL    |       |
| messageTS   | datetime(3) | YES  |     | NULL    |       |
+-------------+-------------+------+-----+---------+-------+
Ignore messageTS, that's an extra field I add in an SMT.
The data types of first_name, last_name, email, start_date and create_time all mismatch: it converts VARCHAR(255) to text, DATE to int and TIME to bigint.
Just wondering if I have something misconfigured?
I'm running SQL Server 2019 and MySQL 8.0.28 in Docker.
I also tried the suggestion of disabling auto-create and auto-evolve and pre-creating the tables with the proper data types:
mysql> desc employees;
+-------------+--------------+------+-----+---------+----------------+
| Field       | Type         | Null | Key | Default | Extra          |
+-------------+--------------+------+-----+---------+----------------+
| id          | int          | NO   | PRI | NULL    | auto_increment |
| first_name  | varchar(255) | NO   |     | NULL    |                |
| last_name   | varchar(255) | NO   |     | NULL    |                |
| email       | varchar(255) | NO   |     | NULL    |                |
| start_date  | date         | NO   |     | NULL    |                |
| salary      | int          | NO   |     | NULL    |                |
| secret      | double       | NO   |     | NULL    |                |
| create_time | datetime     | NO   |     | NULL    |                |
| messageTS   | datetime     | NO   |     | NULL    |                |
+-------------+--------------+------+-----+---------+----------------+
But when it tries to insert into the database, it fails with the following exception:
kafka-connect | [2022-03-04 19:55:07,331] INFO Setting metadata for table "employees" to Table{name='"employees"', type=TABLE columns=[Column{'first_name', isPrimaryKey=false, allowsNull=false, sqlType=VARCHAR}, Column{'secret', isPrimaryKey=false, allowsNull=false, sqlType=DOUBLE}, Column{'salary', isPrimaryKey=false, allowsNull=false, sqlType=INT}, Column{'start_date', isPrimaryKey=false, allowsNull=false, sqlType=DATE}, Column{'email', isPrimaryKey=false, allowsNull=false, sqlType=VARCHAR}, Column{'id', isPrimaryKey=true, allowsNull=false, sqlType=INT}, Column{'last_name', isPrimaryKey=false, allowsNull=false, sqlType=VARCHAR}, Column{'messageTS', isPrimaryKey=false, allowsNull=false, sqlType=DATETIME}, Column{'create_time', isPrimaryKey=false, allowsNull=false, sqlType=DATETIME}]} (io.confluent.connect.jdbc.util.TableDefinitions)
kafka-connect | [2022-03-04 19:55:07,382] WARN Write of 4 records failed, remainingRetries=0 (io.confluent.connect.jdbc.sink.JdbcSinkTask)
kafka-connect | java.sql.BatchUpdateException: Data truncation: Incorrect date value: '19055' for column 'start_date' at row 1
The value of the message is:
{"id":1002,"first_name":"George","last_name":"Bailey","email":"george.bailey@acme.com","start_date":{"int":19055},"salary":{"int":100000},"secret":{"double":0.867153569942739},"create_time":{"long":1646421476477}}
The message schema for the start_date field is:
{
  "name": "start_date",
  "type": [
    "null",
    {
      "type": "int",
      "connect.version": 1,
      "connect.name": "io.debezium.time.Date"
    }
  ],
  "default": null
}
It doesn't seem to know how to convert io.debezium.time.Date into a DATE and just treats it as an int (the value 19055 is the number of days since the Unix epoch, i.e. 2022-03-04).
Any advice on this would be greatly appreciated.
Source config:
{
  "name": "SimpleSQLServerCDC",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "tasks.max": 1,
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schema.registry.url": "http://schema-registry:8081",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://schema-registry:8081",
    "confluent.topic.bootstrap.servers": "kafka:29092",
    "database.hostname": "sqlserver",
    "database.port": "1433",
    "database.user": "sa",
    "database.password": "",
    "database.dbname": "testDB",
    "database.server.name": "corporation",
    "database.history.kafka.topic": "dbhistory.corporation",
    "database.history.kafka.bootstrap.servers": "kafka:29092",
    "topic.creation.default.replication.factor": 1,
    "topic.creation.default.partitions": 10,
    "topic.creation.default.cleanup.policy": "delete"
  }
}
Sink config:
{
  "name": "SimpleMySQLJDBC",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "connection.url": "jdbc:mysql://mysql:3306/sinkdb",
    "connection.user": "user",
    "connection.password": "",
    "tasks.max": "2",
    "topics.regex": "corporation.dbo.*",
    "auto.create": "true",
    "auto.evolve": "true",
    "dialect.name": "MySqlDatabaseDialect",
    "insert.mode": "upsert",
    "pk.mode": "record_key",
    "pk.fields": "id",
    "delete.enabled": "true",
    "batch.size": 1,
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schema.registry.url": "http://schema-registry:8081",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://schema-registry:8081",
    "transforms": "unwrap,dropPrefix,insertTS",
    "transforms.dropPrefix.type": "org.apache.kafka.connect.transforms.RegexRouter",
    "transforms.dropPrefix.regex": "corporation.dbo.(.*)",
    "transforms.dropPrefix.replacement": "$1",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "transforms.unwrap.drop.tombstones": "false",
    "transforms.unwrap.delete.handling.mode": "drop",
    "transforms.insertTS.type": "org.apache.kafka.connect.transforms.InsertField$Value",
    "transforms.insertTS.timestamp.field": "messageTS",
    "errors.log.enable": "true",
    "errors.log.include.messages": "true",
    "errors.tolerance": "all",
    "errors.deadletterqueue.topic.name": "dlq-mysql",
    "errors.deadletterqueue.context.headers.enable": "true",
    "errors.deadletterqueue.topic.replication.factor": "1"
  }
}
Converting VARCHAR(255) to TEXT
Character limits on fields are not carried through the Connect API data types. Any string-like data will become a TEXT column type.
DATE to int, TIME to bigint
I believe date/time values are converted to a Unix epoch representation by default. You can use the TimestampConverter transform to convert them to a different format.
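For example, a minimal sketch of what that could look like in the sink connector config, assuming the field arrives in a form the stock TimestampConverter understands (Connect logical date/time types, epoch longs, or strings); the transform name convertStartDate here is just a placeholder:

"transforms": "convertStartDate",
"transforms.convertStartDate.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
"transforms.convertStartDate.field": "start_date",
"transforms.convertStartDate.target.type": "Date"

As far as I know, the stock TimestampConverter does not recognize Debezium's io.debezium.time.Date semantic type, so the source may also need to emit Connect logical types (Debezium's "time.precision.mode": "connect") for this to apply.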
Overall, if you want to preserve the types exactly, disable table auto-creation from the sink connector and pre-create the tables with the types you want.
I just made an SMT that converts all timestamp fields to strings. Hope it helps.
https://github.com/FX-HAO/kafka-connect-debezium-tranforms
You need to make 2 changes:
In the source connector, add "time.precision.mode": "connect"
In the sink connector, add:
"transforms": "TimestampConverter",
"transforms.TimestampConverter.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
"transforms.TimestampConverter.target.type": "Timestamp",
"transforms.TimestampConverter.field": "dob",