Connecting the MongoDB sink connector to Kafka



I am trying to connect MongoDB and Kafka with the sink connector, because I want to write data from Kafka into MongoDB. I added the MongoDB connector jar file to the libs folder and edited the connect-standalone-demo.properties file as follows:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# These are defaults. This file just demonstrates how to override some settings.
bootstrap.servers=localhost:9092
# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
#value.converter=org.apache.kafka.connect.json.JsonConverter
#key.converter=org.apache.kafka.connect.json.JsonConverter
# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply
# it to
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
# Flush much faster than normal, which is useful for testing/debugging
offset.flush.interval.ms=10000
# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
# (connectors, converters, transformations). The list should consist of top level directories that include 
# any combination of: 
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
# Note: symlinks will be followed to discover dependencies or plugins.
# Examples: 
# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
# plugin.path=/home/adminacl/Kafka/kafka_2.13-3.1.0/libs

I have created the file file-sink-standalone.properties, which contains the configuration with the database details.

curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
"name": "mongo-sink",
"config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "tasks.max": "1",
    "topics": "departments",
    "connection.uri": "mongodb://localhost:27017",
    "database": "hrmdb",
    "collection": "departments",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": false,
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": false
}
}'
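
For reference, connect-standalone.sh expects the connector configuration as a Java properties file rather than JSON, so the same settings in key=value form would be (a sketch; the name value is illustrative):

name=mongo-sink
connector.class=com.mongodb.kafka.connect.MongoSinkConnector
tasks.max=1
topics=departments
connection.uri=mongodb://localhost:27017
database=hrmdb
collection=departments
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false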

I am running the connector using the following CLI command:

bin/connect-standalone.sh config/connect-standalone-demo.properties config/file-sink-standalone.properties 

I am getting the following error:

ERROR Failed to create job for config/file-sink-standalone.properties (org.apache.kafka.connect.cli.ConnectStandalone:107)
[2022-07-27 17:04:20,424] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:117)

You must uncomment the plugin.path property in connect-standalone-demo.properties (the worker configuration).
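
For example, using the path that is already present (commented out) at the end of the worker file above:

plugin.path=/home/adminacl/Kafka/kafka_2.13-3.1.0/libs

Restart the standalone worker after the change so the path is rescanned.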

In my environment I only have plugin.path=/usr/share/java, and that works for most sink connectors; Connect picks the right plugin purely from the connector.class value in the connector properties.
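
For illustration, a hypothetical layout that this setting would pick up (the subdirectory name and jar version are made up; Connect scans each top-level directory on plugin.path for plugin jars):

/usr/share/java/
    mongodb-kafka-connect/
        mongo-kafka-connect-1.7.0-all.jar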

Read https://docs.confluent.io/home/connect/self-managed/userguide.html#installing-kconnect-plugins

connector.class=com.mongodb.kafka.connect.MongoSinkConnector
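
Once plugin.path is set and the worker restarted, you can verify that the plugin was discovered by querying the Connect REST interface (assuming the worker's REST port is the default 8083):

curl http://localhost:8083/connector-plugins

If com.mongodb.kafka.connect.MongoSinkConnector shows up in the list, Connect can load the class named by connector.class, whose relevant source is: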

public class MongoSinkConnector extends SinkConnector {
    private Map<String, String> settings;

    @Override
    public String version() {
        return VersionUtil.getVersion();
    }

    @Override
    public void start(final Map<String, String> map) {
        // Keep the connector-level config so it can be handed to tasks.
        settings = map;
    }

    @Override
    public Class<? extends Task> taskClass() {
        return MongoSinkTask.class;
    }

    @Override
    public List<Map<String, String>> taskConfigs(final int maxTasks) {
        // A single config entry: one task is created regardless of tasks.max.
        return singletonList(settings);
    }

    @Override
    public void stop() {
    }

    @Override
    public ConfigDef config() {
        return MongoSinkConfig.CONFIG;
    }
}
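
As a quick end-to-end check once the connector is running, you can confirm that records arrive in the target collection; a sketch using mongosh with the database and collection names from the question:

mongosh --quiet --eval 'db.getSiblingDB("hrmdb").departments.find().limit(5)'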
