Kafka integration in Apache Heron



I am trying to integrate Kafka with a Heron topology, but I cannot find any examples for the latest version of Heron (0.17.5). Are there any examples that can be shared, or any suggestions on how to implement a custom Kafka spout and Kafka bolt?

Edit 1:

I believe KafkaSpout and KafkaBolt were deliberately deprecated in Heron to make way for the new Streamlet API. I am currently looking into whether I can build a KafkaSource and a KafkaSink using the Streamlet API. However, when I try to create a KafkaConsumer inside the source, I get the following exception.

Caused by: java.io.NotSerializableException: org.apache.kafka.clients.consumer.KafkaConsumer
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
at com.twitter.heron.api.utils.Utils.serialize(Utils.java:97)

Edit 2:

Fixed the issue above. I was initializing the KafkaConsumer in the constructor, which was wrong. Initializing it in the setup() method instead fixed it.
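The failure and the fix can be reproduced with plain java.io, without Kafka at all. In this sketch a stand-in resource class plays the role of KafkaConsumer (the class names are illustrative, not Heron or Kafka APIs): creating the resource eagerly in the constructor puts it in the serialized object graph and serialization fails, while declaring the field transient and creating it in a setup()-style hook keeps the enclosing object serializable.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Stand-in for any non-serializable resource, such as KafkaConsumer.
class NonSerializableResource {
}

// Wrong: the resource is created in the constructor, so it becomes part of
// the serialized object graph and writeObject() fails.
class EagerSource implements Serializable {
    private final NonSerializableResource resource = new NonSerializableResource();
}

// Right: the field is transient and the resource is created lazily in a
// setup() hook, which Heron calls after shipping the instance to the worker.
class LazySource implements Serializable {
    private transient NonSerializableResource resource;

    void setup() {
        resource = new NonSerializableResource();
    }
}

public class SerializationDemo {
    // Returns true if the object survives Java serialization.
    static boolean serializes(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (NotSerializableException e) {
            return false;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(serializes(new EagerSource()));  // false

        LazySource lazy = new LazySource();
        lazy.setup();
        System.out.println(serializes(lazy));               // true
    }
}
```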

I managed to get this working with the Streamlet API for Heron. I am posting it here in the hope that it helps someone else facing the same problem.

KafkaSource

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.Properties;
import java.util.logging.Logger;

import com.twitter.heron.streamlet.Context;
import com.twitter.heron.streamlet.Source;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KafkaSource implements Source&lt;String&gt; {

    private static final Logger LOGGER = Logger.getLogger("KafkaSource");

    private String streamName;
    // Created in setup(), not in the constructor, so the source itself
    // stays serializable (see Edit 2 above).
    private transient Consumer&lt;String, String&gt; kafkaConsumer;
    private List&lt;String&gt; kafkaTopic;

    @Override
    public void setup(Context context) {
        this.streamName = context.getStreamName();
        kafkaTopic = Arrays.asList(KafkaProperties.KAFKA_TOPIC);
        Properties props = new Properties();
        props.put("bootstrap.servers", KafkaProperties.BOOTSTRAP_SERVERS);
        props.put("group.id", KafkaProperties.CONSUMER_GROUP_ID);
        props.put("enable.auto.commit", KafkaProperties.ENABLE_AUTO_COMMIT);
        props.put("auto.commit.interval.ms", KafkaProperties.AUTO_COMMIT_INTERVAL_MS);
        props.put("session.timeout.ms", KafkaProperties.SESSION_TIMEOUT);
        props.put("key.deserializer", KafkaProperties.KEY_DESERIALIZER);
        props.put("value.deserializer", KafkaProperties.VALUE_DESERIALIZER);
        props.put("auto.offset.reset", KafkaProperties.AUTO_OFFSET_RESET);
        props.put("max.poll.records", KafkaProperties.MAX_POLL_RECORDS);
        props.put("max.poll.interval.ms", KafkaProperties.MAX_POLL_INTERVAL_MS);
        this.kafkaConsumer = new KafkaConsumer&lt;&gt;(props);
        kafkaConsumer.subscribe(kafkaTopic);
    }

    @Override
    public Collection&lt;String&gt; get() {
        // Block until records arrive, then hand the values to the
        // Streamlet runtime as a batch.
        List&lt;String&gt; kafkaRecords = new ArrayList&lt;&gt;();
        ConsumerRecords&lt;String, String&gt; records = kafkaConsumer.poll(Long.MAX_VALUE);
        for (ConsumerRecord&lt;String, String&gt; record : records) {
            kafkaRecords.add(record.value());
        }
        return kafkaRecords;
    }

    @Override
    public void cleanup() {
        // wakeup() is the only thread-safe KafkaConsumer method; it
        // interrupts a poll() that may be blocked on another thread.
        kafkaConsumer.wakeup();
    }
}
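For completeness, a sink can follow the same pattern. The sketch below is a hedged companion to the source above, not a canonical implementation: it assumes the same KafkaProperties holder class from the source, the Heron 0.17.5 `com.twitter.heron.streamlet.Sink` interface (setup/put/cleanup), and string keys and values; the producer settings shown are illustrative.

```java
import java.util.Properties;

import com.twitter.heron.streamlet.Context;
import com.twitter.heron.streamlet.Sink;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaSink implements Sink<String> {

    // As with the source: transient, and created in setup() rather than
    // the constructor, so the sink itself stays serializable.
    private transient Producer<String, String> kafkaProducer;

    @Override
    public void setup(Context context) {
        Properties props = new Properties();
        props.put("bootstrap.servers", KafkaProperties.BOOTSTRAP_SERVERS);
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        kafkaProducer = new KafkaProducer<>(props);
    }

    @Override
    public void put(String tuple) {
        // Forward each tuple from the streamlet to the Kafka topic.
        kafkaProducer.send(new ProducerRecord<>(KafkaProperties.KAFKA_TOPIC, tuple));
    }

    @Override
    public void cleanup() {
        kafkaProducer.close();
    }
}
```

Wiring the two together would then look something like `Builder.newBuilder()` followed by `builder.newSource(new KafkaSource()).toSink(new KafkaSink())` and a `Runner` to submit the topology, assuming the 0.17.5 Streamlet Builder/Runner API.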
