I have this configuration:
@Configuration
public class KafkaTopicConfig {

    private final TopicProperties topics;

    public KafkaTopicConfig(TopicProperties topics) {
        this.topics = topics;
    }

    @Bean
    public NewTopic newTopicImportCharge() {
        TopicProperties.Topic topic = topics.getTopicNameByType(MessageType.IMPORT_CHARGES.name());
        return new NewTopic(topic.getTopicName(), topic.getNumPartitions(), topic.getReplicationFactor());
    }

    @Bean
    public NewTopic newTopicImportPayment() {
        TopicProperties.Topic topic = topics.getTopicNameByType(MessageType.IMPORT_PAYMENTS.name());
        return new NewTopic(topic.getTopicName(), topic.getNumPartitions(), topic.getReplicationFactor());
    }

    @Bean
    public NewTopic newTopicImportCatalog() {
        TopicProperties.Topic topic = topics.getTopicNameByType(MessageType.IMPORT_CATALOGS.name());
        return new NewTopic(topic.getTopicName(), topic.getNumPartitions(), topic.getReplicationFactor());
    }
}
I may add 10 different topics to TopicProperties, and I don't want to create each of these near-identical beans by hand. Is there some way to create all the topics at once, either in spring-kafka or in plain Spring?
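For context, a minimal sketch of what the TopicProperties class used above might look like; the @ConfigurationProperties prefix, the map keyed by message type, and the accessor names are assumptions, since the original post does not show this class:

// Hypothetical sketch of the TopicProperties referenced above (not from the original post).
@ConfigurationProperties(prefix = "app.kafka")
public class TopicProperties {

    // keyed by MessageType name, e.g. IMPORT_CHARGES -> topic settings (assumed layout)
    private Map<String, Topic> topics = new HashMap<>();

    public Map<String, Topic> getTopics() { return this.topics; }

    public Topic getTopicNameByType(String type) { return this.topics.get(type); }

    public static class Topic {
        private String topicName;
        private int numPartitions;
        private short replicationFactor;

        public String getTopicName() { return this.topicName; }
        public void setTopicName(String topicName) { this.topicName = topicName; }
        public int getNumPartitions() { return this.numPartitions; }
        public void setNumPartitions(int numPartitions) { this.numPartitions = numPartitions; }
        public short getReplicationFactor() { return this.replicationFactor; }
        public void setReplicationFactor(short replicationFactor) { this.replicationFactor = replicationFactor; }
    }
}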
Use the AdminClient directly; you can get a pre-built configuration property map from Boot's KafkaAdmin.
@SpringBootApplication
public class So55336461Application {

    public static void main(String[] args) {
        SpringApplication.run(So55336461Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(KafkaAdmin kafkaAdmin) {
        return args -> {
            // Reuse Boot's KafkaAdmin configuration to create a raw AdminClient
            try (AdminClient admin = AdminClient.create(kafkaAdmin.getConfigurationProperties())) {
                List<NewTopic> topics = new ArrayList<>();
                // build list
                admin.createTopics(topics).all().get();
            }
        };
    }
}
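The "// build list" step is where the entries from TopicProperties would be mapped to NewTopic instances. A possible sketch, assuming TopicProperties exposes its entries through a getTopics() map accessor (an assumption, not shown in the original post):

// Hypothetical: map every configured topic to a NewTopic in one pass.
List<NewTopic> topics = topicProperties.getTopics().values().stream()
        .map(t -> new NewTopic(t.getTopicName(), t.getNumPartitions(), t.getReplicationFactor()))
        .collect(Collectors.toList());
admin.createTopics(topics).all().get();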
EDIT
To check whether the topics already exist, or whether their partition count needs to be increased, KafkaAdmin has the following logic...
private void addTopicsIfNeeded(AdminClient adminClient, Collection<NewTopic> topics) {
    if (topics.size() > 0) {
        Map<String, NewTopic> topicNameToTopic = new HashMap<>();
        topics.forEach(t -> topicNameToTopic.compute(t.name(), (k, v) -> t));
        DescribeTopicsResult topicInfo = adminClient
                .describeTopics(topics.stream()
                        .map(NewTopic::name)
                        .collect(Collectors.toList()));
        List<NewTopic> topicsToAdd = new ArrayList<>();
        Map<String, NewPartitions> topicsToModify = checkPartitions(topicNameToTopic, topicInfo, topicsToAdd);
        if (topicsToAdd.size() > 0) {
            addTopics(adminClient, topicsToAdd);
        }
        if (topicsToModify.size() > 0) {
            modifyTopics(adminClient, topicsToModify);
        }
    }
}

private Map<String, NewPartitions> checkPartitions(Map<String, NewTopic> topicNameToTopic,
        DescribeTopicsResult topicInfo, List<NewTopic> topicsToAdd) {

    Map<String, NewPartitions> topicsToModify = new HashMap<>();
    topicInfo.values().forEach((n, f) -> {
        NewTopic topic = topicNameToTopic.get(n);
        try {
            TopicDescription topicDescription = f.get(this.operationTimeout, TimeUnit.SECONDS);
            if (topic.numPartitions() < topicDescription.partitions().size()) {
                if (LOGGER.isInfoEnabled()) {
                    LOGGER.info(String.format(
                        "Topic '%s' exists but has a different partition count: %d not %d", n,
                        topicDescription.partitions().size(), topic.numPartitions()));
                }
            }
            else if (topic.numPartitions() > topicDescription.partitions().size()) {
                if (LOGGER.isInfoEnabled()) {
                    LOGGER.info(String.format(
                        "Topic '%s' exists but has a different partition count: %d not %d, increasing "
                        + "if the broker supports it", n,
                        topicDescription.partitions().size(), topic.numPartitions()));
                }
                topicsToModify.put(n, NewPartitions.increaseTo(topic.numPartitions()));
            }
        }
        catch (@SuppressWarnings("unused") InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        catch (TimeoutException e) {
            throw new KafkaException("Timed out waiting to get existing topics", e);
        }
        catch (@SuppressWarnings("unused") ExecutionException e) {
            topicsToAdd.add(topic);
        }
    });
    return topicsToModify;
}
Nowadays we can simply use KafkaAdmin.NewTopics (see the Spring documentation).
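A minimal sketch of that approach (it requires spring-kafka 2.7 or later); the topic names and settings below are placeholders, not values from the original post:

@Bean
public KafkaAdmin.NewTopics allTopics() {
    // Declare several topics in a single bean; KafkaAdmin creates them on startup.
    return new KafkaAdmin.NewTopics(
            TopicBuilder.name("import-charges").partitions(3).replicas(1).build(),
            TopicBuilder.name("import-payments").partitions(3).replicas(1).build(),
            TopicBuilder.name("import-catalogs").partitions(3).replicas(1).build());
}

Since the NewTopics constructor is varargs, the NewTopic array could also be built dynamically from something like TopicProperties instead of being hard-coded.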