Spring Kafka: manually committing offsets from a different thread



Good day. I am using Spring Kafka 2.2.5 and I have a listener:

@KafkaListener(topics = "${kafka.execution-task-topic}", containerFactory = "executionTaskObjectContainerFactory")
public void protocolEventsHandle(ExecutionTask executionTask,
    Acknowledgment acknowledgment,
    @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
    @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
    @Header(KafkaHeaders.OFFSET) long offset) {
    ResponseEntity<String> stringResponseEntity = airflowRestRunner.startDag(executionTask);
    JSONObject body = new JSONObject(stringResponseEntity.getBody());
    String message = body.getString("message");
    String runId = messageParser.getRunId(message);
    ExecutionTaskMessageInfo messageInfo = new ExecutionTaskMessageInfo(offset, partition, false, acknowledgment);
    kafkaAcknowledgeObject.putMessageInfo(messageInfo, partition);
    this.executorService.submit(kafkaAlertProducer.produceMessageAfterTaskSuccess(runId, executionTask, messageInfo));
}

I perform some operations, and if they succeed, I commit the offset using the Acknowledgment interface.

I have a problem. While the computation is still running in the spawned thread, the listener reads the message from the same offset again. Because of this, when I then try to acknowledge the offset, the application crashes.

What is the best practice for working with Kafka acknowledgments in this situation? I can receive up to 10 messages in parallel, and I need to commit them only after the computation completes.

UPDATE 1

I store all messages from Kafka in a map: the key is the partition number, the value is a special model class that holds the Acknowledgment reference I need:

@Data
@NoArgsConstructor
@AllArgsConstructor
public abstract class KafkaAcknowledgeObject<T extends Comparable> {

    protected ConcurrentHashMap<Integer, TreeSet<T>> hashMap = new ConcurrentHashMap<>();

    public abstract void doAck();

    public void putMessageInfo(T message, int partition) {
        if (hashMap.containsKey(partition)) {
            hashMap.get(partition).add(message);
        } else {
            TreeSet<T> messageInfos = new TreeSet<>();
            messageInfos.add(message);
            hashMap.put(partition, messageInfos);
        }
    }
}
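To illustrate why a TreeSet is used here: it keeps the pending acknowledgments ordered by offset, so the lowest unacknowledged offset is always first, no matter in which order messages finish. A minimal, self-contained sketch (SimpleMessageInfo is a hypothetical stand-in for the ExecutionTaskMessageInfo model class):

```java
import java.util.Iterator;
import java.util.TreeSet;
import java.util.concurrent.ConcurrentHashMap;

public class AckOrderingSketch {

    // Hypothetical stand-in for ExecutionTaskMessageInfo, ordered by offset.
    static class SimpleMessageInfo implements Comparable<SimpleMessageInfo> {
        final long offset;
        volatile boolean completed;

        SimpleMessageInfo(long offset) {
            this.offset = offset;
        }

        @Override
        public int compareTo(SimpleMessageInfo other) {
            return Long.compare(this.offset, other.offset);
        }
    }

    public static void main(String[] args) {
        ConcurrentHashMap<Integer, TreeSet<SimpleMessageInfo>> map = new ConcurrentHashMap<>();
        int partition = 0;
        // Messages may arrive or complete out of order...
        map.computeIfAbsent(partition, p -> new TreeSet<>()).add(new SimpleMessageInfo(7));
        map.computeIfAbsent(partition, p -> new TreeSet<>()).add(new SimpleMessageInfo(5));
        map.computeIfAbsent(partition, p -> new TreeSet<>()).add(new SimpleMessageInfo(6));

        // ...but the TreeSet always iterates in offset order.
        Iterator<SimpleMessageInfo> it = map.get(partition).iterator();
        System.out.println(it.next().offset); // prints 5, the lowest pending offset
    }
}
```

Note that computeIfAbsent is a more compact alternative to the containsKey/put branch in putMessageInfo above; both are correct.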

After the computation I call doAck(), for example:

@Override
public void doAck() {
    for (TreeSet<ExecutionTaskMessageInfo> messageInfoTreeSet : super.hashMap.values()) {
        checkHandledOffsets(messageInfoTreeSet);
    }
}

private void checkHandledOffsets(TreeSet<ExecutionTaskMessageInfo> messageInfoTreeSet) {
    if (messageInfoTreeSet.isEmpty()) {
        return; // guard: avoids NoSuchElementException when all messages are acknowledged
    }
    ExecutionTaskMessageInfo first = getFirstMessageInfo(messageInfoTreeSet);
    if (first.isCompleted()) {
        first.getAcknowledgment().acknowledge();
        messageInfoTreeSet.remove(first);
        checkHandledOffsets(messageInfoTreeSet);
    }
}

private ExecutionTaskMessageInfo getFirstMessageInfo(TreeSet<ExecutionTaskMessageInfo> messageInfoTreeSet) {
    Iterator<ExecutionTaskMessageInfo> iterator = messageInfoTreeSet.iterator();
    return iterator.next();
}
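The recursive offset check above can also be written as a loop, which avoids growing the call stack when many consecutive messages complete at once. A self-contained sketch, using a hypothetical MessageInfo stand-in (the real ExecutionTaskMessageInfo holds a Spring Kafka Acknowledgment; a Runnable is substituted here so the example runs without Kafka):

```java
import java.util.TreeSet;

public class IterativeAckSketch {

    // Minimal stand-in for ExecutionTaskMessageInfo (hypothetical: the real
    // class wraps the Spring Kafka Acknowledgment instead of a Runnable).
    static class MessageInfo implements Comparable<MessageInfo> {
        final long offset;
        volatile boolean completed;
        final Runnable acknowledgment;

        MessageInfo(long offset, boolean completed, Runnable acknowledgment) {
            this.offset = offset;
            this.completed = completed;
            this.acknowledgment = acknowledgment;
        }

        @Override
        public int compareTo(MessageInfo other) {
            return Long.compare(this.offset, other.offset);
        }
    }

    // Iterative equivalent of the recursive checkHandledOffsets: acknowledge
    // and remove completed messages from the head of the set until the first
    // incomplete one is reached.
    static void checkHandledOffsets(TreeSet<MessageInfo> set) {
        while (!set.isEmpty() && set.first().completed) {
            MessageInfo first = set.pollFirst(); // removes the lowest offset
            first.acknowledgment.run();
        }
    }

    public static void main(String[] args) {
        TreeSet<MessageInfo> set = new TreeSet<>();
        set.add(new MessageInfo(5, true, () -> System.out.println("ack 5")));
        set.add(new MessageInfo(6, true, () -> System.out.println("ack 6")));
        set.add(new MessageInfo(7, false, () -> System.out.println("ack 7")));
        checkHandledOffsets(set);
        System.out.println("pending: " + set.size()); // only offset 7 remains
    }
}
```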

What you are doing should be fine; I just tested a similar arrangement and it works fine for me...

@SpringBootApplication
public class So56190029Application {
    public static void main(String[] args) {
        SpringApplication.run(So56190029Application.class, args);
    }
    private final ExecutorService exec = Executors.newSingleThreadExecutor();
    private final AtomicInteger count = new AtomicInteger();
    @KafkaListener(id = "so56190029", topics = "so56190029")
    public void listen(String in, Acknowledgment ack) {
        this.exec.execute(runner(in, ack));
    }
    private Runnable runner(String payload, Acknowledgment ack) {
        return () -> {
            System.out.println(payload);
            if (this.count.incrementAndGet() % 3 == 0) {
                System.out.println("acking");
                ack.acknowledge();
            }
        };
    }
    @Bean
    public ApplicationRunner runner(KafkaTemplate<?, String> template) {
        return args -> IntStream.range(0, 6).forEach(i -> template.send("so56190029", "foo" + i));
    }
    @Bean
    public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
            ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
            ConsumerFactory<Object, Object> kafkaConsumerFactory) {
        ConcurrentKafkaListenerContainerFactory<Object, Object> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        configurer.configure(factory, kafkaConsumerFactory);
        factory.getContainerProperties().setCommitLogLevel(Level.INFO);
        return factory;
    }
}

spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.properties.max.poll.records=3
spring.kafka.listener.ack-mode=MANUAL

foo0
foo1
foo2
acking
foo3
foo4
foo5
acking
2019-05-17 14:46:28.790  INFO 62429 --- [o56190029-0-C-1] essageListenerContainer$ListenerConsumer 
    : Committing: {so56190029-0=OffsetAndMetadata{offset=36, leaderEpoch=null, metadata=''}}

LATEST UPDATE