Spring Cloud Stream database transaction is not rolled back



I'm trying to write a spring-cloud-stream function (spring-boot-starter-parent 2.5.3, Java 11, spring-cloud-version 2020.0.3) that involves both a Kafka and a Postgres transaction. The function raises a simulated error whenever the consumed message starts with the string "fail", which should cause the database transaction to roll back and then the Kafka transaction to roll back. (I'm aware the Kafka transaction isn't XA, and that's fine.) So far I haven't gotten the database transaction to work, but the Kafka transaction does.

At the moment I'm using the @Transactional annotation, which does not appear to be starting a database transaction. (The Kafka binder documentation suggests synchronizing database + Kafka transactions with a ChainedTransactionManager, whereas the Spring Kafka documentation says that approach is deprecated in favor of the @Transactional annotation, and the Spring Cloud Stream sample for this problem uses the @Transactional annotation with the default transaction manager created by the spring-data-jpa starter (I think).) I can see in my debugger that, whether or not I @EnableTransactionManagement and put @Transactional on my consumer, the consumer executes inside a Kafka transaction started by a transaction template higher up the stack, but I don't see any database transaction.

A few things I'm trying to figure out:

  • Am I correct that the Kafka listener container runs my consumer in the context of a Kafka transaction regardless of the @Transactional annotation? If so, is there a way to run only specific functions in Kafka transactions?
  • Would the above also affect the producers, since (as far as I know) the container has no way to intercept calls to the producer?
  • What should I do to synchronize the Kafka and database transactions so that the database commit happens before the Kafka commit?
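On the last point, the binder documentation's approach (mentioned above) is to give the listener container a transaction manager that chains the Kafka and database transaction managers. A non-runnable configuration sketch, adapted from that idea — note that ChainedKafkaTransactionManager is deprecated in spring-kafka 2.7 and the exact setter names may differ by version, so treat this as an outline rather than a drop-in:

```java
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> txCustomizer(
    BinderFactory binders, DataSourceTransactionManager dbTransactionManager) {
  return (container, destination, group) -> {
    // Reuse the binder's transactional producer factory for the Kafka side.
    ProducerFactory<byte[], byte[]> pf =
        ((KafkaMessageChannelBinder) binders.getBinder(null, MessageChannel.class))
            .getTransactionalProducerFactory();
    KafkaTransactionManager<byte[], byte[]> kafkaTm = new KafkaTransactionManager<>(pf);
    // Chained transactions are started in list order and committed in reverse,
    // so listing Kafka first means the database commits before Kafka does.
    container.getContainerProperties()
        .setTransactionManager(new ChainedKafkaTransactionManager<>(kafkaTm, dbTransactionManager));
  };
}
```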

I have the following CrudRepository, collection of handlers, and application.yml:

@Repository
public interface AuditLogRepository extends CrudRepository<AuditLog, Long> {

  /**
   * Create a new audit log entry if and only if another with the same message does not already
   * exist. This is idempotent.
   */
  @Transactional
  @Modifying
  @Query(
      nativeQuery = true,
      value = "insert into audit_log (message) values (?1) on conflict (message) do nothing")
  void createIfNotExists(String message);
}
@Profile("ft")
@Configuration
@EnableTransactionManagement
public class FaultTolerantHandlers {

  private static final Logger LOGGER = LoggerFactory.getLogger(FaultTolerantHandlers.class);

  @Bean
  public NewTopic inputTopic() {
    return TopicBuilder.name("input").partitions(1).replicas(1).build();
  }

  @Bean
  public NewTopic inputDltTopic() {
    return TopicBuilder.name("input.DLT").partitions(1).build();
  }

  @Bean
  public NewTopic leftTopic() {
    return TopicBuilder.name("left").partitions(1).build();
  }

  @Bean
  public NewTopic rightTopic() {
    return TopicBuilder.name("right").partitions(1).build();
  }

  @Bean
  public ApplicationRunner runner(KafkaTemplate<byte[], byte[]> template) {
    return args -> {
      LOGGER.info("Producing messages to input...");
      template.send("input", "pass-1".getBytes());
      template.send("input", "fail-1".getBytes());
      template.send("input", "pass-2".getBytes());
      template.send("input", "fail-2".getBytes());
      LOGGER.info("Produced input.");
    };
  }

  @Bean
  ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer(
      BinderFactory binders) {
    return (container, dest, group) -> {
      ProducerFactory<byte[], byte[]> pf =
          ((KafkaMessageChannelBinder) binders.getBinder(null, MessageChannel.class))
              .getTransactionalProducerFactory();
      KafkaTemplate<byte[], byte[]> template = new KafkaTemplate<>(requireNonNull(pf));
      container.setAfterRollbackProcessor(
          new DefaultAfterRollbackProcessor<>(
              new DeadLetterPublishingRecoverer(template), new FixedBackOff(2000L, 2L)));
    };
  }

  // Receive messages from `input`.
  // For each input, write an audit log entry to the database.
  // For each input, produce a message to both `left` and `right` atomically.
  // After three failed attempts at the above, shuffle the message
  // off to `input.DLT` and move on.
  @Bean
  @Transactional
  public Consumer<String> persistAndSplit(
      StreamBridge bridge,
      AuditLogRepository repository) {
    return input -> {
      bridge.send("left", ("left-" + input).getBytes());
      repository.createIfNotExists(input);
      if (input.startsWith("fail")) {
        throw new RuntimeException("Simulated error");
      }
      bridge.send("right", ("right-" + input).getBytes());
    };
  }

  @Bean
  public Consumer<Message<String>> logger() {
    return message -> {
      var receivedTopic = message.getHeaders().get(KafkaHeaders.RECEIVED_TOPIC);
      LOGGER.info("Received on topic=" + receivedTopic + " payload=" + message.getPayload());
    };
  }
}
spring:
  cloud:
    stream:
      kafka:
        binder:
          transaction:
            transaction-id-prefix: 'tx-'
          required-acks: all
      bindings:
        persistAndSplit-in-0:
          destination: input
          group: input
        logger-in-0:
          destination: left,right,input.DLT
          group: logger
          consumer:
            properties:
              isolation.level: read_committed
      function:
        definition: persistAndSplit;logger

Thanks!

@Bean
@Transactional
public Consumer<String> persistAndSplit(
    StreamBridge bridge,
    AuditLogRepository repository
) {

In this case, the @Transactional is on the bean definition, which is only executed once during application initialization; to get a runtime transaction, the code inside the lambda needs to be annotated instead, e.g. ...
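The distinction can be demonstrated without Spring: the factory method's body (the only thing a @Bean-level annotation could advise) runs once at startup, while the returned lambda runs once per message. A minimal sketch, with hypothetical counters standing in for transaction advice:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;

// Shows why annotating the @Bean method has no per-message effect: the
// factory method runs once at startup; the lambda it returns runs for
// every message, outside anything wrapped around the factory call.
class BeanMethodDemo {
    static final AtomicInteger factoryCalls = new AtomicInteger();
    static final AtomicInteger messageCalls = new AtomicInteger();

    // Stand-in for the @Bean method; the container invokes it exactly once.
    static Consumer<String> persistAndSplit() {
        factoryCalls.incrementAndGet(); // advice on the @Bean method only covers this
        return input -> messageCalls.incrementAndGet(); // per-message path, unadvised
    }
}
```

Calling `persistAndSplit()` once and then invoking the returned consumer three times leaves `factoryCalls` at 1 and `messageCalls` at 3: the factory-level annotation never sees the per-message calls.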

@Bean
public Consumer<String> persistAndSplit(
    StreamBridge bridge,
    AuditLogRepository repository,
    TxCode code
) {
  return code::run;
}
@Component
class TxCode {

  @Autowired
  AuditLogRepository repository;

  @Autowired
  StreamBridge bridge;

  @Transactional
  void run(String input) {
    bridge.send("left", ("left-" + input).getBytes());
    repository.createIfNotExists(input);
    if (input.startsWith("fail")) {
      throw new RuntimeException("Simulated error");
    }
    bridge.send("right", ("right-" + input).getBytes());
  }
}

(Or you can pass the bridge and repository in instead of autowiring them.)

return str -> code.run(str, repo, bridge);
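A runnable, Spring-free sketch of that variant, where Repo and Bridge are hypothetical stand-ins for AuditLogRepository and StreamBridge so the example compiles without Spring:

```java
// The collaborators are handed to run(...) as parameters instead of being
// autowired into TxCode.
interface Repo {
    void createIfNotExists(String message);
}

interface Bridge {
    void send(String destination, byte[] payload);
}

class TxCode {
    // In the real application this method carries @Transactional.
    void run(String input, Repo repo, Bridge bridge) {
        bridge.send("left", ("left-" + input).getBytes());
        repo.createIfNotExists(input);
        if (input.startsWith("fail")) {
            throw new RuntimeException("Simulated error");
        }
        bridge.send("right", ("right-" + input).getBytes());
    }
}
```

The bean definition then just closes over the collaborators: `return str -> code.run(str, repo, bridge);`. Since run(...) is invoked through the Spring proxy on the TxCode bean, the @Transactional advice applies on every message.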
