RocksDB exception in Kafka Streams



In a simple Kafka Streams application, the following code works without throwing any errors:

KTable<String, Long> result = source
    .mapValues(textLine -> textLine.toLowerCase())
    .flatMapValues(lowercasedTextLine -> Arrays.asList(lowercasedTextLine.split(" ")))
    .selectKey((ignoredKey, word) -> word)
    .groupByKey()
    .count("Counts");
result.to(Serdes.String(), Serdes.Long(), "wc-output");

However, when I use the following code instead, I get an error:

KStream<String, String> source = builder.stream("wc-input");
source.groupBy((key, word) -> word)
    .windowedBy(TimeWindows.of(TimeUnit.SECONDS.toMillis(5000)))
    .count()
    .toStream()
    .map((key, value) -> new KeyValue<>(key.key(), value))
    .to("wc-output", Produced.with(Serdes.String(), Serdes.Long()));

Exception in thread "streams-wordcount-b160d715-f0e0-42ee-831e-0e4eed7e9424-StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Exception caught in process. taskId=1_0, processor=KSTREAM-SOURCE-0000000006, topic=streams-wordcount-kstream-aggregate-state-store-0000000002-repartition, partition=0, offset=0
    at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:232)
    at org.apache.kafka.streams.processor.internals.AssignedTasks.process(AssignedTasks.java:403)
    at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:317)
    at org.apache.kafka.streams.processor.internals.StreamThread.processAndMaybeCommit(StreamThread.java:942)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:822)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error opening store KSTREAM-AGGREGATE-STATE-STORE-0000000002:1553472000000 at location \tmp\kafka-streams\streams-wordcount\1_0\KSTREAM-AGGREGATE-STATE-STORE-0000000002\KSTREAM-AGGREGATE-STATE-STORE-0000000002:1553472000000
    at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:204)
    at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:174)
    at org.apache.kafka.streams.state.internals.Segment.openDB(Segment.java:40)
    at org.apache.kafka.streams.state.internals.Segments.getOrCreateSegment(Segments.java:89)
    at org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStore.put(RocksDBSegmentedBytesStore.java:81)
    at org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:43)
    at org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:34)
    at org.apache.kafka.streams.state.internals.ChangeLoggingWindowBytesStore.put(ChangeLoggingWindowBytesStore.java:67)
    at org.apache.kafka.streams.state.internals.ChangeLoggingWindowBytesStore.put(ChangeLoggingWindowBytesStore.java:33)
    at org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:100)
    at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:141)
    at org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:232)
    at org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:245)
    at org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:153)
    at org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:157)
    at org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:36)
    at org.apache.kafka.streams.state.internals.MeteredWindowStore.put(MeteredWindowStore.java:96)
    at org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:122)
    at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
    at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:85)
    at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:80)
    at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:216)
    ... 6 more
Caused by: org.rocksdb.RocksDBException: Failed to create dir: H:\tmp\kafka-streams\streams-wordcount\1_0\KSTREAM-AGGREGATE-STATE-STORE-0000000002\KSTREAM-AGGREGATE-STATE-STORE-0000000002:1553472000000: Invalid argument
    at org.rocksdb.RocksDB.open(Native Method)
    at org.rocksdb.RocksDB.open(RocksDB.java:231)
    at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:197)

When you use a windowed aggregation, the state store is named differently, and Kafka 1.0.0 has a bug that affects the Windows operating system: the name of the windowed store's on-disk segments contains a ':' character, which is not allowed in Windows paths. The bug was fixed in versions 1.0.1 and 1.1.0.

See https://issues.apache.org/jira/browse/KAFKA-6167
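
For reference, below is a minimal sketch of the windowed word count wired up end to end, assuming Kafka Streams 1.0.1 or later, where the segment names no longer contain ':' and the store can be opened on Windows. The wc-input and wc-output topics come from the question; the class name WindowedWordCount, the localhost:9092 bootstrap server, and the 5-second window are illustrative assumptions (the question's TimeUnit.SECONDS.toMillis(5000) is actually 5000 seconds).

import java.util.Properties;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.TimeWindows;

public class WindowedWordCount {

    public static void main(String[] args) {
        Properties props = new Properties();
        // The application id also names the state directory under state.dir.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption: local broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("wc-input");

        source.groupBy((key, word) -> word)
              // 5-second tumbling window (illustrative; pick the size you actually need)
              .windowedBy(TimeWindows.of(TimeUnit.SECONDS.toMillis(5)))
              .count()
              .toStream()
              // Drop the window part of the key and keep only the word
              .map((windowedKey, count) -> new KeyValue<>(windowedKey.key(), count))
              .to("wc-output", Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // Close the Streams instance cleanly on shutdown
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}

Note that nothing in the application controls the ':' in the segment name, so on Kafka Streams 1.0.0 the practical fix on Windows is to upgrade to 1.0.1 or 1.1.0.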
