Cannot restore checkpoint state with the Flink State Processor API



The main program consumes Kafka events and runs them through filter -> map -> keyBy -> CEP -> sink. I wrote a separate, simple program to read the checkpoint directory, as shown below:

object StateReader extends App {
  val path = "file://...."
  val env = ExecutionEnvironment.getExecutionEnvironment

  // Load the checkpoint metadata and read the keyed state of the operator with uid "cep"
  val chk = Savepoint.load(env.getJavaEnv, path, new FsStateBackend(path))
  val ds = chk.readKeyedState(
    "cep",
    new CepOperatorReadFunction,
    TypeInformation.of(classOf[KEY]),
    TypeInformation.of(classOf[VALUE]))

  println(ds.count())
}

class CepOperatorReadFunction extends KeyedStateReaderFunction[KEY, VALUE] {
  override def open(parameters: Configuration): Unit = {
  }

  override def readKey(k: KEY, context: KeyedStateReaderFunction.Context, collector: Collector[VALUE]): Unit = {
  }
}

However, I get the following exception:

Caused by: java.lang.IllegalStateException: Unexpected state handle type, expected: class org.apache.flink.runtime.state.KeyGroupsStateHandle, but found: class org.apache.flink.runtime.state.IncrementalRemoteKeyedStateHandle
at org.apache.flink.runtime.state.heap.HeapRestoreOperation.restore(HeapRestoreOperation.java:120)
at org.apache.flink.runtime.state.heap.HeapKeyedStateBackendBuilder.build(HeapKeyedStateBackendBuilder.java:114)
... 13 more

Here are some of the relevant settings from flink-conf.yaml:

state.backend: rocksdb
state.checkpoints.dir: hdfs:///.../checkpoints
state.savepoints.dir: hdfs:///.../savepoints
state.backend.incremental: true
state.backend.rocksdb.memory.write-buffer-ratio: 0.6
state.backend.rocksdb.localdir: /var/lib/.../rocksdb
execution.checkpointing.interval: 900000
execution.checkpointing.timeout: 600000
execution.checkpointing.unaligned: true
execution.checkpointing.mode: EXACTLY_ONCE
execution.checkpointing.max-concurrent-checkpoints: 1
execution.checkpointing.min-pause: 0

Do you know why this exception occurs and how to fix it?

Thanks

There is no out-of-the-box support for easily reading the CEP operator's state. To implement a KeyedStateReaderFunction for it, you have to dig into the CEP implementation, find the ValueState and MapState it uses, and write a reader that registers the same state descriptors.
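
As a rough sketch of what such a reader might look like (not verified against CEP's real descriptors): the class below registers a MapState in open() and dumps its contents in readKey(). The state name "elementQueueState", the Event case class, the String key type, and the ListTypeInfo value type are all placeholders I introduced; the actual names, types, and serializers have to be copied from the CepOperator source of the Flink version you are running, otherwise the reader will simply see empty state.

import java.util

import org.apache.flink.api.common.state.{MapState, MapStateDescriptor}
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.api.java.typeutils.ListTypeInfo
import org.apache.flink.configuration.Configuration
import org.apache.flink.state.api.functions.KeyedStateReaderFunction
import org.apache.flink.util.Collector

// Hypothetical stand-in for whatever event type flows into the CEP operator.
case class Event(id: String, payload: String)

class CepElementQueueReader extends KeyedStateReaderFunction[String, Event] {

  // ASSUMPTION: the name and types must match the descriptor the CepOperator
  // registered; "elementQueueState" and MapState[java.lang.Long, java.util.List[Event]]
  // are placeholders to be verified against the CepOperator source.
  @transient private var elementQueue: MapState[java.lang.Long, util.List[Event]] = _

  override def open(parameters: Configuration): Unit = {
    val descriptor = new MapStateDescriptor[java.lang.Long, util.List[Event]](
      "elementQueueState",
      TypeInformation.of(classOf[java.lang.Long]),
      new ListTypeInfo[Event](classOf[Event]))
    elementQueue = getRuntimeContext.getMapState(descriptor)
  }

  override def readKey(key: String,
                       ctx: KeyedStateReaderFunction.Context,
                       out: Collector[Event]): Unit = {
    // Emit every event buffered under this key, one timestamp entry at a time.
    val entries = elementQueue.iterator()
    while (entries.hasNext) {
      val buffered = entries.next().getValue
      var i = 0
      while (i < buffered.size()) {
        out.collect(buffered.get(i))
        i += 1
      }
    }
  }
}

Such a class would then be passed to chk.readKeyedState("cep", ...) in place of CepOperatorReadFunction above, with key and value TypeInformation adjusted to match whatever descriptors you copied from the CEP sources.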
