Not enough replicas available for query at consistency LOCAL_ONE (1 required but only 0 alive)

I am running the Spark-Cassandra-Connector and hitting a strange issue. I launch the spark shell as:

bin/spark-shell --packages datastax:spark-cassandra-connector:2.0.0-M2-s_2.11

Then I run the following commands:

import com.datastax.spark.connector._
val rdd = sc.cassandraTable("test_spark", "test")
println(rdd.first)
# CassandraRow{id: 2, name: john, age: 29}

The problem is that the following command throws an error:

rdd.take(1).foreach(println)
# CassandraRow{id: 2, name: john, age: 29}
rdd.take(2).foreach(println)
# Caused by: com.datastax.driver.core.exceptions.UnavailableException: Not enough replicas available for query at consistency LOCAL_ONE (1 required but only 0 alive)
# at com.datastax.driver.core.exceptions.UnavailableException.copy(UnavailableException.java:128)
# at com.datastax.driver.core.Responses$Error.asException(Responses.java:114)
# at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:467)
# at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1012)
# at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:935)
# at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)

And the following command simply hangs:

println(rdd.count)

My Cassandra keyspace appears to have the correct replication factor:

describe test_spark;
CREATE KEYSPACE test_spark WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'}  AND durable_writes = true;

How can I resolve both of these errors?

My guess is that you are hitting an issue with SimpleStrategy in a multi-DC setup while using LOCAL_ONE consistency (the Spark connector's default). The driver will look for a node in the local DC to send the request to, but there is a chance that all of the replicas live in a different DC, in which case the consistency requirement cannot be met. (CASSANDRA-12053)
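A quick way to confirm this is to look at how the nodes are spread across datacenters and to reproduce the read from cqlsh at the same consistency level (a sketch, assuming nodetool and cqlsh point at the same cluster as the test_spark.test table above):

# With SimpleStrategy, replica placement ignores datacenter boundaries,
# so all replicas of a token range may land outside the local DC.
nodetool status test_spark

# Reproduce the failing read at the connector's default consistency:
cqlsh> CONSISTENCY LOCAL_ONE;
cqlsh> SELECT * FROM test_spark.test;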

I think changing the consistency level (input.consistency.level) to ONE will fix it; see the sketch below. You should also seriously consider using NetworkTopologyStrategy instead of SimpleStrategy.
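For example, the read consistency can be overridden when launching the shell (a minimal sketch; spark.cassandra.input.consistency.level is the connector property that controls read consistency):

bin/spark-shell --packages datastax:spark-cassandra-connector:2.0.0-M2-s_2.11 \
  --conf spark.cassandra.input.consistency.level=ONE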

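Switching the keyspace to NetworkTopologyStrategy would look roughly like this ('dc1' is a placeholder; substitute the datacenter name reported by nodetool status, then run nodetool repair so existing data is streamed to the new replica placement):

ALTER KEYSPACE test_spark
  WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};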