Slow timeout 500 msec/cross-node warning



I have a three-node Cassandra cluster.

When I request a large amount of data from a Java client, I get the following warnings on the server side:

 WARN SELECT * FROM [...] time 789 msec - slow timeout 500 msec/cross-node
 WARN SELECT * FROM [...] time 947 msec - slow timeout 500 msec/cross-node
 WARN SELECT * FROM [...] time 1027 msec - slow timeout 500 msec/cross-node
 WARN SELECT * FROM [...] time 819 msec - slow timeout 500 msec/cross-node

On the client side, I eventually end up with the following exception:

 java.util.concurrent.ExecutionException:
 com.datastax.driver.core.exceptions.TransportException:
 [/x.y.z.a:9042] Connection has been closed

My server configuration is as follows:

 # How long the coordinator should wait for read operations to complete
 read_request_timeout_in_ms: 5000
 # How long the coordinator should wait for seq or index scans to complete
 range_request_timeout_in_ms: 10000
 # How long the coordinator should wait for writes to complete
 write_request_timeout_in_ms: 2000
 # How long the coordinator should wait for counter writes to complete
 counter_write_request_timeout_in_ms: 5000
 # How long a coordinator should continue to retry a CAS operation
 # that contends with other proposals for the same row
 cas_contention_timeout_in_ms: 1000
 # How long the coordinator should wait for truncates to complete
 # (This can be much longer, because unless auto_snapshot is disabled
 # we need to flush first so we can snapshot before removing the data.)
 truncate_request_timeout_in_ms: 60000
 # The default timeout for other, miscellaneous operations
 request_timeout_in_ms: 10000

I haven't found any reference to this "500 msec" timeout. So how can I tune it? And is there any option to avoid ending up with an exception when querying a large number of partitions / a large amount of data?

As a side note, I retrieve the data asynchronously using futures:

 import com.datastax.driver.core.ResultSetFuture;
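
A simplified sketch of that retrieval pattern (contact point, keyspace, and table names below are placeholders, not the real schema):

 import com.datastax.driver.core.Cluster;
 import com.datastax.driver.core.ResultSet;
 import com.datastax.driver.core.ResultSetFuture;
 import com.datastax.driver.core.Row;
 import com.datastax.driver.core.Session;

 public class AsyncReadExample {
     public static void main(String[] args) throws Exception {
         Cluster cluster = Cluster.builder().addContactPoint("x.y.z.a").build();
         Session session = cluster.connect("my_keyspace");   // placeholder keyspace

         // Fire the query asynchronously; blocking on the future later is what
         // surfaces failures as java.util.concurrent.ExecutionException.
         ResultSetFuture future = session.executeAsync("SELECT * FROM my_table");
         ResultSet rs = future.get();
         for (Row row : rs) {
             System.out.println(row);
         }

         cluster.close();
     }
 }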

slow_query_log_timeout_in_ms defaults to 500; it is not an actual timeout, just a notification/logging threshold. You can raise it in your cassandra.yaml if you want a higher value.
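
For example, in cassandra.yaml (the value below is just an example threshold, not a recommendation):

 # Log queries that take longer than this; it does not abort them
 slow_query_log_timeout_in_ms: 2000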

That said, 500 ms is slow and may indicate a problem in your environment or with your queries. If it only happens occasionally, though, it may just be periodic GC pauses, which can be mitigated with client-side speculative retries.
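
For the speculative-retry route, a sketch with the DataStax Java driver 3.x might look like the following (the delay and attempt count are example values; contact point, keyspace, and table names are placeholders, and speculative executions only apply to statements marked idempotent):

 import com.datastax.driver.core.Cluster;
 import com.datastax.driver.core.Session;
 import com.datastax.driver.core.SimpleStatement;
 import com.datastax.driver.core.policies.ConstantSpeculativeExecutionPolicy;

 public class SpeculativeRetryExample {
     public static void main(String[] args) {
         // Start a second attempt on another coordinator if the first has not
         // answered after 500 ms, with at most 2 speculative attempts.
         Cluster cluster = Cluster.builder()
             .addContactPoint("x.y.z.a")
             .withSpeculativeExecutionPolicy(
                 new ConstantSpeculativeExecutionPolicy(500, 2))
             .build();
         Session session = cluster.connect("my_keyspace");   // placeholder keyspace

         // Speculative executions are only triggered for idempotent statements.
         SimpleStatement stmt = new SimpleStatement("SELECT * FROM my_table");
         stmt.setIdempotent(true);
         session.execute(stmt);

         cluster.close();
     }
 }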
