How to fix external Spark shuffle service "Error opening block StreamChunkId"

I'm trying to run a Spark job from my Zeppelin deployment in a Kubernetes cluster. I'm also running a Spark shuffle service (daemonset - v2.2.0-k8s) in a different namespace. Here is my Spark config (set on the Zeppelin pod):

--conf spark.kubernetes.executor.docker.image=<spark-executor> 
--conf spark.executor.cores=5
--conf spark.driver.memory=5g
--conf spark.executor.memory=5g
--conf spark.kubernetes.authenticate.driver.serviceAccountName=<svc-account> 
--conf spark.local.dir=/tmp/spark-local 
--conf spark.executor.instances=5 
--conf spark.dynamicAllocation.enabled=true 
--conf spark.shuffle.service.enabled=true 
--conf spark.kubernetes.shuffle.labels="app=spark-shuffle,spark-version=2.2.0" 
--conf spark.dynamicAllocation.maxExecutors=5   
--conf spark.dynamicAllocation.minExecutors=1 
--conf spark.kubernetes.shuffle.namespace=<namespace> 
--conf spark.kubernetes.docker.image.pullPolicy=IfNotPresent 
--conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.5.0 
--conf spark.kubernetes.resourceStagingServer.uri=<ip:port>

But I get the following logs from the external Spark shuffle service and from the Spark executors spawned by Zeppelin:

+ /sbin/tini -s -- /opt/spark/bin/spark-class org.apache.spark.deploy.k8s.KubernetesExternalShuffleService 1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/spark/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/spark/jars/kubernetes-client-3.0.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2020-01-14 03:37:31 INFO  ExternalShuffleService:2574 - Started daemon with process name: 10@unawa2-shuffle-unawa2-spark-shuffle-d5cfg
2020-01-14 03:37:31 INFO  SignalUtils:54 - Registered signal handler for TERM
2020-01-14 03:37:31 INFO  SignalUtils:54 - Registered signal handler for HUP
2020-01-14 03:37:31 INFO  SignalUtils:54 - Registered signal handler for INT
2020-01-14 03:37:31 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2020-01-14 03:37:31 INFO  SecurityManager:54 - Changing view acls to: root
2020-01-14 03:37:31 INFO  SecurityManager:54 - Changing modify acls to: root
2020-01-14 03:37:31 INFO  SecurityManager:54 - Changing view acls groups to:
2020-01-14 03:37:31 INFO  SecurityManager:54 - Changing modify acls groups to:
2020-01-14 03:37:31 INFO  SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
2020-01-14 03:37:32 INFO  KubernetesExternalShuffleService:54 - Starting shuffle service on port 7337 (auth enabled = false)
2020-01-14 03:38:35 INFO  KubernetesShuffleBlockHandler:54 - Received registration request from app spark-application-1578973110574 (remote address /192.168.2.37:40318).
2020-01-14 03:38:36 INFO  ExternalShuffleBlockResolver:135 - Registered executor AppExecId{appId=spark-application-1578973110574, execId=5} with ExecutorShuffleInfo{localDirs=[/tmp/spark-local/blockmgr-8a26a714-3ecb-46dd-8499-ff796fa97744], subDirsPerLocalDir=64, shuffleManager=org.apache.spark.shuffle.sort.SortShuffleManager}
2020-01-14 03:39:15 ERROR TransportRequestHandler:127 - Error opening block StreamChunkId{streamId=527834012000, chunkIndex=0} for request from /192.168.3.130:50896
java.lang.RuntimeException: Failed to open file: /tmp/spark-local/blockmgr-8a26a714-3ecb-46dd-8499-ff796fa97744/0f/shuffle_1_0_0.index
at org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.getSortBasedShuffleBlockData(ExternalShuffleBlockResolver.java:249)
at org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.getBlockData(ExternalShuffleBlockResolver.java:174)
at org.apache.spark.network.shuffle.ExternalShuffleBlockHandler$1.next(ExternalShuffleBlockHandler.java:105)
at org.apache.spark.network.shuffle.ExternalShuffleBlockHandler$1.next(ExternalShuffleBlockHandler.java:95)
at org.apache.spark.network.server.OneForOneStreamManager.getChunk(OneForOneStreamManager.java:89)
at org.apache.spark.network.server.TransportRequestHandler.processFetchRequest(TransportRequestHandler.java:125)
at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:103)
at org.apache.spark.network.server.TransportChannelHandler.channelRead(TransportChannelHandler.java:118)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
.
.
.
Caused by: java.util.concurrent.ExecutionException: java.io.FileNotFoundException: /tmp/spark-local/blockmgr-8a26a714-3ecb-46dd-8499-ff796fa97744/0f/shuffle_1_0_0.index (No such file or directory)
at org.spark_project.guava.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:306)
at org.spark_project.guava.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:293)
at org.spark_project.guava.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
at org.spark_project.guava.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:135)

Any idea how to fix this?

[EDIT]

I mounted the local dir /tmp/spark-local into my pods. When I ssh into each node, I can confirm that the block manager directory exists on one of the worker nodes (which I guess is the expected behavior). The error happens when one of the shuffle pods on a different worker node tries to access that same block manager.
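For context, the shuffle service from the apache-spark-on-k8s fork runs as a DaemonSet, and the shuffle pod on a given node is supposed to read the shuffle files written by the executors on that same node through a shared hostPath directory. Below is a minimal sketch of the relevant part of such a DaemonSet spec, assuming the labels from the config above; the image tag is illustrative, not taken from the question:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: spark-shuffle
spec:
  selector:
    matchLabels:
      app: spark-shuffle
  template:
    metadata:
      labels:
        app: spark-shuffle          # must match spark.kubernetes.shuffle.labels
        spark-version: "2.2.0"
    spec:
      containers:
      - name: spark-shuffle
        image: kubespark/spark-shuffle:v2.2.0-kubernetes-0.5.0  # assumed tag
        volumeMounts:
        - name: spark-local
          mountPath: /tmp/spark-local   # must equal spark.local.dir on the executors
      volumes:
      - name: spark-local
        hostPath:
          path: /tmp/spark-local        # node-local dir shared with executors on this node

With this layout each node has its own /tmp/spark-local, so a shuffle pod can only serve blocks that were written on its own node; a fetch that lands on a shuffle pod on a different node fails with exactly the FileNotFoundException shown above.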

Summary from the comment thread:

To run Spark on Kubernetes with dynamic allocation enabled, you can either:

Follow the apache-spark-on-k8s guide

Important notes:

  • You should base your images on the kubespark images, which are built from the forked Apache Spark 2.2.0.
  • The feature is experimental and is not officially supported.

Or wait until Spark 3.0.0 is released, together with SPARK-24432 and SPARK-25299 (see the sketch below).
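Once Spark 3.0.0 ships the work referenced above, dynamic allocation no longer requires the external shuffle service: shuffle tracking keeps executors that hold shuffle files alive instead. A minimal sketch of that configuration, assuming Spark 3.0.0 flag names:

--conf spark.dynamicAllocation.enabled=true
--conf spark.dynamicAllocation.shuffleTracking.enabled=true
--conf spark.dynamicAllocation.minExecutors=1
--conf spark.dynamicAllocation.maxExecutors=5

Note that spark.shuffle.service.enabled is left unset (false) here, and the shuffle-service DaemonSet and its hostPath mounts are no longer needed.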
