Getting SparkUncaughtExceptionHandler when running spark-perf

We have set up a distributed Spark cluster (version 1.5.0) and are trying to run spark-perf, but we get the following error and do not know how to fix it.

15/10/05 20:14:37 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[appclient-registration-retry-thread,5,main]
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@43ff6bf rejected from java.util.concurrent.ThreadPoolExecutor@36077c7[Running, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
        at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
        at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
        at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:110)
        at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1.apply(AppClient.scala:96)
        at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1.apply(AppClient.scala:95)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
        at org.apache.spark.deploy.client.AppClient$ClientEndpoint.tryRegisterAllMasters(AppClient.scala:95)
        at org.apache.spark.deploy.client.AppClient$ClientEndpoint.org$apache$spark$deploy$client$AppClient$ClientEndpoint$$registerWithMaster(AppClient.scala:121)
        at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:132)
        at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1119)
        at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:124)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
15/10/05 20:14:37 INFO DiskBlockManager: Shutdown hook called
15/10/05 20:14:37 INFO ShutdownHookManager: Shutdown hook called
15/10/05 20:14:37 INFO ShutdownHookManager: Deleting directory /tmp/spark-c5a4a63b-3dc5-4c52-bd2b-e6df22a0c19f

Check the variable SPARK_CLUSTER_URL in config/config.py:

SPARK_CLUSTER_URL = "spark://Master_Ip:7077"

PS: Master_Ip is the master's IP address, not its hostname.
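
For example, assuming a (hypothetical) master at 192.168.1.100, the line in config/config.py would read:

SPARK_CLUSTER_URL = "spark://192.168.1.100:7077"

You can confirm the exact spark://... URL on the master's web UI (port 8080 by default), which displays it at the top of the page.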

You have not entered the Spark master URL correctly, possibly because of a capitalization mistake. Make sure the hibench.spark.master value in the conf/99-user_defined_properties.conf file is correct. You should be able to connect to the Spark shell by running the following command.

MASTER=<YOUR-SPARK-MASTER-URL-HERE> bin/spark-shell

In Spark's standalone mode, this URL should look like:

spark://<master-machine-IP>:7077

In general, it is better to use the master node's IP address rather than the alphabetic hostname the Spark master reports, e.g. spark://Macs-MacBook-Pro.local:7077.
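
As a concrete sketch (the IP address below is a placeholder, and the key/value layout assumes the whitespace-separated format commonly used in that properties file), the property and a quick connectivity check would look like:

hibench.spark.master    spark://192.168.1.100:7077
MASTER=spark://192.168.1.100:7077 bin/spark-shell

If the shell starts and sc.master prints the same URL, the master address is reachable and spelled correctly.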

I replaced the master IP with the hostname in the spark-submit command, and the error went away.

--master "spark://hostname:7077"
