Apache Spark YARN mode startup takes too long (more than 10 seconds)



I'm running a Spark application in yarn-client or yarn-cluster mode, but it seems to take too long to start up.

Initializing the SparkContext takes more than 10 seconds.

Is this normal? Or can it be optimized?

The environment is as follows:

  • Hadoop: Hortonworks HDP 2.2 (Hadoop 2.6), a small test cluster with 3 data nodes
  • Spark: 1.3.1
  • Client: Windows 7, but similar results on CentOS 6.6

Below is the startup part of the application log. (Some private information has been redacted.)

"Main: Initializing context" on the first line and "MainProcessor: Deleting previous output files" on the last line are logged by the application itself. Everything in between comes from Spark. The application logic is executed after this log output is shown.

15/05/07 09:18:31 INFO Main: Initializing context
15/05/07 09:18:31 INFO SparkContext: Running Spark version 1.3.1
15/05/07 09:18:31 INFO SecurityManager: Changing view acls to: myuser,myapp
15/05/07 09:18:31 INFO SecurityManager: Changing modify acls to: myuser,myapp
15/05/07 09:18:31 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(myuser, myapp); users with modify permissions: Set(myuser, myapp)
15/05/07 09:18:31 INFO Slf4jLogger: Slf4jLogger started
15/05/07 09:18:31 INFO Remoting: Starting remoting
15/05/07 09:18:31 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@mymachine:54449]
15/05/07 09:18:31 INFO Utils: Successfully started service 'sparkDriver' on port 54449.
15/05/07 09:18:31 INFO SparkEnv: Registering MapOutputTracker
15/05/07 09:18:32 INFO SparkEnv: Registering BlockManagerMaster
15/05/07 09:18:32 INFO DiskBlockManager: Created local directory at C:\Users\myuser\AppData\Local\Temp\spark-2d3db9d6-ea78-438e-956f-be9c1dcf3a9d\blockmgr-e9ade223-a4b8-4d9f-b038-efd66adf9772
15/05/07 09:18:32 INFO MemoryStore: MemoryStore started with capacity 1956.7 MB
15/05/07 09:18:32 INFO HttpFileServer: HTTP File server directory is C:\Users\myuser\AppData\Local\Temp\spark-ff40d73b-e8ab-433e-88c4-35da27fb6278\httpd-def9220f-ac3a-4dd2-9ac1-2c593b94b2d9
15/05/07 09:18:32 INFO HttpServer: Starting HTTP Server
15/05/07 09:18:32 INFO Server: jetty-8.y.z-SNAPSHOT
15/05/07 09:18:32 INFO AbstractConnector: Started SocketConnector@0.0.0.0:54450
15/05/07 09:18:32 INFO Utils: Successfully started service 'HTTP file server' on port 54450.
15/05/07 09:18:32 INFO SparkEnv: Registering OutputCommitCoordinator
15/05/07 09:18:32 INFO Server: jetty-8.y.z-SNAPSHOT
15/05/07 09:18:32 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/05/07 09:18:32 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/05/07 09:18:32 INFO SparkUI: Started SparkUI at http://mymachine:4040
15/05/07 09:18:32 INFO SparkContext: Added JAR file:/D:/Projects/MyApp/MyApp.jar at http://10.111.111.199:54450/jars/MyApp.jar with timestamp 1430957912240
15/05/07 09:18:32 INFO RMProxy: Connecting to ResourceManager at cluster01/10.111.111.11:8050
15/05/07 09:18:32 INFO Client: Requesting a new application from cluster with 3 NodeManagers
15/05/07 09:18:32 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (23040 MB per container)
15/05/07 09:18:32 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/05/07 09:18:32 INFO Client: Setting up container launch context for our AM
15/05/07 09:18:32 INFO Client: Preparing resources for our AM container
15/05/07 09:18:32 INFO Client: Source and destination file systems are the same. Not copying hdfs://cluster01/apps/spark/spark-assembly-1.3.1-hadoop2.6.0.jar
15/05/07 09:18:32 INFO Client: Setting up the launch environment for our AM container
15/05/07 09:18:33 INFO SecurityManager: Changing view acls to: myuser,myapp
15/05/07 09:18:33 INFO SecurityManager: Changing modify acls to: myuser,myapp
15/05/07 09:18:33 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(myuser, myapp); users with modify permissions: Set(myuser, myapp)
15/05/07 09:18:33 INFO Client: Submitting application 2 to ResourceManager
15/05/07 09:18:33 INFO YarnClientImpl: Submitted application application_1430956687773_0002
15/05/07 09:18:34 INFO Client: Application report for application_1430956687773_0002 (state: ACCEPTED)
15/05/07 09:18:34 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1430957906540
     final status: UNDEFINED
     tracking URL: http://cluster01:8088/proxy/application_1430956687773_0002/
     user: myapp
15/05/07 09:18:35 INFO Client: Application report for application_1430956687773_0002 (state: ACCEPTED)
15/05/07 09:18:36 INFO Client: Application report for application_1430956687773_0002 (state: ACCEPTED)
15/05/07 09:18:37 INFO Client: Application report for application_1430956687773_0002 (state: ACCEPTED)
15/05/07 09:18:37 INFO YarnClientSchedulerBackend: ApplicationMaster registered as Actor[akka.tcp://sparkYarnAM@cluster02:39698/user/YarnAM#-1579648782]
15/05/07 09:18:37 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> cluster01, PROXY_URI_BASES -> http://cluster01:8088/proxy/application_1430956687773_0002), /proxy/application_1430956687773_0002
15/05/07 09:18:37 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15/05/07 09:18:38 INFO Client: Application report for application_1430956687773_0002 (state: RUNNING)
15/05/07 09:18:38 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: cluster02
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1430957906540
     final status: UNDEFINED
     tracking URL: http://cluster01:8088/proxy/application_1430956687773_0002/
     user: myapp
15/05/07 09:18:38 INFO YarnClientSchedulerBackend: Application application_1430956687773_0002 has started running.
15/05/07 09:18:38 INFO NettyBlockTransferService: Server created on 54491
15/05/07 09:18:38 INFO BlockManagerMaster: Trying to register BlockManager
15/05/07 09:18:38 INFO BlockManagerMasterActor: Registering block manager mymachine:54491 with 1956.7 MB RAM, BlockManagerId(<driver>, mymachine, 54491)
15/05/07 09:18:38 INFO BlockManagerMaster: Registered BlockManager
15/05/07 09:18:43 INFO YarnClientSchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@cluster02:44996/user/Executor#-786778979] with ID 1
15/05/07 09:18:43 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
15/05/07 09:18:43 INFO MainProcessor: Deleting previous output files

Thanks.

update

I think I've found the (maybe partial, but major) cause.

It's between the following lines:

15/05/08 11:36:32 INFO BlockManagerMaster: Registered BlockManager
15/05/08 11:36:38 INFO YarnClientSchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@cluster04:55237/user/Executor#-149550753] with ID 1

When I read the logs on the cluster side, I found the following lines: (the exact times differ from the lines above, but that's just the clock difference between the machines)

15/05/08 11:36:23 INFO yarn.ApplicationMaster: Started progress reporter thread - sleep time : 5000
15/05/08 11:36:28 INFO impl.AMRMClientImpl: Received new token for : cluster04:45454

It seems that Spark deliberately sleeps for 5 seconds.

I've read the Spark source code. In org.apache.spark.deploy.yarn.ApplicationMaster.scala, launchReporterThread() has that code. It loops, calling allocator.allocateResources() and then Thread.sleep(). For the sleep, it reads the configuration variable spark.yarn.scheduler.heartbeat.interval-ms (the default value is 5000, i.e. 5 seconds). According to the comment, "we want to be reasonably responsive without causing too many requests to RM". So unless YARN fulfills the allocation request immediately, it seems that 5 seconds get wasted.
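
To make that behaviour concrete, here is a minimal, self-contained sketch of the polling pattern described above. It is not the actual Spark source; the allocator call is stubbed out and the loop is truncated:

    // Simplified illustration of the ApplicationMaster reporter thread (NOT the real Spark code).
    object ReporterThreadSketch {
      // Stand-in for allocator.allocateResources(), which asks the YARN ResourceManager for containers.
      def allocateResources(): Unit = ()

      def main(args: Array[String]): Unit = {
        // Spark reads this property; the default of 5000 ms is what causes the pause.
        val heartbeatIntervalMs =
          sys.props.getOrElse("spark.yarn.scheduler.heartbeat.interval-ms", "5000").toLong
        for (_ <- 1 to 3) {                 // the real loop runs until the application finishes
          allocateResources()
          Thread.sleep(heartbeatIntervalMs) // a full 5-second pause per round with the default
        }
      }
    }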

When I modified the configuration variable to 1000, it only waited 1 second.

Here are the log lines after the change:

15/05/08 11:47:21 INFO yarn.ApplicationMaster: Started progress reporter thread - sleep time : 1000
15/05/08 11:47:22 INFO impl.AMRMClientImpl: Received new token for : cluster04:45454

That saves 4 seconds.

So, if you don't want to wait 5 seconds, you can change spark.yarn.scheduler.heartbeat.interval-ms.

I hope the extra overhead it causes is negligible.
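
For reference, one way to set it; a minimal sketch assuming a plain SparkConf-based setup (the 1000 ms value is just what I used, pick what suits your cluster):

    import org.apache.spark.{SparkConf, SparkContext}

    // Shorten the AM allocation-polling sleep from the default 5000 ms to 1000 ms.
    val conf = new SparkConf()
      .setAppName("MyApp")
      .set("spark.yarn.scheduler.heartbeat.interval-ms", "1000")
    val sc = new SparkContext(conf)

The same setting can also be passed on the command line with --conf spark.yarn.scheduler.heartbeat.interval-ms=1000 when using spark-submit.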

update

A related JIRA issue has been opened and resolved. See https://issues.apache.org/jira/browse/SPARK-7533

This is typical. My system takes about 20 seconds from running spark-submit to getting a SparkContext.

As the documentation says in several places, the solution is to turn your driver into an RPC server. That way you initialize it once, and other applications can then use the driver's context as a service.

I am in the middle of doing this with my application. I am using http4s and turning my driver into a web server.
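
As a rough sketch of the idea (using the JDK's built-in HttpServer instead of http4s to keep it short; the /count endpoint and the port are made up):

    import com.sun.net.httpserver.{HttpExchange, HttpHandler, HttpServer}
    import java.net.InetSocketAddress
    import org.apache.spark.{SparkConf, SparkContext}

    object DriverServer {
      def main(args: Array[String]): Unit = {
        // Pay the YARN startup cost once and keep the context alive.
        val sc = new SparkContext(new SparkConf().setAppName("long-lived-driver"))

        val server = HttpServer.create(new InetSocketAddress(8090), 0)
        server.createContext("/count", new HttpHandler {
          override def handle(exchange: HttpExchange): Unit = {
            // Every request reuses the already-initialized SparkContext,
            // so there is no per-request startup delay.
            val body = sc.parallelize(1 to 1000000).count().toString.getBytes("UTF-8")
            exchange.sendResponseHeaders(200, body.length)
            exchange.getResponseBody.write(body)
            exchange.getResponseBody.close()
          }
        })
        server.start()
      }
    }

Other applications then just call the HTTP endpoint instead of launching their own SparkContext.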

To create the Spark context faster, tested on EMR:

  1. cd /usr/lib/spark/jars/; zip /tmp/yarn-archive.zip *.jar

  2. cd path/to/folder/of/someOtherDependancy/jarFolder/; zip /tmp/yarn-archive.zip jar-file.jar

  3. zip -Tv /tmp/yarn-archive.zip to test the integrity of the archive and for verbose debugging

  4. If yarn-archive.zip already exists on HDFS, run hdfs dfs -rm -r -f -skipTrash /user/hadoop/yarn-archive.zip and then hdfs dfs -put /tmp/yarn-archive.zip /user/hadoop/; otherwise just run hdfs dfs -put /tmp/yarn-archive.zip /user/hadoop/

  5. Use --conf spark.yarn.archive="hdfs:///user/hadoop/yarn-archive.zip" in your spark-submit command

The reason this works is that the master does not have to distribute all the jars to the slaves; they are served from a common HDFS path, in this case hdfs:///user/hadoop/yarn-archive.zip.

I realized it can save you about 3-5 seconds; the exact amount also depends on the number of nodes in the cluster. The more nodes you have, the more time you save.

You can check out Apache Livy, which is a REST API in front of Spark.

  • http://livy.io/
  • https://github.com/cloudera/livy

You can have a single session and send multiple requests to that Spark/Livy session.
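
A minimal sketch of talking to Livy's REST API from Scala (the host, port, and the hard-coded session id 0 are assumptions; in practice you would parse the id from the session-creation response and wait for the session to become idle):

    import java.net.{HttpURLConnection, URL}
    import scala.io.Source

    object LivySketch {
      val livyUrl = "http://cluster01:8998" // hypothetical Livy endpoint

      def post(path: String, json: String): String = {
        val conn = new URL(livyUrl + path).openConnection().asInstanceOf[HttpURLConnection]
        conn.setRequestMethod("POST")
        conn.setRequestProperty("Content-Type", "application/json")
        conn.setDoOutput(true)
        val out = conn.getOutputStream
        out.write(json.getBytes("UTF-8"))
        out.close()
        Source.fromInputStream(conn.getInputStream).mkString
      }

      def main(args: Array[String]): Unit = {
        // Create one long-lived interactive session (pays the YARN startup cost once)...
        println(post("/sessions", """{"kind": "spark"}"""))
        // ...then submit as many statements as you like against that session.
        println(post("/sessions/0/statements", """{"code": "sc.parallelize(1 to 100).count()"}"""))
      }
    }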

If you are running tasks in standalone mode on Mac OS, just remember to enable remote ssh connection in System Preferences -> Sharing (I don't know why we have to do this).

Before enabling it, it took about 1 minute to run spark-submit xx.py; after enabling it, it only takes 3 seconds.

I hope this helps others who run into this kind of problem on the Mac OS platform.

Latest update