How to adjust the number of Spark executors



I submitted a Spark Streaming computation task to my standalone Spark cluster. The submit command looks like this:

./bin/spark-submit \
--master spark://ES01:7077 \
--executor-memory 4G --num-executors 1 \
/opt/flowSpark/sparkStream/latest5min.py 1>a.log 2>b.log

Note that I used --num-executors 1, because I only want one executor.

Then with the ps command I can find the following output:

[root@ES01 ~]# ps -ef | grep java | grep -v grep  | grep spark
root     11659     1  0 Apr19 ?        00:48:25 java -cp /opt/spark-1.6.0-bin-hadoop2.6/conf/:/opt/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/opt/hadoop-2.6.2/etc/hadoop/ -Xms4G -Xmx4G -XX:MaxPermSize=256m org.apache.spark.deploy.master.Master --ip ES01 --port 7077 --webui-port 8080
root     11759     1  0 Apr19 ?        00:42:59 java -cp /opt/spark-1.6.0-bin-hadoop2.6/conf/:/opt/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/opt/hadoop-2.6.2/etc/hadoop/ -Xms4G -Xmx4G -XX:MaxPermSize=256m org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://ES01:7077
root     18538 28335 38 16:13 pts/1    00:01:52 java -cp /opt/spark-1.6.0-bin-hadoop2.6/conf/:/opt/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/opt/hadoop-2.6.2/etc/hadoop/ -Xms1g -Xmx1g -XX:MaxPermSize=256m org.apache.spark.deploy.SparkSubmit --master spark://ES01:7077 --executor-memory 4G --num-executors 1 /opt/flowSpark/sparkStream/latest5min.py
root     18677 11759 46 16:13 ?        00:02:14 java -cp /opt/spark-1.6.0-bin-hadoop2.6/conf/:/opt/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/opt/hadoop-2.6.2/etc/hadoop/ -Xms4096M -Xmx4096M -Dspark.driver.port=55652 -XX:MaxPermSize=256m org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@10.79.148.184:55652 --executor-id 0 --hostname 10.79.148.184 --cores 1 --app-id app-20160509161303-0048 --worker-url spark://Worker@10.79.148.184:35012
root     18679 11759 46 16:13 ?        00:02:13 java -cp /opt/spark-1.6.0-bin-hadoop2.6/conf/:/opt/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/opt/hadoop-2.6.2/etc/hadoop/ -Xms4096M -Xmx4096M -Dspark.driver.port=55652 -XX:MaxPermSize=256m org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@10.79.148.184:55652 --executor-id 1 --hostname 10.79.148.184 --cores 1 --app-id app-20160509161303-0048 --worker-url spark://Worker@10.79.148.184:35012
root     18723 11759 47 16:13 ?        00:02:14 java -cp /opt/spark-1.6.0-bin-hadoop2.6/conf/:/opt/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/opt/hadoop-2.6.2/etc/hadoop/ -Xms4096M -Xmx4096M -Dspark.driver.port=55652 -XX:MaxPermSize=256m org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@10.79.148.184:55652 --executor-id 2 --hostname 10.79.148.184 --cores 1 --app-id app-20160509161303-0048 --worker-url spark://Worker@10.79.148.184:35012

From my understanding:

11659 and 11759 are the Spark cluster processes (the Master and the Worker).

18538 is the driver program.

18677, 18679 and 18723 should now be the worker (executor) processes.

Since I already used --num-executors 1, why are there three?

Check spark.executor.cores in spark-defaults. From the documentation:

The number of cores to use on each executor. For YARN and standalone mode only. 
In standalone mode, setting this parameter allows an application to run multiple executors on the same worker, provided that there are enough cores on that worker. 
Otherwise, only one executor per application will run on each worker.

http://spark.apache.org/docs/latest/configuration.html#execution-behavior
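
Based on that description, --num-executors only takes effect on YARN; on a standalone cluster the number of executors per worker is determined by the cores the worker offers and by spark.executor.cores. A minimal sketch of how to force a single executor in standalone mode is to cap the cores the whole application may take (the exact values below are an assumption for illustration):

 # Cap the application at 1 core in total; together with
 # spark.executor.cores=1 only one executor can be launched.
 ./bin/spark-submit \
   --master spark://ES01:7077 \
   --executor-memory 4G \
   --total-executor-cores 1 \
   --conf spark.executor.cores=1 \
   /opt/flowSpark/sparkStream/latest5min.py

Equivalently, spark.cores.max can be set in spark-defaults.conf. If it is left unset, a standalone application takes every core the cluster offers, which matches the three --cores 1 executors in the ps output above.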

If you use YARN, you can check the executors by issuing the following command on the data node (where the executors will be instantiated):

 $ sudo -u yarn jps
 11388 CoarseGrainedExecutorBackend
 1854 Jps
 11396 CoarseGrainedExecutorBackend

Each CoarseGrainedExecutorBackend process corresponds to one executor.
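
On a standalone cluster the same count can be taken on the worker node; a simple sketch, reusing the ps output shown above:

 # Each CoarseGrainedExecutorBackend JVM is one executor,
 # so counting those processes counts the executors.
 $ ps -ef | grep CoarseGrainedExecutorBackend | grep -v grep | wc -l
 3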
