Spark error in cluster: ModuleNotFoundError: No module named 'cst_utils'



I have a Spark program written in Python. The structure of the program is as follows:

cst_utils.py
bn_utils.py
ep_utils.py
main.py

Each of cst_utils.py, bn_utils.py and ep_utils.py has a function named spark_func(sc). In main.py I create a SparkContext sc and pass it to each spark_func, like this:

from pyspark import SparkConf, SparkContext

import cst_utils as cu
import bn_utils as bu
import ep_utils as eu

spark_conf = SparkConf().setAppName('app_name') \
    .setMaster("spark://x.x.x.x:7077") \
    .set('spark.executor.memory', "8g") \
    .set('spark.executor.cores', 4) \
    .set('spark.task.cpus', 2)
sc = SparkContext(conf=spark_conf)

cu.spark_func(sc)
bu.spark_func(sc)
eu.spark_func(sc)
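
For context, each of those modules only needs to expose that one entry point. A minimal sketch of what such a module could look like (the body below is illustrative only, not my real code):

# cst_utils.py -- illustrative sketch; the real module is much larger
def spark_func(sc):
    # Build an RDD from the SparkContext handed in by main.py and run a
    # small aggregation, similar to the reduceByKey/count visible in the logs.
    rdd = sc.parallelize([("a", 1), ("b", 2), ("a", 3)])
    totals = rdd.reduceByKey(lambda x, y: x + y)
    print(totals.count())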

I configured a Spark cluster with one master and two workers, all running Ubuntu 20.04. I set the master IP in spark-env.sh and set up passwordless SSH, so the master node can reach every worker without authentication. I run the following commands on the nodes:

On the master node:

./start-master.sh

On the workers:

./start-worker.sh spark://x.x.x.x:7077
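
For reference, the only cluster-specific setting mentioned above is the master address; pinning it in conf/spark-env.sh typically looks like this (a minimal sketch, using the same x.x.x.x placeholder):

# conf/spark-env.sh -- minimal sketch, only the master address is pinned
export SPARK_MASTER_HOST=x.x.x.x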

The cluster does come up, because I can see the Spark UI in a browser at this address:

http://x.x.x.x:8080

But when I try to run the program with this command:

/opt/spark/bin/spark-submit --master spark://x.x.x.x:7077 main.py

I get this error:

22/02/16 16:39:20 INFO SparkContext: Starting job: count at /home/hs/Desktop/etl/cst_utils.py:442
22/02/16 16:39:20 INFO DAGScheduler: Registering RDD 2 (reduceByKey at /home/hs/Desktop/etl/cst_utils.py:434) as input to shuffle 0
22/02/16 16:39:20 INFO DAGScheduler: Got job 0 (count at /home/hs/Desktop/etl/cst_utils.py:442) with 1 output partitions
22/02/16 16:39:20 INFO DAGScheduler: Final stage: ResultStage 1 (count at /home/hs/Desktop/etl/cst_utils.py:442)
22/02/16 16:39:20 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
22/02/16 16:39:20 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
22/02/16 16:39:20 INFO DAGScheduler: Submitting ShuffleMapStage 0 (PairwiseRDD[2] at reduceByKey at /home/hs/Desktop/etl/cst_utils.py:434), which has no missing parents
22/02/16 16:39:20 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 9.4 KiB, free 366.3 MiB)
22/02/16 16:39:20 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 5.9 KiB, free 366.3 MiB)
22/02/16 16:39:20 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on x.x.x.x:43875 (size: 5.9 KiB, free: 366.3 MiB)
22/02/16 16:39:20 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1388
22/02/16 16:39:20 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (PairwiseRDD[2] at reduceByKey at /home/hs/Desktop/etl/cst_utils.py:434) (first 15 tasks are for partitions Vector(0))
22/02/16 16:39:20 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks resource profile 0
22/02/16 16:39:21 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (z.z.z.z:39668) with ID 1,  ResourceProfileId 0
22/02/16 16:39:21 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (y.y.y.y:46330) with ID 0,  ResourceProfileId 0
22/02/16 16:39:21 INFO BlockManagerMasterEndpoint: Registering block manager y.y.y.y:34159 with 4.1 GiB RAM, BlockManagerId(0, y.y.y.y, 34159, None)
22/02/16 16:39:21 INFO BlockManagerMasterEndpoint: Registering block manager z.z.z.z:42231 with 4.1 GiB RAM, BlockManagerId(1, z.z.z.z, 42231, None)
22/02/16 16:39:21 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0) (y.y.y.y, executor 0, partition 0, PROCESS_LOCAL, 4481 bytes) taskResourceAssignments Map()
22/02/16 16:39:21 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on y.y.y.y:34159 (size: 5.9 KiB, free: 4.1 GiB)
22/02/16 16:39:22 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) (y.y.y.y executor 0): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/opt/spark/python/lib/pyspark.zip/pyspark/worker.py", line 586, in main
func, profiler, deserializer, serializer = read_command(pickleSer, infile)
File "/opt/spark/python/lib/pyspark.zip/pyspark/worker.py", line 69, in read_command
command = serializer._read_with_length(file)
File "/opt/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 160, in _read_with_length
return self.loads(obj)
File "/opt/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 430, in loads
return pickle.loads(obj, encoding=encoding)
ModuleNotFoundError: No module named 'cst_utils'

The program files, as well as the Spark installation, are located at the same path on all nodes.

In fact, the program runs without any problem in local mode. To run it locally, though, I use this configuration for the SparkContext:

spark_conf = SparkConf().setAppName('app_name') \
    .setMaster("local[4]") \
    .set('spark.executor.memory', "8g") \
    .set('spark.executor.cores', 4) \
    .set('spark.task.cpus', 1)
sc = SparkContext(conf=spark_conf)

UPDATE 1:

I also tried using a virtual environment, installing all the required packages inside it so they could be distributed to the nodes. In detail:

  1. To be able to create virtual environments in Python, install the venv package:

    sudo apt install python3.8-venv

  2. Create the virtual environment:

    python3 -m venv my_venv

  3. Activate the environment:

    source my_venv/bin/activate

  4. Install venv-pack, which packs all the packages installed in the environment:

    pip install venv-pack

  5. Pack the environment:

    venv-pack -o my_venv.tar.gz
    

Also, as the Spark documentation suggests, I put all the .py files of the project in one folder and compressed them into a .zip file.
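
A typical way to build that archive so the modules end up at the top level of the zip (a sketch; the exact file list is simply whatever main.py imports, and -j drops the folder prefix):

zip -j my_files.zip cst_utils.py bn_utils.py ep_utils.py group_state.py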

Finally, after bringing up the cluster, I ran the following command:

/opt/spark/bin/spark-submit --master spark://x.x.x.x:7077 --archives my_venv.tar.gz#environment --py-files my_files.zip main.py
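
(For comparison, the example in the Spark documentation for packed virtual environments also points PYSPARK_PYTHON at the interpreter inside the unpacked archive before submitting; a sketch of that pattern, reusing the #environment alias from my command:)

export PYSPARK_DRIVER_PYTHON=python
export PYSPARK_PYTHON=./environment/bin/python
/opt/spark/bin/spark-submit --master spark://x.x.x.x:7077 --archives my_venv.tar.gz#environment --py-files my_files.zip main.py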

However, my spark-submit command fails with this error:

Traceback (most recent call last):
File "/home/spark/Desktop/etl/main.py", line 3, in <module>
import cst_utils as cu
File "/home/spark/Desktop/etl/cst_utils.py", line 5, in <module>
import group_state as gs
File "/home/spark/Desktop/etl/group_state.py", line 1, in <module>
import numpy as np
ModuleNotFoundError: No module named 'numpy'

Can you guide me on what is wrong with running the code on the cluster?

Any help would be appreciated.

The problem is solved.

First, I installed all the required packages on every node with this command:

python3 -m pip install PACKAGE

Then, when running the program, I had to list every .py file used by the program after --py-files, like this:

/opt/spark/bin/spark-submit --master spark://x.x.x.x:7077 --files sparkConfig.json --py-files cst_utils.py,grouping.py,group_state.py,g_utils.py,csts.py,oracle_connection.py,config.py,brn_utils.py,emp_utils.py main.py    

After that, I no longer got any errors importing the files.
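
An equivalent way to ship those modules from the driver code itself, instead of listing them all on the command line, is SparkContext.addPyFile (a sketch; the paths are relative to where main.py is started):

# Must run before any job that imports these modules on the executors.
sc.addPyFile("cst_utils.py")
sc.addPyFile("group_state.py")
sc.addPyFile("oracle_connection.py")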
