Connecting to a local Docker Spark cluster



I am trying to connect to a Spark cluster that I created locally on my laptop. The docker-compose I am using is as follows:


services:
  spark-master:
    image: docker.io/bitnami/spark:3.2.1
    environment:
      - SPARK_MODE=master
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
    ports:
      - '7075:8080'
      - "7077:7077"
    volumes:
      - "./execution_scripts:/execution_scripts:rw"
  spark-worker:
    image: docker.io/bitnami/spark:3.2.1
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark:7077
      - SPARK_WORKER_MEMORY=2G
      - SPARK_WORKER_CORES=1
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
  spark-worker-2:
    image: docker.io/bitnami/spark:3.2.1
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark:7077
      - SPARK_WORKER_MEMORY=2G
      - SPARK_WORKER_CORES=1
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
  spark-worker-3:
    image: docker.io/bitnami/spark:3.2.1
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark:7077
      - SPARK_WORKER_MEMORY=2G
      - SPARK_WORKER_CORES=1
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no

The image above is a Bitnami image, running 3 workers and 1 master. The code I am trying to connect with from a Jupyter notebook is as follows:

import findspark
findspark.init()
findspark.find()

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("Day1_1").master("spark://localhost:7077").getOrCreate()
df_NYTaxi = spark.read.csv(file)  # 'file' holds the path to the CSV being read

After running the code above, the error I get is the following:

: java.lang.NullPointerException
at org.apache.spark.storage.BlockManagerMaster.registerBlockManager(BlockManagerMaster.scala:78)
at org.apache.spark.storage.BlockManager.initialize(BlockManager.scala:518)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:596)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.base/java.lang.Thread.run(Thread.java:829)
I have tried a lot of things, but every time I either can't connect to that Docker cluster at all, or I can connect but the job times out.
My local Spark version is 3.2.1, and the image uses the same version.
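
For context, the stack trace shows the failure happens while the driver's BlockManager registers with the cluster, which is why much of the usual advice for this setup is about letting the executors connect back to the driver on the host. Below is a minimal sketch of that commonly suggested variant, assuming Docker Desktop (where containers can reach the host as host.docker.internal); this is my illustration, not something confirmed to work in this case:

from pyspark.sql import SparkSession

# A commonly suggested variant (an assumption on my part, not confirmed
# to fix this case): executors must be able to open connections back to
# the driver, so the driver advertises an address the containers can
# resolve. 'host.docker.internal' is a Docker Desktop convention.
spark = (
    SparkSession.builder
    .appName("Day1_1")
    .master("spark://localhost:7077")
    .config("spark.driver.host", "host.docker.internal")
    .config("spark.driver.bindAddress", "0.0.0.0")
    .getOrCreate()
)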

So the way I solved this in the end was to build a Docker image, run multiple containers from it, attach to the running container through VS Code, and run the script from inside it.

Below is the modified docker-compose:

version: '2'
services:
  spark:
    build: .
    container_name: spark_master
    environment:
      - SPARK_MODE=master
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
    ports:
      - '7075:8080'
      - "7077:7077"
    volumes:
      - "./execution_scripts:/execution_scripts:rw"
      - "./resources:/resources:rw"

  spark-worker:
    build: .
    container_name: spark_worker_1
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark:7077
      - SPARK_WORKER_MEMORY=2G
      - SPARK_WORKER_CORES=1
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
    volumes:
      - "./execution_scripts:/execution_scripts:rw"
      - "./resources:/resources:rw"

  spark-worker-2:
    build: .
    container_name: spark_worker_2
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark:7077
      - SPARK_WORKER_MEMORY=2G
      - SPARK_WORKER_CORES=1
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
    volumes:
      - "./execution_scripts:/execution_scripts:rw"
      - "./resources:/resources:rw"

And the Dockerfile used to build that image is as follows:

FROM bitnami/spark:3.2.1
USER root

# Installing packages into Spark if needed, e.g.:
# spark-shell --master local --packages "<package name>"
RUN pip install findspark

EXPOSE 8080
EXPOSE 7075
EXPOSE 7077
(Of course, you need to create the two folders named execution_scripts and resources.) You can attach to the running container in VS Code, or in a similar way in any other IDE.
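
Once attached, the driver runs inside the same Docker network as the workers, so the master can be addressed by its compose service name. A minimal sketch of a script you could place in execution_scripts follows; the script name and the CSV path under /resources are hypothetical examples, not from the original post:

# execution_scripts/day1_job.py -- minimal sketch; the file name and the
# CSV path under /resources are hypothetical, not from the original post.
from pyspark.sql import SparkSession

# Inside the compose network the master is reachable by its service name.
spark = (
    SparkSession.builder
    .appName("Day1_1")
    .master("spark://spark:7077")
    .getOrCreate()
)

df_NYTaxi = spark.read.csv("/resources/ny_taxi.csv", header=True)
df_NYTaxi.show(5)
spark.stop()

From a shell inside the master container, the script can then be launched with spark-submit /execution_scripts/day1_job.py.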
