!pip install pyspark
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the sheet with pandas, then hand the result to Spark
pdf = pd.read_excel("xxxx.xlsx", sheet_name='Input (I)')
df = spark.createDataFrame(pdf)
df.show()
But I get the following error:
Py4JJavaError: An error occurred while calling o41.showString. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1) (10.75.81.111 executor driver): org.apache.spark.SparkException: Python worker failed to connect back.
The failure seems to be related to the communication between PySpark and the Python workers, and can be fixed by setting an environment variable:
set PYSPARK_PYTHON=python    (on Windows; export PYSPARK_PYTHON=python on Linux/macOS)
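The same fix can also be applied from Python itself, as long as it happens before the SparkSession is created. A minimal sketch, assuming the workers should run the same interpreter as the driver (sys.executable):

import os
import sys

# Make the Spark workers use the same interpreter as the driver,
# so the Python worker process can connect back to the JVM.
os.environ["PYSPARK_PYTHON"] = sys.executable
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable

from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()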
But why not load the xlsx file into a PySpark DataFrame directly? Something like:
df = (spark.read.format("com.crealytics.spark.excel")
    .option("useHeader", "true")              # renamed to 'header' in spark-excel >= 0.14
    .option("inferSchema", "true")
    .option("dataAddress", "'Input (I)'!A1")  # sheet name plus start cell
    .load("xxxx.xlsx"))
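For that to work, the spark-excel connector has to be on the classpath. A minimal sketch of one way to pull it in via spark.jars.packages; the Maven coordinate below is an assumption (a Scala 2.12 build) and must be matched to your own Spark/Scala versions:

from pyspark.sql import SparkSession

# Fetch the spark-excel connector from Maven Central at session start-up.
# The coordinate is illustrative; pick the release that matches your
# Spark and Scala versions.
spark = (SparkSession.builder
    .config("spark.jars.packages", "com.crealytics:spark-excel_2.12:0.13.7")
    .getOrCreate())

df = (spark.read.format("com.crealytics.spark.excel")
    .option("useHeader", "true")
    .option("dataAddress", "'Input (I)'!A1")  # sheet name plus start cell
    .load("xxxx.xlsx"))
df.show()

This avoids the pandas detour entirely, so the data never has to fit in driver memory as a pandas DataFrame first.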