PySpark writing data from Databricks to Azure SQL: ValueError: Some of types cannot be determined after inferring



I am using PySpark to write data from Azure Databricks to Azure SQL. The code runs fine when there are no nulls, but when the dataframe contains nulls I get the following error:

databricks/spark/python/pyspark/sql/pandas/conversion.py:300: UserWarning: createDataFrame attempted Arrow optimization because 'spark.sql.execution.arrow.pyspark.enabled' is set to true; however, failed by the reason below:
Unable to convert the field Product. If this column is not necessary, you may consider dropping it or converting to primitive type before the conversion.
Context: Unsupported type in conversion from Arrow: null
Attempting non-optimization as 'spark.sql.execution.arrow.pyspark.fallback.enabled' is set to true.
warnings.warn(msg)
ValueError: Some of types cannot be determined after inferring

The dataframe has to be written to SQL, nulls included. How can I fix this?

from pyspark.sql import SQLContext

sqlContext = SQLContext(sc)

def to_sql(df, table):
    # convert the pandas dataframe to a Spark dataframe and write it over JDBC
    finaldf = sqlContext.createDataFrame(df)
    finaldf.write.jdbc(url=url, table=table, mode="overwrite", properties=properties)

to_sql(data, f"TF_{table.upper()}")
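
The error itself comes from type inference in createDataFrame: when a pandas column contains only nulls, Spark cannot determine its type, and the Arrow path falls back with the warning shown above. One way to avoid the inference step altogether (a minimal sketch, with hypothetical column names and types that you would replace with your own) is to pass an explicit schema:

from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Hypothetical schema: declaring every column's type up front means Spark
# never has to infer a type from a possibly all-null pandas column.
schema = StructType([
    StructField("Product", StringType(), True),   # nullable string column
    StructField("Price", DoubleType(), True),     # nullable float column
])

finaldf = sqlContext.createDataFrame(df, schema=schema)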

Edit:

I created a function that maps pandas dtypes to SQL data types and outputs the columns with their data types as a single string.

def convert_dtype(df):
    df_mssql = {'int64': 'bigint', 'object': 'varchar(200)', 'float64': 'float'}
    mydict = {}
    for col in df.columns:
        if str(df.dtypes[col]) in df_mssql:
            mydict[col] = df_mssql.get(str(df.dtypes[col]))
    l = " ".join([str(k[0] + " " + k[1] + ",") for k in list(mydict.items())])
    return l[:-1]
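
For example, with a small pandas dataframe containing an integer, a string, and a float column (the column names below are only illustrative), the function returns a comma-separated column/type list:

import pandas as pd

sample = pd.DataFrame({
    "ProductId": [1, 2],          # int64   -> bigint
    "Product": ["chair", None],   # object  -> varchar(200)
    "Price": [9.99, None],        # float64 -> float
})

print(convert_dtype(sample))      # ProductId bigint, Product varchar(200), Price float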

Passing this string to the createTableColumnTypes option solved the issue:

jdbcDF.write \
    .option("createTableColumnTypes", convert_dtype(df)) \
    .jdbc("jdbc:postgresql:dbserver", "schema.tablename",
          properties={"user": "username", "password": "password"})

For this you need to specify the schema in the write statement. Here is an example from the documentation, also linked below:

jdbcDF.write \
    .option("createTableColumnTypes", "name CHAR(64), comments VARCHAR(1024)") \
    .jdbc("jdbc:postgresql:dbserver", "schema.tablename",
          properties={"user": "username", "password": "password"})

https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html
