How can I use posexplode in a Spark withColumn statement?
Seq(Array(1,2,3)).toDF.select(col("*"), posexplode(col("value")) as Seq("position", "value")).show
works fine, whereas:
Seq(Array(1,2,3)).toDF.withColumn("foo", posexplode(col("value"))).show
fails with:
org.apache.spark.sql.AnalysisException: The number of aliases supplied in the AS clause does not match the number of columns output by the UDTF expected 2 aliases but got foo ;
Instead of using withColumn(), you can select all columns of the DataFrame and append the result of posexplode(), supplying aliases for its pos and col output fields. Here is an example using PySpark.
from pyspark.sql import functions as F
from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[*]").getOrCreate()
df = spark.createDataFrame(
[(["a"], ), (["b", "c"], ), (["d", "e", "f"], )],
["A"],
)
df.show()
# +---------+
# | A|
# +---------+
# | [a]|
# | [b, c]|
# |[d, e, f]|
# +---------+
df = df.select("*", F.posexplode("A").alias("B", "C"))
df.show()
# +---------+---+---+
# | A| B| C|
# +---------+---+---+
# | [a]| 0| a|
# | [b, c]| 0| b|
# | [b, c]| 1| c|
# |[d, e, f]| 0| d|
# |[d, e, f]| 1| e|
# |[d, e, f]| 2| f|
# +---------+---+---+
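The reason withColumn fails is that posexplode is a table-generating function: for each array element it emits two columns at once (the position and the value), while withColumn can bind only a single column name. A minimal plain-Python sketch of those semantics (a hypothetical helper, no Spark required, not part of the PySpark API):

```python
def posexplode_rows(arrays):
    """For each input array, yield one (position, value) pair per element,
    mimicking what Spark's posexplode generates per row."""
    rows = []
    for arr in arrays:
        for pos, value in enumerate(arr):
            # Two output fields per element -- this is why a single
            # alias like "foo" cannot cover posexplode's output.
            rows.append((pos, value))
    return rows

print(posexplode_rows([[1, 2, 3]]))
# [(0, 1), (1, 2), (2, 3)]
```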
Not sure whether this is actually needed, but you can try a select statement instead of withColumn, e.g.
df.select('col1', 'col2', F.posexplode('col_to_be_exploded'))
The withColumn function does not seem to work with posexplode. You can use something like:
df.select($"*", posexplode($"value").as(List("index", "column")))
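The select("*", ...) pattern above keeps every original column and appends the two generated ones. A plain-Python sketch of that row shape (hypothetical helper, no Spark required; column names A, B, C match the PySpark example above):

```python
def select_star_posexplode(arrays):
    """Mimic select("*", posexplode("A").alias("B", "C")): keep the
    original array column A and append position B and value C per element."""
    out = []
    for arr in arrays:
        for pos, value in enumerate(arr):
            out.append({"A": arr, "B": pos, "C": value})
    return out

for row in select_star_posexplode([["a"], ["b", "c"]]):
    print(row)
# {'A': ['a'], 'B': 0, 'C': 'a'}
# {'A': ['b', 'c'], 'B': 0, 'C': 'b'}
# {'A': ['b', 'c'], 'B': 1, 'C': 'c'}
```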