I have a special requirement to normalize data as follows:
Source dataframe: Key, item_desc
Target dataframe: Key, item_desc1, item_desc2, item_desc3, item_desc4
For every 4 records in the source dataframe, I should create one record in the target dataframe.
Source data:
Key, item_desc
1, desc1
1, desc2
1, desc3
1, desc4
1, desc5
1, desc6
Target data:
key, item_desc1, item_desc2, item_desc3, item_desc4
1, desc1, desc2, desc3, desc4
1, desc5, desc6
Can anyone guide me on how to write this code? I put together a sample in Scala, as follows:
val l = (1 to 102).toList
var n = ""
var j = 1
for (i <- l) {
  n = n + i + ","                // append the current element (l(j) skipped l(0) and ran past the end)
  if (j % 4 == 0) {              // flush a complete group of 4
    println(n)
    n = ""
  }
  if (j == l.size && n.nonEmpty) // flush the trailing partial group
    println(n)
  j = j + 1
}
However, I need to apply this logic to a dataframe/RDD/list. Please help!
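For a plain List, the grouping itself can be sketched with the built-in grouped (a minimal sketch of what I'm after):

```scala
val l = (1 to 10).toList
// Split into chunks of 4; the last chunk may be shorter
val chunks = l.grouped(4).toList
// chunks: List(List(1, 2, 3, 4), List(5, 6, 7, 8), List(9, 10))
```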
You can try something like this:
import org.apache.spark.sql.expressions._
import org.apache.spark.sql.functions._
import spark.implicits._  // assumes a SparkSession named spark is in scope (e.g. spark-shell)
val w = Window.partitionBy("key").orderBy("item_desc")
val df = Seq(
(1, "desc1"), (1, "desc2"), (1, "desc3"),
(1, "desc4"), (1, "desc5"), (1, "desc6")
).toDF("key", "item_desc")
df
// Add sequential id per group: 0 .. n - 1
.withColumn("id", row_number().over(w) - 1)
// Add row group id
.withColumn("group_id", floor($"id" / 4))
// Add target column name (item_desc1 .. item_desc4)
.withColumn("column_id", concat(lit("item_desc"), $"id" % 4 + 1))
.groupBy("key", "group_id")
.pivot("column_id")
.agg(first("item_desc"))
.drop("group_id").show
// +---+----------+----------+----------+----------+
// |key|item_desc1|item_desc2|item_desc3|item_desc4|
// +---+----------+----------+----------+----------+
// | 1| desc1| desc2| desc3| desc4|
// | 1| desc5| desc6| null| null|
// +---+----------+----------+----------+----------+
However, since the window sends all rows for a key to a single partition, this won't scale well unless the number of values associated with a single key is small, so use it at your own discretion.
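The per-key chunking and null-padding that the pivot performs can also be sketched in plain Scala (toRows is a hypothetical helper, not part of the Spark API):

```scala
// Hypothetical helper: chunk one key's descriptions into rows of exactly 4,
// padding the last chunk with null to mirror the pivot's null columns
def toRows(descs: Seq[String]): Seq[Seq[String]] =
  descs.grouped(4).map(_.padTo(4, null: String)).toSeq

val rows = toRows(Seq("desc1", "desc2", "desc3", "desc4", "desc5", "desc6"))
// rows: Seq(Seq(desc1, desc2, desc3, desc4), Seq(desc5, desc6, null, null))
```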