Problem statement:
I have a dataframe with four columns: service (String), show (String), country_1 (Integer), and country_2 (Integer). My goal is to produce a dataframe with only two columns: service (String) and information (Map[Integer, List[String]]), where the map can hold multiple key/value entries per streaming service, like this:
{
"34521": ["The Crown", "Bridgerton", "The Queen's Gambit"],
"49678": ["The Crown", "Bridgerton", "The Queen's Gambit"]
}
One thing to note is that more countries may be added in the future, i.e. the input dataframe may gain additional columns such as country_3, country_4, and so on. Ideally the solution should account for this instead of hard-coding the selected columns by name, which is what I do in my attempted solution below.
Input Dataframe:
Schema:
root
|-- service: string (nullable = true)
|-- show: string (nullable = true)
|-- country_1: integer (nullable = true)
|-- country_2: integer (nullable = true)
Dataframe:
service | show | country_1 | country_2
Netflix | The Crown | 34521 | 49678
Netflix | Bridgerton | 34521 | 49678
Netflix | The Queen's Gambit | 34521 | 49678
Peacock | The Office | 34521 | 49678
Disney+ | WandaVision | 34521 | 49678
Disney+ | Marvel's 616 | 34521 | 49678
Disney+ | The Mandalorian | 34521 | 49678
Apple TV | Ted Lasso | 34521 | 49678
Apple TV | The Morning Show | 34521 | 49678
Output Dataframe:
Schema:
root
|-- service: string (nullable = true)
|-- information: map (nullable = false)
| |-- key: integer
| |-- value: array (valueContainsNull = true)
| | |-- element: string (containsNull = true)
Dataframe:
service | information
Netflix | [34521 -> [The Crown, Bridgerton, The Queen's Gambit], 49678 -> [The Crown, Bridgerton, The Queen's Gambit]]
Peacock | [34521 -> [The Office], 49678 -> [The Office]]
Disney+ | [34521 -> [WandaVision, Marvel's 616, The Mandalorian], 49678 -> [WandaVision, Marvel's 616, The Mandalorian]]
Apple TV | [34521 -> [Ted Lasso, The Morning Show], 49678 -> [Ted Lasso, The Morning Show]]
What I have tried:
Although I have managed to produce the desired output with the snippet pasted below, I would rather not rely on very basic SQL-style commands, since I suspect they are not always optimal for fast computation on large datasets. I also do not want to select the country columns by their exact names when building the map, because more country columns can be added later and those names will keep changing.
Is there a better way to do this, using UDFs, foldLeft, or anything else that helps with optimization and keeps the code concise rather than cluttered?
import org.apache.spark.sql.functions._   // collect_list, map
import spark.implicits._                   // enables the $"col" syntax

val df = spark.read.parquet("filepath/*.parquet")
val temp = df.groupBy("service", "country_1", "country_2").agg(collect_list("show").alias("show"))
val service_information = temp.withColumn("information", map($"country_1", $"show", $"country_2", $"show")).drop("country_1", "country_2", "show")
Per the country data "specs" described in the comments (i.e. the country code will be identical and non-null across all rows for any given country_X column), your code can be generalized to handle an arbitrary number of country columns:
import org.apache.spark.sql.functions._
import spark.implicits._

val df = Seq(
  ("Netflix", "The Crown", 34521, 49678),
  ("Netflix", "Bridgerton", 34521, 49678),
  ("Netflix", "The Queen's Gambit", 34521, 49678),
  ("Peacock", "The Office", 34521, 49678),
  ("Disney+", "WandaVision", 34521, 49678),
  ("Disney+", "Marvel's 616", 34521, 49678),
  ("Disney+", "The Mandalorian", 34521, 49678),
  ("Apple TV", "Ted Lasso", 34521, 49678),
  ("Apple TV", "The Morning Show", 34521, 49678)
).toDF("service", "show", "country_1", "country_2")

// Pick up every country column dynamically instead of by hard-coded name.
val countryCols = df.columns.filter(_.startsWith("country_")).toList

// Group by service plus all country columns and collect the shows per group.
val grouped = df.groupBy("service", countryCols: _*).agg(collect_list("show").as("shows"))

// Interleave each country column with the collected show list to build the map column.
val service_information = grouped.withColumn(
  "information",
  map(countryCols.flatMap{ c => col(c) :: col("shows") :: Nil }: _*)
).drop("shows" :: countryCols: _*)

service_information.show(false)
// +--------+--------------------------------------------------------------------------------------------------------------+
// |service |information |
// +--------+--------------------------------------------------------------------------------------------------------------+
// |Disney+ |[34521 -> [WandaVision, Marvel's 616, The Mandalorian], 49678 -> [WandaVision, Marvel's 616, The Mandalorian]]|
// |Peacock |[34521 -> [The Office], 49678 -> [The Office]] |
// |Netflix |[34521 -> [The Crown, Bridgerton, The Queen's Gambit], 49678 -> [The Crown, Bridgerton, The Queen's Gambit]] |
// |Apple TV|[34521 -> [Ted Lasso, The Morning Show], 49678 -> [Ted Lasso, The Morning Show]] |
// +--------+--------------------------------------------------------------------------------------------------------------+
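As a quick follow-up, the resulting map column can be consumed with Spark's built-in map functions rather than a UDF. The snippet below is only a small usage sketch, assuming Spark 2.4+ (where element_at is available); 34521 is simply one of the country codes from the sample data and shows_34521 is an arbitrary alias:
// Look up the show list for a single country code in the map column.
service_information
  .select($"service", element_at($"information", 34521).as("shows_34521"))
  .show(false)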
Note that the country "specs" described above force every show to be associated with the same list of countries. For instance, if you have 3 country_X columns and every row of a given country_X is identical and non-null, then every show is tied to all 3 countries. What if a show is only available in 2 of those 3 countries?
If your data schema can be modified, a more flexible way to maintain the relevant country information is to give each show a single ArrayType column of countries:
val df = Seq(
  ("Netflix", "The Crown", Seq(34521, 49678)),
  ("Netflix", "Bridgerton", Seq(34521)),
  ("Netflix", "The Queen's Gambit", Seq(10001, 49678)),
  ("Peacock", "The Office", Seq(34521, 49678)),
  ("Disney+", "WandaVision", Seq(10001, 20002, 34521)),
  ("Disney+", "Marvel's 616", Seq(49678)),
  ("Disney+", "The Mandalorian", Seq(34521, 49678)),
  ("Apple TV", "Ted Lasso", Seq(34521, 49678)),
  ("Apple TV", "The Morning Show", Seq(20002, 34521))
).toDF("service", "show", "countries")

// Explode the countries array, then collect the shows per (service, country) pair.
val grouped = df.withColumn("country", explode($"countries")).
  groupBy("service", "country").agg(collect_list($"show").as("shows"))

// Collapse the per-country rows back into a single map column per service.
val service_information = grouped.groupBy("service").
  agg(collect_list($"country").as("c_list"), collect_list($"shows").as("s_list")).
  select($"service", map_from_arrays($"c_list", $"s_list").as("information"))

service_information.show(false)
// +--------+-----------------------------------------------------------------------------------------------------------------------------------+
// |service |information |
// +--------+-----------------------------------------------------------------------------------------------------------------------------------+
// |Peacock |[34521 -> [The Office], 49678 -> [The Office]] |
// |Disney+ |[20002 -> [WandaVision], 49678 -> [Marvel's 616, The Mandalorian], 34521 -> [WandaVision, The Mandalorian], 10001 -> [WandaVision]]|
// |Apple TV|[34521 -> [Ted Lasso, The Morning Show], 49678 -> [Ted Lasso], 20002 -> [The Morning Show]] |
// |Netflix |[49678 -> [The Crown, The Queen's Gambit], 10001 -> [The Queen's Gambit], 34521 -> [The Crown, Bridgerton]] |
// +--------+-----------------------------------------------------------------------------------------------------------------------------------+
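If the upstream schema cannot actually be changed, one possible bridge between the two variants is to first collapse the wide country_X columns into a single array column and then reuse the second approach unchanged. The snippet below is only a sketch, under the assumption that a country_X cell is left null when a show is not available in that country; wideDf is a placeholder name for the original wide-format input dataframe:
// wideDf: the original dataframe with service, show, country_1, country_2, ...
val countryCols = wideDf.columns.filter(_.startsWith("country_")).toList

val arrayDf = wideDf
  .withColumn("countries", array(countryCols.map(col): _*))                 // pack the country_X values into one array
  .withColumn("countries", expr("filter(countries, c -> c IS NOT NULL)"))   // drop nulls (Spark 2.4+ higher-order filter)
  .drop(countryCols: _*)
// arrayDf now has the (service, show, countries) shape used in the second example,
// so the explode / groupBy / map_from_arrays steps above apply as written.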