Spark Dataset - NullPointerException when performing a filter on the Dataset



I have two datasets, shown below. I am trying to find out how many products are associated with each game; essentially, I want to keep a count of the associated products.

   scala> df1.show()
   gameid     | games     | users         | cnt_assoc_prod  
   -------------------------------------------
   1          | cricket   |[111, 121]     |
   2          | basketball|[211]          |
   3          | skating   |[101, 100, 98] |
   scala> df2.show()
   user  | products 
   ----------------------
   98    | "shampoo"   
   100   | "soap"          
   101   | "shampoo"
   111   | "shoes"
   121   | "honey"
   211   | "shoes"

I am trying to iterate over each user in df1's users array and find the corresponding row in df2 by applying a filter on the user column.

df1.map { x =>
   var assoc_products = new Set()
   x.users.foreach(y => assoc_products + df2.filter(z => z.user == y).first().products)
   x.cnt_assoc_prod = assoc_products.size
}

When the filter is applied, I get the following exception:

java.lang.NullPointerException
    at org.apache.spark.sql.Dataset.logicalPlan(Dataset.scala:784)
    at org.apache.spark.sql.Dataset.mapPartitions(Dataset.scala:344)
    at org.apache.spark.sql.Dataset.filter(Dataset.scala:307)

I am using Spark version 1.6.1.

The NullPointerException occurs because df2 is referenced inside a transformation that runs on the executors; Dataset operations can only be invoked on the driver, so nested calls like `df2.filter` inside `df1.map` are not supported. Instead, explode the users column, join with df2 on the user column, and then do a groupBy with count:

(df1.withColumn("user", explode(col("users")))
    .join(df2, Seq("user"))
    .groupBy("gameid", "games")
    .agg(count($"products").alias("cnt_assoc_prod"))
).show
+------+----------+--------------+
|gameid|     games|cnt_assoc_prod|
+------+----------+--------------+
|     3|   skating|             3|
|     2|basketball|             1|
|     1|   cricket|             2|
+------+----------+--------------+
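If the join approach does not fit and df2 is small, another option is to collect df2 into a driver-side map and broadcast it, so the closure never touches a Dataset on the executors. This is only a sketch under that smallness assumption; the `sc` (SparkContext) handle and the column types (`Long` user ids, `String` products) are assumptions, not taken from the question.

```scala
// Assumption: df2 fits in driver memory and user ids are Long, products are String.
val userToProduct: Map[Long, String] =
  df2.collect()
     .map(r => r.getAs[Long]("user") -> r.getAs[String]("products"))
     .toMap

// Broadcast the lookup table so every executor gets one read-only copy.
val lookup = sc.broadcast(userToProduct)

val counted = df1.map { x =>
  // Resolve each user to its product (skipping missing users) and
  // count the distinct associated products for this game.
  val prods = x.users.flatMap(lookup.value.get).toSet
  (x.gameid, x.games, prods.size)
}
```

Note this counts *distinct* products per game, whereas `count($"products")` in the join version counts one row per user; pick whichever semantics you actually want.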
