Getting the top N values in Spark SQL



I need to display the top 5 states and cities by total review count (review_count in the original schema shown below). The description of my DF (built from a JSON file) is as follows.

+-------------+--------------------+-------+
|     col_name|           data_type|comment|
+-------------+--------------------+-------+
|   attributes|struct<Accepts Cr...|   null|
|         city|              string|   null|
|neighborhoods|       array<string>|   null|
|         open|             boolean|   null|
| review_count|              bigint|   null|
|        stars|              double|   null|
|        state|              string|   null|
|         type|              string|   null|
+-------------+--------------------+-------+

I tried the sort-by approach, but it didn't work. I then learned about window functions here.

With the code I wrote, the review_count values are not the exact values from the JSON file.

The code I tried is:

val topcity=spark.sql("select city,state,review_count,RANK() OVER (ORDER BY review_count desc ) AS RANKING from yelp").show(5)

Here is the output I got:

+-------------+-----+------------+-------+
|         city|state|review_count|RANKING|
+-------------+-----+------------+-------+
|   Pittsburgh|   PA|           3|      1|
|     Carnegie|   PA|           3|      2|
|     Carnegie|   PA|           3|      3|
|     Carnegie|   PA|           3|      4|
|   Pittsburgh|   PA|           3|      5|
+-------------+-----+------------+-------+

So my review count only ever shows the constant value 3. My questions are:

  1. Why is the review count always 3?
  2. What changes should I make to get the top 5 exact review-count values?

Here is an implementation, assuming you are looking for the total review count per state/city combination (hope I got that right):

First, we generate some dummy data:

# Imports needed to make this example self-contained.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

cities_data = [
    ["Alameda", "California", 1],
    ["Alameda", "California", 3],
    ["Berkeley", "California", 2],
    ["Beverly Hills", "California", 2],
    ["Beverly Hills", "California", 3],
    ["Hollywood", "California", 4],
    ["Miami", "Florida", 3],
    ["Miami", "Florida", 2],
    ["Orlando", "Florida", 1],
    ["Cocoa Beach", "Florida", 1]]
cols = ["city", "state", "review_count"]
df = spark.createDataFrame(cities_data, cols)
df.show(10, False)

This prints:

+-------------+----------+------------+
|city         |state     |review_count|
+-------------+----------+------------+
|Alameda      |California|1           |
|Alameda      |California|3           |
|Berkeley     |California|2           |
|Beverly Hills|California|2           |
|Beverly Hills|California|3           |
|Hollywood    |California|4           |
|Miami        |Florida   |3           |
|Miami        |Florida   |2           |
|Orlando      |Florida   |1           |
|Cocoa Beach  |Florida   |1           |
+-------------+----------+------------+

The data is grouped by state/city to get the sum of the review counts. This is PySpark, but it should be easy to change it to Scala (a sketch follows the expected output below):

# Wrap the chain in parentheses so the line breaks are valid Python.
df = (df.groupBy("state", "city")
        .agg(F.sum("review_count").alias("reviews_count"))
        .orderBy(F.desc("reviews_count"))
        .limit(5))
df.show(5, False)

This should be the output for the scenario above:

+----------+-------------+-------------+
|state     |city         |reviews_count|
+----------+-------------+-------------+
|California|Beverly Hills|5            |
|Florida   |Miami        |5            |
|California|Alameda      |4            |
|California|Hollywood    |4            |
|California|Berkeley     |2            |
+----------+-------------+-------------+
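
Since the code in the question is Scala, here is a minimal Scala sketch of the same aggregation. The DataFrame variant assumes a df already loaded from your JSON file, and the SQL variant assumes the yelp temp view from your question; treat both as sketches rather than tested code.

import org.apache.spark.sql.functions.{sum, desc}

// DataFrame API: the same groupBy/agg/orderBy/limit chain as the PySpark answer.
// Assumes `df` holds the DataFrame read from the JSON file.
val top5 = df.groupBy("state", "city")
  .agg(sum("review_count").alias("reviews_count"))
  .orderBy(desc("reviews_count"))
  .limit(5)
top5.show(false)

// Spark SQL equivalent, assuming the `yelp` temp view from the question:
spark.sql("""
  SELECT state, city, SUM(review_count) AS reviews_count
  FROM yelp
  GROUP BY state, city
  ORDER BY reviews_count DESC
  LIMIT 5
""").show(false)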
