SparkSQL: conditional sum over date ranges



I have a dataframe like this:

| id | prodId |    date    | value |
| 1  |   a    | 2015-01-01 | 100   |
| 2  |   a    | 2015-01-02 | 150   |
| 3  |   a    | 2015-01-03 | 120   |
| 4  |   b    | 2015-01-01 | 100   |

I would like to do a groupBy on prodId and aggregate "value", summing it over date ranges. In other words, I need to build a table with the following columns:

  • prodId
  • val_1: sum of value if the date falls between date1 and date2
  • val_2: sum of value if the date falls between date2 and date3
  • val_3: and so on

     | prodId |      val_1       |      val_2       |
     |        | (01-01 to 01-02) | (01-03 to 01-04) |
     |   a    |        250       |        120       |
     |   b    |        100       |        0         |
    

Is there any predefined aggregate function in Spark that allows conditional sums like this? Would you recommend developing an aggregation UDF instead (and if so, any suggestions)? Thanks a lot!

First, let's recreate the example dataset:

import org.apache.spark.sql.functions.to_date
import spark.implicits._  // for $-notation and toDF; already in scope in the shell (sqlContext.implicits._ on Spark 1.x)

val df = sc.parallelize(Seq(
  (1, "a", "2015-01-01", 100), (2, "a", "2015-01-02", 150),
  (3, "a", "2015-01-03", 120), (4, "b", "2015-01-01", 100)
)).toDF("id", "prodId", "date", "value").withColumn("date", to_date($"date"))

// Inclusive date ranges to sum over
val dates = List(("2015-01-01", "2015-01-02"), ("2015-01-03", "2015-01-04"))
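The question also asks for val_3 "and so on"; if the windows follow a regular pattern, the pairs don't have to be hard-coded. A minimal sketch using java.time, assuming inclusive two-day windows and a hypothetical helper named dateWindows (it reproduces the dates list above):

import java.time.LocalDate

// Consecutive, inclusive windows of `days` days between start and end
def dateWindows(start: LocalDate, end: LocalDate, days: Long): List[(String, String)] =
  Iterator.iterate(start)(_.plusDays(days))
    .takeWhile(!_.isAfter(end))
    .map(d => (d.toString, d.plusDays(days - 1).toString))
    .toList

dateWindows(LocalDate.parse("2015-01-01"), LocalDate.parse("2015-01-04"), 2)
// List((2015-01-01,2015-01-02), (2015-01-03,2015-01-04))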

After that, all you have to do is something like this:

import org.apache.spark.sql.functions.{when, lit, sum}
val exprs = dates.map{
  case (x, y) => {
    // Create label for a column name
    val alias = s"${x}_${y}".replace("-", "_")
    // Convert strings to dates
    val xd = to_date(lit(x))
    val yd = to_date(lit(y))
    // Generate expression equivalent to
    // SUM(
    //   CASE
    //     WHEN date BETWEEN ... AND ... THEN value
    //     ELSE 0
    //   END
    // ) AS ...
    // for each pair of dates.
    sum(when($"date".between(xd, yd), $"value").otherwise(0)).alias(alias)
  }
}
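// Optional sanity check: printing the generated columns shows the
// SUM(CASE WHEN ... END) AS ... expression built for each window
// (exact formatting varies by Spark version).
exprs.foreach(println)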
df.groupBy($"prodId").agg(exprs.head, exprs.tail: _*).show
// +------+---------------------+---------------------+
// |prodId|2015_01_01_2015_01_02|2015_01_03_2015_01_04|
// +------+---------------------+---------------------+
// |     a|                  250|                  120|
// |     b|                  100|                    0|
// +------+---------------------+---------------------+
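
Since everything above uses only built-in functions, no UDAF is needed, and the same result can also be expressed in plain SQL. A minimal sketch, assuming Spark 2.x with a SparkSession named spark (on 1.x, use registerTempTable and sqlContext.sql) and a temporary view name of products:

df.createOrReplaceTempView("products")
spark.sql("""
  SELECT
    prodId,
    SUM(CASE WHEN date BETWEEN '2015-01-01' AND '2015-01-02' THEN value ELSE 0 END) AS val_1,
    SUM(CASE WHEN date BETWEEN '2015-01-03' AND '2015-01-04' THEN value ELSE 0 END) AS val_2
  FROM products
  GROUP BY prodId
""").show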
