How to iterate over each row of a DataFrame/RDD per group in PySpark



I want to set the value of a column based on that column's value in the previous row of the group. The updated value is then used for the next row.

I have the following dataframe:

id | start_date | sort_date  | A | B |
---------------------------------------
1 | 1/1/2017 | 31-01-2015 | 1 | 0 | 
1 | 1/1/2017 | 28-02-2015 | 0 | 0 | 
1 | 1/1/2017 | 31-03-2015 | 1 | 0 | 
1 | 1/1/2017 | 30-04-2015 | 1 | 0 | 
1 | 1/1/2017 | 31-05-2015 | 1 | 0 | 
1 | 1/1/2017 | 30-06-2015 | 1 | 0 | 
1 | 1/1/2017 | 31-07-2015 | 1 | 0 | 
1 | 1/1/2017 | 31-08-2015 | 1 | 0 | 
1 | 1/1/2017 | 30-09-2015 | 0 | 0 | 
2 | 1/1/2017 | 31-10-2015 | 1 | 0 | 
2 | 1/1/2017 | 30-11-2015 | 0 | 0 | 
2 | 1/1/2017 | 31-12-2015 | 1 | 0 | 
2 | 1/1/2017 | 31-01-2016 | 1 | 0 | 
2 | 1/1/2017 | 28-02-2016 | 1 | 0 | 
2 | 1/1/2017 | 31-03-2016 | 1 | 0 | 
2 | 1/1/2017 | 30-04-2016 | 1 | 0 | 
2 | 1/1/2017 | 31-05-2016 | 1 | 0 | 
2 | 1/1/2017 | 30-06-2016 | 0 | 0 | 
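
For experimentation, the input above can be recreated as a DataFrame; a minimal sketch, assuming a Spark 1.6 SQLContext named sqlContext. The dates are kept as plain strings for brevity; note that DD-MM-YYYY strings do not sort chronologically, so in practice you would parse sort_date into a date type first:

import sqlContext.implicits._

val df = Seq(
  (1, "1/1/2017", "31-01-2015", 1, 0),
  (1, "1/1/2017", "28-02-2015", 0, 0),
  (1, "1/1/2017", "31-03-2015", 1, 0),
  (1, "1/1/2017", "30-04-2015", 1, 0),
  (1, "1/1/2017", "31-05-2015", 1, 0),
  (1, "1/1/2017", "30-06-2015", 1, 0),
  (1, "1/1/2017", "31-07-2015", 1, 0),
  (1, "1/1/2017", "31-08-2015", 1, 0),
  (1, "1/1/2017", "30-09-2015", 0, 0),
  (2, "1/1/2017", "31-10-2015", 1, 0),
  (2, "1/1/2017", "30-11-2015", 0, 0),
  (2, "1/1/2017", "31-12-2015", 1, 0),
  (2, "1/1/2017", "31-01-2016", 1, 0),
  (2, "1/1/2017", "28-02-2016", 1, 0),
  (2, "1/1/2017", "31-03-2016", 1, 0),
  (2, "1/1/2017", "30-04-2016", 1, 0),
  (2, "1/1/2017", "31-05-2016", 1, 0),
  (2, "1/1/2017", "30-06-2016", 0, 0)
).toDF("id", "start_date", "sort_date", "A", "B")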

Output:

id | start_date | sort_date  | A | B | C
------------------------------------------
1 | 1/1/2017 | 31-01-2015 | 1 | 0 | 1
1 | 1/1/2017 | 28-02-2015 | 0 | 0 | 0
1 | 1/1/2017 | 31-03-2015 | 1 | 0 | 1
1 | 1/1/2017 | 30-04-2015 | 1 | 0 | 2
1 | 1/1/2017 | 31-05-2015 | 1 | 0 | 3
1 | 1/1/2017 | 30-06-2015 | 1 | 0 | 4
1 | 1/1/2017 | 31-07-2015 | 1 | 0 | 5
1 | 1/1/2017 | 31-08-2015 | 1 | 0 | 6
1 | 1/1/2017 | 30-09-2015 | 0 | 0 | 0
2 | 1/1/2017 | 31-10-2015 | 1 | 0 | 1
2 | 1/1/2017 | 30-11-2015 | 0 | 0 | 0
2 | 1/1/2017 | 31-12-2015 | 1 | 0 | 1
2 | 1/1/2017 | 31-01-2016 | 1 | 0 | 2
2 | 1/1/2017 | 28-02-2016 | 1 | 0 | 3
2 | 1/1/2017 | 31-03-2016 | 1 | 0 | 4
2 | 1/1/2017 | 30-04-2016 | 1 | 0 | 5
2 | 1/1/2017 | 31-05-2016 | 1 | 0 | 6
2 | 1/1/2017 | 30-06-2016 | 0 | 0 | 0

The grouping is by id and date.

Column C is derived from columns A and B.

If A == 1 and B == 0, then C is the previous row's C + 1.
There are some other conditions as well, but it is this part that I am struggling with.
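
To make the rule concrete, here is a minimal sketch of the recurrence on plain Scala collections (pure Scala, not Spark), assuming the rows of one group are already in sort_date order:

// (A, B) pairs of one group, in sort_date order
val ab = Seq((1, 0), (0, 0), (1, 0), (1, 0))

// C starts at 0, is previous C + 1 while A == 1 && B == 0, and resets to 0 otherwise
val c = ab.scanLeft(0) { case (prev, (a, b)) =>
  if (a == 1 && b == 0) prev + 1 else 0
}.tail
// c == List(1, 0, 1, 2)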

Assume we have a column sort_date in the dataframe.

I tried the following query:

SELECT
  id,
  start_date,
  sort_date,
  lag(A) OVER (PARTITION BY id, start_date ORDER BY sort_date) AS prev,
  CASE
    WHEN A = 1 AND B = 0 THEN 1
    WHEN A = 1 AND B > 0 THEN prev + 1
    ELSE 0
  END AS C
FROM
  Table

This does not work: prev is an alias defined in the same SELECT, so it cannot be referenced there, and in any case lag(A) only gives the previous value of A, not the running value of C that I am trying to build.

This is what I did for the UDAF:

val myFunc = new MyUDAF
val w = Window.partitionBy(col("ID"), col("START_DATE")).orderBy(col("SORT_DATE"))
val newDF = df.withColumn("C", myFunc(col("START_DATE"), col("X"),
  col("Y"), col("A"),
  col("B")).over(w))

P.S.: I am using Spark 1.6.

First, define a window:

import org.apache.spark.sql.expressions.Window
val winspec = Window.partitionBy("id","start_date").orderBy("sort_date")

Next, create a UDAF that receives A and B and computes C by starting from 0, incrementing by 1 whenever the condition holds (A = 1, B = 0), and resetting to 0 at any other time. To learn how to write a UDAF, see the examples here, here and here.

Edit: here is a sample implementation of the UDAF (not actually tested, so there may be typos):

import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

class myFunc() extends UserDefinedAggregateFunction {
  // Input Data Type Schema
  def inputSchema: StructType = StructType(Array(StructField("A", IntegerType), StructField("B", IntegerType)))
  // Intermediate Schema
  def bufferSchema = StructType(Array(StructField("C", IntegerType)))
  // Returned Data Type
  def dataType: DataType = IntegerType
  // Self-explaining
  def deterministic = true
  // This function is called whenever the key changes
  def initialize(buffer: MutableAggregationBuffer) = {
    buffer(0) = 0 // start the running count at 0
  }
  // Iterate over each entry of a group
  def update(buffer: MutableAggregationBuffer, input: Row) = {
    buffer(0) = if (input.getInt(0) == 1 && input.getInt(1) == 0) buffer.getInt(0) + 1 else 0
  }
  // Merge two partial aggregates - doesn't really matter here because the window
  // keeps the buffer within a single partition
  def merge(buffer1: MutableAggregationBuffer, buffer2: Row) = {
    buffer1(0) = buffer1.getInt(0) + buffer2.getInt(0)
  }
  // Called after all the entries are exhausted
  def evaluate(buffer: Row) = {
    buffer.getInt(0)
  }
}

Finally, apply it to the dataframe. Assuming you named the UDAF myFunc:

val f = new myFunc()
// the $ column syntax requires the implicits import, e.g. import sqlContext.implicits._
val newDF = df.withColumn("newC", f($"A", $"B").over(winspec))
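
As an aside, if the UDAF route turns out to be a problem, the same running count with reset can also be expressed with plain window functions via the "gaps and islands" trick: a running sum over reset rows assigns a segment id, and a running count of qualifying rows within each segment yields C. An untested sketch, reusing df and winspec from above (window functions in Spark 1.6 require a HiveContext):

import org.apache.spark.sql.functions._

// 1) every row breaking the A = 1, B = 0 condition starts a new segment
val isReset = when($"A" === 1 && $"B" === 0, 0).otherwise(1)
val segmented = df.withColumn("grp", sum(isReset).over(winspec))

// 2) within each segment, count the qualifying rows seen so far
val segWin = Window.partitionBy($"id", $"start_date", $"grp").orderBy($"sort_date")
val withC = segmented.withColumn("C",
  when($"A" === 1 && $"B" === 0,
       count(when($"A" === 1 && $"B" === 0, 1)).over(segWin))
    .otherwise(0))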
