I would like to be able to group Spark data by a step size rather than just single values. Is there anything in Spark similar to PySpark 2.x's window function, but for numeric (non-date) values?
Something along these lines:
from pyspark.sql import SQLContext

sqlContext = SQLContext(sc)
df = sqlContext.createDataFrame([10, 11, 12, 13], "integer").toDF("foo")
# hypothetical API: bucket "foo" into windows of width 2 starting at 10
res = df.groupBy(window("foo", step=2, start=10)).count()
You can reuse the timestamp-based window function and express the parameters in seconds. Tumbling:
from pyspark.sql.functions import col, window
df.withColumn(
"window",
window(
col("foo").cast("timestamp"),
windowDuration="2 seconds"
).cast("struct<start:bigint,end:bigint>")
).show()
# +---+-------+
# |foo| window|
# +---+-------+
# | 10|[10,12]|
# | 11|[10,12]|
# | 12|[12,14]|
# | 13|[12,14]|
# +---+-------+
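If you also need to control where the buckets are anchored (the start=10 in the hypothetical API above), window additionally takes a startTime offset. With a width of 2 and a start of 10, the default epoch alignment already matches, so the offset only matters when the start is not a multiple of the window size. A minimal sketch, assuming you instead wanted width-2 buckets anchored at odd values (the startTime value here is illustrative):

df.withColumn(
    "window",
    window(
        col("foo").cast("timestamp"),
        windowDuration="2 seconds", startTime="1 seconds"
    ).cast("struct<start:bigint,end:bigint>")
).show()
# +---+-------+
# |foo| window|
# +---+-------+
# | 10| [9,11]|
# | 11|[11,13]|
# | 12|[11,13]|
# | 13|[13,15]|
# +---+-------+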
Rolling one:
df.withColumn(
"window",
window(
col("foo").cast("timestamp"),
windowDuration="2 seconds", slideDuration="1 seconds"
).cast("struct<start:bigint,end:bigint>")
).show()
# +---+-------+
# |foo| window|
# +---+-------+
# | 10| [9,11]|
# | 10|[10,12]|
# | 11|[10,12]|
# | 11|[11,13]|
# | 12|[11,13]|
# | 12|[12,14]|
# | 13|[12,14]|
# | 13|[13,15]|
# +---+-------+
With groupBy and start:
w = window(col("foo").cast("timestamp"), "2 seconds").cast("struct<start:bigint,end:bigint>")
start = w.start.alias("start")
df.groupBy(start).count().show()
# +-----+-----+
# |start|count|
# +-----+-----+
# |   10|    2|
# |   12|    2|
# +-----+-----+
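For what it's worth, if you would rather avoid the timestamp cast altogether, the same bucketing can be expressed with plain floor arithmetic. A minimal sketch, assuming an integer foo column; step and origin are the illustrative step/start values from the question:

from pyspark.sql.functions import col, floor

step, origin = 2, 10  # illustrative step/start values from the question
bucket = (floor((col("foo") - origin) / step) * step + origin).alias("start")
df.groupBy(bucket).count().show()
# +-----+-----+
# |start|count|
# +-----+-----+
# |   10|    2|
# |   12|    2|
# +-----+-----+
# (row order of groupBy output is not guaranteed)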