In Spark SQL, you can define a window query ordered by a set of columns, but it does not seem possible to define more than one range clause over those columns.
For example:
select
  row_id,
  count(*) over (
    partition by group_id
    order by filter_key1, filter_key2
    range between 12 preceding and 12 following
    range between 5 preceding and 1 preceding
  ) as the_count
from table
The above fails (although perhaps the syntax is simply wrong? fingers crossed…).
Can this be done in a single statement like the one above?
No, only one range clause is allowed per window. But don't despair: count(*) is additive, so you can sum the counts from two separate windows:
select row_id,
       (count(*) over (partition by group_id
                       order by filter_key1, filter_key2
                       range between 12 preceding and 12 following
                      ) +
        count(*) over (partition by group_id
                       order by filter_key1, filter_key2
                       range between 5 preceding and 1 preceding
                      )
       ) as the_count
from table
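To see why summing the two window counts gives the same result as a single window with both frames would, here is a minimal plain-Python sketch of the additivity idea. The data and the `count_in_frame` helper are hypothetical illustrations, not Spark APIs; the frames mirror the `12 preceding and 12 following` and `5 preceding and 1 preceding` ranges from the query above.

```python
def count_in_frame(keys, i, lo, hi):
    """Count rows whose ordering key lies in [keys[i] + lo, keys[i] + hi],
    i.e. a RANGE frame of 'lo preceding and hi following' around row i."""
    return sum(1 for k in keys if keys[i] + lo <= k <= keys[i] + hi)

# Hypothetical ordering-key values for one partition.
keys = [1, 3, 4, 7, 10, 12, 15, 20]

for i in range(len(keys)):
    # Frame A: range between 12 preceding and 12 following
    a = count_in_frame(keys, i, -12, 12)
    # Frame B: range between 5 preceding and 1 preceding
    b = count_in_frame(keys, i, -5, -1)
    # the_count from the combined query is simply the sum of both counts,
    # which is what the two-window SQL above computes per row.
    the_count = a + b
```

Note that rows falling in both frames are counted twice, which matters here because the two ranges overlap.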
This particular example seems odd, because the two ranges overlap, so some rows are counted twice. Perhaps that is your intent.
Based on your question, though, I wonder whether what you actually want is:
select row_id,
       (count(*) over (partition by group_id
                       order by filter_key1
                       range between 12 preceding and 12 following
                      ) +
        count(*) over (partition by group_id
                       order by filter_key2
                       range between 5 preceding and 1 preceding
                      )
       ) as the_count
from table