SparkSQL - The correlated scalar subquery can only contain equality predicates



I want to execute the following query with Spark SQL 2.0:

SELECT
a.id as id,
(SELECT SUM(b.points) 
  FROM tableB b 
  WHERE b.id = a.id AND b.date <= a.date) AS points
FROM tableA a

But I get the following error:

The correlated scalar subquery can only contain equality predicates.

Any idea how to rewrite the query, or how to use operations between the two DataFrames tableA and tableB, to make it work?

select a.id as id, 
sum(b.points) as points 
from a, b 
where a.id = b.id 
and b.date <= a.date 
group by a.id 
;
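To sanity-check this rewrite, here is a self-contained sketch that runs the same join against the sample data from the example below. It uses Python's built-in `sqlite3` purely as a stand-in for Spark SQL; the SQL itself is unchanged, and the table names are the shortened `a` and `b` used in the answer:

```python
# Quick sanity check of the join rewrite, using Python's built-in
# sqlite3 in place of Spark SQL (the SQL itself is the same).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE a (id INTEGER, date TEXT)")
cur.execute("CREATE TABLE b (id INTEGER, points INTEGER, date TEXT)")

# Sample rows from the worked example in this answer.
cur.executemany("INSERT INTO a VALUES (?, ?)", [
    (1, "2017-01-22 17:59:49"),
    (2, "2017-01-22 18:00:00"),
    (3, "2017-01-22 18:00:05"),
    (4, "2017-01-22 18:00:11"),
    (5, "2017-01-22 18:00:15"),
])
cur.executemany("INSERT INTO b VALUES (?, ?, ?)", [
    (1, 12,    "2017-01-21 18:03:20"),
    (3, 25,    "2017-01-21 18:03:37"),
    (5, 17,    "2017-01-21 18:03:55"),
    (2, -1,    "2017-01-22 18:04:27"),
    (4, -4,    "2017-01-22 18:04:35"),
    (5, 400,   "2017-01-20 18:17:31"),
    (5, -1000, "2017-01-23 18:18:36"),
])

# The rewritten query: plain join + aggregation instead of a
# correlated scalar subquery with an inequality predicate.
rows = cur.execute("""
    SELECT a.id AS id, SUM(b.points) AS points
    FROM a, b
    WHERE a.id = b.id AND b.date <= a.date
    GROUP BY a.id
    ORDER BY a.id
""").fetchall()
print(rows)  # [(1, 12), (3, 25), (5, 417)]
```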

Skip the sub-select and group by id to guarantee a one-to-one relationship between id and the summed points column from b.

Here is a "quick and dirty" example I used:

select * from a ;
id|date
1|2017-01-22 17:59:49
2|2017-01-22 18:00:00
3|2017-01-22 18:00:05
4|2017-01-22 18:00:11
5|2017-01-22 18:00:15
select * from b ;
id|points|date
1|12|2017-01-21 18:03:20
3|25|2017-01-21 18:03:37
5|17|2017-01-21 18:03:55
2|-1|2017-01-22 18:04:27
4|-4|2017-01-22 18:04:35
5|400|2017-01-20 18:17:31
5|-1000|2017-01-23 18:18:36

Note that b has three entries with id = 5: two before a.date and one after it.

select a.id, sum(b.points) as points from a, b where a.id = b.id and b.date <= a.date group by a.id ;
1|12
3|25
5|417
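One behavioral difference worth noting (my addition, not part of the original answer): the correlated subquery returns a row for every id in a, with NULL points when no b row qualifies (ids 2 and 4 above), whereas the inner join drops those ids entirely. If those rows need to be preserved, moving the date condition into a LEFT JOIN's ON clause restores them; a sketch, again using SQLite in place of Spark SQL:

```python
# LEFT JOIN variant that keeps ids with no qualifying b row
# (they come back with NULL/None points, like the subquery would return).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE a (id INTEGER, date TEXT)")
cur.execute("CREATE TABLE b (id INTEGER, points INTEGER, date TEXT)")
cur.executemany("INSERT INTO a VALUES (?, ?)", [
    (1, "2017-01-22 17:59:49"), (2, "2017-01-22 18:00:00"),
    (3, "2017-01-22 18:00:05"), (4, "2017-01-22 18:00:11"),
    (5, "2017-01-22 18:00:15"),
])
cur.executemany("INSERT INTO b VALUES (?, ?, ?)", [
    (1, 12, "2017-01-21 18:03:20"), (3, 25, "2017-01-21 18:03:37"),
    (5, 17, "2017-01-21 18:03:55"), (2, -1, "2017-01-22 18:04:27"),
    (4, -4, "2017-01-22 18:04:35"), (5, 400, "2017-01-20 18:17:31"),
    (5, -1000, "2017-01-23 18:18:36"),
])

# Note the b.date <= a.date predicate lives in the ON clause, not WHERE;
# putting it in WHERE would filter out the NULL rows and turn the
# LEFT JOIN back into an inner join.
rows = cur.execute("""
    SELECT a.id, SUM(b.points) AS points
    FROM a
    LEFT JOIN b ON a.id = b.id AND b.date <= a.date
    GROUP BY a.id
    ORDER BY a.id
""").fetchall()
print(rows)  # [(1, 12), (2, None), (3, 25), (4, None), (5, 417)]
```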

I also confirmed that GROUP BY is supported: http://spark.apache.org/docs/latest/sql-programming-guide.html#supported-hive-features
