I've been really looking forward to the new features in PostgreSQL 9.5, and we will be upgrading our database soon. But then I found that the query
SELECT col1, col2, count(*), grouping(col1,col2)
FROM table1
GROUP BY CUBE(col1, col2)
on our data set actually runs much slower (~3 s) than the combined duration of the equivalent separate queries (~1 s total for all 4 queries, 100-300 ms each). Both col1 and col2 are indexed.
Is this expected (i.e. is the feature currently more about compatibility than performance)? Or can it be tuned somehow?
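For reference, by "equivalent queries" I mean the four grouping combinations spelled out by hand, roughly like this (a sketch against the same table1; the constant in the last column emulates the grouping(col1, col2) bitmask, where a set bit marks a column that was aggregated away):

-- sketch: the four grouping sets of CUBE(col1, col2), written out manually
SELECT col1, col2, count(*), 0 AS grouping FROM table1 GROUP BY col1, col2
UNION ALL SELECT col1, NULL, count(*), 1 FROM table1 GROUP BY col1
UNION ALL SELECT NULL, col2, count(*), 2 FROM table1 GROUP BY col2
UNION ALL SELECT NULL, NULL, count(*), 3 FROM table1;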
Here is an example on a vacuumed production table:
> explain analyze select service_name, state, res_id, count(*) from bookings group by rollup(service_name, state, res_id);
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=43069.12..45216.05 rows=4161 width=24) (actual time=1027.341..1120.675 rows=428 loops=1)
   Group Key: service_name, state, res_id
   Group Key: service_name, state
   Group Key: service_name
   Group Key: ()
   ->  Sort  (cost=43069.12..43490.18 rows=168426 width=24) (actual time=1027.301..1070.321 rows=168426 loops=1)
         Sort Key: service_name, state, res_id
         Sort Method: external merge  Disk: 5728kB
         ->  Seq Scan on bookings  (cost=0.00..28448.26 rows=168426 width=24) (actual time=0.079..147.619 rows=168426 loops=1)
 Planning time: 0.118 ms
 Execution time: 1122.557 ms
(11 rows)
> explain analyze select service_name, state, res_id, count(*) from bookings group by service_name, state, res_id
UNION ALL select service_name, state, NULL, count(*) from bookings group by service_name, state
UNION ALL select service_name, NULL, NULL, count(*) from bookings group by service_name
UNION ALL select NULL, NULL, NULL, count(*) from bookings;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------
 Append  (cost=30132.52..118086.91 rows=4161 width=32) (actual time=208.986..706.347 rows=428 loops=1)
   ->  HashAggregate  (cost=30132.52..30172.12 rows=3960 width=24) (actual time=208.986..209.078 rows=305 loops=1)
         Group Key: bookings.service_name, bookings.state, bookings.res_id
         ->  Seq Scan on bookings  (cost=0.00..28448.26 rows=168426 width=24) (actual time=0.022..97.637 rows=168426 loops=1)
   ->  HashAggregate  (cost=29711.45..29713.25 rows=180 width=20) (actual time=195.851..195.879 rows=96 loops=1)
         Group Key: bookings_1.service_name, bookings_1.state
         ->  Seq Scan on bookings bookings_1  (cost=0.00..28448.26 rows=168426 width=20) (actual time=0.029..95.588 rows=168426 loops=1)
   ->  HashAggregate  (cost=29290.39..29290.59 rows=20 width=11) (actual time=181.955..181.960 rows=26 loops=1)
         Group Key: bookings_2.service_name
         ->  Seq Scan on bookings bookings_2  (cost=0.00..28448.26 rows=168426 width=11) (actual time=0.030..97.047 rows=168426 loops=1)
   ->  Aggregate  (cost=28869.32..28869.33 rows=1 width=0) (actual time=119.332..119.332 rows=1 loops=1)
         ->  Seq Scan on bookings bookings_3  (cost=0.00..28448.26 rows=168426 width=0) (actual time=0.039..93.508 rows=168426 loops=1)
 Planning time: 0.373 ms
 Execution time: 706.558 ms
(14 rows)
The total times are comparable, yet the latter uses four scans; shouldn't it be slower? The "external merge  Disk" with rollup() is also odd, since my work_mem is set to 16MB.
Interesting: in that particular example, SET work_mem='32mb' gets rid of the disk merge, and the ROLLUP version is now 2x faster than the corresponding UNION.
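For anyone reproducing this, the change is session-local; something like:

-- sketch: raise work_mem for this session only, then re-check the plan
SET work_mem = '32MB';
EXPLAIN ANALYZE
SELECT service_name, state, res_id, count(*)
FROM bookings
GROUP BY ROLLUP (service_name, state, res_id);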
EXPLAIN ANALYZE now contains: "Sort Method: quicksort  Memory: 19301kB"
I'm still wondering why only ~400 output rows need that much memory, and why the disk merge took 7MB rather than the 19MB the in-memory sort needs (quicksort overhead?), but my problem is solved.
It seems that grouping sets always produce a GroupAggregate over a Sort in the query plan, whereas a standard GROUP BY uses HashAggregate.
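A quick way to see the two plan shapes side by side (just a sketch; the plain GROUP BY plan choice will depend on statistics and settings):

-- sketch: compare plan shapes on the same data
EXPLAIN SELECT service_name, state, count(*)
FROM bookings GROUP BY service_name, state;          -- typically HashAggregate
EXPLAIN SELECT service_name, state, count(*)
FROM bookings GROUP BY ROLLUP (service_name, state); -- GroupAggregate over Sort in 9.5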