Suggestions for reducing memory usage with table partitioning (PostgreSQL 11)



I have several tables that will hold 20-40 million rows, so my queries were taking a long time to execute. Are there any suggestions for diagnosing the queries in detail before partitioning, for example finding out where most of the memory is consumed, or any other advice?

Besides that, I have a few analytics queries that run over the entire date range (they have to scan all of the data).

So I need an overall solution that keeps my basic queries fast and lets the analytics queries finish without running out of memory or crashing the database.

One table is close to 120 GB in size; the others just have a large number of rows. I tried partitioning the tables on a weekly and on a monthly date basis, but then queries ran out of memory, and the number of locks grew sharply with partitioning: a query on the unpartitioned table takes 13 locks, while the same query on the partitioned table takes 250 locks (monthly partitions) or 1000 locks (weekly partitions). I have read that there is per-partition overhead that adds up.
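
Lock counts like these can be checked from pg_locks inside an open transaction; a minimal sketch, assuming the partitioned table name from the plan below:

BEGIN;
SELECT id FROM table1_monthly WHERE event_timestamp > '2019-01-01' LIMIT 1;
-- Locks are held until commit/rollback, so count what this backend holds now:
SELECT count(*) FROM pg_locks WHERE pid = pg_backend_pid();
ROLLBACK;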

The analytics query:

SELECT id
FROM TABLE1
WHERE id NOT IN (
    SELECT DISTINCT id
    FROM TABLE2
);

TABLE1 and TABLE2 are both partitioned, TABLE1 by event_timestamp and TABLE2 by event_data_timestamp.

The analytics query runs out of memory and consumes a huge number of locks, while the date-based queries are quite fast.

The query:

EXPLAIN (ANALYZE, BUFFERS) SELECT id FROM Table1_monthly WHERE event_timestamp > '2019-01-01' and id NOT IN (SELECT DISTINCT id FROM Table2_monthly where event_data_timestamp > '2019-01-01');
Append  (cost=32731.14..653650.98 rows=4656735 width=16) (actual time=2497.747..15405.447 rows=10121827 loops=1)
  Buffers: shared hit=3 read=169100
  ->  Seq Scan on TABLE1_monthly_2019_01_26  (cost=32731.14..77010.63 rows=683809 width=16) (actual time=2497.746..3489.767 rows=1156382 loops=1)
        Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
        Rows Removed by Filter: 462851
        Buffers: shared read=44559
        SubPlan 1
          ->  HashAggregate  (cost=32728.64..32730.64 rows=200 width=16) (actual time=248.084..791.054 rows=1314570 loops=6)
                Group Key: TABLE2_monthly_2019_01_26.cid
                Buffers: shared read=24568
                ->  Append  (cost=0.00..32277.49 rows=180458 width=16) (actual time=22.969..766.903 rows=1314570 loops=1)
                      Buffers: shared read=24568
                      ->  Seq Scan on TABLE2_monthly_2019_01_26  (cost=0.00..5587.05 rows=32135 width=16) (actual time=22.965..123.734 rows=211977 loops=1)
                            Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
                            Rows Removed by Filter: 40282
                            Buffers: shared read=4382
                      ->  Seq Scan on TABLE2_monthly_2019_02_25  (cost=0.00..5573.02 rows=32054 width=16) (actual time=0.700..121.657 rows=241977 loops=1)
                            Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
                            Buffers: shared read=4371
                      ->  Seq Scan on TABLE2_monthly_2019_03_27  (cost=0.00..5997.60 rows=34496 width=16) (actual time=0.884..123.043 rows=253901 loops=1)
                            Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
                            Buffers: shared read=4704
                      ->  Seq Scan on TABLE2_monthly_2019_04_26  (cost=0.00..6581.55 rows=37855 width=16) (actual time=0.690..129.537 rows=282282 loops=1)
                            Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
                            Buffers: shared read=5162
                      ->  Seq Scan on TABLE2_monthly_2019_05_26  (cost=0.00..6585.38 rows=37877 width=16) (actual time=1.248..122.794 rows=281553 loops=1)
                            Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
                            Buffers: shared read=5165
                      ->  Seq Scan on TABLE2_monthly_2019_06_25  (cost=0.00..999.60 rows=5749 width=16) (actual time=0.750..23.020 rows=42880 loops=1)
                            Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
                            Buffers: shared read=784
                      ->  Seq Scan on TABLE2_monthly_2019_07_25  (cost=0.00..12.75 rows=73 width=16) (actual time=0.007..0.007 rows=0 loops=1)
                            Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
                      ->  Seq Scan on TABLE2_monthly_2019_08_24  (cost=0.00..12.75 rows=73 width=16) (actual time=0.003..0.004 rows=0 loops=1)
                            Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
                      ->  Seq Scan on TABLE2_monthly_2019_09_23  (cost=0.00..12.75 rows=73 width=16) (actual time=0.003..0.004 rows=0 loops=1)
                            Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
                      ->  Seq Scan on TABLE2_monthly_2019_10_23  (cost=0.00..12.75 rows=73 width=16) (actual time=0.007..0.007 rows=0 loops=1)
                            Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
  ->  Seq Scan on TABLE1_monthly_2019_02_25  (cost=32731.14..88679.16 rows=1022968 width=16) (actual time=1008.738..2341.807 rows=1803957 loops=1)
        Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
        Rows Removed by Filter: 241978
        Buffers: shared hit=1 read=25258
  ->  Seq Scan on TABLE1_monthly_2019_03_27  (cost=32731.14..97503.58 rows=1184315 width=16) (actual time=1000.795..2474.769 rows=2114729 loops=1)
        Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
        Rows Removed by Filter: 253901
        Buffers: shared hit=1 read=29242
  ->  Seq Scan on TABLE1_monthly_2019_04_26  (cost=32731.14..105933.54 rows=1338447 width=16) (actual time=892.820..2405.941 rows=2394619 loops=1)
        Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
        Rows Removed by Filter: 282282
        Buffers: shared hit=1 read=33048
  ->  Seq Scan on TABLE1_monthly_2019_05_26  (cost=32731.14..87789.65 rows=249772 width=16) (actual time=918.397..2614.059 rows=2340789 loops=1)
        Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
        Rows Removed by Filter: 281553
        Buffers: shared read=32579
  ->  Seq Scan on TABLE1_monthly_2019_06_25  (cost=32731.14..42458.60 rows=177116 width=16) (actual time=923.367..1141.672 rows=311351 loops=1)
        Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
        Rows Removed by Filter: 42880
        Buffers: shared read=4414
  ->  Seq Scan on TABLE1_monthly_2019_07_25  (cost=32731.14..32748.04 rows=77 width=16) (actual time=0.008..0.008 rows=0 loops=1)
        Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
  ->  Seq Scan on TABLE1_monthly_2019_08_24  (cost=32731.14..32748.04 rows=77 width=16) (actual time=0.003..0.003 rows=0 loops=1)
        Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
  ->  Seq Scan on TABLE1_monthly_2019_09_23  (cost=32731.14..32748.04 rows=77 width=16) (actual time=0.003..0.003 rows=0 loops=1)
        Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
  ->  Seq Scan on TABLE1_monthly_2019_10_23  (cost=32731.14..32748.04 rows=77 width=16) (actual time=0.003..0.003 rows=0 loops=1)
        Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
Planning Time: 244.669 ms
Execution Time: 15959.111 ms
(69 rows)

A query that joins two large partitioned tables to produce 10 million rows is going to consume resources; there is no way around that.

You can trade speed for lower memory consumption by reducing work_mem: smaller values make your queries slower but consume less memory.
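
For example, work_mem can be lowered just for the session that runs the analytics query; the value here is purely illustrative:

SET work_mem = '32MB';  -- session-local: hashes and sorts spill to disk sooner, using less RAM
-- ... run the analytics query ...
RESET work_mem;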

I'd say that your best bet is to keep work_mem high but reduce max_connections so that you don't run out of memory as quickly. Apart from that, putting more RAM into the machine is one of the cheapest hardware tuning measures there is.
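
A sketch of the persistent variant, assuming superuser access; the concrete values are assumptions to be adapted to your RAM and workload:

ALTER SYSTEM SET work_mem = '256MB';    -- generous per-sort/per-hash memory budget
ALTER SYSTEM SET max_connections = 50;  -- fewer concurrent backends, less total memory
SELECT pg_reload_conf();                -- applies work_mem; max_connections needs a server restart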

You can improve the query somewhat:

  • Remove the DISTINCT; it is useless, consumes CPU time and throws off your estimates.

  • ANALYZE table2 so that you get better estimates (both changes are applied in the sketch after this list).
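
A minimal sketch of the improved query, using the partitioned table names from the plan above:

ANALYZE table2_monthly;

SELECT id
FROM table1_monthly
WHERE event_timestamp > '2019-01-01'
  AND id NOT IN (SELECT id
                 FROM table2_monthly
                 WHERE event_data_timestamp > '2019-01-01');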

About partitioning: if these queries scan all partitions, queries against the partitioned tables will be slower.

Whether partitioning is a good idea for you depends on the question whether you have other queries that benefit from it:

  • First of all, mass deletions, which are painless when done by dropping a partition (see the sketch after this list).

  • Then sequential scans where the partitioning key is part of the scan filter.
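
To illustrate the first point, deleting a whole month of data from the table above comes down to DDL; the partition name is taken from the plan:

-- Detach the partition (takes a lock on the parent), then drop it as an ordinary table:
ALTER TABLE table1_monthly DETACH PARTITION table1_monthly_2019_01_26;
DROP TABLE table1_monthly_2019_01_26;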

Contrary to popular belief, partitioning is not something you always benefit from when you have big tables: many queries become slower through partitioning.

The locks are the least of your worries: just increase max_locks_per_transaction.
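
Given the counts measured above (250 locks with monthly and 1000 with weekly partitions), something like the following leaves ample headroom; the exact value is an assumption:

ALTER SYSTEM SET max_locks_per_transaction = 1024;  -- default is 64; takes effect after a restart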
