Denormalizing a table in ClickHouse with a SQL query



The source table has the following format:

CREATE TABLE IF NOT EXISTS src_table (
str_1 String,
str_2 String,
metric_1 UInt64,
metric_2 UInt8
) ENGINE = Log
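
For concreteness, here are a few hypothetical source rows (str_2 carries the attribute key as a string):

-- hypothetical sample rows: (str_1, str_2, metric_1, metric_2)
INSERT INTO src_table VALUES ('user_a', '1', 10, 1), ('user_a', '2', 20, 0), ('user_b', '1', 30, 1);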

The column to denormalize on is str_2, and the denormalized table is:

CREATE TABLE IF NOT EXISTS denorm_table (
dt Date,
str_1 String,
attr_1 UInt64,
attr_2 UInt64,
......
attr_1000 UInt64,
attr_1001 UInt8,
attr_1002 UInt8,
.....
attr_2000 UInt8
) ENGINE = MergeTree PARTITION BY (dt) ORDER BY (dt, str_1) SETTINGS index_granularity=8192
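
With 1000 distinct keys this table is 2002 columns wide (dt, str_1, plus 2000 attribute columns). A quick way to sanity-check the width after creating it (assuming it lives in the current database):

SELECT count() FROM system.columns WHERE database = currentDatabase() AND table = 'denorm_table'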

Assume that column str_2 has 1000 distinct values (1…1000), and that:

attr_1 is the value of column metric_1 when str_2 equals 1,

attr_2 is the value of column metric_1 when str_2 equals 2,

attr_1001 is the value of column metric_2 when str_2 equals 1, and so on.

The denormalization query is:

INSERT INTO denorm_table        
(dt, str_1, attr_1, attr_2, ..., attr_1000, attr_1001, ..., attr_2000)
SELECT 
'2018-11-01' as dt,
str_1,
arrayElement( groupArray(metric_1), indexOf(groupArray(str_2), '1') ) as attr_1,
arrayElement( groupArray(metric_1), indexOf(groupArray(str_2), '2') ) as attr_2,
......
arrayElement( groupArray(metric_1), indexOf(groupArray(str_2), '1000') ) as attr_1000,
arrayElement( groupArray(metric_2), indexOf(groupArray(str_2), '1') ) as attr_1001,
.....
arrayElement( groupArray(metric_2), indexOf(groupArray(str_2), '1000') ) as attr_2000
FROM src_table
WHERE str_2 in ('1', '2', .....)
GROUP BY str_1
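
Note that if a given (str_1, str_2) combination is missing from the source data, indexOf returns 0, and arrayElement at position 0 returns the default value for the element type, so the corresponding attribute silently becomes 0. The behavior is easy to verify in isolation:

SELECT arrayElement([10, 20], indexOf(['1', '2'], '3')) AS missing_attr  -- indexOf finds no match, result is 0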

The query works without problems for 750 distinct str_2 values (1502 columns in the denormalized table: 750 × 2 attributes plus dt and str_1).

However, when the denormalized table has 2002 columns (i.e. 1000 str_2 values), I get a socket.timeout: timed out error:

File "/usr/lib/python2.7/site-packages/clickhouse_driver/client.py", line 119, in execute
columnar=columnar
File "/usr/lib/python2.7/site-packages/clickhouse_driver/client.py", line 192, in process_ordinary_query
columnar=columnar)
File "/usr/lib/python2.7/site-packages/clickhouse_driver/client.py", line 42, in receive_result
return result.get_result()
File "/usr/lib/python2.7/site-packages/clickhouse_driver/result.py", line 39, in get_result
for packet in self.packet_generator:
File "/usr/lib/python2.7/site-packages/clickhouse_driver/client.py", line 54, in packet_generator
packet = self.receive_packet()
File "/usr/lib/python2.7/site-packages/clickhouse_driver/client.py", line 68, in receive_packet
packet = self.connection.receive_packet()
File "/usr/lib/python2.7/site-packages/clickhouse_driver/connection.py", line 331, in receive_packet
packet.type = packet_type = read_varint(self.fin)
File "/usr/lib/python2.7/site-packages/clickhouse_driver/reader.py", line 38, in read_varint
i = _read_one(f)
File "/usr/lib/python2.7/site-packages/clickhouse_driver/reader.py", line 23, in _read_one
c = f.read(1)
File "/usr/lib64/python2.7/socket.py", line 380, in read
data = self._sock.recv(left)
socket.timeout: timed out

Is there a setting on the client or server side that would eliminate this problem?


From the server log:

2018.12.04 22:49:26.755926 [ 36 ] {} <Trace> SystemLog (system.query_thread_log): Flushing system log
2018.12.04 22:49:26.756233 [ 139 ] {821ce7ea-94b7-4675-96f5-feccb31b0ebe} <Error> executeQuery: Code: 32, e.displayText() = DB::Exception: Attempt to read after eof, e.what() = DB::Exception (from [::1]:52224) (in query:

===== EDIT =====

I modified the query as follows (per @johey's suggestion) and the error no longer occurs:

The clause WHERE modulo(sipHash64(str_1), 20) = 0 splits the data into 20 hash buckets so that each run processes only a subset of the str_1 values instead of all of them; the full table is then covered by repeating the query for each modulo result (0 through 19):

INSERT INTO dst_table (....)
SELECT
arrayElement(metric_1_array, indexOf(str_2_array, '1') ) as attr_1,
arrayElement(metric_1_array, indexOf(str_2_array, '2') ) as attr_2,
......
arrayElement(metric_2_array, indexOf(str_2_array, '1') ) as attr_1001,
......
FROM (
SELECT
str_1,
groupArray(metric_1) metric_1_array,
groupArray(metric_2) metric_2_array,
groupArray(str_2) str_2_array
FROM src_table
WHERE modulo(sipHash64(str_1), 20) = 0
AND str_2 in ('1', '2', ......)
GROUP BY str_1
)

It looks like you have blown up the AST parser. There are two settings that limit the size of the input AST tree: max_ast_elements and max_expanded_ast_elements. You could try raising them and see if that helps.
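
For example, both limits can be raised at the session level before running the INSERT (a minimal sketch; the values are illustrative, not tuned recommendations):

SET max_ast_elements = 1000000;            -- limit on the number of AST nodes (illustrative value)
SET max_expanded_ast_elements = 1000000;   -- limit after alias/asterisk expansion (illustrative value)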

Here is another way to implement this kind of denormalization:

SELECT
arrayStringConcat(arrayConcat(['2018-11-01', str_1], arraySlice(groupArray(toString(metric_1)), 1, 1000), arraySlice(groupArray(toString(metric_2)), 1001, 1000)), ',')
FROM
(
SELECT *
FROM src_table
WHERE toUInt64(str_2) BETWEEN 1 AND 2000
ORDER BY toUInt64(str_2)
)
GROUP BY str_1
INTO OUTFILE 'xxx.csv'

Then load the CSV data file into denorm_table.
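
One way to perform the load (a minimal sketch; connection options omitted) is to pipe the file into clickhouse-client:

clickhouse-client --query "INSERT INTO denorm_table FORMAT CSV" < xxx.csv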

Update

Once this PR has been merged, you will be able to implement this kind of server-side ETL:

INSERT INTO denorm_table
SELECT
arrayStringConcat(arrayConcat(['2018-11-01', str_1], arraySlice(groupArray(toString(metric_1)), 1, 1000), arraySlice(groupArray(toString(metric_2)), 1001, 1000)), ',')
FROM
(
SELECT *
FROM src_table
WHERE toUInt64(str_2) BETWEEN 1 AND 2000
ORDER BY toUInt64(str_2)
)
GROUP BY str_1
FORMAT CSV
