Fetching Hive results from HDFS is too slow because there are too many map-only tasks — how can the query results be merged in Hive SQL?



The Hive query produces far too many result files under "/tmp/hive/hive" — close to 40,000 tasks. But the query only returns a little over 100 rows in total, so I would like to know whether there is a way to merge the results after the query, reduce the number of result files, and make fetching the results more efficient.

Below is the EXPLAIN output for the query:

+----------------------------------------------------+--+
|                      Explain                       |
+----------------------------------------------------+--+
| STAGE DEPENDENCIES:                                |
|   Stage-1 is a root stage                          |
|   Stage-0 depends on stages: Stage-1               |
|                                                    |
| STAGE PLANS:                                       |
|   Stage: Stage-1                                   |
|     Map Reduce                                     |
|       Map Operator Tree:                           |
|           TableScan                                |
|             alias: kafka_program_log             |
|             filterExpr: ((msg like '%disk loss%') and (ds > '2022-05-01')) (type: boolean) |
|             Statistics: Num rows: 36938084350 Data size: 11081425337136 Basic stats: PARTIAL Column stats: PARTIAL |
|             Filter Operator                        |
|               predicate: (msg like '%disk loss%') (type: boolean) |
|               Statistics: Num rows: 18469042175 Data size: 5540712668568 Basic stats: COMPLETE Column stats: PARTIAL |
|               Select Operator                      |
|                 expressions: server (type: string), msg (type: string), ts (type: string), ds (type: string), h (type: string) |
|                 outputColumnNames: _col0, _col1, _col2, _col3, _col4 |
|                 Statistics: Num rows: 18469042175 Data size: 5540712668568 Basic stats: COMPLETE Column stats: PARTIAL |
|                 File Output Operator               |
|                   compressed: false                |
|                   Statistics: Num rows: 18469042175 Data size: 5540712668568 Basic stats: COMPLETE Column stats: PARTIAL |
|                   table:                           |
|                       input format: org.apache.hadoop.mapred.TextInputFormat |
|                       output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat |
|                       serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe |
|                                                    |
|   Stage: Stage-0                                   |
|     Fetch Operator                                 |
|       limit: -1                                    |
|       Processor Tree:                              |
|         ListSink                                   |
|                                                    |
+----------------------------------------------------+--+

set mapred.max.split.size=2560000000;
Increase the amount of data a single map processes, which reduces the number of maps.
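
If the goal is fewer result files rather than just fewer maps, Hive's small-file merge settings can also help; a minimal sketch, assuming Hive on MapReduce (the sizes below are illustrative, not tuned values):

set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;  -- combine small input splits into one map
set hive.merge.mapfiles=true;                 -- merge the small files written by map-only jobs
set hive.merge.mapredfiles=true;              -- merge the small files written by map-reduce jobs
set hive.merge.size.per.task=256000000;       -- target size of a merged file (~256 MB)
set hive.merge.smallfiles.avgsize=128000000;  -- trigger the merge when the average output file is smaller than this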

  1. Recreate the table using ORC/Parquet and you will get much better performance. This is the first thing to do to speed things up (there is a sketch of this after the example below).
  2. You are using the like operator, which means all of the data is scanned. You may want to consider rewriting it to use a join/where clause instead, which will run faster. Below is an example of how you could make things better.
with words as -- shortcut for a readable sub-query
(
select
  log.msg
from
  kafka_program_log log
  lateral view explode(split(log.msg, ' ')) w as word -- one row per word in msg; assumes 'disk loss' is what appears in the msg
where
  word in ('disk', 'loss')  -- filter the words to the ones we care about
  and ds > '2022-05-01'     -- filter dates to the ones we care about
group by
  log.msg                   -- gather the msgs together
having
  count(word) >= 2          -- only pull back msgs that contain at least two of the words we are interested in
) -- end sub-query
select
  *
from kafka_program_log log
inner join words
  on words.msg = log.msg    -- this join should really reduce the data we examine
where
  log.msg like '%disk loss%' -- like is fine now to make sure it is exactly what we are looking for
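
For point 1, a minimal sketch of moving the data into an ORC table (column names are taken from the EXPLAIN above; treating ds and h as partition columns is an assumption, as is the kafka_program_log_orc table name):

-- Hypothetical target table; adjust types and partitioning to match the real schema.
CREATE TABLE kafka_program_log_orc (
  server string,
  msg    string,
  ts     string
)
PARTITIONED BY (ds string, h string)
STORED AS ORC;

-- Copy the existing data across; dynamic partitioning is assumed to be allowed on the cluster.
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE kafka_program_log_orc PARTITION (ds, h)
SELECT server, msg, ts, ds, h FROM kafka_program_log;

Once the data is in ORC, the like filter only has to read the msg column instead of whole text rows, which is where most of the speedup comes from.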
