Out-of-memory error when loading into a Hive partitioned Parquet table

Asked by wlp8pajw on 2021-05-29 in Hadoop · 1 answer · 527 views

First off, this is not a production Hadoop environment; it is a single-node setup we use to test workflows.
The problem: the Hive query below fails when loading a Parquet table with a single data partition. The source table/partition is just a 142 MB file. The INSERT statement generates a single mapper job, which eventually dies with a Java out-of-memory error. It does not seem like such a small test case should produce that kind of overhead.
We only see the problem when inserting into Parquet; inserting into Avro, ORC, or text formats works without issue, and simply querying the data is fine as well.
I have tried the following settings, but they only adjust the mappers used for the initial SELECT; the insert stage still runs with a single mapper:

set mapreduce.input.fileinputformat.split.minsize
set mapreduce.input.fileinputformat.split.maxsize
set mapreduce.job.maps
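For reference, they were set from the Hive session like this (the values here are illustrative only, not a recommendation):

set mapreduce.input.fileinputformat.split.minsize=16777216;   -- 16 MB lower bound per split
set mapreduce.input.fileinputformat.split.maxsize=33554432;   -- 32 MB upper bound per split
set mapreduce.job.maps=8;                                      -- a hint only; MapReduce may ignore it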

I am on CDH 5.8 / Hadoop 2.6. The VM instance is allocated 4 cores and 24 GB of RAM.

DROP TABLE IF EXISTS web.traffic_pageviews;

CREATE TABLE web.traffic_pageviews(
    SESSION_ID STRING,
    COOKIE_ID STRING,
    TS TIMESTAMP,
    PAGE STRING,
    PAGE_URL_BASE STRING,
    PAGE_URL_QUERY STRING,
    PAGE_REFERRAL_URL_BASE STRING,
    PAGE_REFERRAL_URL_QUERY STRING)
    PARTITIONED BY (DS STRING)
    STORED AS PARQUET;

INSERT OVERWRITE TABLE web.traffic_pageviews PARTITION(ds='2016-12-28')
select
    session_id,
    cookie_id,
    ts,
    page,
    SPLIT(PAGE_URL,'\\?')[0] PAGE_URL_BASE,
    SPLIT(PAGE_URL,'\\?')[1] PAGE_URL_QUERY,
    SPLIT(PAGE_REFERRAL_URL,'\\?')[0] PAGE_REFERRAL_URL_BASE,
    SPLIT(PAGE_REFERRAL_URL,'\\?')[1] PAGE_REFERRAL_URL_QUERY
from    
    web.stg_traffic_pageviews
where
    ds='2016-12-28';

The error output is shown below. I feel like we are doing something fundamentally wrong and should not need to tune the Java memory allocation?

2017-01-03 07:11:02,053 INFO [main] org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper: real writer: parquet.hadoop.ParquetRecordWriter@755cce4b
2017-01-03 07:11:02,057 INFO [main] org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[1]: records written - 1
2017-01-03 07:11:02,062 INFO [main] org.apache.hadoop.hive.ql.exec.MapOperator: MAP[2]: records read - 1
2017-01-03 07:11:02,064 INFO [main] org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[1]: records written - 10
2017-01-03 07:11:02,064 INFO [main] org.apache.hadoop.hive.ql.exec.MapOperator: MAP[2]: records read - 10
2017-01-03 07:11:02,082 INFO [main] org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[1]: records written - 100
2017-01-03 07:11:02,082 INFO [main] org.apache.hadoop.hive.ql.exec.MapOperator: MAP[2]: records read - 100
2017-01-03 07:11:02,356 INFO [main] org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[1]: records written - 1000
2017-01-03 07:11:02,356 INFO [main] org.apache.hadoop.hive.ql.exec.MapOperator: MAP[2]: records read - 1000
2017-01-03 07:11:03,775 INFO [main] org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[1]: records written - 10000
2017-01-03 07:11:03,775 INFO [main] org.apache.hadoop.hive.ql.exec.MapOperator: MAP[2]: records read - 10000
2017-01-03 07:12:03,679 FATAL [LeaseRenewer:cloudera@quickstart.cloudera:8020] org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread Thread[LeaseRenewer:cloudera@quickstart.cloudera:8020,5,main] threw an Error.  Shutting down now...
java.lang.OutOfMemoryError: Java heap space
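If it really does come down to tuning the Java allocation, presumably it would be through the usual MRv2 knobs, something like the following (illustrative values, not verified in this environment):

set mapreduce.map.memory.mb=4096;            -- YARN container size for each map task
set mapreduce.map.java.opts=-Xmx3276m;       -- mapper JVM heap, kept below the container size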

bq8i3lrv · Answer 1

The problem resolved itself once compression was specified on the table. Specifically:

CREATE TABLE web.traffic_pageviews(
    ...
    )
    PARTITIONED BY (DS STRING)
    STORED AS PARQUET
    TBLPROPERTIES ("parquet.compression"="SNAPPY");
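Presumably the same effect could be had without recreating the table, either per session or by altering the existing table, though neither variant was tested here:

set parquet.compression=SNAPPY;    -- session-level default for subsequent Parquet writes

ALTER TABLE web.traffic_pageviews
  SET TBLPROPERTIES ("parquet.compression"="SNAPPY");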

Although this is the answer, I do not understand why it works. If anyone has insight, it would be appreciated.
