Optimizing writes to a Hive table

jdgnovmf · posted 2021-05-27 in Hadoop

I have an HQL job that reads from a very large source table (500 TB+) and writes into a static-partitioned Hive table; roughly 1 TB of data lands in this table every day. The MapReduce job processes the data fine, but the write phase is very slow: total load time ranges from 10 to 28 hours.

I have tried changing the table's file format from SequenceFile to ORC, which did not noticeably speed up the write, and I used Snappy compression with the table's original SequenceFile format. I enabled parallel execution, auto map join, CBO, and vectorization to improve processing. Specifically for the write, I tried setting hive.exec.scratchdir=/tmp/hive so that the copy from the .hive-staging directory to the target directory would become a move/rename, but it failed with the message below. Setting hive.exec.copyfile.maxsize=1099511627776 also failed. I am running MapReduce2 on YARN with an ApplicationMaster.

Can anyone tell me how to write directly to the target directory, or to use a rename instead of a copy, which takes far too long?

Failed with exception 
Unable to move source /xxx/tmp/aaa/hive_hive_2020-10-02_15-06-33_205_3126778922824450411-1/-ext-10000 
to destination /xxx/ttt/temp/fff/c=FULL/dt=20200922 
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask

Error: java.lang.RuntimeException: Hive Runtime Error while closing operators
        at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:210)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename output from: /xxx/ttt/temp/fff/c=FULL/.hive-staging_hive_2020-10-02_16-52-38_613_8689698651254070943-1/_task_tmp.-ext-10000/_tmp.005299_3 to: /xxx/temp/fff/c=FULL/.hive-staging_hive_2020-10-02_16-52-38_613_8689698651254070943-1/_tmp.-ext-10000/005299_3

No answers yet.
