We read data from ORC files and use MultipleOutputs to write it back out in both ORC and Parquet format. The job is map-only, with no reducers. On some runs we hit the errors below and the whole job fails. I believe the two errors are related, but I am not sure why they only occur on some jobs and not others. Let me know if more information is needed. (A rough sketch of the job setup is included after the stack traces.)
Error: java.lang.RuntimeException: Overflow of newLength. smallBuffer.length=1073741824, nextElemLength=300947
Error: java.lang.ArrayIndexOutOfBoundsException: 1000
at org.apache.orc.impl.writer.StringTreeWriter.writeBatch(StringTreeWriter.java:70)
at org.apache.orc.impl.writer.StructTreeWriter.writeRootBatch(StructTreeWriter.java:56)
at org.apache.orc.impl.WriterImpl.addRowBatch(WriterImpl.java:546)
at org.apache.hadoop.hive.ql.io.orc.WriterImpl.flushInternalBatch(WriterImpl.java:297)
at org.apache.hadoop.hive.ql.io.orc.WriterImpl.close(WriterImpl.java:334)
at org.apache.hadoop.hive.ql.io.orc.OrcNewOutputFormat$OrcRecordWriter.close(OrcNewOutputFormat.java:67)
at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs$RecordWriterWithCounter.close(MultipleOutputs.java:375)
at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.close(MultipleOutputs.java:574)
Error: java.lang.NullPointerException
at java.lang.System.arraycopy(Native Method)
at org.apache.orc.impl.DynamicByteArray.add(DynamicByteArray.java:115)
at org.apache.orc.impl.StringRedBlackTree.addNewKey(StringRedBlackTree.java:48)
at org.apache.orc.impl.StringRedBlackTree.add(StringRedBlackTree.java:60)
at org.apache.orc.impl.writer.StringTreeWriter.writeBatch(StringTreeWriter.java:70)
at org.apache.orc.impl.writer.StructTreeWriter.writeRootBatch(StructTreeWriter.java:56)
at org.apache.orc.impl.WriterImpl.addRowBatch(WriterImpl.java:546)
at org.apache.hadoop.hive.ql.io.orc.WriterImpl.flushInternalBatch(WriterImpl.java:297)
at org.apache.hadoop.hive.ql.io.orc.WriterImpl.close(WriterImpl.java:334)
at org.apache.hadoop.hive.ql.io.orc.OrcNewOutputFormat$OrcRecordWriter.close(OrcNewOutputFormat.java:67)
at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs$RecordWriterWithCounter.close(MultipleOutputs.java:375)
at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.close(MultipleOutputs.java:574)
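
For context, a minimal sketch of what the driver setup looks like. This is not the original job code: the class names, paths, and the commented-out mapper and Parquet named output are illustrative placeholders, and the Parquet write-support/schema configuration is omitted.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
import org.apache.hadoop.hive.ql.io.orc.OrcNewInputFormat;
import org.apache.hadoop.hive.ql.io.orc.OrcNewOutputFormat;

public class OrcToOrcAndParquetDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "orc-to-orc-and-parquet");
        job.setJarByClass(OrcToOrcAndParquetDriver.class);

        // Read the source ORC files with the mapreduce (new API) input format.
        job.setInputFormatClass(OrcNewInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Map-only job: no reducers.
        job.setNumReduceTasks(0);
        // job.setMapperClass(OrcToOrcAndParquetMapper.class); // hypothetical mapper that writes via MultipleOutputs

        // One named output per target format; the mapper writes records through MultipleOutputs.
        MultipleOutputs.addNamedOutput(job, "orc",
                OrcNewOutputFormat.class, NullWritable.class, Writable.class);
        // MultipleOutputs.addNamedOutput(job, "parquet", ParquetOutputFormat.class, ...); // Parquet setup omitted

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}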
1 Answer
In my case, the fix was to change orc.rows.between.memory.checks (or spark.hadoop.orc.rows.between.memory.checks for Spark) from 5000 (the default) down to 1, because the ORC writer does not seem to cope with exceptionally large rows being added to a stripe between memory checks. The value can likely be tuned further to strike a better balance between safety and performance.
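
A minimal sketch of how the property could be applied to a MapReduce job, assuming it is set on the job's Configuration before submission; the class and job names are illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class OrcMemoryCheckConfig {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Re-check the ORC writer's memory usage after every row instead of
        // every 5000 rows (the default), so one unusually large row cannot
        // blow past the writer's internal buffer limits before the next check.
        conf.set("orc.rows.between.memory.checks", "1");

        Job job = Job.getInstance(conf, "orc-output-job");
        // ... rest of the job setup ...
    }
}

Equivalently, if the driver goes through ToolRunner/GenericOptionsParser, the property can be passed on the command line with -Dorc.rows.between.memory.checks=1; for a Spark job, the spark.hadoop.-prefixed form mentioned above can be supplied via --conf spark.hadoop.orc.rows.between.memory.checks=1 on spark-submit.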