Writing snappy-compressed data into a Hive table

uemypmqf · posted 2021-06-04 in Hadoop

I have created a Hive table, and now I want to load snappy-compressed data into it. To that end, I did the following:

SET mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET hive.exec.compress.output=true;
SET mapreduce.output.fileoutputformat.compress=true;
CREATE TABLE toydata_table (id STRING, value STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ",";
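
A quick way to confirm that these session settings took effect is to issue SET with just the property name; the Hive CLI then prints the property's current value:

SET hive.exec.compress.output;
SET mapreduce.output.fileoutputformat.compress.codec;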

Then I created a CSV file named toydata.csv with the following content:

A,Value1
B,Value2
C,Value3

I compressed this file with snzip (https://github.com/kubo/snzip) by running:

/usr/local/bin/snzip -t snappy-java toydata.csv
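
As a sanity check that the compressed file itself is readable, it can be decompressed back to standard output, assuming the snzip build supports the gzip-style -c and -d options listed in its README:

/usr/local/bin/snzip -d -c -t snappy-java toydata.csv.snappy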

This produced toydata.csv.snappy. After that I went back to the Hive CLI and loaded the file with LOAD DATA LOCAL INPATH "toydata.csv.snappy" INTO TABLE toydata_table;. But when I then try to query the table, I get the following error:

hive> select * from toydata_table;
OK
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
    at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
    at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:62)
    at org.apache.hadoop.io.compress.SnappyCodec.getDecompressorType(SnappyCodec.java:189)
    at org.apache.hadoop.io.compress.CodecPool.getDecompressor(CodecPool.java:175)
    at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:108)
    at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
    at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:433)
    at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:515)
    at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:489)
    at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:136)
    at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1471)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:271)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:781)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)

I did exactly the same thing with gzip, and with gzip it works fine. So why does the snappy version fail?
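
For reference, the gzip run was the same workflow with gzip in place of snzip, essentially:

$ gzip toydata.csv

hive> LOAD DATA LOCAL INPATH "toydata.csv.gz" INTO TABLE toydata_table;
hive> SELECT * FROM toydata_table;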


xnifntxz1#

Please install the snappy compression codec on the cluster. To confirm whether snappy is installed, look for the libsnappy.so file in the native library directory. You also need to start the Hive shell with the --auxpath parameter and supply the snappy jar, for example: hive --auxpath /home/user/snappy-1.0.4.1.jar.
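
On reasonably recent Hadoop versions, both points can be checked from the shell; the library path and jar name below are only examples and will differ per installation:

# Report whether Hadoop can load its native libraries, including snappy
hadoop checknative -a

# Locate the native snappy library on the machine
find / -name "libsnappy.so*" 2>/dev/null

# Start the Hive CLI with the snappy jar on the auxiliary path
hive --auxpath /home/user/snappy-java-1.0.4.1.jar

This is also why the gzip test worked: Hadoop's GzipCodec can fall back to the JDK's built-in zlib support, whereas SnappyCodec requires the native libsnappy library to be present.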
