Can we use Snappy on its own, or does it have to be used together with Hadoop?

q5iwbnjs · posted 2021-06-04 in Flume

I want to use Snappy to compress files from Java. I have downloaded the required native library and set java.library.path to the folder containing the libsnappy.so.1.1.4 file, but I still get the following exception:

| ERROR | [SinkRunner-PollingRunner-DefaultSinkProcessor] | com.omnitracs.otda.dte.flume.sink.hdfs.HDFSEventSink:process(463): process failed org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
        at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method) ~[hadoop-common-2.8.5.jar:?]
        at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63) ~[hadoop-common-2.8.5.jar:?]
        at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:136) ~[hadoop-common-2.8.5.jar:?]
        at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150) ~[hadoop-common-2.8.5.jar:?]
        at org.apache.flume.sink.hdfs.HDFSCompressedDataStream.open(HDFSCompressedDataStream.java:97) ~[flume-hdfs-sink-1.9.0.jar:1.9.0]
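For context on the standalone question: the UnsatisfiedLinkError above comes from Hadoop's own native library, not from libsnappy itself — the buildSupportsSnappy() JNI symbol lives in libhadoop.so, so pointing java.library.path at libsnappy alone is not enough for Hadoop's SnappyCodec. Independently of Hadoop, though, Snappy can be used on its own from Java via the snappy-java library (org.xerial.snappy), which bundles its native binaries inside the jar. A minimal sketch, assuming snappy-java is on the classpath (e.g. the Maven artifact org.xerial.snappy:snappy-java); the class name is illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

import org.xerial.snappy.Snappy;

public class SnappyStandalone {
    public static void main(String[] args) throws Exception {
        byte[] input = "hello snappy, hello snappy, hello snappy"
                .getBytes(StandardCharsets.UTF_8);

        // Compress and decompress with the raw Snappy block format;
        // no Hadoop classes or libhadoop.so involved.
        byte[] compressed = Snappy.compress(input);
        byte[] restored   = Snappy.uncompress(compressed);

        System.out.println("compressed " + input.length
                + " bytes to " + compressed.length);
        System.out.println("round-trip ok: " + Arrays.equals(input, restored));
    }
}
```

Note this produces raw Snappy blocks, which is not the same on-disk framing as Hadoop's SnappyCodec; files written this way are not directly interchangeable with Hadoop-compressed ones.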
