Flume agent hits a Hadoop Snappy native library error

ndh0cuux · posted 2021-05-29 in Hadoop

Hey, I'm getting an error with Flume: it tries to use the Snappy compression library and throws this:

java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
    at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
    at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63)
    at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:136)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:165)
    at org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:1273)
    at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1166)
    at org.apache.hadoop.io.SequenceFile$BlockCompressWriter.<init>(SequenceFile.java:1521)
    at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:284)
    at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:589)
    at org.apache.flume.sink.hdfs.HDFSSequenceFile.open(HDFSSequenceFile.java:97)
    at org.apache.flume.sink.hdfs.HDFSSequenceFile.open(HDFSSequenceFile.java:78)
    at com.mapquest.daas.flume.components.sink.s3.BucketWriter$1.call(BucketWriter.java:244)
    at com.mapquest.daas.flume.components.sink.s3.BucketWriter$1.call(BucketWriter.java:227)
    at com.mapquest.daas.flume.components.sink.s3.BucketWriter$9$1.run(BucketWriter.java:658)
    at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
    at com.mapquest.daas.flume.components.sink.s3.BucketWriter$9.call(BucketWriter.java:655)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
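
If I understand the error, it means Hadoop's NativeCodeLoader could not load a libhadoop.so built with Snappy support. One sanity check (assuming the hadoop CLI is installed on the same box) is:

# prints, for each native library (hadoop, zlib, snappy, ...), whether
# the Hadoop build running here can actually load it
hadoop checknative -a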

I've already tried everything from the top Google hits for similar problems, i.e. creating core-site.xml and mapred-site.xml.

core-site.xml:

<configuration>
  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
  </property>

</configuration>
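
As far as I understand, these files are only picked up if the directory containing them is on the Flume agent's classpath. A minimal sketch of that, assuming the config files live in /opt/flume/conf/hadoop (adjust to your layout):

# flume-ng sources conf/flume-env.sh at startup; putting the directory
# that holds core-site.xml/mapred-site.xml on FLUME_CLASSPATH lets
# Hadoop's Configuration find it
export FLUME_CLASSPATH="/opt/flume/conf/hadoop:$FLUME_CLASSPATH"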

mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
 <property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
 </property>

 <property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
 </property>

 <property>
  <name>mapreduce.admin.user.env</name>
  <value>LD_LIBRARY_PATH=/opt/flume/plugins.d/hadoop/native</value>
 </property>

</configuration>
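
To double-check which values Hadoop actually resolves (rather than trusting the files), the getconf tool can print the effective settings, assuming the hdfs CLI reads the same configuration directory:

# prints the effective value of a key as resolved from the classpath config
hdfs getconf -confKey io.compression.codecs
hdfs getconf -confKey mapreduce.map.output.compress.codec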

I tried adding the snappy .so and hadoop .so files to the .../jre/amd64/ path.
I also tried exporting the library paths in flume-ng.sh, the script that handles launching the jar:

export JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH:/opt/flume/plugins.d/hadoop/native
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/flume/plugins.d/hadoop/native
export SPARK_YARN_USER_ENV="JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH,LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
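
Since the launch command below already expands $JAVA_OPTS, the library path can presumably also be injected there; a minimal sketch:

# flume-ng forwards JAVA_OPTS to the JVM, so this sets java.library.path
# without patching the launcher script
export JAVA_OPTS="$JAVA_OPTS -Djava.library.path=/opt/flume/plugins.d/hadoop/native"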

I also tried adding the snappy/hadoop libs directly to the java command. Here is the relevant snippet from flume-ng.sh:

DAAS_NATIVE_LIBS="/opt/flume/plugins.d/hadoop/native/*:/opt/flume/plugins.d/snappy/native/*"

$EXEC $JAVA_HOME/bin/java $JAVA_OPTS $FLUME_JAVA_OPTS "${arr_java_props[@]}" -cp "$FLUME_CLASSPATH:$DAAS_FLUME_HOME" \
      -Djava.library.path=$DAAS_NATIVE_LIBS:$FLUME_JAVA_LIBRARY_PATH "$FLUME_APPLICATION_CLASS" $*
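
One thing I'm unsure about in that snippet: java.library.path entries have to be directories, and the /* wildcard expansion only applies to the class path (-cp), so the native paths probably need to be the plain directories:

# java.library.path is a list of directories containing the .so files;
# the JVM does not expand /* globs here the way it does for -cp
DAAS_NATIVE_LIBS="/opt/flume/plugins.d/hadoop/native:/opt/flume/plugins.d/snappy/native"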

I'm completely out of ideas on this one. If anyone knows how to fix it, I'd be very grateful.
