I am importing a snappy-compressed JSON file into a Spark RDD or Dataset, but I hit this error: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
I have set the following configuration:
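For context, the `()Z` in the message is the JNI type signature of the method (no arguments, boolean return), and `UnsatisfiedLinkError` means the JVM could not find a *native* implementation for it, i.e. the Hadoop native library is missing from `java.library.path`; a JAR on the classpath cannot satisfy it. A minimal, self-contained sketch of the same failure mode (the native method here is declared only to demonstrate the error, no library is ever loaded for it):

```java
public class NativeDemo {
    // Hypothetical native method, analogous to NativeCodeLoader.buildSupportsSnappy().
    // Its JNI signature is ()Z: no arguments, boolean return.
    private static native boolean buildSupportsSnappy();

    // Calling it without a loaded native library throws UnsatisfiedLinkError.
    static String tryCall() {
        try {
            buildSupportsSnappy();
            return "native call succeeded";
        } catch (UnsatisfiedLinkError e) {
            return "UnsatisfiedLinkError: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(tryCall());
    }
}
```

This is why the error persists regardless of classpath settings: the lookup that fails happens at native-library resolution time, not at class loading time.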
SparkConf conf = new SparkConf()
.setAppName("normal spark")
.setMaster("local")
.set("spark.io.compression.codec", "org.apache.spark.io.SnappyCompressionCodec")
.set("spark.driver.extraLibraryPath","D:\\Downloads\\spark-2.2.0-bin-hadoop2.7\\spark-2.2.0-bin-hadoop2.7\\jars")
.set("spark.driver.extraClassPath","D:\\Downloads\\spark-2.2.0-bin-hadoop2.7\\spark-2.2.0-bin-hadoop2.7\\jars")
.set("spark.executor.extraLibraryPath","D:\\Downloads\\spark-2.2.0-bin-hadoop2.7\\spark-2.2.0-bin-hadoop2.7\\jars")
.set("spark.executor.extraClassPath","D:\\Downloads\\spark-2.2.0-bin-hadoop2.7\\spark-2.2.0-bin-hadoop2.7\\jars")
;
where D:\Downloads\spark-2.2.0-bin-hadoop2.7 is the path where I unpacked Spark, and I can find the snappy JAR files snappy-0.2.jar and snappy-java-1.1.2.6.jar in
D:\Downloads\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\jars
But none of this works; even the error message does not change.
How do I fix this?