TaskID.<init>(Lorg/apache/hadoop/mapreduce/JobID;Lorg/apache/hadoop/mapreduce/TaskType;I)

waxmsbnn asked on 2021-05-29 in Hadoop
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapred.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapred.JobConf

// Configure the old (mapred) TableOutputFormat to write into the target HBase table.
val jobConf = new JobConf(hbaseConf)
jobConf.setOutputFormat(classOf[TableOutputFormat])
jobConf.set(TableOutputFormat.OUTPUT_TABLE, tablename)

val indataRDD = sc.makeRDD(Array("1,jack,15", "2,Lily,16", "3,mike,16"))

// Parse each "id,name,age" line into a Put keyed by the integer id.
val rdd = indataRDD.map(_.split(',')).map { arr =>
  val put = new Put(Bytes.toBytes(arr(0).toInt))
  put.add(Bytes.toBytes("cf"), Bytes.toBytes("name"), Bytes.toBytes(arr(1)))
  put.add(Bytes.toBytes("cf"), Bytes.toBytes("age"), Bytes.toBytes(arr(2).toInt))
  (new ImmutableBytesWritable, put)
}
rdd.saveAsHadoopDataset(jobConf)
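
(For context, hbaseConf and tablename are defined outside the snippet above; a minimal sketch of what they might look like, with a hypothetical ZooKeeper quorum and table name:)

import org.apache.hadoop.hbase.HBaseConfiguration

val hbaseConf = HBaseConfiguration.create()
hbaseConf.set("hbase.zookeeper.quorum", "localhost") // hypothetical quorum address
val tablename = "user"                               // hypothetical table name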

When I run the Hadoop or Spark job, I often hit the following error:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.mapred.TaskID.<init>(Lorg/apache/hadoop/mapreduce/JobID;Lorg/apache/hadoop/mapreduce/TaskType;I)V
at org.apache.spark.SparkHadoopWriter.setIDs(SparkHadoopWriter.scala:158)
at org.apache.spark.SparkHadoopWriter.preSetup(SparkHadoopWriter.scala:60)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1188)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1161)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1161)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1161)
at com.iteblog.App$.main(App.scala:62)
at com.iteblog.App.main(App.scala)

At first I thought it was a jar conflict, but I checked the jars carefully and there were no duplicates. The Spark and Hadoop versions are:

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.0.1</version>
</dependency>

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>2.6.0-mr1-cdh5.5.0</version>
</dependency>

I found that TaskID and TaskType are both in the hadoop-core jar, but they are not in the same package. Why would mapred.TaskID reference mapreduce.TaskType?
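
For reference, the missing method in the error is the three-argument TaskID constructor that Spark 2.x compiles against. A minimal sketch of the call it makes (the job identifier and numbers below are made up):

import org.apache.hadoop.mapred.TaskID
import org.apache.hadoop.mapreduce.{JobID, TaskType}

// Hadoop 2.x (MR2): mapred.TaskID extends mapreduce.TaskID and gained a
// constructor taking the mapreduce.TaskType enum; this is exactly the
// signature in the NoSuchMethodError. The old MR1 hadoop-core jar predates
// TaskType and only offers TaskID(JobID, boolean isMap, int id), so the
// runtime method lookup fails.
val taskId = new TaskID(new JobID("20210529", 1), TaskType.MAP, 0)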


33qvvth11#

I have run into this problem too; it basically came down to a jar issue.

Add the spark-core 2.10 jar file from Maven:

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>2.0.2</version>
</dependency>

After changing the jar files, the error went away.
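
If it is unclear which Hadoop jars actually end up on the classpath, `mvn dependency:tree -Dincludes=org.apache.hadoop` shows where each Hadoop artifact is pulled in from; the point is to make sure no MR1-era hadoop-core jar shadows the MR2 classes. Also note that the artifact's Scala suffix (_2.10 vs _2.11) has to match the Scala version of every other Spark artifact in the build, so switching to spark-core_2.10 is only safe if everything else is on Scala 2.10 as well.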


wkftcu5l2#

Oh, I have already solved this problem by adding the Maven dependency:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-mapreduce-client-core</artifactId>
  <version>2.6.0-cdh5.5.0</version>
</dependency>

The error is gone!
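
(The likely reason this works: hadoop-core 2.6.0-mr1-cdh5.5.0 is the old MapReduce v1 artifact, whose org.apache.hadoop.mapred.TaskID has no TaskType-based constructor, while hadoop-mapreduce-client-core is the MR2 artifact that ships the TaskID(JobID, TaskType, int) constructor Spark 2.x was compiled against; once it is on the classpath, the method lookup succeeds.)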
