java.lang.VerifyError: "Incompatible argument to function" in a MapReduce program

Asked by yrwegjxp on 2021-05-30 in Hadoop

I am working on a MapReduce Java project in Eclipse (on Ubuntu 14.04 LTS), using the Apache Avro serialization framework, for which I need the avro-tools-1.7.7.jar file. I downloaded this jar from the Apache website and wrote my Java code against it. When I run the program I get a java.lang.VerifyError. Several sites suggest this error comes from a mismatch between the JDK version used to compile the class files inside the jar and the JDK version at runtime, so I compared the .class file version in the downloaded jar with my runtime JVM version. They did not match, so I downgraded my JDK from 1.7 to 1.6, after which there was no mismatch: the classes compiled into the jar have class-file major version 50 (Java 6), and so do the class files of my current project. But I still get the same error.
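For reference, the class-file major version can be read straight from a .class file header (the two bytes after the 0xCAFEBABE magic and the minor version). A minimal sketch, not from the original post; the class file path passed as the argument is just an example of one extracted from the jar:

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

// Prints the class-file version of a compiled .class file.
// Major version 50 corresponds to Java 6, 51 to Java 7.
public class ClassVersionCheck {
    public static void main(String[] args) throws IOException {
        DataInputStream in = new DataInputStream(new FileInputStream(args[0]));
        try {
            int magic = in.readInt();            // 0xCAFEBABE for a valid class file
            int minor = in.readUnsignedShort();  // minor version
            int major = in.readUnsignedShort();  // major version
            System.out.printf("magic=%08x minor=%d major=%d%n", magic, minor, major);
        } finally {
            in.close();
        }
    }
}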

srimanth@srimanth-Inspiron-N5110:~$ hadoop jar Desktop/AvroMapReduceExamples.jar practice.AvroSort file:///home/srimanth/avrofile.avro file:///home/srimanth/sorted/ test.avro
15/04/19 22:14:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.lang.VerifyError: (class: org/apache/hadoop/mapred/JobTrackerInstrumentation, method: create signature: (Lorg/apache/hadoop/mapred/JobTracker;Lorg/apache/hadoop/mapred/JobConf;)Lorg/apache/hadoop/mapred/JobTrackerInstrumentation;) Incompatible argument to function
    at org.apache.hadoop.mapred.LocalJobRunner.<init>(LocalJobRunner.java:420)
    at org.apache.hadoop.mapred.JobClient.init(JobClient.java:470)
    at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:455)
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1252)
    at practice.AvroSort.run(AvroSort.java:63)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at practice.AvroSort.main(AvroSort.java:67)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:622)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
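One way to see which jar a conflicting class (such as the JobTrackerInstrumentation named in the VerifyError above) is actually loaded from is to ask for its code source. A minimal sketch, not from the original post:

import java.net.URL;

public class WhichJar {
    public static void main(String[] args) throws ClassNotFoundException {
        // Fully qualified class name to locate; defaults to the class named in the VerifyError
        String name = args.length > 0 ? args[0]
                : "org.apache.hadoop.mapred.JobTrackerInstrumentation";
        Class<?> c = Class.forName(name, false, WhichJar.class.getClassLoader());
        // Prints the jar or directory the class definition was read from
        URL location = c.getProtectionDomain().getCodeSource().getLocation();
        System.out.println(location);
    }
}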

Here is my Java program:

package practice;

import java.io.File;
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.mapred.AvroCollector;
import org.apache.avro.mapred.AvroJob;
import org.apache.avro.mapred.AvroMapper;
import org.apache.avro.mapred.AvroReducer;
import org.apache.avro.mapred.Pair;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class AvroSort extends Configured implements Tool {

    // Identity mapper: emits each datum as a (datum, datum) pair so the
    // shuffle sorts records by the Avro key.
    static class SortMapper<K> extends AvroMapper<K, Pair<K, K>> {
        public void map(K datum, AvroCollector<Pair<K, K>> collector,
                Reporter reporter) throws IOException {
            collector.collect(new Pair<K, K>(datum, null, datum, null));
        }
    }

    // Identity reducer: writes out every value grouped under the sorted key.
    static class SortReducer<K> extends AvroReducer<K, K, K> {
        public void reduce(K key, Iterable<K> values,
                AvroCollector<K> collector,
                Reporter reporter) throws IOException {
            for (K value : values) {
                collector.collect(value);
            }
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        if (args.length != 3) {
            System.err.printf(
                    "Usage: %s [generic options] <input> <output> <schema-file>\n",
                    getClass().getSimpleName());
            ToolRunner.printGenericCommandUsage(System.err);
            return -1;
        }
        String input = args[0];
        String output = args[1];
        String schemaFile = args[2];
        JobConf conf = new JobConf(getConf(), getClass());
        conf.setJobName("Avro sort");
        FileInputFormat.addInputPath(conf, new Path(input));
        FileOutputFormat.setOutputPath(conf, new Path(output));
        Schema schema = new Schema.Parser().parse(new File(schemaFile));
        AvroJob.setInputSchema(conf, schema);
        Schema intermediateSchema = Pair.getPairSchema(schema, schema);
        AvroJob.setMapOutputSchema(conf, intermediateSchema);
        AvroJob.setOutputSchema(conf, schema);
        AvroJob.setMapperClass(conf, SortMapper.class);
        AvroJob.setReducerClass(conf, SortReducer.class);

        JobClient.runJob(conf);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        int exitCode = ToolRunner.run(new AvroSort(), args);
        System.exit(exitCode);
    }
}
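For context, the <schema-file> argument is a plain Avro schema in JSON form, which the program parses with Schema.Parser. A minimal, self-contained sketch of parsing such a schema; the record and field names here are placeholders, not taken from the question:

import org.apache.avro.Schema;

public class SchemaParseExample {
    public static void main(String[] args) {
        // Hypothetical schema similar to what <schema-file> might contain;
        // the record and field names are placeholders, not from the original post.
        String json = "{\"type\":\"record\",\"name\":\"StringPair\","
                + "\"fields\":[{\"name\":\"left\",\"type\":\"string\"},"
                + "{\"name\":\"right\",\"type\":\"string\"}]}";
        Schema schema = new Schema.Parser().parse(json);
        System.out.println(schema.toString(true)); // pretty-printed JSON form of the schema
    }
}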

Additional information: JDK version: 1.6, Hadoop version: 2.6.0. I am not using Maven.
Please help me; I have been stuck on this all day and would really appreciate any help.

No answers yet.
