Error: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable

uajslkp6 · posted 2021-06-01 · in Hadoop

I'm new to Hadoop and trying to run a sample program from a book. I'm getting the error: `java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable`. Below is my code.

package com.hadoop.employee.salary;

import java.io.IOException;
import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class AvgMapper extends Mapper<LongWritable,Text,Text,FloatWritable>{

public void Map(LongWritable key,Text empRec,Context con) throws  IOException,InterruptedException{
        String[] word = empRec.toString().split("\\t");
        String sex = word[3];
        Float salary = Float.parseFloat(word[8]);
        try {
            con.write(new Text(sex), new FloatWritable(salary));
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}

package com.hadoop.employee.salary;

import java.io.IOException;
import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class AvgSalReducer extends Reducer<Text,FloatWritable,Text,Text> {

    public void reduce(Text key,Iterable<FloatWritable> valuelist,Context con)
    throws IOException,
                   InterruptedException
    {
        float total =(float)0;
        int count =0;
        for(FloatWritable var:valuelist)
        {
            total += var.get();
            System.out.println("reducer"+var.get());
            count++;
        }
        float avg =(float) total/count;
        String out = "Total: " + total + " :: " + "Average: " + avg;
        try {
            con.write(key,new Text(out));
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}

package com.hadoop.employee.salary;

import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class AvgSalary {

    public static void main(String[] args) throws IOException {
        // TODO Auto-generated method stub
        if(args.length!=2)
        {
            System.out.println("Please provide twp parameters");
        }
        Job job = new Job();
        job.setJarByClass(AvgSalary.class);//helps hadoop in finding the relevant jar if there are multiple jars
        job.setJobName("Avg Salary");
        job.setMapperClass(AvgMapper.class);
        job.setReducerClass(AvgSalReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        //job.setMapOutputKeyClass(Text.class);
        //job.setMapOutputValueClass(FloatWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job,new Path(args[1]));
        try {
            System.exit(job.waitForCompletion(true)?0:1);
        } catch (ClassNotFoundException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
kzipqqlq1#

In your mapper you named the method `Map`, but it should be `map`. Because the capitalized name does not override `Mapper.map`, Hadoop calls the default implementation, which emits the input key/value pair unchanged. That is why the output key is still a `LongWritable` (the byte offset of the input line) instead of the expected `Text`.
Renaming the method to `map` should fix this error. Adding the `@Override` annotation would have made the compiler catch the typo.
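The mechanism can be illustrated without Hadoop. The sketch below (plain Java, hypothetical class names) mimics the framework: the base class defines `map` with a pass-through default; a subclass that declares `Map` adds a new method instead of overriding, so the default still runs, while `@Override` forces the compiler to reject such a typo.

```java
// Stands in for Hadoop's Mapper: the framework always calls map(),
// whose default behavior passes the input key through unchanged.
class FakeMapper {
    public String map(String key, String value) {
        return key + "=" + value; // default: emit the input key as-is
    }
}

// Capital "Map" declares a NEW method; it never gets called by the
// framework, so the input (offset-like) key leaks into the output.
class BuggyMapper extends FakeMapper {
    public String Map(String key, String value) {
        return "sex=" + value;
    }
}

// Lowercase "map" plus @Override: the compiler would reject a typo here.
class FixedMapper extends FakeMapper {
    @Override
    public String map(String key, String value) {
        return "sex=" + value;
    }
}

public class OverrideDemo {
    public static void main(String[] args) {
        FakeMapper buggy = new BuggyMapper();
        FakeMapper fixed = new FixedMapper();
        // The "framework" calls map() polymorphically on both.
        System.out.println(buggy.map("42", "F")); // prints "42=F"  (key leaks through)
        System.out.println(fixed.map("42", "F")); // prints "sex=F" (intended output)
    }
}
```

Note also that in the driver above, `setOutputValueClass(Text.class)` does not match the mapper's `FloatWritable` output, so once `map` actually runs, the commented-out `setMapOutputKeyClass`/`setMapOutputValueClass` calls will likely need to be re-enabled.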
