WholeFileRecordReader cannot be cast to org.apache.hadoop.mapred.RecordReader

qeeaahzv posted on 2021-06-02 in Hadoop

I want to create a new data type in Hadoop, but my custom InputFormat class throws the following error. Here is my code:
Error - WholeFileRecordReader cannot be cast to org.apache.hadoop.mapred.RecordReader
Code -
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TaskAttemptContext;

public class wholeFileInputFormat extends FileInputFormat<Text, apriori> {

    public RecordReader<Text, apriori> getRecordReader(
            InputSplit input, JobConf job, Reporter reporter)
            throws IOException {

        reporter.setStatus(input.toString());

        return (RecordReader<Text, apriori>) new WholeFileRecordReader(job, (FileSplit) input);
    }

}

My custom reader is as follows:

import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

class WholeFileRecordReader extends RecordReader<Text, apriori> {

    private FileSplit fileSplit;
    private Configuration conf;
    private InputStream in;
    private Text key = new Text("");
    private apriori value = new apriori();
    private boolean processed = false;

    public void initialize(JobConf job, FileSplit split)
            throws IOException {

        this.fileSplit = split;
        this.conf = job;
        final Path file = fileSplit.getPath();
        String StringPath = new String(fileSplit.getPath().toString());
        String StringPath2 = new String();
        StringPath2 = StringPath.substring(5);
        System.out.println(StringPath2);
        in = new FileInputStream(StringPath2);

        FileSystem fs = file.getFileSystem(conf);
        in = fs.open(file);
    }

    public boolean nextKeyValue() throws IOException, InterruptedException {
        if (!processed) {
            byte[] contents = new byte[(int) fileSplit.getLength()];
            Path file = fileSplit.getPath();
            key.set(file.getName());

            try {
                IOUtils.readFully(in, contents, 0, contents.length);
                value.set(contents, 0, contents.length);
            } finally {
                IOUtils.closeStream(in);
            }

            processed = true;
            return true;
        }

        return false;
    }

    @Override
    public Text getCurrentKey() throws IOException, InterruptedException {
        return key;
    }

    @Override
    public apriori getCurrentValue() throws IOException, InterruptedException {
        return value;
    }

    @Override
    public float getProgress() throws IOException {
        return processed ? 1.0f : 0.0f;
    }

    @Override
    public void close() throws IOException {
        // Do nothing
    }

    @Override
    public void initialize(InputSplit arg0, TaskAttemptContext arg1)
            throws IOException, InterruptedException {
        // TODO Auto-generated method stub
    }

}
mtb9vblg 1#

This error is caused by a package mismatch: your code mixes MRv1 and MRv2 classes, which is why you get the cast error.
Package org.apache.hadoop.mapred is MRv1 (MapReduce version 1).
Package org.apache.hadoop.mapreduce is MRv2 (MapReduce version 2).
In your code you have combined MRv1 and MRv2:

import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

Use all of your imports from either org.apache.hadoop.mapred (MRv1) or org.apache.hadoop.mapreduce (MRv2); do not mix the two.
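For example, here is a minimal sketch of a consistent MRv2 (new API) version of the input format. It is only a sketch under the question's assumptions: it keeps the custom apriori Writable, capitalizes the class name to WholeFileInputFormat by Java convention, and expects WholeFileRecordReader to be rewritten purely against org.apache.hadoop.mapreduce:

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class WholeFileInputFormat extends FileInputFormat<Text, apriori> {

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        // Each file becomes exactly one record, so it must never be split.
        return false;
    }

    @Override
    public RecordReader<Text, apriori> createRecordReader(
            InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        // No cast is needed: the reader already extends
        // org.apache.hadoop.mapreduce.RecordReader<Text, apriori>.
        return new WholeFileRecordReader();
    }
}

With the new API the framework calls the reader's initialize(InputSplit, TaskAttemptContext) itself, so the JobConf/FileSplit setup in your reader's first initialize method would move into that override instead of the empty stub at the bottom of the class.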
Hope this helps.

piok6c0g 2#

Your WholeFileRecordReader class is an org.apache.hadoop.mapreduce.RecordReader. That class cannot be cast to org.apache.hadoop.mapred.RecordReader. Can you try using the same API in both classes?
Under the rules of the Java programming language, only classes or interfaces (collectively called types) from the same type hierarchy can be cast or converted to one another. If you try to cast two objects that do not share a type hierarchy, that is, there is no parent-child relationship between them, you will get a compile-time error. You can refer to this link.
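As a quick illustration of that rule, here is a tiny self-contained example; Animal, Dog, and Car are hypothetical classes invented for this demo, not part of the question's code:

class Animal {}
class Dog extends Animal {}
class Car {}

public class CastDemo {
    public static void main(String[] args) {
        Animal a = new Dog();
        Dog d = (Dog) a;     // fine: Dog and Animal share a type hierarchy
        System.out.println(d);

        // Car c = (Car) a;  // does not compile ("inconvertible types"):
        //                   // Animal and Car have no parent-child relation
    }
}

The mapred and mapreduce RecordReader types are in the same situation: they sit in disjoint hierarchies, so the cast in your getRecordReader can never succeed, because WholeFileRecordReader does not implement the mapred interface.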
