Java — NullPointerException when running a plain MapReduce program in Eclipse

vatpfxk5 · posted 2021-05-29 · in Hadoop

I am getting a `NullPointerException` when trying to execute a simple MapReduce program. I can't figure out where the problem is.

package MapReduce.HadMapReduce;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RecCount extends Configured implements Tool {

    public int run(String[] arg0) throws Exception {

        Job job = Job.getInstance(getConf());

        FileInputFormat.setInputPaths(job, new Path("C:\\singledeck.txt"));
        FileOutputFormat.setOutputPath(job, new Path("C:\\temp123"));

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String args[]) throws Exception {
        System.exit(ToolRunner.run(new RecCount(), args));
    }
}
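For comparison, a typical `Tool` implementation also sets the job jar, mapper, reducer, and output types before submitting. This is only a sketch of the usual shape of such a driver, not my actual program; `TokenMapper` and `SumReducer` are hypothetical placeholder classes that would have to exist elsewhere:

```java
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountTool extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() returns the Configuration that ToolRunner set on this Tool
        Job job = Job.getInstance(getConf(), "word count");
        job.setJarByClass(WordCountTool.class);   // lets Hadoop find the job jar
        job.setMapperClass(TokenMapper.class);    // hypothetical Mapper subclass
        job.setReducerClass(SumReducer.class);    // hypothetical Reducer subclass
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // must not already exist
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new WordCountTool(), args));
    }
}
```

In my program above I skipped the mapper/reducer setup on purpose to keep the repro minimal, and the exception is thrown before any of that would matter, inside job submission.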

The error is:

Exception in thread "main" java.lang.NullPointerException
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1010)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:483)
    at org.apache.hadoop.util.Shell.run(Shell.java:456)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:815)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:798)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:731)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:489)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:530)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:507)
    at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:305)
    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:133)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:144)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at MapReduce.HadMapReduce.RecCount.run(RecCount.java:22)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at MapReduce.HadMapReduce.RecCount.main(RecCount.java:26)

This is the logic that happens behind the scenes: `ToolRunner` calls the `run` method below, which in turn calls its other `run` overload (pasted directly underneath), where the configuration is created if it is `null`.

public static int run(Tool tool, String[] args) throws Exception {
    return run(tool.getConf(), tool, args);
}        

public static int run(Configuration conf, Tool tool, String[] args) throws  Exception {
    if (conf == null) {
        conf = new Configuration();
    }
    GenericOptionsParser parser = new GenericOptionsParser(conf, args);
    // set the configuration back, so that Tool can configure itself
    tool.setConf(conf);

    // get the args w/o generic hadoop args
    String[] toolArgs = parser.getRemainingArgs();
    return tool.run(toolArgs);
}
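In other words, since my class extends `Configured` and nothing ever set a configuration on it, `tool.getConf()` returns `null`, and the three-argument overload falls back to `new Configuration()`. The same effect can be made explicit by passing the configuration yourself; a minimal sketch, assuming my `RecCount` class from above:

```java
// Equivalent explicit form: supply the Configuration up front instead of
// letting ToolRunner create one when tool.getConf() is null.
Configuration conf = new Configuration();
int exitCode = ToolRunner.run(conf, new RecCount(), args);
System.exit(exitCode);
```

Either way, by the time `tool.run(toolArgs)` executes, `getConf()` inside my `run` method returns a non-null `Configuration`.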

In the last statement above, my `run` method is invoked because I implement the `Tool` interface. I don't see any error in my code; if you can find one, please tell me!
Can someone explain what is wrong with my code?
