I am unable to run the wordcount example in hadoop 2.2.0 on a remote machine from Eclipse. I built a runnable jar and executed it with `java -jar wordcount.jar`. My driver class code is as follows:
Configuration conf = new Configuration();
// Set properties before constructing the Job: Job copies the
// Configuration, so changes made afterwards are not picked up.
conf.set("fs.defaultFS", "hdfs://192.168.117.128:8020/");
conf.set("hadoop.job.ugi", "root");
conf.set("mapred.job.tracker", "192.168.117.128:8021");
Job job = new Job(conf, "wordcount");
Path inputPath = new Path("/input/FB_Bank_Comments_abi_required_original.txt");
Path outputPath = new Path("/output/wordcount"+System.currentTimeMillis());
job.setJobName("wordcount_analysis");
job.setJarByClass(Map.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
job.setMapperClass(Map.class);
job.setReducerClass(Reduce.class);
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
FileInputFormat.addInputPath(job, inputPath);
FileOutputFormat.setOutputPath(job, outputPath);
job.waitForCompletion(true);
I get the following error:
Exception in thread "main" java.lang.NullPointerException
at java.lang.ProcessBuilder.start(Unknown Source)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:404)
at org.apache.hadoop.util.Shell.run(Shell.java:379)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:678)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:661)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:639)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:435)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:277)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:125)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:344)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Unknown Source)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
at org.myorg.WordCount.main(WordCount.java:69)
What am I missing? Any help would be appreciated.
1 Answer
I ran into the same error. Eventually I found the cause: the output directory already existed. After I deleted the output directory and ran the job again, it worked.
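One way to avoid hitting this again is to delete a stale output directory programmatically before submitting the job. This is a minimal sketch using the standard Hadoop `FileSystem` API; the NameNode address and output path are taken from the question, and the class name `CleanOutputDir` is just an illustrative choice:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CleanOutputDir {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.117.128:8020/");

        Path outputPath = new Path("/output/wordcount");
        FileSystem fs = FileSystem.get(conf);

        // Remove the output directory if it already exists, so the job
        // can create it fresh; the second argument makes the delete recursive.
        if (fs.exists(outputPath)) {
            fs.delete(outputPath, true);
        }
        // ...then it is safe to call FileOutputFormat.setOutputPath(job, outputPath)
    }
}
```

The same cleanup can be done from the shell with `hdfs dfs -rm -r /output/wordcount` before each run.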