I am using Ubuntu and trying to run a wordcount.jar program. Unfortunately, I am getting the following error:
Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class WordCountExample$Map not found
I have already updated the classpath:
job.setJarByClass(WordCountExample.class); and jobConf.setJarByClass(WordCountExample.class);
Neither of these works, and I am not sure what is going wrong. Please share your expertise on this.
import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
/** @author http://www.devinline.com */
public class WordCountExample {
    /* Map class whose map method the job will execute for each input record */
    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }
    /* Reduce class whose reduce method the job will execute for each key */
    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        /* Create a job with the name "wordCountExample" */
        Job job = Job.getInstance(conf, "wordCountExample");
        job.setJarByClass(WordCountExample.class);
        /*
         * Handle strings and ints the Hadoop way: Hadoop uses the Text class
         * for strings and IntWritable for ints.
         */
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        /*
         * Configure the mapper and reducer classes, whose map and reduce
         * methods the job will invoke.
         */
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        /* Input and output formats are set to the plain-text formats */
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        /* addInputPath - passes the input file path to the job */
        FileInputFormat.addInputPath(job, new Path(args[0]));
        /* setOutputPath - passes the output path to the job */
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        /* Submit the job to the cluster, wait for it to finish, and exit 0 on success */
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
hadoop jar /home/nahmed/wordcountsample.jar WordCountExample /user/nahmed/pg20417.txt /user/nahmed/wcoutput
19/08/15 21:31:12 INFO client.RMProxy: Connecting to ResourceManager at it066431.massey.ac.nz/130.123.248.83:8050
19/08/15 21:31:12 INFO client.AHSProxy: Connecting to Application History server at it066431.massey.ac.nz/130.123.248.83:10200
19/08/15 21:31:12 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
19/08/15 21:31:12 WARN mapreduce.JobResourceUploader: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
19/08/15 21:31:12 INFO input.FileInputFormat: Total input paths to process : 1
19/08/15 21:31:12 INFO mapreduce.JobSubmitter: number of splits:1
19/08/15 21:31:12 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1562128011754_0026
19/08/15 21:31:13 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
19/08/15 21:31:13 INFO impl.YarnClientImpl: Submitted application application_1562128011754_0026
19/08/15 21:31:13 INFO mapreduce.Job: The url to track the job: http://it066431.massey.ac.nz:8088/proxy/application_1562128011754_0026/
19/08/15 21:31:13 INFO mapreduce.Job: Running job: job_1562128011754_0026
19/08/15 21:31:17 INFO mapreduce.Job: Job job_1562128011754_0026 running in uber mode : false
19/08/15 21:31:17 INFO mapreduce.Job:  map 0% reduce 0%
19/08/15 21:31:20 INFO mapreduce.Job: Task Id : attempt_1562128011754_0026_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class WordCountExample$Map not found
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2228)
	at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:186)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:745)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
Caused by: java.lang.ClassNotFoundException: Class WordCountExample$Map not found
	at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2134)
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2226)
	... 8 more
Container killed by the ApplicationMaster. Container killed on request. Exit code is 143. Container exited with a non-zero exit code 143
19/08/15 21:31:23 INFO mapreduce.Job: Task Id : attempt_1562128011754_0026_m_000000_1, Status : FAILED
Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class WordCountExample$Map not found
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2228)
	at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:186)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:745)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
Caused by: java.lang.ClassNotFoundException: Class WordCountExample$Map not found
	at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2134)
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2226)
	... 8 more
Container killed by the ApplicationMaster. Container killed on request. Exit code is 143. Container exited with a non-zero exit code 143
19/08/15 21:31:26 INFO mapreduce.Job: Task Id : attempt_1562128011754_0026_m_000000_2, Status : FAILED
Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class WordCountExample$Map not found
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2228)
	at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:186)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:745)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
Caused by: java.lang.ClassNotFoundException: Class WordCountExample$Map not found
	at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2134)
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2226)
	... 8 more
Container killed by the ApplicationMaster. Container killed on request. Exit code is 143. Container exited with a non-zero exit code 143
19/08/15 21:31:31 INFO mapreduce.Job:  map 100% reduce 100%
19/08/15 21:31:31 INFO mapreduce.Job: Job job_1562128011754_0026 failed with state FAILED due to: Task failed task_1562128011754_0026_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
19/08/15 21:31:31 INFO mapreduce.Job: Counters: 13
	Job Counters 
		Failed map tasks=4
		Killed reduce tasks=1
		Launched map tasks=4
		Other local map tasks=3
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=53207
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=7601
		Total time spent by all reduce tasks (ms)=0
		Total vcore-milliseconds taken by all map tasks=7601
		Total vcore-milliseconds taken by all reduce tasks=0
		Total megabyte-milliseconds taken by all map tasks=54483968
		Total megabyte-milliseconds taken by all reduce tasks=0
1 Answer
You have two nested classes, Map and Reduce, and the error says that Java cannot find them. Most likely the jar /home/nahmed/wordcountsample.jar does not contain them. Verify this by listing the jar's contents:
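For example, using the JDK's jar tool (a minimal sketch; adjust the path if your jar lives elsewhere):
jar tf /home/nahmed/wordcountsample.jar
The listing should include WordCountExample.class, WordCountExample$Map.class and WordCountExample$Reduce.class.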
If they are not included in the jar, I think the problem lies in the way you compile and package it.
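A minimal sketch of a build that does pick up the nested classes, assuming the source file is WordCountExample.java in the current directory and the hadoop command is on the PATH (file and path names here are illustrative):
mkdir -p classes
javac -classpath "$(hadoop classpath)" -d classes WordCountExample.java   # also emits WordCountExample$Map.class and WordCountExample$Reduce.class
jar cvf wordcountsample.jar -C classes .                                  # packages every .class file under classes/
hadoop jar wordcountsample.jar WordCountExample /user/nahmed/pg20417.txt /user/nahmed/wcoutput
With a jar built this way, job.setJarByClass(WordCountExample.class) should be able to locate the containing jar and ship it to the cluster, and the "No job jar file set" warning in your log should go away.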