Hadoop Pipes wordcount: Task Id: attempt, Status: FAILED, AttemptID: attempt Timed out after 600 secs

Posted by 41zrol4v on 2021-06-03 in Hadoop

My Hadoop is the latest version, 2.3.0. I can run Hadoop Java applications fine, but the Hadoop Pipes wordcount application fails.
Here is wordcount-simple.cc:


#include "Pipes.hh"
#include "TemplateFactory.hh"
#include "StringUtils.hh"

const std::string WORDCOUNT = "WORDCOUNT";
const std::string INPUT_WORDS = "INPUT_WORDS";
const std::string OUTPUT_WORDS = "OUTPUT_WORDS";

class WordCountMap: public HadoopPipes::Mapper {
public:
  HadoopPipes::TaskContext::Counter* inputWords;

  WordCountMap(HadoopPipes::TaskContext& context) {
    inputWords = context.getCounter(WORDCOUNT, INPUT_WORDS);
  }

  void map(HadoopPipes::MapContext& context) {
    std::vector<std::string> words =
      HadoopUtils::splitString(context.getInputValue(), " ");
    for (unsigned int i = 0; i < words.size(); ++i) {
      context.emit(words[i], "1");
    }
    context.incrementCounter(inputWords, words.size());
  }
};

class WordCountReduce: public HadoopPipes::Reducer {
public:
  HadoopPipes::TaskContext::Counter* outputWords;

  WordCountReduce(HadoopPipes::TaskContext& context) {
    outputWords = context.getCounter(WORDCOUNT, OUTPUT_WORDS);
  }

  void reduce(HadoopPipes::ReduceContext& context) {
    int sum = 0;
    while (context.nextValue()) {
      sum += HadoopUtils::toInt(context.getInputValue());
    }
    context.emit(context.getInputKey(), HadoopUtils::toString(sum));
    context.incrementCounter(outputWords, 1);
  }
};

int main(int argc, char *argv[]) {
  // Hand control to the Pipes runtime with a factory that
  // instantiates the mapper and reducer defined above
  // (as in the standard wordcount-simple.cc example).
  return HadoopPipes::runTask(
      HadoopPipes::TemplateFactory<WordCountMap, WordCountReduce>());
}

Here is the Makefile:

HADOOP_INSTALL=/opt/hadoop

CC = g++
CCFLAGS = -I$(HADOOP_INSTALL)/include

wordcount: wordcount-simple.cc
        $(CC) $(CCFLAGS) $< -Wall -L$(HADOOP_INSTALL)/lib/native -lhadooppipes -lhadooputils -lpthread -lcrypto -g -O2 -o $@
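One thing worth checking before submission: the -program argument to hadoop pipes names a path in HDFS, not on the local filesystem, so the freshly built binary has to be uploaded first. A sketch, assuming the build produced ./wordcount in the current directory:

```shell
# Upload the compiled Pipes binary into HDFS so that
# "-program /bin/wordcount" can resolve it at job submission time.
hadoop fs -mkdir -p /bin
hadoop fs -put wordcount /bin/wordcount
```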

It compiles fine. Then I uploaded some data for testing. My run command is:

hadoop pipes -D hadoop.pipes.java.recordreader=true -D hadoop.pipes.java.recordwriter=true -D mapred.job.name=wordcount -input /data/wc_in -output /data/wc_out -program /bin/wordcount
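As an aside, the deprecation warnings in the log below give the newer property names; on Hadoop 2.x the same submission can be written with them (same paths assumed):

```shell
# Equivalent submission using the non-deprecated 2.x property names
# reported by the deprecation warnings in the job log.
hadoop pipes \
  -D mapreduce.pipes.isjavarecordreader=true \
  -D mapreduce.pipes.isjavarecordwriter=true \
  -D mapreduce.job.name=wordcount \
  -input /data/wc_in -output /data/wc_out \
  -program /bin/wordcount
```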

But the job failed with these errors:

14/04/03 23:59:48 INFO client.RMProxy: Connecting to ResourceManager at /192.168.1.137:8032
14/04/03 23:59:49 INFO client.RMProxy: Connecting to ResourceManager at /192.168.1.137:8032
14/04/03 23:59:50 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String). 
14/04/03 23:59:50 INFO mapred.FileInputFormat: Total input paths to process : 2
14/04/03 23:59:51 INFO mapreduce.JobSubmitter: number of splits:2
14/04/03 23:59:51 INFO Configuration.deprecation: hadoop.pipes.java.recordreader is deprecated. Instead, use mapreduce.pipes.isjavarecordreader
14/04/03 23:59:51 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/04/03 23:59:51 INFO Configuration.deprecation: hadoop.pipes.java.recordwriter is deprecated. Instead, use mapreduce.pipes.isjavarecordwriter
14/04/03 23:59:52 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1396578697573_0004
14/04/03 23:59:52 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
14/04/03 23:59:53 INFO impl.YarnClientImpl: Submitted application application_1396578697573_0004
14/04/03 23:59:53 INFO mapreduce.Job: The url to track the job: 
14/04/03 23:59:53 INFO mapreduce.Job: Running job: job_1396578697573_0004
14/04/04 00:00:26 INFO mapreduce.Job: Job job_1396578697573_0004 running in uber mode : false

14/04/04 00:00:26 INFO mapreduce.Job: map 0% reduce 0%

14/04/04 00:10:53 INFO mapreduce.Job: map 100% reduce 0%
14/04/04 00:10:53 INFO mapreduce.Job: Task Id : attempt_1396578697573_0004_m_000001_0, Status : FAILED
AttemptID:attempt_1396578697573_0004_m_000001_0 Timed out after 600 secs
14/04/04 00:10:54 INFO mapreduce.Job: Task Id : attempt_1396578697573_0004_m_000000_0, Status : FAILED
AttemptID:attempt_1396578697573_0004_m_000000_0 Timed out after 600 secs
14/04/04 00:10:55 INFO mapreduce.Job: map 0% reduce 0%
14/04/04 00:21:23 INFO mapreduce.Job: map 100% reduce 0%
14/04/04 00:21:24 INFO mapreduce.Job: Task Id : attempt_1396578697573_0004_m_000000_1, Status : FAILED
AttemptID:attempt_1396578697573_0004_m_000000_1 Timed out after 600 secs
14/04/04 00:21:24 INFO mapreduce.Job: Task Id : attempt_1396578697573_0004_m_000001_1, Status : FAILED
AttemptID:attempt_1396578697573_0004_m_000001_1 Timed out after 600 secs
14/04/04 00:21:25 INFO mapreduce.Job: map 0% reduce 0%
14/04/04 00:31:53 INFO mapreduce.Job: Task Id : attempt_1396578697573_0004_m_000000_2, Status : FAILED
AttemptID:attempt_1396578697573_0004_m_000000_2 Timed out after 600 secs
14/04/04 00:31:53 INFO mapreduce.Job: Task Id : attempt_1396578697573_0004_m_000001_2, Status : FAILED
AttemptID:attempt_1396578697573_0004_m_000001_2 Timed out after 600 secs
14/04/04 00:42:24 INFO mapreduce.Job: map 100% reduce 0%
14/04/04 00:42:25 INFO mapreduce.Job: map 100% reduce 100%
14/04/04 00:42:26 INFO mapreduce.Job: Job job_1396578697573_0004 failed with state FAILED due to: Task failed task_1396578697573_0004_m_000000

Job failed as tasks failed. failedMaps:1 failedReduces:0
14/04/04 00:42:27 INFO mapreduce.Job: Counters: 9
 Job Counters
  Failed map tasks=8
  Launched map tasks=8
  Other local map tasks=6
  Data-local map tasks=2
  Total time spent by all maps in occupied slots (ms)=5017539
  Total time spent by all reduces in occupied slots (ms)=0
  Total time spent by all map tasks (ms)=5017539
  Total vcore-seconds taken by all map tasks=5017539
  Total megabyte-seconds taken by all map tasks=5137959936

Exception in thread "main" java.io.IOException: Job failed!
 at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
 at org.apache.hadoop.mapred.pipes.Submitter.runJob(Submitter.java:264)
 at org.apache.hadoop.mapred.pipes.Submitter.run(Submitter.java:503)
 at org.apache.hadoop.mapred.pipes.Submitter.main(Submitter.java:518)

It ran for a long time and then failed. Is something wrong with my setup? I have run Pipes applications before without any errors, so I think my environment is fine.
Also, there are no errors in the namenode or datanode (there is only one) logs, so I am wondering why this happens. Thanks in advance.

6pp0gazn1#

After analyzing the container logs on the datanode, I found the cause of these errors. My master runs Ubuntu 13.04 and the slave runs Fedora 17; the two systems have different kernel designs and different library versions. For example, for libcrypto.so.10, Ubuntu has this library but Fedora does not, so the error occurred (the slave could not find the required library). The basic rule is that the master and all slaves must run the same system, which can be achieved by cloning the system image.
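A quick way to confirm this kind of failure on a slave is to list the dynamic dependencies of the Pipes binary there: any dependency reported as "not found" (such as libcrypto.so.10 in this case) means the task process dies before it can connect back to the framework, which then surfaces as the 600-second timeout. A sketch; /bin/sh stands in here for the wordcount binary copied onto the slave:

```shell
# List the shared libraries a binary needs; unresolved dependencies
# show up as "not found" in ldd's output.
ldd /bin/sh

# The check that would have caught the problem on the Fedora slave:
#   ldd ./wordcount | grep "not found"
```

Instead of cloning the OS, building the binary on the slave's own distribution (or statically linking the offending library) would also avoid the mismatch.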
