r - rmr2 PipeMapRed.waitOutputThreads(): subprocess failed with code 2

9vw9lbht · posted 2021-06-04 in Hadoop
Follow (0) | Answers (0) | Views (165)

I am running an rmr2 example; here is the code I tried:

Sys.setenv(HADOOP_HOME="/home/istvan/hadoop")
Sys.setenv(HADOOP_CMD="/home/istvan/hadoop/bin/hadoop")

library(rmr2)
library(rhdfs)

ints = to.dfs(1:100)
calc = mapreduce(input = ints,
                   map = function(k, v) cbind(v, 2*v))
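For background on what the job does at runtime: rmr2 shells out to the binary named by `HADOOP_CMD`, and the log below shows each map task exec'ing `/usr/bin/Rscript` on a generated wrapper script, so both paths must exist and be executable on the node running the task. A minimal sketch of that sanity check, using the paths from the snippet above (the `check_exec` helper is hypothetical, not part of rmr2):

```shell
# check_exec: report whether a path is an executable file. Each streaming
# map task runs "/usr/bin/Rscript ./rmr-streaming-map..."; if the
# interpreter (or the rmr2 package it loads) is missing on a task node,
# the subprocess dies before emitting records and streaming reports a
# nonzero exit code.
check_exec() {
  if [ -x "$1" ]; then echo "ok: $1"; else echo "MISSING: $1"; fi
}
check_exec /home/istvan/hadoop/bin/hadoop   # HADOOP_CMD from the snippet
check_exec /usr/bin/Rscript                 # interpreter the task execs
```

On a multi-node cluster this check has to pass on every TaskTracker node, not just the one submitting the job.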

I am using hadoop-streaming-1.1.1.jar. After I call the mapreduce function the job starts, then fails with this exception:

2013-12-16 16:26:14,844 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2013-12-16 16:26:15,600 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /app/cloudera/mapred/local/taskTracker/nkumar/jobcache/job_201312160142_0009/jars/job.jar <- /app/cloudera/mapred/local/taskTracker/nkumar/jobcache/job_201312160142_0009/attempt_201312160142_0009_m_000000_0/work/job.jar
2013-12-16 16:26:15,604 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /app/cloudera/mapred/local/taskTracker/nkumar/jobcache/job_201312160142_0009/jars/.job.jar.crc <- /app/cloudera/mapred/local/taskTracker/nkumar/jobcache/job_201312160142_0009/attempt_201312160142_0009_m_000000_0/work/.job.jar.crc
2013-12-16 16:26:15,693 WARN org.apache.hadoop.conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
2013-12-16 16:26:15,695 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
2013-12-16 16:26:16,312 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
2013-12-16 16:26:16,319 INFO org.apache.hadoop.mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@6bdc64a5

2013-12-16 16:26:16,757 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and  BYTES_READ as counter name instead
2013-12-16 16:26:16,763 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 1
2013-12-16 16:26:16,772 INFO org.apache.hadoop.mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2013-12-16 16:26:16,779 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 450
2013-12-16 16:26:17,432 INFO org.apache.hadoop.mapred.MapTask: data buffer = 358612992/448266240
2013-12-16 16:26:17,432 INFO org.apache.hadoop.mapred.MapTask: record buffer = 1179648/1474560
2013-12-16 16:26:17,477 INFO org.apache.hadoop.streaming.PipeMapRed: PipeMapRed exec [/usr/bin/Rscript, ./rmr-streaming-map5b17a2a9ff]
2013-12-16 16:26:17,561 INFO org.apache.hadoop.streaming.PipeMapRed: R/W/S=1/0/0 in:NA [rec/s] out:NA [rec/s]
2013-12-16 16:26:17,570 INFO org.apache.hadoop.streaming.PipeMapRed: MRErrorThread done
2013-12-16 16:26:17,571 INFO org.apache.hadoop.streaming.PipeMapRed: PipeMapRed failed!
2013-12-16 16:26:17,587 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2013-12-16 16:26:17,591 WARN org.apache.hadoop.mapred.Child: Error running child
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 2
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:362)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:576)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:135)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)
2013-12-16 16:26:17,605 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task

It does create a sequence file in the /tmp directory on HDFS. Any suggestions? Thanks.
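One note on reading the number in the exception: `PipeMapRed.waitOutputThreads()` propagates the child process's raw exit status, so the "2" is whatever status the Rscript subprocess died with, not a Hadoop-defined code. A minimal shell illustration of the same propagation, with a plain `sh -c` standing in for the real wrapper:

```shell
# waitOutputThreads() reports the child's own exit status verbatim.
# Here a stand-in child exits with status 2 and the parent reads it back.
code=0
sh -c 'exit 2' || code=$?
echo "subprocess failed with code $code"   # prints "subprocess failed with code 2"
```

The R process's actual error message usually ends up in the task attempt's stderr log on the TaskTracker node, not in the Java stack trace, so that log is the place to look for the underlying cause.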
Edit:
I found this answer for Python (hadoop streaming job failed error), so I tried running the R script with each of these two lines at the top:


#!/usr/bin/Rscript

#!/usr/bin/env Rscript

No luck.
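For what it's worth, a shebang only takes effect when it is the literal first two bytes `#!` of the file; with a space after the hash it is an ordinary R comment that the kernel ignores. (In the log above, PipeMapRed execs `/usr/bin/Rscript` explicitly, so the shebang may not matter in this particular failure, but it is worth ruling out.) A quick check of the first two bytes of each variant, with hypothetical file names:

```shell
# Write two variants of a tiny R script: one with a real shebang, one
# where a stray space turns the shebang into a plain comment.
printf '%s\n' '#!/usr/bin/env Rscript' 'cat("hi\n")' > with_shebang.R
printf '%s\n' '# !/usr/bin/env Rscript' 'cat("hi\n")' > no_shebang.R
head -c 2 with_shebang.R   # "#!" -- a real interpreter line
head -c 2 no_shebang.R     # "# " -- just a comment, no interpreter
```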

No answers yet!