I have set up a Hadoop cluster and installed the rmr2 and rhdfs packages. I've been able to run some example MR jobs from the CLI via Rscript. For example, this works:
#!/usr/bin/env Rscript
require('rmr2')
small.ints = to.dfs(1:1000)
out = mapreduce( input = small.ints, map = function(k, v) keyval(v, v^2))
df = as.data.frame( from.dfs( out) )
colnames(df) = c('n', 'n2')
str(df)
The final output:
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
'data.frame': 1000 obs. of 2 variables:
$ n : int 1 2 3 4 5 6 7 8 9 10 ...
$ n2: num 1 4 9 16 25 36 49 64 81 100 ...
I'm now trying to take the next step and write my own MR job. I have a file (`/user/michael/batsmall.csv`) with some batting statistics:
aardsda01,2004,1,SFN,NL,11,11,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,11
aardsda01,2006,1,CHN,NL,45,43,2,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,45
aardsda01,2007,1,CHA,AL,25,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2
aardsda01,2008,1,BOS,AL,47,5,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,5
aardsda01,2009,1,SEA,AL,73,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
aardsda01,2010,1,SEA,AL,53,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
(batsmall.csv is an excerpt from a much larger file; really I just want to prove that I can read and analyze a file from HDFS.)
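For reference, here is the calculation I'm ultimately after, done locally in plain R as a sanity check, entirely outside Hadoop (taking column 1 as the player id and column 7 as at-bats in this excerpt; the column meanings are my reading of the sample rows):

```r
# Local sanity check of the aggregation, using two of the sample rows above.
lines <- c("aardsda01,2004,1,SFN,NL,11,11,0,0",
           "aardsda01,2006,1,CHN,NL,45,43,2,0")
bat <- read.csv(text = lines, header = FALSE)

# Mean at-bats (column 7) per player (column 1).
tapply(bat[[7]], bat[[1]], mean)
# → aardsda01 = 27
```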
Here is my script:
#!/usr/bin/env Rscript
require('rmr2');
require('rhdfs');
hdfs.init()
hdfs.rmr("/user/michael/rMean")
findMean = function (input, output) {
  mapreduce(input = input,
            output = output,
            input.format = 'csv',
            map = function(k, fields) {
              myField <- fields[[5]]
              keyval(fields[[0]], myField)
            },
            reduce = function(key, vv) {
              keyval(key, mean(as.numeric(vv)))
            })
}
from.dfs(findMean("/home/michael/r/Batting.csv", "/home/michael/r/rMean"))
print(hdfs.read.text.file("/user/michael/batsmall.csv"))
It fails every time, and looking at the Hadoop logs it appears to be a broken-pipe error. I can't figure out what's causing it. Since other jobs run fine, I'd assume this is a problem with my script rather than with my configuration, but I can't see it. I'm admittedly fairly new to Hadoop.
Here is the job output:
[michael@hadoop01 r]$ ./rtest.r
Loading required package: rmr2
Loading required package: Rcpp
Loading required package: RJSONIO
Loading required package: methods
Loading required package: digest
Loading required package: functional
Loading required package: stringr
Loading required package: plyr
Loading required package: rhdfs
Loading required package: rJava
HADOOP_CMD=/usr/bin/hadoop
Be sure to run hdfs.init()
Deleted hdfs://hadoop01.dev.terapeak.com/user/michael/rMean
[1] TRUE
packageJobJar: [/tmp/Rtmp2XnCL3/rmr-local-env55d1533355d7, /tmp/Rtmp2XnCL3/rmr-global-env55d119877dd3, /tmp/Rtmp2XnCL3/rmr-streaming-map55d13c0228b7, /tmp/Rtmp2XnCL3/rmr-streaming-reduce55d150f7ffa8, /tmp/hadoop-michael/hadoop-unjar5464463427878425265/] [] /tmp/streamjob4293464845863138032.jar tmpDir=null
12/12/19 11:09:41 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/12/19 11:09:41 INFO mapred.FileInputFormat: Total input paths to process : 1
12/12/19 11:09:42 INFO streaming.StreamJob: getLocalDirs(): [/tmp/hadoop-michael/mapred/local]
12/12/19 11:09:42 INFO streaming.StreamJob: Running job: job_201212061720_0039
12/12/19 11:09:42 INFO streaming.StreamJob: To kill this job, run:
12/12/19 11:09:42 INFO streaming.StreamJob: /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=hadoop01.dev.terapeak.com:8021 -kill job_201212061720_0039
12/12/19 11:09:42 INFO streaming.StreamJob: Tracking URL: http://hadoop01.dev.terapeak.com:50030/jobdetails.jsp?jobid=job_201212061720_0039
12/12/19 11:09:43 INFO streaming.StreamJob: map 0% reduce 0%
12/12/19 11:10:15 INFO streaming.StreamJob: map 100% reduce 100%
12/12/19 11:10:15 INFO streaming.StreamJob: To kill this job, run:
12/12/19 11:10:15 INFO streaming.StreamJob: /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=hadoop01.dev.terapeak.com:8021 -kill job_201212061720_0039
12/12/19 11:10:15 INFO streaming.StreamJob: Tracking URL: http://hadoop01.dev.terapeak.com:50030/jobdetails.jsp?jobid=job_201212061720_0039
12/12/19 11:10:15 ERROR streaming.StreamJob: Job not successful. Error: NA
12/12/19 11:10:15 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, in.folder = if (is.list(input)) { :
hadoop streaming failed with error code 1
Calls: findMean -> mapreduce -> mr
Execution halted
And an exception from the JobTracker:
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:362)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:572)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:136)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:393)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:327)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
1 Answer
You need to check the stderr of the failed attempts; the JobTracker web UI is the easiest way to get at it. My guess is that `fields` is a data frame and you are accessing it like a list, which is possible but unusual; the error could come indirectly from that. Also, there is a debugging document on the RHadoop wiki with more suggestions.
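One concrete technique worth mentioning here: rmr2 ships a local backend that runs the same map and reduce functions in-process, so a bug in your mapper shows up as an ordinary R traceback instead of an opaque streaming failure. A sketch (assuming rmr2 is installed; no Hadoop is needed in local mode):

```r
library(rmr2)
rmr.options(backend = "local")   # run map/reduce in-process for debugging

small <- to.dfs(1:10)
out <- from.dfs(mapreduce(input = small,
                          map = function(k, v) keyval(v, v^2)))
str(out)
```

Once the job works locally, switch back with `rmr.options(backend = "hadoop")`.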
Finally, there is a dedicated RHadoop Google group where you can interact with plenty of enthusiastic users. Or you can go it alone.
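To make the list-indexing point concrete: R subscripts are 1-based, so `fields[[0]]` in the map function raises an error inside each mapper process, which Hadoop streaming then surfaces as the broken-pipe / nonzero-exit failure shown above. A minimal demonstration (the column position is an assumption based on the sample rows):

```r
# With a csv-style input format, fields arrives in the mapper as a data
# frame; [[i]] extracts column i, counting from 1, never from 0.
fields <- read.csv(text = "aardsda01,2004,1,SFN,NL,11,11", header = FALSE)

bad  <- tryCatch(fields[[0]], error = function(e) conditionMessage(e))
good <- fields[[1]]              # first column: the player id

print(bad)                       # an error message, not data
print(as.character(good))
```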