I am streaming two scripts, wordCountMap.pl and wordCountReduce.pl, through Hadoop; together they should count the number of occurrences of each word in a file.
But Hadoop keeps complaining about wordCountMap.pl. My command and its output are below.
Command:
hadoop jar /usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.6.0.jar -input wordCount/words.txt -output output -mapper wordCount/wordCountMap.pl -file wordCount/wordCountMap.pl -reducer wordCount/wordCuntReduce.pl -file wordCount/wordCountReduce.pl
Output:
15/08/18 20:09:50 WARN streaming.StreamJob: -file option is deprecated, please use generic option -files instead.
15/08/18 20:09:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
File: /home/hduser/wordCount/wordCountMap.pl does not exist, or is not readable.
Try -help for more information
Streaming Command Failed!
But wordCountMap.pl looks fine to me, because when I type:
hadoop fs -cat wordCount/wordCountMap.pl
I get:
15/08/18 20:21:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
#!/usr/bin/perl -w
while (<STDIN>) {
    chomp;
    @words = split;
    foreach $w (@words) {
        $key = $w;
        $value = "1";
        print "$key\t$value\n";
    }
}
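(hadoop fs -cat only confirms that the file is stored in HDFS; as an extra sanity check, assuming a local copy of the script exists, the mapper can be run directly, for example:

echo "a a b" | perl wordCountMap.pl

which should print each word paired with a 1.)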
Can anyone tell me what is wrong with my command? (I think we can safely ignore the warning messages above.)
For reference, wordCountReduce.pl is:
# !/usr/bin/perl -w
$count = 0;
while (<STDIN>) {
    chomp;
    ($key, $value) = split "\t";
    if (!defined($oldkey)) {
        $oldkey = $key;
        $count = $value;
    } else {
        if ($oldkey eq $key) {
            $count = $count + $value;
        } else {
            print "$oldkey\t$count\n";
            $oldkey = $key;
            $count = $value;
        }
    }
}
print "$oldkey\t$count\n";
and words.txt is:
a a b
b c
a
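(For what it is worth, assuming local copies of the two scripts and of words.txt, the whole job can be simulated outside Hadoop by sorting the mapper output before feeding it to the reducer, which is what Hadoop does between the map and reduce phases:

cat words.txt | perl wordCountMap.pl | sort | perl wordCountReduce.pl

With the sample input above this should print a 3, b 2 and c 1, each as a tab-separated pair.)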
The result of "hadoop fs -ls wordCount" is:
15/08/18 21:27:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 3 items
-rw-r--r-- 1 hduser supergroup 145 2015-08-18 20:04 wordCount/wordCountMap.pl
-rw-r--r-- 1 hduser supergroup 346 2015-08-18 20:04 wordCount/wordCountReduce.pl
-rw-r--r-- 1 hduser supergroup 12 2015-08-18 20:04 wordCount/words.txt
Thanks in advance!
1 Answer
If you look carefully at the tutorial at http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/
it clearly shows that mapper.py and reducer.py do not need to be copied into HDFS: the -file (or -files) option expects paths on the local file system, such as /path/to/mapper. Point it at local copies of your scripts and I believe the error above will go away.
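As an illustration, a corrected command might look like the sketch below. It assumes the two scripts are kept locally under /home/hduser/wordCount/ (the path the error message was probing) while words.txt stays in HDFS; invoking the scripts through perl also sidesteps any shebang problems, and note that the original command misspells the reducer as wordCuntReduce.pl:

hadoop jar /usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.6.0.jar -files /home/hduser/wordCount/wordCountMap.pl,/home/hduser/wordCount/wordCountReduce.pl -input wordCount/words.txt -output output -mapper "perl wordCountMap.pl" -reducer "perl wordCountReduce.pl"

The -files generic option (which the deprecation warning already recommends over -file) ships the local scripts to each task's working directory, so the -mapper and -reducer arguments can refer to them by bare file name.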