I am trying to run a Python script on a Hadoop cluster using Hadoop Streaming for sentiment analysis. The same script runs fine on my local machine and produces output.
To run it on the local machine I use the following command:
$ cat /home/MB/analytics/Data/input/* | ./new_mapper.py
To run it on the Hadoop cluster I use the command below:
$ hadoop jar /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.5.0-mr1-cdh5.2.0.jar -mapper "python $PWD/new_mapper.py" -reducer "$PWD/new_reducer.py" -input /user/hduser/Test_04012015_Data/input/* -output /user/hduser/python-mr/out-mr-out
The sample code of my script is:
#!/usr/bin/env python
import sys
import re

def main(argv):
    ## for line in sys.stdin:
    ##     print line
    # classifier, feature_select, referenceSets, testSets and i are
    # defined earlier in the full script (omitted here)
    for line in sys.stdin:
        line = line.split(',')
        t_text = re.sub(r'[?|$|.|!|,|!|?|;]', r'', line[7])
        words = re.findall(r"[\w']+", t_text.rstrip())
        predicted = classifier.classify(feature_select(words))
        i = i + 1
        referenceSets[predicted].add(i)
        testSets[predicted].add(i)
        print line[7] + '\t' + predicted

if __name__ == "__main__":
    main(sys.argv)
The stack trace of the exception is:
15/04/22 12:55:14 INFO mapreduce.Job: Task Id : attempt_1429611942931_0010_m_000001_0, Status : FAILED
Error: java.io.IOException: Stream closed at java.lang.ProcessBuilder$NullOutputStream.write(ProcessBuilder.java:434)
...
Exit code: 134
Exception message: /bin/bash: line 1: 1691 Aborted
(core dumped) /usr/lib/jvm/java-7-oracle-cloudera/bin/java
-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Djava.net.preferIPv4Stack=true -Xmx525955249
-Djava.io.tmpdir=/yarn/nm/usercache/hduser/appcache/application_1429611942931_0010/container_1429611942931_0010_01_000016/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1429611942931_0010/container_1429611942931_0010_01_000016 -Dyarn.app.container.log.filesize=0
-Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 192.168.0.122 48725 attempt_1429611942931_0010_m_000006_1 16 > /var/log/hadoop-yarn/container/application_1429611942931_0010/container_1429611942931_0010_01_000016/stdout 2> /var/log/hadoop-yarn/container/application_1429611942931_0010/container_1429611942931_0010_01_000016/stderr
....
15/04/22 12:55:47 ERROR streaming.StreamJob: Job not Successful!
Streaming Command Failed!
I tried to look at the logs, but in Hue it only shows this error.
Please tell me what is going wrong.
1 Answer
It looks like you forgot to add the file new_mapper.py to your job. Basically, your job tries to run the Python script new_mapper.py, but this script is missing on the servers where the mappers run. You have to ship this file with the job using the option -file <local_path_to_your_file>. See the documentation and examples here: https://hadoop.apache.org/docs/stable/hadoop-mapreduce-client/hadoop-mapreduce-client-core/hadoopstreaming.html#streaming_command_options
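For reference, here is a sketch of how the submit command from the question might look with the missing -file options added (assuming new_mapper.py and new_reducer.py sit in the current working directory, as in the original command):

$ hadoop jar /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.5.0-mr1-cdh5.2.0.jar \
    -file $PWD/new_mapper.py \
    -file $PWD/new_reducer.py \
    -mapper "python new_mapper.py" \
    -reducer "python new_reducer.py" \
    -input /user/hduser/Test_04012015_Data/input/* \
    -output /user/hduser/python-mr/out-mr-out

With -file, the scripts are copied into each task's working directory on the cluster, so the mapper and reducer can be referred to by their bare file names instead of $PWD paths that only exist on the submitting machine.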