YARN MapReduce approximate-pi example fails with exit code 1 when run as a non-hadoop user

hivapdat, posted on 2021-05-29 in Hadoop

I run a small private Linux cluster with Hadoop 2.6.2 and YARN, and I launch YARN jobs from a Linux edge node. The canned YARN example that approximates the value of pi works perfectly when run by the hadoop user (the superuser and cluster owner), but fails when run from my personal account on the same edge node. In both cases (hadoop and me) I run the job exactly like this:

clott@edge: /home/hadoop/hadoop-2.6.2/bin/yarn jar /home/hadoop/hadoop-2.6.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.2.jar pi 2 5

It fails; the full output is below. I think the FileNotFoundException is completely bogus: something caused the launch of the container to fail, so the output was never written. What causes a container launch to fail, and how can I debug it?
Because this exact command works perfectly when run by the hadoop user but fails when run by a different account on the same edge node, I suspect a permission or other configuration problem, not a missing jar file. My personal account uses the same environment variables as the hadoop account.
These questions look similar, but I did not find a solution in them:
https://issues.cloudera.org/browse/distro-577
Running a map-reduce job as a different user
YARN MapReduce job issue - AM container launch error in Hadoop 2.3.0
I tried these remedies without success:
In core-site.xml, set the value of hadoop.tmp.dir to /tmp/temp-${user.name} (see the sketch just below this list)
Add my personal user account to every node in the cluster
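For reference, that first remedy amounted to a core-site.xml entry roughly like this; it is only a sketch of what I tried, not a recommended setting:

<property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/temp-${user.name}</value>
</property>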
I suppose many installations run everything as a single user, but I am trying to let two people work on the cluster together without clobbering each other's intermediate results. Am I completely crazy?
Full output:

Number of Maps  = 2
Samples per Map = 5
Wrote input for Map #0
Wrote input for Map #1
Starting Job
15/12/22 15:29:18 INFO client.RMProxy: Connecting to ResourceManager at ac1.mycompany.com/1.2.3.4:8032
15/12/22 15:29:18 INFO input.FileInputFormat: Total input paths to process : 2
15/12/22 15:29:19 INFO mapreduce.JobSubmitter: number of splits:2
15/12/22 15:29:19 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1450815437271_0002
15/12/22 15:29:19 INFO impl.YarnClientImpl: Submitted application application_1450815437271_0002
15/12/22 15:29:19 INFO mapreduce.Job: The url to track the job: http://ac1.mycompany.com:8088/proxy/application_1450815437271_0002/
15/12/22 15:29:19 INFO mapreduce.Job: Running job: job_1450815437271_0002
15/12/22 15:29:31 INFO mapreduce.Job: Job job_1450815437271_0002 running in uber mode : false
15/12/22 15:29:31 INFO mapreduce.Job:  map 0% reduce 0%
15/12/22 15:29:31 INFO mapreduce.Job: Job job_1450815437271_0002 failed with state FAILED due to: Application application_1450815437271_0002 failed 2 times due to AM Container for appattempt_1450815437271_0002_000002 exited with  exitCode: 1
For more detailed output, check application tracking page:http://ac1.mycompany.com:8088/proxy/application_1450815437271_0002/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1450815437271_0002_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
15/12/22 15:29:31 INFO mapreduce.Job: Counters: 0
Job Finished in 13.489 seconds
java.io.FileNotFoundException: File does not exist: hdfs://ac1.mycompany.com/user/clott/QuasiMonteCarlo_1450816156703_163431099/out/reduce-out
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1817)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1841)
at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Answer 1, by qzwqbdag:

Yes, Manjunath Ballur, you are right, it was a permissions problem! I finally learned how to preserve the YARN application logs, and they revealed the problem clearly. Here are the steps:
Edit yarn-site.xml and add a property that delays deletion of the YARN logs:

<property>
    <name>yarn.nodemanager.delete.debug-delay-sec</name>
    <value>600</value>
</property>

Push yarn-site.xml out to every node (argh, I had forgotten this step for far too long), then restart the cluster.
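For anyone repeating this, distributing the file and bouncing YARN looked roughly like the sketch below in my setup; the host names are placeholders, so adjust them and the paths to your own layout:

# copy the edited yarn-site.xml to each node (host names are examples)
for host in node1 node2 node3; do
    scp /home/hadoop/hadoop-2.6.2/etc/hadoop/yarn-site.xml \
        hadoop@$host:/home/hadoop/hadoop-2.6.2/etc/hadoop/
done
# restart YARN from the ResourceManager host
/home/hadoop/hadoop-2.6.2/sbin/stop-yarn.sh
/home/hadoop/hadoop-2.6.2/sbin/start-yarn.sh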
Run the YARN pi-estimation example as shown above; it fails. Look at http://namenode:8088/cluster/apps/failed to see the failed applications, click the link for the most recent failure, and look at the bottom of the page to see which cluster nodes were used.
Open a window on one of the cluster nodes where the application failed and find the work directory, which in my case was:
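If you prefer the command line to the web UI, the yarn client can list failed applications and, if log aggregation is enabled (it was not in my cluster, hence the delete-delay trick above), fetch their logs directly. Something along these lines should work, using the application id printed in the job output:

# list applications the ResourceManager has marked FAILED
yarn application -list -appStates FAILED
# fetch aggregated logs for one application (needs yarn.log-aggregation-enable=true)
yarn logs -applicationId application_1450815437271_0002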

~hadoop/hadoop-2.6.2/logs/userlogs/application_1450815437271_0004/container_1450815437271_0004_01_000001/
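If you would rather not type out the whole path, something like this, run on the failing node, will turn up the container directories for the application (the application id here is the one from my failed run):

# locate the container log directories for the failed application attempt
find ~hadoop/hadoop-2.6.2/logs/userlogs -type d -name 'container_1450815437271_0004_*'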

Et voilà, there I saw the files stdout (nothing but log4j complaints), stderr (almost empty), and syslog (winner winner chicken dinner). In the syslog file I found this gem:

2015-12-23 08:31:42,376 INFO [main] org.apache.hadoop.service.AbstractService: Service JobHistoryEventHandler failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.security.AccessControlException: Permission denied: user=clott, access=EXECUTE, inode="/tmp/hadoop-yarn/staging/history":hadoop:supergroup:drwxrwx---

So the problem was permissions on hdfs:///tmp/hadoop-yarn/staging/history. A simple chmod 777 set it right, and I am no longer fighting the perms. Now a non-hadoop, non-superuser account can run a YARN job.
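Concretely, the fix was a one-liner run as the hadoop superuser; 777 is the blunt instrument I reached for, and a group-based permission scheme would be tidier:

# run as the HDFS superuser (the hadoop account in my cluster); add -R if subdirectories already exist
hdfs dfs -chmod 777 /tmp/hadoop-yarn/staging/history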
