We have one Hadoop cluster running HDP 2.2.0.0.
We have another Hadoop cluster running HDP 2.2.4.2.
We have an Oozie workflow with a Hive action that runs fine on the first cluster (HDP 2.2.0.0).
On the second cluster, running HDP 2.2.4.2, the same workflow fails with the following error:
38098 [main] INFO org.apache.hadoop.hive.ql.Driver - Starting task [Stage-4:MOVE] in serial mode
2015-07-15 16:23:22,810 INFO [main] ql.Driver (Driver.java:launchTask(1604)) - Starting task [Stage-4:MOVE] in serial mode
38099 [main] INFO org.apache.hadoop.hive.ql.exec.Task - Moving data to: hdfs://master-1.local:8020/tmp/hive/cloudfeeds/00f8edac-8b5a-4dfa-9115-5a915acabee0/hive_2015-07-15_16-22-49_023_841777402951025944-1/-ext-10000 from hdfs://master-1.local:8020/tmp/hive/cloudfeeds/00f8edac-8b5a-4dfa-9115-5a915acabee0/hive_2015-07-15_16-22-49_023_841777402951025944-1/-ext-10002
2015-07-15 16:23:22,811 INFO [main] exec.Task (SessionState.java:printInfo(824)) - Moving data to: hdfs://master-1.local:8020/tmp/hive/cloudfeeds/00f8edac-8b5a-4dfa-9115-5a915acabee0/hive_2015-07-15_16-22-49_023_841777402951025944-1/-ext-10000 from hdfs://master-1.local:8020/tmp/hive/cloudfeeds/00f8edac-8b5a-4dfa-9115-5a915acabee0/hive_2015-07-15_16-22-49_023_841777402951025944-1/-ext-10002
40129 [main] ERROR hive.ql.metadata.Hive - Unable to move using hadoop distcp, source hdfs://master-1.local:8020/tmp/hive/cloudfeeds/00f8edac-8b5a-4dfa-9115-5a915acabee0/hive_2015-07-15_16-22-49_023_841777402951025944-1/-ext-10002 to destination hdfs://master-1.local:8020/tmp/hive/cloudfeeds/00f8edac-8b5a-4dfa-9115-5a915acabee0/hive_2015-07-15_16-22-49_023_841777402951025944-1/-ext-10000 using command: /usr/bin/hadoop distcp hdfs://master-1.local:8020/tmp/hive/cloudfeeds/00f8edac-8b5a-4dfa-9115-5a915acabee0/hive_2015-07-15_16-22-49_023_841777402951025944-1/-ext-10002 hdfs://master-1.local:8020/tmp/hive/cloudfeeds/00f8edac-8b5a-4dfa-9115-5a915acabee0/hive_2015-07-15_16-22-49_023_841777402951025944-1/-ext-10000
2015-07-15 16:23:24,841 ERROR [main] metadata.Hive (Hive.java:renameFile(2444)) - Unable to move using hadoop distcp, source hdfs://master-1.local:8020/tmp/hive/cloudfeeds/00f8edac-8b5a-4dfa-9115-5a915acabee0/hive_2015-07-15_16-22-49_023_841777402951025944-1/-ext-10002 to destination hdfs://master-1.local:8020/tmp/hive/cloudfeeds/00f8edac-8b5a-4dfa-9115-5a915acabee0/hive_2015-07-15_16-22-49_023_841777402951025944-1/-ext-10000 using command: /usr/bin/hadoop distcp hdfs://master-1.local:8020/tmp/hive/cloudfeeds/00f8edac-8b5a-4dfa-9115-5a915acabee0/hive_2015-07-15_16-22-49_023_841777402951025944-1/-ext-10002 hdfs://master-1.local:8020/tmp/hive/cloudfeeds/00f8edac-8b5a-4dfa-9115-5a915acabee0/hive_2015-07-15_16-22-49_023_841777402951025944-1/-ext-10000
40129 [main] ERROR hive.ql.metadata.Hive - Exit value for hadoop distcp command 255
Further down in the log there is this error:
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=yarn, access=EXECUTE, inode="/tmp/hive/cloudfeeds":cloudfeeds:hdfs:drwx------
I checked the permissions on the directory mentioned above, /tmp/hive/cloudfeeds. Both clusters have the same permissions (700) and owner (cloudfeeds).
I checked the map-reduce job logs; on both clusters they contain:
user.name=yarn
mapreduce.job.user.name=cloudfeeds
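(For reference, a minimal sketch of how that comparison can be made on each cluster; the path comes from the log above, and nothing else is assumed beyond a standard HDFS client on the PATH:)

# Owner, group and mode of the scratch directory -- run on both clusters
hdfs dfs -ls -d /tmp/hive/cloudfeeds
# Extended ACLs, in case they differ even though the basic mode (700) is the same
hdfs dfs -getfacl /tmp/hive/cloudfeeds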
I don't want to just turn off dfs.permissions. Nor do I want to set 777 permissions on the /tmp/hive/cloudfeeds directory, even though I'm fairly sure that would make the job run successfully.
Any ideas on how I should debug this and, more importantly, how to fix it?
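(One way to narrow this down, offered here only as a hedged suggestion and not part of the original post: the Hive log prints the exact distcp command it shelled out to, so re-running it by hand as the yarn user should reproduce the AccessControlException outside of Oozie, assuming you can sudo to the yarn user on a cluster node. The <scratch-dir> placeholder stands for the long hive_2015-07-15_... staging directory shown in the log:)

# Impersonate the yarn user and rerun the copy that Hive attempted
sudo -u yarn /usr/bin/hadoop distcp \
  hdfs://master-1.local:8020/tmp/hive/cloudfeeds/<scratch-dir>/-ext-10002 \
  hdfs://master-1.local:8020/tmp/hive/cloudfeeds/<scratch-dir>/-ext-10000
# The exit status should match the "Exit value for hadoop distcp command 255" above
echo $?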
1 Answer
I resolved the permission issue by adding this to hive-site.xml:
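(The actual hive-site.xml snippet from this answer is not reproduced in the text above. Purely as an illustration of the kind of setting involved, and not necessarily the poster's exact fix: Hive only falls back to shelling out to hadoop distcp, the step that fails here, for files larger than hive.exec.copyfile.maxsize, so raising that limit keeps the move inside Hive itself. The value below is an arbitrary example.)

<property>
  <name>hive.exec.copyfile.maxsize</name>
  <!-- illustrative value only (~32 GB); files below this size are copied by
       Hive directly instead of via hadoop distcp -->
  <value>33554432000</value>
</property>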