Why does running a SparkR job through Oozie fail with "Permission denied"?

kiayqfof · asked 2021-06-03 · in Hadoop

I am running a SparkR shell script through Oozie. When the job runs, I hit a permission problem:

Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=yarn, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
ERROR Utils: Uncaught exception in thread delete Spark local dirs
java.lang.NullPointerException
Exception in thread "delete Spark local dirs" java.lang.NullPointerException

Full log:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/mntc/yarn/nm/filecache/2452/sparkr-assembly-0.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/09/08 01:39:11 INFO SparkContext: Running Spark version 1.3.0
15/09/08 01:39:13 INFO SecurityManager: Changing view acls to: yarn
15/09/08 01:39:13 INFO SecurityManager: Changing modify acls to: yarn
15/09/08 01:39:13 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users with view permissions: Set(yarn); users with modify permissions: Set(yarn)
15/09/08 01:39:13 INFO Slf4jLogger: Slf4jLogger started
15/09/08 01:39:13 INFO Remoting: Starting remoting
15/09/08 01:39:14 INFO Remoting: Remoting started; listening on addresses :
15/09/08 01:39:14 INFO DiskBlockManager: Created local directory at /mnt/yarn/nm/usercache/karun/appcache/application_1437539731669_0786/blockmgr-1760ec19-b1de-4bcc-9100-b2c1364b54c8
15/09/08 01:39:14 INFO DiskBlockManager: Created local directory at /mntc/yarn/nm/usercache/karun/appcache/application_1437539731669_0786/blockmgr-f57c89eb-4a4b-4fd5-9796-ca3c3a7f2c6f
15/09/08 01:39:14 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
15/09/08 01:39:15 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/09/08 01:39:16 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (6610 MB per container)
15/09/08 01:39:16 INFO Client: Preparing resources for our AM container
createSparkContext on edu.berkeley.cs.amplab.sparkr.RRDD failed with java.lang.reflect.InvocationTargetException
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=yarn, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
Exception in thread "delete Spark local dirs" java.lang.NullPointerException
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.ShellMain], exit code [1]

I don't know how to fix this. Any help would be appreciated.


Answer by vxf3dgd4 (#1)

The problem is most likely that there is no home directory for the user "yarn" in HDFS, so the job is denied write access under /user. There are two possible fixes:
Create such a user in HDFS and grant it access to the resources it needs.
Simpler still, run the job as the hdfs user (or any user that already has the required access in HDFS) by setting user.name=hdfs in the Oozie properties file. See the Oozie documentation.
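The two fixes above can be sketched as follows. This is a sketch, not a verified recipe: the /user/yarn path and the choice of hdfs as superuser are assumptions typical of a CDH cluster, and the HDFS commands require admin rights, so they are shown commented out.

```shell
# Fix 1 (assumes HDFS superuser access; path /user/yarn is the conventional
# home directory for the 'yarn' user, adjust to your cluster):
#   sudo -u hdfs hdfs dfs -mkdir -p /user/yarn
#   sudo -u hdfs hdfs dfs -chown yarn:yarn /user/yarn

# Fix 2: submit the workflow as the 'hdfs' user instead, by adding
# user.name=hdfs to the job properties file passed to 'oozie job -run':
cat > job.properties <<'EOF'
user.name=hdfs
EOF
```

With Fix 2 the launcher and the Spark driver act as hdfs, which already owns /user, so the WRITE check on inode "/user" passes.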
