How to enable the chown command through the Hadoop NFS gateway

qf9go6mv · posted 2021-06-01 in Hadoop

I have a use case where, following this guide, I enabled the NFS gateway for my Hadoop cluster. I mounted it on another machine with:

sudo mount -v -t nfs -o vers=3,proto=tcp,nolock,noacl $ip:/dataDir /mountDir
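
For reference, the mount and the ownership the NFS client reports for the HDFS files can be checked with something like this (same paths as in the mount command above):

mount | grep /mountDir     # confirm the export is mounted with vers=3,proto=tcp
ls -l /mountDir            # owner/group as resolved on the client side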

Now I need to run a chown command on a file in the dataDir folder, so I run:

chown user2  /mountDir/sample.txt

But this produces an error:
chown: changing ownership of '/mountDir/sample.txt': Permission denied
I see the following in the NFS gateway logs:

18/04/05 23:54:25 WARN nfs3.RpcProgramNfs3: Exception
org.apache.hadoop.security.AccessControlException: Non-super user cannot change owner
        at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setOwner(FSDirAttrOp.java:83)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setOwner(FSNamesystem.java:1669)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setOwner(NameNodeRpcServer.java:703)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setOwner(ClientNamenodeProtocolServerSideTranslatorPB.java:464)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)

I also tried adding the following entry to the /etc/nfs.map file mentioned in the documentation, and the error I get when doing this is detailed below:

uid 0 594903   # where 0 is the uid of root on the other machine, and 594903 is the uid of hdfs, the superuser on the datanode machine where the NFS gateway is running
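
For completeness, my understanding from the NFS gateway documentation is that the static mapping file uses "uid/gid <remote id> <local id>" lines with # comments, and that its location is configured through the static.id.mapping.file property (defaulting, I believe, to /etc/nfs.map), with the gateway restarted afterwards so the mapping is picked up. Roughly, the pieces involved would be:

# /etc/nfs.map on the NFS gateway host
# uid <uid on the NFS client> <uid on the gateway host>
uid 0 594903

<!-- hdfs-site.xml on the NFS gateway host; property name as I read it in the docs -->
<property>
  <name>static.id.mapping.file</name>
  <value>/etc/nfs.map</value>
</property>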

But I still get an error:

Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied. user=root is not the owner of inode=sample3.txt
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:250)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:227)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1724)
        at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setOwner(FSDirAttrOp.java:80)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setOwner(FSNamesystem.java:1669)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setOwner(NameNodeRpcServer.java:703)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setOwner(ClientNamenodeProtocolServerSideTranslatorPB.java:464)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
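
The second trace still shows the request arriving at the NameNode as user=root rather than as the mapped account, and it fails because that user is neither the file's owner nor the superuser. One thing worth checking is which owner HDFS actually records for the file versus which local account the NFS client resolves it to (paths from my example above):

hdfs dfs -ls /dataDir/sample.txt    # owner/group as recorded by HDFS
ls -l /mountDir/sample.txt          # owner/group the NFS client maps that to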

Any idea how to make this work?
