Hive cannot load a file into a table because it cannot find the file in the Hive warehouse

pgpifvop · published 2021-06-27 · Hive

I cannot load data into a Hive table; the log shows the problem below.
The file I want to load:

> [hdfs@vmi200937 root]$ hdfs dfs -ls /suppression-files
> Found 1 items
> -rw-rw-rw-   3 hdfs hdfs  694218562 2018-12-21 05:06 /suppression-files/md5.txt

The Hive warehouse directory:

> [hdfs@vmi200937 root]$ hdfs dfs -ls /apps/hive/warehouse/suppression.db
> Found 1 items
> drwxrwxrwx   - hive hadoop          0 2018-12-21 06:30 /apps/hive/warehouse/suppression.db/md5supp

Here is the Hive query:

> hive (suppression)> LOAD DATA INPATH '/suppression-files/md5.txt' INTO TABLE md5supp;

The log:

> Loading data to table suppression.md5supp failed with exception java.io.FileNotFoundException: Directory/File does not exist /apps/hive/warehouse/suppression.db/md5supp/md5.txt
>     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1901)
>     at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setOwner(FSDirAttrOp.java:82)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setOwner(FSNamesystem.java:1877)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setOwner(NameNodeRpcServer.java:828)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setOwner(ClientNamenodeProtocolServerSideTranslatorPB.java:476)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
>
> FAILED: Execution Error, return code 40000 from org.apache.hadoop.hive.ql.exec.MoveTask. java.io.FileNotFoundException: Directory/File does not exist /apps/hive/warehouse/suppression.db/md5supp/md5.txt (followed by the same stack trace as above)

zbq4xfa0 · answer 1#

I found the solution! I had to set the owner of the directory /suppression-files to hive:hdfs:

> hdfs dfs -chown -R hive:hdfs /suppression-files
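The full recovery can be sketched as the commands below. The paths, table name, and `hive:hdfs` owner come from the question; running as an HDFS superuser (e.g. the `hdfs` user, as in the prompts above) is an assumption about this cluster. The key point is that `LOAD DATA INPATH` *moves* the source file into the warehouse, so the `hive` service user needs ownership of the staging file, not just read access.

```shell
# Hand the staging directory over to the hive user (run as an HDFS superuser).
hdfs dfs -chown -R hive:hdfs /suppression-files

# Verify the new owner before retrying the load.
hdfs dfs -ls /suppression-files

# Retry the load; on success the file disappears from /suppression-files
# because it is moved into /apps/hive/warehouse/suppression.db/md5supp.
hive -e "USE suppression; LOAD DATA INPATH '/suppression-files/md5.txt' INTO TABLE md5supp;"
```

If you want to keep the source file in place instead of moving it, `LOAD DATA LOCAL INPATH` (which copies from the local filesystem) or an external table are the usual alternatives.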
