HDFS FileSystem close exception

Asked by zynd9foi on 2021-06-04 in Hadoop

I just ran an HDFS demo, as follows:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class HDFSRemoveDemo {

    public static void main(String[] args) throws Exception {
        Path root = new Path("hdfs://localhost:49000/");
        FileSystem fs = root.getFileSystem(new Configuration());

        fs.create(new Path("/tmp/test"));
        fs.delete(new Path("/tmp/test"), false);
        fs.close();
    }
}

It threw a puzzling exception, as follows:

org.apache.hadoop.hdfs.DFSClient closeAllFilesBeingWritten
SEVERE: Failed to close file /tmp/test
org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /tmp/test File does not exist. Holder DFSClient_NONMAPREDUCE_-1727094995_1 does not have any open files
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1999)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1990)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2045)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2033)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:805)
    at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

    at org.apache.hadoop.ipc.Client.call(Client.java:1113)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy1.complete(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy1.complete(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:4121)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:4022)
    at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:417)
    at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:433)
    at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:369)

When I leave out fs.close();, it works fine.
The environment is:
hadoop-core 1.2.1
JDK 1.6.0_21
What happens when the FileSystem is closed? Has anyone run into this problem?

gcuhipw9 (answer 1):

Generally, you should not call fs.close() when you obtained the FileSystem via FileSystem.get(...). FileSystem.get(...) does not actually open a "new" FileSystem object; it returns a shared, cached instance. When you call close() on that FileSystem, you also close it for any other code in the process that is using it.
For example, if you close the FileSystem inside a mapper, the MapReduce driver will fail when it later tries to close the FileSystem again during cleanup.
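
Below is a minimal sketch (not from the original thread) of the caching behavior described above, assuming an HDFS NameNode reachable at hdfs://localhost:49000 (the URI from the question); the class name HDFSCloseSketch and the test path are illustrative only, and the availability of FileSystem.newInstance(...) depends on the Hadoop version. The idea is to close the stream returned by create() rather than the shared FileSystem itself.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class HDFSCloseSketch {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        URI uri = URI.create("hdfs://localhost:49000/"); // URI from the question; adjust as needed

        // FileSystem.get(...) returns a cached instance: both calls yield the same object,
        // so closing fs1 would also close fs2 and any other code sharing the cache.
        FileSystem fs1 = FileSystem.get(uri, conf);
        FileSystem fs2 = FileSystem.get(uri, conf);
        System.out.println("same cached instance? " + (fs1 == fs2)); // expected: true

        // Close the stream returned by create(), not the FileSystem itself.
        FSDataOutputStream out = fs1.create(new Path("/tmp/test"));
        try {
            out.writeUTF("hello");
        } finally {
            out.close(); // completes the file and releases the client's lease on it
        }
        fs1.delete(new Path("/tmp/test"), false);

        // If an isolated, closeable instance is really needed, newer Hadoop releases
        // offer FileSystem.newInstance(uri, conf), which bypasses the cache:
        // FileSystem privateFs = FileSystem.newInstance(uri, conf);
        // ... use it ...
        // privateFs.close();
    }
}

Run against a live cluster, the println is expected to print true, which is why closing the "local" reference breaks every other holder of the cached object.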
