hadoop/hdfs/name is in an inconsistent state: storage directory (hadoop/hdfs/data/) does not exist or is not accessible

bis0qfac posted on 2021-05-30 in Hadoop

I have already tried all the different solutions suggested on Stack Overflow, but none of them helped, so I am asking again with my specific logs and details.
Any help is appreciated.
My Hadoop cluster has one master node and 5 slave nodes. The ubuntu user and ubuntu group own the ~/hadoop folder, and both the ~/hadoop/hdfs/data and ~/hadoop/hdfs/name folders exist.
Permissions on both folders are set to 755. The namenode was formatted successfully beforehand, but the start-all.sh script fails to start the NameNode.
These daemons are running on the master node:

ubuntu@master:~/hadoop/bin$ jps

7067 TaskTracker
6914 JobTracker
7237 Jps
6834 SecondaryNameNode
6682 DataNode

ubuntu@slave5:~/hadoop/bin$ jps

31438 TaskTracker
31581 Jps
31307 DataNode

Below are the logs from the NameNode log file.

..........
..........
.........

2014-12-03 12:25:45,460 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2014-12-03 12:25:45,461 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2014-12-03 12:25:45,532 INFO org.apache.hadoop.hdfs.util.GSet: Computing capacity for map BlocksMap
2014-12-03 12:25:45,532 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 64-bit
2014-12-03 12:25:45,532 INFO org.apache.hadoop.hdfs.util.GSet: 2.0% max memory = 1013645312
2014-12-03 12:25:45,532 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^21 = 2097152 entries
2014-12-03 12:25:45,532 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2014-12-03 12:25:45,588 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=ubuntu
2014-12-03 12:25:45,588 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2014-12-03 12:25:45,588 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2014-12-03 12:25:45,622 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2014-12-03 12:25:45,623 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2014-12-03 12:25:45,716 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2014-12-03 12:25:45,777 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
2014-12-03 12:25:45,777 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times 
2014-12-03 12:25:45,785 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /home/ubuntu/hadoop/file:/home/ubuntu/hadoop/hdfs/name does not exist
2014-12-03 12:25:45,787 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/ubuntu/hadoop/file:/home/ubuntu/hadoop/hdfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2014-12-03 12:25:45,801 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/ubuntu/hadoop/file:/home/ubuntu/hadoop/hdfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

llmtgqce1#

I ran into a similar problem. I formatted the namenode and then started it:

hadoop namenode -format
hadoop-daemon.sh start namenode
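
Note that hadoop namenode -format wipes the existing HDFS metadata, so only use it on a cluster whose data you can afford to lose. Once the daemon is up, jps on the master should list a NameNode process in addition to the daemons shown in the question:

$ jps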

hgtggwj02#

Run these commands in a terminal:

$ cd ~
$ mkdir -p mydata/hdfs/namenode
$ mkdir -p mydata/hdfs/datanode

Give both directories 755 permissions.
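
For example, assuming the directories were just created under your home directory as above:

$ chmod 755 ~/mydata/hdfs/namenode
$ chmod 755 ~/mydata/hdfs/datanode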
Then add these properties in conf/hdfs-site.xml:

<property>
 <name>dfs.namenode.name.dir</name>
 <value>file:/home/hduser/mydata/hdfs/namenode</value>
</property>

<property>
 <name>dfs.datanode.data.dir</name>
 <value>file:/home/hduser/mydata/hdfs/datanode</value>
</property>

If that does not work, then run:

stop-all.sh
start-all.sh
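
If the NameNode still refuses to start after being pointed at the new directory, that directory usually needs to be formatted once so it gets initialized (again, this erases any existing HDFS metadata):

$ stop-all.sh
$ hadoop namenode -format
$ start-all.sh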

pkwftd7m3#

I removed the "file:" prefix from hdfs-site.xml. (In older Hadoop 1.x setups dfs.name.dir expects a plain local path, so a file: value tends to be treated as a relative path, which matches the mangled directory /home/ubuntu/hadoop/file:/home/ubuntu/hadoop/hdfs/name shown in the question's log.)
[wrong hdfs-site.xml]

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/hduser/mydata/hdfs/namenode</value>
  </property>
  <property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/hduser/mydata/hdfs/datanode</value>
  </property>

[correct hdfs-site.xml]

<property>
  <name>dfs.namenode.name.dir</name>
  <value>/home/hduser/mydata/hdfs/namenode</value>
  </property>

  <property>
  <name>dfs.datanode.data.dir</name>
  <value>/home/hduser/mydata/hdfs/datanode</value>
  </property>

Thanks to Eric for the help.


7fhtutme4#

Follow these steps (a command sketch follows the list):
1. Stop all services
2. Format the namenode
3. Delete the DataNode data directory
4. Start all services
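
A minimal sketch of those four steps, assuming the DataNode directory is ~/hadoop/hdfs/data as in the question; formatting and clearing the data directory destroys whatever is stored in HDFS:

$ stop-all.sh
$ hadoop namenode -format
$ rm -rf ~/hadoop/hdfs/data/*
$ start-all.sh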


nkhmeac65#

1) Make sure you own the namenode directory and chmod it to 750 as appropriate
2) Stop all services
3) Format the namenode with hadoop namenode -format
4) Add this to hdfs-site.xml

<property>
    <name>dfs.data.dir</name>
    <value>path/to/hadooptmpfolder/dfs/name/data</value>
    <final>true</final>
</property>
<property>
    <name>dfs.name.dir</name>
    <value>path/to/hadooptmpfolder/dfs/name</value>
    <final>true</final>
</property>

5) Run hadoop namenode -format. Also add export PATH=$PATH:/usr/local/hadoop/bin/ to ~/.bashrc (pointing at wherever Hadoop is unpacked) so the Hadoop binaries are on your PATH. A sketch of these commands is below.
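
A rough sketch of steps 1 and 5, assuming the daemons run as the ubuntu user from the question and that path/to/hadooptmpfolder is replaced by your actual path:

$ sudo chown -R ubuntu:ubuntu path/to/hadooptmpfolder/dfs/name
$ chmod 750 path/to/hadooptmpfolder/dfs/name
$ hadoop namenode -format
$ echo 'export PATH=$PATH:/usr/local/hadoop/bin/' >> ~/.bashrc
$ source ~/.bashrc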
