NameNode stops working after restarting Hadoop

jjhzyzn0  posted on 2021-06-04 in Hadoop

I have a server with Hadoop installed.
I wanted to change some configuration (regarding mapreduce.map.output.compress), so I edited the configuration file and restarted Hadoop with:

stop-all.sh
start-all.sh
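For context, the compression change was presumably something along these lines in mapred-site.xml (a sketch, not taken from the question; the `true` value and the codec are assumptions for illustration):

```xml
<!-- mapred-site.xml: compress intermediate map output.
     The codec below is an assumed example, not from the question. -->
<property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
</property>
<property>
    <name>mapreduce.map.output.compress.codec</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec</value>
</property>
```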

After that, I could no longer use it, because it was stuck in safe mode: The reported blocks is only 0 but the threshold is 0.9990 and the total blocks 11313. Safe mode will be turned off automatically.
Note that the reported block count is 0 and never increases.
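The numbers in that message imply how many block reports the NameNode is waiting for. Assuming safe mode lifts once reported/total reaches the threshold (the usual rule), a quick sketch of the arithmetic:

```shell
# Values from the safe-mode message above: 0 reported, threshold 0.9990,
# 11313 total blocks. Compute the minimum reports needed (ceiling of t * th).
total=11313
threshold=0.9990
required=$(awk -v t="$total" -v th="$threshold" \
  'BEGIN { r = t * th; c = int(r); if (c < r) c++; print c }')
echo "$required"   # prints 11302
```

With 0 of ~11302 required blocks reported, no DataNode has checked in at all, which is why the count never increases.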
So I forced it to leave safe mode:

bin/hadoop dfsadmin -safemode leave

Now I get errors like this:

2014-03-09 18:16:40,586 [Thread-1] ERROR org.apache.hadoop.hdfs.DFSClient - Failed to close file /tmp/temp-39739076/tmp2073328134/GQL.jar
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/temp-39739076/tmp2073328134/GQL.jar could only be replicated to 0 nodes, instead of 1

If it helps, my hdfs-site.xml is:

<configuration>
<property>
        <name>dfs.replication</name>
        <value>1</value>
</property>
<property>
    <name>dfs.name.dir</name>
    <value>/home/hduser/hadoop/name/data</value>
</property>

</configuration>
Answer from 2skhul33:

I have run into this problem many times. Whenever you get the error x could only be replicated to 0 nodes, instead of 1, the following steps should resolve it:

1. Stop all Hadoop services with: stop-all.sh
2. Delete the dfs/name and dfs/data directories
3. Format the NameNode with: hadoop namenode -format
4. Start Hadoop again with: start-all.sh
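The steps above can be sketched as a script. Note that this is destructive: reformatting the NameNode wipes all HDFS metadata, so all data in HDFS is lost. The name directory path below comes from dfs.name.dir in the hdfs-site.xml shown in the question; the data directory path is an assumption, since dfs.data.dir was not shown. It is written as a dry run that only prints the commands; set DRY_RUN=0 to actually execute them.

```shell
# Recovery steps from the answer, as a dry-run sketch.
# WARNING: with DRY_RUN=0 this deletes all HDFS data and metadata.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NAME_DIR=/home/hduser/hadoop/name/data   # from dfs.name.dir above
DATA_DIR=/home/hduser/hadoop/data        # assumption: adjust to your dfs.data.dir

run stop-all.sh                          # 1. stop all Hadoop daemons
run rm -rf "$NAME_DIR" "$DATA_DIR"       # 2. delete the name and data directories
run hadoop namenode -format              # 3. reformat the NameNode
run start-all.sh                         # 4. start Hadoop again
```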
