I am new to Hadoop. I am trying to install Hadoop 3.0 in my virtual machine, and after configuring Hadoop I ran:
hdfs namenode ‐format
and got this output:
2017-12-26 00:20:56,255 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost/127.0.0.1
STARTUP_MSG: args = [‐format]
STARTUP_MSG: version = 3.0.0
STARTUP_MSG: classpath = /opt/hadoop-3.0.0/etc/hadoop:/opt/hadoop-3.0.0/share/hadoop/common/lib/xz-1.0.jar:/opt/hadoop-3.0.0/share/hadoop/common/lib/kerby-util-1.0.1.jar: ............. hadoop-yarn-applications-unmanaged-am-launcher-3.0.0.jar:/opt/hadoop-3.0.0/share/hadoop/yarn/hadoop-yarn-registry-3.0.0.jar
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r c25427ceca461ee979d30edd7a4b0f50718e6533; compiled by 'andrew' on 2017-12-08T19:16Z
STARTUP_MSG: java = 1.8.0_151
************************************************************/
2017-12-26 00:20:56,265 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2017-12-26 00:20:56,269 INFO namenode.NameNode: createNameNode [‐format]
Usage: hdfs namenode [-backup] |
[-checkpoint] |
[-format [-clusterid cid ] [-force] [-nonInteractive] ] |
[-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] |
[-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] |
[-rollback] |
[-rollingUpgrade <rollback|started> ] |
[-importCheckpoint] |
[-initializeSharedEdits] |
[-bootstrapStandby [-force] [-nonInteractive] [-skipSharedEditsCheck] ] |
[-recover [ -force] ] |
[-metadataVersion ]
2017-12-26 00:20:56,365 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1
************************************************************/
My hdfs-site.xml is configured as follows:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/dan/hadoop_data/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/dan/hadoop_data/datanode</value>
  </property>
</configuration>
When I start the namenode service, it fails and the log says:
2017-12-26 00:03:41,331 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.io.IOException: NameNode is not formatted.
2017-12-26 00:03:41,337 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
Can anyone tell me how to fix this?
Thanks in advance!
1 Answer
Solution 1:
Sometimes this happens. First, stop all services, then go to your NameNode's current directory and delete it (the current directory under the NameNode data directory is where Hadoop keeps the NameNode's stored metadata, i.e. the fsimage and edit log files). After removing the current directory, restart all services. Stop all services:
$HADOOP_HOME/sbin/stop-all.sh
After stopping all services, format the namenode with the following command:
$HADOOP_HOME/bin/hadoop namenode -format
Now start all services again:
$HADOOP_HOME/sbin/start-all.sh
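Putting Solution 1 together, a rough sketch of the whole sequence (assuming the dfs.namenode.name.dir path /home/dan/hadoop_data/namenode from the hdfs-site.xml in the question; adjust the path to your own setup):
$HADOOP_HOME/sbin/stop-all.sh
rm -rf /home/dan/hadoop_data/namenode/current   # remove the stale current directory described above
$HADOOP_HOME/bin/hdfs namenode -format          # use a plain ASCII '-' in -format
$HADOOP_HOME/sbin/start-all.sh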
Solution 2: Sometimes the namenode gets stuck in safe mode. You need to leave safe mode with the following command:
$HADOOP_HOME/bin/hdfs dfsadmin -safemode leave
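If you want to confirm whether the NameNode is actually in safe mode before forcing it out, hdfs dfsadmin can also report the status (not part of the original answer, just a quick check):
$HADOOP_HOME/bin/hdfs dfsadmin -safemode get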