Hadoop 2.2: datanode won't start

uemypmqf posted on 2021-06-02 in Hadoop

This morning I was using Hadoop 2.4 (see my two previous questions). I have now removed it and installed 2.2, because I had problems with 2.4 and I understand 2.2 is the latest stable release. This time I followed the tutorial here:
http://codesfusion.blogspot.com/2013/10/setup-hadoop-2x-220-on-ubuntu.html?m=1
I am fairly sure I did everything correctly, but I am facing a similar problem again.
When I run jps, it is obvious that the datanode is not starting.
What am I doing wrong this time?
Any help would be greatly appreciated.

hduser@test02:~$ start-dfs.sh
14/06/06 18:12:45 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-test02.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-test02.out
localhost: Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
localhost: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-test02.out
0.0.0.0: Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
0.0.0.0: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/06/06 18:13:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hduser@test02:~$ jps
2201 Jps
hduser@test02:~$ jps
2213 Jps
hduser@test02:~$ start-yarn
start-yarn: command not found
hduser@test02:~$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-test02.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-test02.out
hduser@test02:~$ jps
2498 NodeManager
2264 ResourceManager
2766 Jps
hduser@test02:~$ jps
2784 Jps
2498 NodeManager
2264 ResourceManager
hduser@test02:~$ jps
2498 NodeManager
2264 ResourceManager
2796 Jps
hduser@test02:~$

avwztpqn #1

My problem was that I followed the tutorial's instructions too literally.
It said to paste the following between the <configuration> tags:

fs.default.name
hdfs://localhost:9000

I suspected this was wrong, but I did it anyway.
It is incorrect, because core-site.xml is an XML file.
So it should actually look like this:

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

Changing it to this fixed my problem.
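For context, here is a minimal sketch of the whole core-site.xml with that property placed inside the <configuration> element; the file location /usr/local/hadoop/etc/hadoop/core-site.xml is an assumption based on the install path shown in the logs above:

<?xml version="1.0" encoding="UTF-8"?>
<!-- core-site.xml (sketch): the property must live inside <configuration>,
     not be pasted as bare text. In Hadoop 2.x the key fs.default.name is
     deprecated in favor of fs.defaultFS, but both are still accepted. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>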


8aqjt8rx #2

I ran into a similar problem where the datanode would not start. What I did was reformat the namenode and then restart the cluster. Running jps afterwards confirmed that the datanode was up.
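A rough sketch of that sequence, assuming the Hadoop sbin scripts are on the PATH as in the question; note that formatting the namenode erases any existing HDFS metadata and data:

stop-dfs.sh            # stop any running HDFS daemons
hdfs namenode -format  # reformat the namenode (wipes existing HDFS contents)
start-dfs.sh           # start the namenode, datanode and secondary namenode
jps                    # DataNode should now show up in the process list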
This can be caused by placing the hdfs directory inside your home directory (on a Linux box), because the OS touches those folders when it starts up and shuts down (I am not entirely sure how, but to prevent the problem in the future, move the hdfs directory out of your home directory).
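One way to do that (a sketch with hypothetical paths under /usr/local/hadoop_data) is to point the namenode and datanode storage at directories outside the home directory in hdfs-site.xml:

<!-- hdfs-site.xml (sketch): hypothetical storage locations outside the home directory -->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop_data/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop_data/hdfs/datanode</value>
  </property>
</configuration>

The directories have to exist and be writable by the user that runs the daemons (hduser in the transcript above).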
Please let me know if this works.
