Error when installing Hadoop on Windows

bvhaajcl · posted 2021-06-01 in Hadoop

**Closed.** This question needs detailed debugging information. It is not currently accepting answers.

Closed 3 years ago.
I followed these instructions:
http://www.ics.uci.edu/~shantas/install_hadoop-2.6.0_on_windows10.pdf
and completed them fully while installing Hadoop 2.6.5 (using the exact version from the guide, not a newer one).
This is how I configured hdfs-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at
 http://www.apache.org/licenses/LICENSE-2.0
 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property><name>dfs.replication</name><value>1</value></property>
<property> <name>dfs.namenode.name.dir</name><value>/hadoop-
2.6.5/data/name</value><final>true</final></property>
<property><name>dfs.datanode.data.dir</name><value>/hadoop-
2.6.5/data/data</value><final>true</final> </property>
</configuration>

When I try to run hadoop namenode -format, I get the following errors on the command line:

17/11/28 23:51:24 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
17/11/28 23:51:24 INFO http.HttpServer2: Jetty bound to port 50070
17/11/28 23:51:24 INFO mortbay.log: jetty-6.1.26
17/11/28 23:51:24 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
17/11/28 23:51:24 ERROR common.Util: Syntax error in URI /hadoop-
2.6.5/data/name. Please check hdfs configuration.
java.net.URISyntaxException: Illegal character in path at index 8: /hadoop-
2.6.5/data/name
        at java.net.URI$Parser.fail(URI.java:2848)
        at java.net.URI$Parser.checkChars(URI.java:3021)
        at java.net.URI$Parser.parseHierarchical(URI.java:3105)
        at java.net.URI$Parser.parse(URI.java:3063)
        at java.net.URI.<init>(URI.java:588)
        at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:48)
        at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:98)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1435)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceDirs(FSNamesystem.java:1390)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkConfiguration(FSNamesystem.java:675)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:729)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:539)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:598)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:765)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:749)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1446)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1512)
17/11/28 23:51:24 WARN common.Util: Path /hadoop-
2.6.5/data/name should be specified as a URI in configuration files. Please update hdfs configuration.
17/11/28 23:51:24 ERROR common.Util: Error while processing URI: /hadoop-
2.6.5/data/name
java.io.IOException: The filename, directory name, or volume label syntax is incorrect

More errors...

java.io.IOException: No image directories available!
        at org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1099)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1091)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:150)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:945)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1387)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1512)
17/11/29 00:23:23 FATAL namenode.NameNode: Failed to start namenode.
java.io.IOException: No image directories available!
        at org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1099)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1091)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:150)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:945)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1387)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1512)
17/11/29 00:23:23 INFO util.ExitUtil: Exiting with status 1
17/11/29 00:23:23 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at yonatan/192.168.1.24

************************************************************/

Any ideas?
By the way, the Hadoop version looks fine:

C:\WINDOWS\system32>hadoop version
Hadoop 2.6.5
Subversion https://github.com/apache/hadoop.git -r e8c9fe0b4c252caf2ebf1464220599650f119997
Compiled by sjlee on 2016-10-02T23:43Z
Compiled with protoc 2.5.0
From source with checksum f05c9fa095a395faa9db9f7ba5d754
This command was run using /D:/hadoop-2.6.5/share/hadoop/common/hadoop-common-2.6.5.jar

C:\WINDOWS\system32>

EDIT: After editing the XML as @cricket_007 suggested, hadoop namenode -format worked, but when trying to run start-dfs as administrator I got the following errors:

17/11/29 21:10:36 INFO ipc.Server: Starting Socket Reader #1 for port 50020
17/11/29 21:10:36 INFO datanode.DataNode: Opened IPC server at /0.0.0.0:50020
17/11/29 21:10:36 INFO datanode.DataNode: Refresh request received for nameservices: null
17/11/29 21:10:36 INFO datanode.DataNode: Starting BPOfferServices for nameservices: <default>
17/11/29 21:10:36 WARN common.Util: Path /hadoop-2.6.5/data/data should be specified as a URI in configuration files. Please update hdfs configuration.
17/11/29 21:10:36 INFO datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:50071 starting to offer service
17/11/29 21:10:36 INFO ipc.Server: IPC Server listener on 50020: starting
17/11/29 21:10:36 INFO ipc.Server: IPC Server Responder: starting
17/11/29 21:10:38 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:50071. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/11/29 21:10:40 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:50071. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)

FINAL EDIT: Found the problem. The D drive needed to be specified in the Hadoop paths:

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/D:/hadoop-2.6.5/data/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/D:/hadoop-2.6.5/data/data</value>
    <final>true</final>
  </property>
Answer 1 (by ivqmmu1c):

"Illegal character in path at index 8"
You appear to have line breaks in the values. Remove them from the XML, or at least format it properly:

<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/hadoop-2.6.5/data/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/hadoop-2.6.5/data/data</value>
    <final>true</final>
  </property>
</configuration>

I usually see the drive letter included in such paths, e.g. D:\\hadoop\data. Otherwise, I believe it defaults to the C drive.
I would also strongly recommend not putting the HDFS data in the same location as the extracted Hadoop tarball. In a real Hadoop environment, the NameNode and DataNode data should live on entirely separate disks; hard drives are ticking time bombs.
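For what it's worth, here is a minimal sketch (not from the original thread) of why the broken value fails: the stack trace shows Hadoop's org.apache.hadoop.hdfs.server.common.Util.stringAsURI handing the configured directory string to java.net.URI, so a value containing a stray line break is rejected with "Illegal character in path at index 8" (index 8 is the newline), while the single-line value and the /D:/-prefixed value from the final edit both parse cleanly. The class name DfsPathUriCheck is just for illustration.

import java.net.URI;
import java.net.URISyntaxException;

// Reproduces the URI parsing that Util.stringAsURI performs on the
// dfs.namenode.name.dir / dfs.datanode.data.dir values from hdfs-site.xml.
public class DfsPathUriCheck {
    public static void main(String[] args) {
        String[] values = {
            "/hadoop-\n2.6.5/data/name",  // value with the stray line break from the broken XML
            "/hadoop-2.6.5/data/name",    // same value written on a single line
            "/D:/hadoop-2.6.5/data/name"  // value with the Windows drive letter, as in the final edit
        };
        for (String value : values) {
            try {
                URI uri = new URI(value);
                System.out.println("parses OK: " + uri.getPath());
            } catch (URISyntaxException e) {
                // Only the first value lands here, with
                // "Illegal character in path at index 8" -- the position of the newline.
                System.out.println("fails:     " + e.getMessage());
            }
        }
    }
}

Running it shows that only the line-broken value is rejected, which matches the ERROR common.Util lines in the log above.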
