I installed Cloudera Hadoop (YARN) in pseudo-distributed mode on Ubuntu 14.04 following this tutorial:
http://www.cloudera.com/content/www/en-us/documentation/enterprise/5-3-x/topics/cdh_qs_yarn_pseudo.html
However, when I start the services, my DataNode fails to start. I had previously installed MRv1 and everything worked fine then, so the prerequisites for the installation should not be the problem. I uninstalled every version of Hadoop with sudo apt-get purge and did a fresh install, but it did not help.
The DataNode log entries are:
2015-11-14 09:54:34,292 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-11-14 09:54:35,363 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-11-14 09:54:36,364 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-11-14 09:54:37,365 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-11-14 09:54:38,365 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-11-14 09:54:39,366 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-11-14 09:54:40,366 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-11-14 09:54:41,367 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-11-14 09:54:42,367 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-11-14 09:54:43,368 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-11-14 09:54:44,368 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-11-14 09:54:44,370 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: localhost/127.0.0.1:8020
2015-11-14 09:54:49,792 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /var/lib/hadoop-hdfs/cache/hdfs/dfs/data/in_use.lock acquired by nodename 1088@monamie-vpceg3aen
2015-11-14 09:54:49,795 WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /var/lib/hadoop-hdfs/cache/hdfs/dfs/data: namenode clusterID = CID-21fe9d61-d016-43e8-b5fb-1b43851c2ffa; datanode clusterID = CID-2f6844e6-c348-4a40-ac2b-8ef08c674081
2015-11-14 09:54:49,796 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:8020. Exiting.
java.io.IOException: All specified directories are failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:479)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1398)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1363)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:228)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:847)
        at java.lang.Thread.run(Thread.java:745)
2015-11-14 09:54:49,798 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:8020
2015-11-14 09:54:49,904 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool (Datanode Uuid unassigned)
2015-11-14 09:54:51,904 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2015-11-14 09:54:51,905 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
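The FATAL error above is triggered by HDFS comparing the clusterID recorded in each storage directory's VERSION file; when the NameNode's and DataNode's IDs disagree, the DataNode refuses to load the directory. A minimal sketch of that comparison, using the two IDs straight from the log (the mock files here stand in for the real VERSION files, which on this setup would live under /var/lib/hadoop-hdfs/cache/hdfs/dfs/{name,data}/current/ — those exact paths are an assumption):

```shell
# Simulate the VERSION files HDFS writes under <dfs.name.dir>/current
# and <dfs.data.dir>/current, seeded with the IDs from the log above.
tmp=$(mktemp -d)
echo "clusterID=CID-21fe9d61-d016-43e8-b5fb-1b43851c2ffa" > "$tmp/name_VERSION"
echo "clusterID=CID-2f6844e6-c348-4a40-ac2b-8ef08c674081" > "$tmp/data_VERSION"

# The check the DataNode effectively performs at startup:
nn_id=$(grep '^clusterID=' "$tmp/name_VERSION" | cut -d= -f2)
dn_id=$(grep '^clusterID=' "$tmp/data_VERSION" | cut -d= -f2)
if [ "$nn_id" != "$dn_id" ]; then
  echo "Incompatible clusterIDs: $nn_id vs $dn_id"
fi
rm -rf "$tmp"
```

A mismatch like this usually means the two directories belong to different cluster generations, e.g. the NameNode was reformatted while an older DataNode data directory was left on disk.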
The line worth noting is: Problem connecting to server: localhost/127.0.0.1:8020
Please help me figure out what might be going wrong.
NameNode log entries:
2015-11-14 10:38:51,230 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: File /user/oozie/share/lib/mapreduce-streaming/hadoop-streaming-2.6.0-cdh5.4.8.jar.COPYING could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
2015-11-14 10:38:51,231 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 127.0.0.1:45057 Call#5 Retry#0
java.io.IOException: File /user/oozie/share/lib/mapreduce-streaming/hadoop-streaming-2.6.0-cdh5.4.8.jar.COPYING could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1541)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3289)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:668)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:212)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:483)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
2015-11-14 10:38:51,320 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: File /user/oozie/share/lib/mapreduce-streaming/hadoop-streaming.jar.COPYING could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
2015-11-14 10:38:51,320 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 127.0.0.1:45057 Call#12 Retry#0
java.io.IOException: File /user/oozie/share/lib/mapreduce-streaming/hadoop-streaming.jar.COPYING could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1541)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3289)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:668)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:212)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:483)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)