Hadoop SecondaryNameNode problem

cxfofazt · Published 2021-06-04 in Hadoop
Follow (0) | Answers (1) | Views (271)

Running Hadoop version 1.2.1 on four Ubuntu VMs: 1. hadoop-nn (NameNode), 2. hadoop-snn (Secondary NameNode), 3. hadoop-dn01 (DataNode 1), 4. hadoop-dn02 (DataNode 2). All daemons are started with start-all.sh.
I don't see any edit events reaching the secondary name node, which means the fsimage on the secondary is not getting updated. The log file on the SecondaryNameNode shows the following errors.
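For reference, a quick way to confirm which daemon is actually running on which VM is jps on each host (a sketch; it assumes passwordless ssh between the hosts and the JDK's jps on the PATH):

# List the Hadoop JVMs on each host; expect NameNode on hadoop-nn,
# SecondaryNameNode on hadoop-snn, DataNode on hadoop-dn01/hadoop-dn02.
for h in hadoop-nn hadoop-snn hadoop-dn01 hadoop-dn02; do
  echo "== $h =="
  ssh "$h" jps
done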
2015-02-04 13:16:12,083 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 50
2015-02-04 13:16:12,086 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2015-02-04 13:16:12,087 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Start loading edits file /tmp/hadoop-hadoop/dfs/namesecondary/current/edits
2015-02-04 13:16:12,088 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: EOF of /tmp/hadoop-hadoop/dfs/namesecondary/current/edits, reached end of edit log. Number of transactions found: 8. Bytes read: 740
2015-02-04 13:16:12,088 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edits file /tmp/hadoop-hadoop/dfs/namesecondary/current/edits of size 740 edits # 8 loaded in 0 seconds.
2015-02-04 13:16:12,088 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2015-02-04 13:16:12,128 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=740, editlog=/tmp/hadoop-hadoop/dfs/namesecondary/current/edits
2015-02-04 13:16:12,128 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 740, editlog=/tmp/hadoop-hadoop/dfs/namesecondary/current/edits
2015-02-04 13:16:12,130 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file /tmp/hadoop-hadoop/dfs/namesecondary/current/fsimage of size 5124 bytes saved in 0 seconds.
2015-02-04 13:16:12,229 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop-hadoop/dfs/namesecondary/current/edits
2015-02-04 13:16:12,230 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop-hadoop/dfs/namesecondary/current/edits
2015-02-04 13:16:12,485 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Posted URL hadoop-nn:50070putimage=1&port=50090&machine=0.0.0.0&token=-41:307905665:0:1423080068000:1423079764851&newChecksum=9bbe4619db3323211ed473f8acb7a9
2015-02-04 13:16:12,485 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Opening connection to http://hadoop-nn:50070/getimage?putimage=1&port=50090&machine=0.0.0.0&token=-41:307905665:0:1423080068000:1423079764851&newChecksum=9bbe4619db3323211ed473f8acb7a9
2015-02-04 13:16:12,489 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint:
2015-02-04 13:16:12,490 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: java.io.FileNotFoundException: http://hadoop-nn:50070/getimage?putimage=1&port=50090&machine=0.0.0.0&token=-41:307905665:0:1423080068000:1423079764851&newChecksum=9bbe4619db3323211ed473f8acb7a9
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1624)
        at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:177)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.putFSImage(SecondaryNameNode.java:462)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:525)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:396)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:360)
        at java.lang.Thread.run(Thread.java:745)
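Note: java.io.FileNotFoundException from HttpURLConnection.getInputStream means the NameNode answered the putimage request with an HTTP error status, and the posted URL advertises the secondary's own address as machine=0.0.0.0, which the NameNode cannot call back to. As a quick sanity check of the servlet itself, you can fetch the image directly (a sketch run from hadoop-snn; in Hadoop 1.x the same /getimage servlet also serves plain downloads via getimage=1):

# On hadoop-snn: fetch the current fsimage over the NameNode's HTTP port.
# Success here shows the servlet is reachable, so the checkpoint failure
# lies in the callback parameters (machine=/port=), not basic connectivity.
curl -sf -o /tmp/fsimage.check "http://hadoop-nn:50070/getimage?getimage=1" \
  && ls -l /tmp/fsimage.check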


nbewdwxp 1#

<property>
  <name>dfs.secondary.http.address</name>
  <value>hadoop-snn:50090</value>
</property>

Adding this property to hdfs-site.xml resolves the issue: the secondary then advertises its real address (hadoop-snn:50090) instead of the default 0.0.0.0, so the NameNode can call back to it when the new fsimage is uploaded.
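After adding the property to hdfs-site.xml on the NameNode host, restart the daemons and confirm that a checkpoint lands on the secondary; a sketch, assuming the default /tmp checkpoint directory seen in the log above:

# Restart the cluster so the new address takes effect.
stop-all.sh && start-all.sh
# Once a checkpoint runs (fs.checkpoint.period, default 3600 s, or earlier
# if fs.checkpoint.size worth of edits accumulates), the secondary's fsimage
# should carry a fresh timestamp:
ssh hadoop-snn ls -l /tmp/hadoop-hadoop/dfs/namesecondary/current/fsimage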
