Hadoop examples do not work after installation

bjg7j2ky · posted 2021-05-29 in Hadoop

Hi, I recently installed Hadoop 2.7.2 in distributed mode; the namenode is `hadoop` and the datanodes are `hadoop1` and `hadoop2`. When I run `yarn jar /usr/local/hadoop/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar pi 2 1000` in bash, it gives me the following error message:

```
Number of Maps  = 2
Samples per Map = 1000
java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.; Host Details : local host is: "benji/192.168.1.4"; destination host is: "hadoop":9000; 
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:773)
    at org.apache.hadoop.ipc.Client.call(Client.java:1479)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
    at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:278)
    at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.
    at com.google.protobuf.InvalidProtocolBufferException.invalidWireType(InvalidProtocolBufferException.java:99)
    at com.google.protobuf.UnknownFieldSet$Builder.mergeFieldFrom(UnknownFieldSet.java:498)
    at com.google.protobuf.GeneratedMessage.parseUnknownField(GeneratedMessage.java:193)
    at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.<init>(RpcHeaderProtos.java:2207)
    at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.<init>(RpcHeaderProtos.java:2165)
    at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto$1.parsePartialFrom(RpcHeaderProtos.java:2295)
    at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto$1.parsePartialFrom(RpcHeaderProtos.java:2290)
    at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:200)
    at com.google.protobuf.AbstractParser.parsePartialDelimitedFrom(AbstractParser.java:241)
    at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:253)
    at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:259)
    at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:49)
    at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.parseDelimitedFrom(RpcHeaderProtos.java:3167)
    at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1085)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:979)
```

If I instead run `hadoop jar /usr/local/hadoop/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar pi 2 1000`, it gives the following error message:

```
Number of Maps  = 2
Samples per Map = 1000
java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.; Host Details : local host is: "hadoop/192.168.1.4"; destination host is: "hadoop":9000;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:773)
... blabla ...
```

Note that the odd difference between the two error messages is the local host name (one is `benji/192.168.1.4`, the other `hadoop/192.168.1.4`). I ran `start-dfs.sh` and `start-yarn.sh` before `yarn jar ...`, and everything looked fine.
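(For completeness, this is roughly how the daemons can be checked after `start-dfs.sh`/`start-yarn.sh` — a minimal sketch, assuming `jps` is on the PATH of each account and using the SSH host aliases from the `~/.ssh/config` shown below:)

```
# On the master: NameNode, SecondaryNameNode and ResourceManager should be listed
jps

# On each slave (aliases from ~/.ssh/config): DataNode and NodeManager should be listed
ssh hadoop1 jps
ssh hadoop2 jps
```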
I would appreciate any help with this problem. Here are the contents of some configuration files. The `/etc/hosts` file (`benji` is a non-hadoop account on the master machine):

```
192.168.1.4     hadoop benji
192.168.1.5     hadoop1
192.168.1.9     hadoop2
```
The `/etc/hostname` file:

```
hadoop
```
The `~/.ssh/config` file:

```
# hadoop1

Host hadoop1
HostName 192.168.1.5
User hadoop1
IdentityFile ~/.ssh/hadoopid

# hadoop2

Host hadoop2
HostName 192.168.1.9
User hadoop2
IdentityFile ~/.ssh/hadoopid

# hadoop localhost

Host localhost
HostName localhost
User hadoop
IdentityFile ~/.ssh/hadoopid

# hadoop

Host hadoop
HostName hadoop
User hadoop
IdentityFile ~/.ssh/hadoopid
```
The `core-site.xml` file:

```
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:///usr/local/hadoop/hadoopdata/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>
```

The `hdfs-site.xml` file:

```
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hadoop/hadoopdata/dfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.du.reserved</name>
    <value>21474836480</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hadoop/hadoopdata/dfs/datanode</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop:9001</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
```

Can anyone help with this? Thanks!
Update 1
I found part of the problem. Running `jps` showed that the datanode and namenode were not running. After `netstat -an | grep 9000` and `lsof -i :9000` I found that another process was already listening on port `9000`. The namenode could run after I changed `fs.defaultFS` from `hdfs://hadoop:9000` to `hdfs://hadoop:9001` in `core-site.xml`, and `dfs.namenode.secondary.http-address` from `hadoop:9001` to `hadoop:9002` in `hdfs-site.xml`. The protocol-buffer error message disappeared after this change. But the datanode is still not running according to `jps`.
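(Condensing the diagnostic steps above into one place — a sketch; the exact ports are the ones from my configuration and may differ on another cluster:)

```
# Find out what is already listening on the port HDFS wants
netstat -an | grep 9000
lsof -i :9000            # the COMMAND/PID columns name the conflicting process

# After moving fs.defaultFS to a free port, restart HDFS and re-check
stop-dfs.sh
start-dfs.sh
jps                      # NameNode should now appear
netstat -an | grep 9001  # ...listening on the new port
```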
The datanode log file shows something strange happening:

```
... blabla ...
2016-05-19 12:27:12,157 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop/192.168.1.4:9000. Already tried 44 time(s); maxRetries=45
2016-05-19 12:27:32,158 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: hadoop/192.168.1.4:9000
... blabla ...
2016-05-19 13:41:55,382 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: RECEIVED SIGNAL 15: SIGTERM
2016-05-19 13:41:55,387 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
... blabla ...
```

I don't understand why the datanodes are still trying to connect to the namenode on port 9000.
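(One quick way to see which namenode address a node's configuration actually resolves to is the stock `hdfs getconf` tool, run on each machine:)

```
# Prints the effective fs.defaultFS from this node's core-site.xml
hdfs getconf -confKey fs.defaultFS
```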
41zrol4v

You should have an identically configured Hadoop package installed on all the slaves. You changed `fs.defaultFS` to `hdfs://hadoop:9001` only on the namenode, not on the datanodes, so the datanodes keep trying to connect to `hdfs://hadoop:9000`, as their own `core-site.xml` tells them to.
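A sketch of what that sync could look like, assuming the stock `etc/hadoop` config directory of the 2.7.2 tarball and the `hadoop1`/`hadoop2` SSH aliases from the question (adjust the paths if your layout differs):

```
CONF_DIR=/usr/local/hadoop/hadoop-2.7.2/etc/hadoop

# Push the updated config files to every slave
for host in hadoop1 hadoop2; do
    scp "$CONF_DIR/core-site.xml" "$CONF_DIR/hdfs-site.xml" "$host:$CONF_DIR/"
done

# Restart HDFS from the namenode so all daemons reread the config
stop-dfs.sh
start-dfs.sh
```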
