ConnectException: Connection refused when running MapReduce on Hadoop

Asked by zzzyeukh on 2021-05-30 in Hadoop

I set up Hadoop (2.6.0) in multi-machine mode: 1 namenode + 3 datanodes. When I run start-all.sh, all the daemons (NameNode, DataNode, ResourceManager, NodeManager) start fine. I checked with the jps command; the result on each node is as follows:

NameNode machine:

7300 ResourceManager
6942 NameNode
7154 SecondaryNameNode

DataNode machines:

3840 DataNode
3924 NodeManager
I also uploaded a sample text file to HDFS at /user/hadoop/data/sample.txt. At that point there were absolutely no errors.
But when I try to run a MapReduce job with the Hadoop examples jar:

hadoop jar hadoop-mapreduce-examples-2.6.0.jar wordcount /user/hadoop/data/sample.txt /user/hadoop/output

I get this error:

15/04/08 03:31:26 INFO mapreduce.Job: Job job_1428478232474_0001 running    in uber mode : false
15/04/08 03:31:26 INFO mapreduce.Job:  map 0% reduce 0%
15/04/08 03:31:26 INFO mapreduce.Job: Job job_1428478232474_0001 failed with     state FAILED due to: Application application_1428478232474_0001 failed 2 times due to Error launching appattempt_1428478232474_0001_000002. Got exception: java.net.ConnectException: Call From hadoop/127.0.0.1 to localhost:53245 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy31.startContainers(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
at org.apache.hadoop.ipc.Client.call(Client.java:1438)
        ... 9 more
Failing the application.
15/04/08 03:31:26 INFO mapreduce.Job: Counters: 0

As for the configuration, I made sure the namenode can ssh to the datanodes and vice versa without a password prompt. I also disabled IPv6 and modified the /etc/hosts file:

127.0.0.1 localhost hadoop
192.168.56.102 hadoop-nn
192.168.56.103 hadoop-dn1
192.168.56.104 hadoop-dn2
192.168.56.105 hadoop-dn3

I don't understand why the MapReduce job fails even though the namenode and datanodes work fine. I'm completely stuck here. Can you help me find the cause?
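A hint is already present in the stack trace: the call originates from hadoop/127.0.0.1, meaning the host's own name resolves to the loopback address. A minimal sketch of that lookup (written to /tmp so it is safe to run anywhere; the hosts content is copied from the file above):

```shell
# Recreate the first lines of the /etc/hosts shown above.
cat > /tmp/hosts.q <<'EOF'
127.0.0.1 localhost hadoop
192.168.56.102 hadoop-nn
EOF

# "hadoop" is listed as an alias of 127.0.0.1, so a daemon that registers
# itself under this hostname advertises the loopback address to other nodes:
awk '{ for (i = 2; i <= NF; i++) if ($i == "hadoop") print $1 }' /tmp/hosts.q
```

This prints 127.0.0.1, which is exactly the address the ResourceManager then tries (and fails) to reach the remote NodeManager on.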

Thanks.

Edit: configuration in hdfs-site.xml (on the namenode):

<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hadoop/hadoop_stores/hdfs/namenode</value>
    <description>NameNode directory for namespace and transaction logs storage.</description>
</property>
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
<property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>false</value>
</property>
<property>
    <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
    <value>false</value>
</property>
<property>
     <name>dfs.namenode.http-address</name>
     <value>hadoop-nn:50070</value>
     <description>Your NameNode hostname for http access.</description>
</property>
<property>
     <name>dfs.namenode.secondary.http-address</name>
     <value>hadoop-nn:50090</value>
     <description>Your Secondary NameNode hostname for http access.</description>
</property>

And on the datanodes:

<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hadoop/hadoop_stores/hdfs/data/datanode</value>
    <description>DataNode directory</description>
</property>

<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
<property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>false</value>
</property>
<property>
     <name>dfs.namenode.http-address</name>
     <value>hadoop-nn:50070</value>
     <description>Your NameNode hostname for http access.</description>
</property>
<property>
     <name>dfs.namenode.secondary.http-address</name>
     <value>hadoop-nn:50090</value>
     <description>Your Secondary NameNode hostname for http access.</description>
</property>
Here is the output of the command hadoop fs -ls /user/hadoop/data:

hadoop@hadoop:~/data$ hadoop fs -ls /user/hadoop/data
15/04/09 00:23:27 Found 2 items
-rw-r--r--   3 hadoop supergroup         29 2015-04-09 00:22 /user/hadoop/data/sample.txt
-rw-r--r--   3 hadoop supergroup         27 2015-04-09 00:22 /user/hadoop/data/sample1.txt

hadoop fs -ls /user/hadoop/output
ls: `/user/hadoop/output': No such file or directory

Answer 1 (up9lanfz):

Firewall issue:

java.net.ConnectException: Connection refused

This error can be caused by a firewall. Run the following in a terminal:

sudo apt-get install iptables-persistent
sudo iptables -L
sudo iptables-save > /usr/iptables-backup/iptables.v4.rules

Check that the backup file was actually created before continuing (it will be used to restore the firewall if something goes wrong).
Now flush the iptables rules (i.e. stop the firewall):

sudo iptables -F

Now check with:

sudo iptables -L

This command should return no rules. Now try running the map/reduce job again.
Note: if you want to restore iptables to its previous state, type the following in a terminal:

sudo iptables-restore < /usr/iptables-backup/iptables.v4.rules
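Before flushing anything, it can help to confirm the symptom directly. A small hypothetical probe helper (substitute the host and port from your own error message, e.g. localhost and 53245):

```shell
# probe HOST PORT: exit 0 if the TCP port accepts connections.
# Uses bash's /dev/tcp; "Connection refused" means nothing is listening
# there, or a firewall is actively rejecting the connection.
probe() { timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; }

# Port 1 on loopback is almost never open, so this demonstrates the
# "refused" branch; run it against the host:port from your stack trace.
if probe 127.0.0.1 1; then echo open; else echo refused; fi
```

If the probe succeeds from the node itself but fails from another machine, a firewall rule (or a daemon bound only to 127.0.0.1) is the likely cause.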

Answer 2 (lg40wkob):

Found the solution!! See this post: "Datanode id/name shows as localhost".

Call From localhost.localdomain/127.0.0.1 to localhost.localdomain:56148 failed on connection exception: java.net.ConnectException: Connection refused;

The hostname in /etc/hostname on both the master and the slaves was localhost.localdomain.
I changed the hostnames of the slaves to slave1 and slave2. It worked. Thanks everyone for your time.
Make sure /etc/hostname on the namenode and datanodes is not set to localhost. Just type ~# hostname in the terminal to see it. You can set a new hostname with the same command.
My /etc/hosts on the master and the workers/slaves looks like this:

127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4

# 127.0.1.1    localhost

192.168.111.72  master
192.168.111.65  worker1
192.168.111.66  worker2

Hostname of worker1:

hduser@worker1:/mnt/hdfs/datanode$ cat /etc/hostname 
worker1

And of worker2:

hduser@worker2:/usr/local/hadoop/logs$ cat /etc/hostname 
worker2

Also, you probably do not want the "hadoop" hostname mapped to the loopback interface, i.e.

127.0.0.1 localhost hadoop

Check point (1) at https://wiki.apache.org/hadoop/ConnectionRefused.
Thank you.
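The whole fix from this answer can be summarized in one sketch (simulated under /tmp so it runs without root; on a real node you would edit /etc/hostname and /etc/hosts as root and then restart the YARN daemons):

```shell
# Give each node its own name instead of localhost.localdomain:
echo worker1 > /tmp/hostname

# Hosts file per the layout above: loopback names stay on 127.0.0.1,
# and every cluster node gets its LAN address.
cat > /tmp/hosts <<'EOF'
127.0.0.1    localhost localhost.localdomain
192.168.111.72  master
192.168.111.65  worker1
192.168.111.66  worker2
EOF

# Sanity check: the node's own name must map to its LAN address, never 127.x.
awk -v h="$(cat /tmp/hostname)" '$2 == h { print $1 }' /tmp/hosts
```

For worker1 this prints 192.168.111.65; if it ever prints a 127.x address, YARN daemons on that node will register as localhost and remote calls to them will be refused.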
