Apache Hadoop 2.7.3, socket timeout error

nbysray5 · asked 2021-06-02 · in Hadoop

I have the same problem as in the link below:
hadoop, socket timeout error
Can you help me resolve it? I hit the same issue installing Apache Hadoop 2.7.3 on EC2. Do the properties mentioned in the link need to be added to both the NameNode and DataNode configuration files? If so, which .xml files are they? Thanks in advance.
Also, per the error below, the application is trying to reach the EC2 instance's internal IP. Do I need to open any ports? The web UI shows 8042.
All nodes, the NodeManager, and the ResourceManager (RM) show as running in jps.
A sample of the error on the NameNode when trying to run a MapReduce job:
Job job_1506038808044_0002 failed with state FAILED due to: Application application_1506038808044_0002 failed 2 times due to Error launching appattempt_1506038808044_0002_000002. Got exception: org.apache.hadoop.net.ConnectTimeoutException: Call From ip-172-31-1-10/172.31.1.10 to ip-172-31-5-59.ec2.internal:43555 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=ip-172-31-5-59.ec2.internal/172.31.5.59:43555]
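As a quick way to tell a blocked port apart from a Hadoop-level misconfiguration, a small connectivity probe can be sketched like this. The host and port below are taken from the stack trace above; substitute your own, and note this assumes `nc` (netcat) is installed:

```shell
#!/bin/sh
# Probe a host:port with a short timeout to distinguish "port blocked or
# unreachable" from a Hadoop-side problem. Requires nc (netcat).
probe() {
  host=$1; port=$2
  # -z: only scan, do not send data; -w 5: give up after 5 seconds
  if nc -z -w 5 "$host" "$port" 2>/dev/null; then
    echo "open: $host:$port"
  else
    echo "unreachable: $host:$port (check security groups / firewalls)"
  fi
}

# Host and port from the error message above -- replace with your own.
probe ip-172-31-5-59.ec2.internal 43555
```

If the probe reports the port as unreachable while the NodeManager is running on that host, the timeout is a network/firewall issue rather than a Hadoop configuration issue.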
Finally, the RM web UI keeps showing the following message while the job runs:
State: waiting for AM container to be allocated, launched and register with RM.
Thanks, Asha

rjee0c15

The problem was solved by allowing all ICMP and all UDP traffic between the EC2 instances in their security groups, so they can ping each other, after trying the solution from hadoop, socket timeout error (the link in my question) and adding the following to hdfs-site.xml:

<property>
  <name>dfs.namenode.name.dir</name>
  <value>/usr/local/hadoop/hadoop_work/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/usr/local/hadoop/hadoop_work/hdfs/datanode</value>
</property>
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <value>/usr/local/hadoop/hadoop_work/hdfs/namesecondary</value>
</property>
<property>
  <name>dfs.block.size</name>
  <value>134217728</value>
</property>
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>2000000</value>
</property>
<property>
  <name>dfs.socket.timeout</name>
  <value>2000000</value>
</property>

<property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>true</value>
  <description>Whether datanodes should use datanode hostnames when
    connecting to other datanodes for data transfer.
  </description>
</property>

<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the RPC server will bind to. If this optional address is
    set, it overrides only the hostname portion of dfs.namenode.rpc-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node listen on all interfaces by
    setting it to 0.0.0.0.
  </description>
</property>

<property>
  <name>dfs.namenode.servicerpc-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the service RPC server will bind to. If this optional address is
    set, it overrides only the hostname portion of dfs.namenode.servicerpc-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node listen on all interfaces by
    setting it to 0.0.0.0.
  </description>
</property>

<property>
  <name>dfs.namenode.http-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the HTTP server will bind to. If this optional address
    is set, it overrides only the hostname portion of dfs.namenode.http-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node HTTP server listen on all
    interfaces by setting it to 0.0.0.0.
  </description>
</property>

<property>
  <name>dfs.namenode.https-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the HTTPS server will bind to. If this optional address
    is set, it overrides only the hostname portion of dfs.namenode.https-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node HTTPS server listen on all
    interfaces by setting it to 0.0.0.0.
  </description>
</property>
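For reference, the security-group change described above (allowing all ICMP and all UDP between the instances) can be sketched with the AWS CLI. The group ID `sg-0123456789abcdef0` is a placeholder for the security group your Hadoop nodes actually share; this is a configuration sketch, not something to run verbatim:

```shell
# Placeholder security group ID -- replace with the group that the
# Hadoop nodes belong to.
SG=sg-0123456789abcdef0

# Allow all ICMP from members of the same security group, so the
# instances can ping each other.
aws ec2 authorize-security-group-ingress \
  --group-id "$SG" --protocol icmp --port -1 --source-group "$SG"

# Allow all UDP between members of the same security group.
aws ec2 authorize-security-group-ingress \
  --group-id "$SG" --protocol udp --port 0-65535 --source-group "$SG"
```

Using `--source-group` rather than an open CIDR keeps the rules scoped to traffic between cluster members only.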
