Hadoop connection refused with a high-availability NameNode setup on Docker

bvuwiixz  published on 2021-05-29 in Hadoop

I am currently configuring Hadoop for NameNode HA using the Quorum Journal Manager, with three JournalNodes. Here is my configuration. hdfs-site.xml:

<configuration>

<property>
  <name>dfs.nameservices</name>
  <value>cluster</value>
  <description>
    Comma-separated list of nameservices.
  </description>
</property>

<property>
  <name>dfs.ha.namenodes.cluster</name>
  <value>nn1,nn2</value>
  <description>
  </description>
</property>
<property>
  <name>dfs.namenode.rpc-address.cluster.nn1</name>
  <value>namenode1:8020</value>
  <description>
  </description>
</property>

<property>
  <name>dfs.namenode.rpc-address.cluster.nn2</name>
  <value>namenode2:8020</value>
  <description>
  </description>
</property>

<property>
  <name>dfs.namenode.http-address.cluster.nn1</name>
  <value>namenode1:50070</value>
  <description>
  </description>
</property>

<property>
  <name>dfs.namenode.http-address.cluster.nn2</name>
  <value>namenode2:50070</value>
  <description>
  </description>
</property>

<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://journalnode1:8485;journalnode2:8485;journalnode3:8485/cluster</value>
</property>

<property>
  <name>dfs.client.failover.proxy.provider</name>
  <value></value>
  <description>
    The prefix (plus a required nameservice ID) for the class name of the
    configured Failover proxy provider for the host.  For more detailed
    information, please consult the "Configuration Details" section of
    the HDFS High Availability documentation.
  </description>
</property>

<property>
  <name>dfs.ha.fencing.methods</name>
  <value>shell(/bin/true)</value>
</property>

<property>
  <name>dfs.namenode.name.dir</name>
  <value>/hadoop_data/namenode</value>
  <description>NameNode directory for storing namespace and transaction logs</description>
</property>

<property>
  <name>dfs.datanode.name.dir</name>
  <value>/hadoop_data/datanode</value>
  <description>DataNode directory for storing data blocks</description>
</property>

<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>How many replicas of data blocks should exist in HDFS</description>
</property>

<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/hadoop_data/journalnode/</value>
  <description>
    The directory where the journal edit files are stored.
  </description>
</property>
</configuration>

And core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://cluster</value>
    </property>
</configuration>
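To double-check that these files are actually being picked up inside a container, the values can be read back with `hdfs getconf` (this assumes the Hadoop binaries are on the PATH and `HADOOP_CONF_DIR` points at the files above — not something stated in the question):

```shell
# Each command should echo back the value set in hdfs-site.xml above;
# an empty or default result means the file is not being read.
hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.ha.namenodes.cluster
hdfs getconf -confKey dfs.namenode.rpc-address.cluster.nn2
```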

I followed this guide: https://docs.hortonworks.com/hdpdocuments/hdp2/hdp-2.6.2/bk_hadoop-high-availability/content/ha-nn-deploy-nn-cluster.html. The NameNodes, DataNodes, JournalNodes, and the YARN ResourceManager each run in a separate container on a local Docker network. I skipped the ZooKeeper setup from the guide, since for now I only want to run the startup commands for the two NameNodes. When I try to run:

hdfs namenode -bootstrapStandby

I get a "Connection refused" error for the second NameNode's port, namenode2:8020. Here are my docker run commands:

docker run -d --net hadoop --net-alias namenode1 --name namenode1 -h namenode1 -p 50070:50070 "nn"
docker run -d --net hadoop --net-alias namenode2 --name namenode2 -h namenode1 -p 50070:50070 "nn"
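For comparison, here is a sketch of how the two containers could be started so that each one has its own hostname and host-side port — note that the second command above reuses `-h namenode1` and host port 50070 (image name "nn" and network "hadoop" are taken from the commands above; the choice of 50071 as the second host port is arbitrary):

```shell
# Sketch: give each container a distinct hostname and a distinct host port.
docker run -d --net hadoop --net-alias namenode1 --name namenode1 -h namenode1 -p 50070:50070 "nn"
docker run -d --net hadoop --net-alias namenode2 --name namenode2 -h namenode2 -p 50071:50070 "nn"
```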

Do I need to append the nameservice string "cluster" to my hostnames?
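Independent of Hadoop, a plain TCP probe from the client container can narrow the "Connection refused" down to either a DNS problem or a port nobody is listening on (this is a generic bash check using `/dev/tcp`, not part of the guide):

```shell
check_port() {
  # Succeeds (exit 0) only if a TCP connection to $1:$2 opens within 3 seconds.
  timeout 3 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null
}

# From inside the "hadoop" Docker network one would run:
#   check_port namenode2 8020 || echo "namenode2:8020 not reachable"
# If "getent hosts namenode2" also fails, the network alias does not resolve;
# if DNS works but the probe fails, nothing is listening on port 8020.
```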
