I have set up a Hadoop cluster, but after adding Kerberos authentication the datanodes can no longer connect to the namenode.
I have verified that the namenode starts successfully and logs no errors. I start all services as the user "hduser".
$ sudo netstat -tuplen
...
tcp 0 0 10.28.94.150:8019 0.0.0.0:* LISTEN 1001 20218 1518/java
tcp 0 0 10.28.94.150:50070 0.0.0.0:* LISTEN 1001 20207 1447/java
tcp 0 0 10.28.94.150:9000 0.0.0.0:* LISTEN 1001 20235 1447/java
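For context, the namenode side is kerberized as well; a minimal sketch of what that side of hdfs-site.xml typically looks like (the keytab path and principals here simply mirror the datanode excerpt further below and are assumptions, not taken from the actual cluster):

<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/opt/hadoop/etc/hadoop/hdfs.keytab</value>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hduser/_HOST@FDATA.COM</value>
</property>
<property>
  <name>dfs.namenode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_HOST@FDATA.COM</value>
</property>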
Datanode
I start the datanode as root so that jsvc can bind the service to privileged ports (per the Secure DataNode documentation):
$ sudo -E sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /opt/hadoop-2.7.3/logs//hadoop-hduser-datanode-STWHDDN01.out
The datanode fails to connect to the namenodes:
...
2018-01-08 09:25:40,051 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = hduser
2018-01-08 09:25:40,052 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup
2018-01-08 09:25:40,114 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2018-01-08 09:25:40,125 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2018-01-08 09:25:40,152 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2018-01-08 09:25:40,219 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: ha-cluster
2018-01-08 09:25:41,189 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: ha-cluster
2018-01-08 09:25:41,226 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-01-08 09:25:41,227 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2018-01-08 09:25:42,297 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: STWHDRM02/10.28.94.151:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2018-01-08 09:25:42,300 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: STWHDRM01/10.28.94.150:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
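To get more detail on why the IPC client keeps retrying, it can help to turn on Kerberos and Hadoop debug output before starting the datanode; a sketch (these are standard JVM/Hadoop switches, not something from the original setup):

$ export HADOOP_OPTS="$HADOOP_OPTS -Dsun.security.krb5.debug=true"
$ export HADOOP_ROOT_LOGGER=DEBUG,console
$ sudo -E sbin/hadoop-daemon.sh start datanode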
datanode hdfs-site.xml (excerpt):
<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/opt/hadoop/etc/hadoop/hdfs.keytab</value>
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>hduser/_HOST@FDATA.COM</value>
</property>
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:1004</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:1006</value>
</property>
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
</property>
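One thing worth double-checking in a setup like this: the datanode also needs the namenode's principal configured locally in order to authenticate to it. A minimal sketch of that property (the principal here mirrors the datanode principal above and is an assumption):

<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hduser/_HOST@FDATA.COM</value>
</property>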
I set HADOOP_SECURE_DN_USER=hduser and JSVC_HOME in hadoop-env.sh.
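For reference, those settings typically look like this in hadoop-env.sh (the JSVC_HOME path is an assumption; it depends on where jsvc is installed):

export HADOOP_SECURE_DN_USER=hduser
export JSVC_HOME=/usr/bin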
hdfs.keytab on the datanode:
$ klist -ke etc/hadoop/hdfs.keytab
Keytab name: FILE:etc/hadoop/hdfs.keytab
KVNO Principal
---- --------------------------------------------------------------------------
1 hduser/stwhddn01@FDATA.COM (aes256-cts-hmac-sha1-96)
1 hduser/stwhddn01@FDATA.COM (aes128-cts-hmac-sha1-96)
1 hduser/stwhddn01@FDATA.COM (des3-cbc-sha1)
1 hduser/stwhddn01@FDATA.COM (arcfour-hmac)
1 hduser/stwhddn01@FDATA.COM (des-hmac-sha1)
1 hduser/stwhddn01@FDATA.COM (des-cbc-md5)
1 HTTP/stwhddn01@FDATA.COM (aes256-cts-hmac-sha1-96)
1 HTTP/stwhddn01@FDATA.COM (aes128-cts-hmac-sha1-96)
1 HTTP/stwhddn01@FDATA.COM (des3-cbc-sha1)
1 HTTP/stwhddn01@FDATA.COM (arcfour-hmac)
1 HTTP/stwhddn01@FDATA.COM (des-hmac-sha1)
1 HTTP/stwhddn01@FDATA.COM (des-cbc-md5)
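One way to confirm the keytab itself is usable is to obtain a ticket from it manually and inspect it (this check was not part of the original post):

$ kinit -kt etc/hadoop/hdfs.keytab hduser/stwhddn01@FDATA.COM
$ klist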
OS: CentOS 7
Hadoop: 2.7.3
Kerberos: MIT 1.5.1
When the datanode was run as the root user, it did not authenticate with Kerberos.
Any ideas?
1 Answer
I found the problem: /etc/hosts needed to be changed so that 127.0.0.1 maps only to localhost.
Before
After
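A minimal sketch of the change (the exact entries from the post were not preserved; the hostname and the datanode's address here are assumptions based on the logs above). The likely mechanism is that the _HOST placeholder in the Kerberos principals is expanded from the machine's resolved hostname, so a hostname that resolves to 127.0.0.1 produces a principal mismatch:

# Before: the datanode's own hostname was on the loopback line
127.0.0.1   localhost stwhddn01
# After: loopback maps only to localhost; the hostname maps to the real address
127.0.0.1   localhost
10.28.94.x  stwhddn01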
I am still wondering why the old mapping worked in an environment without Kerberos authentication.