I am using Spark standalone 1.6.x to connect to a Kerberos-enabled Hadoop 2.7.x cluster. Inside a Spark Streaming map function I log in from a keytab and then access HDFS:
// Imports used by the snippet below.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.streaming.api.java.JavaDStream;

JavaDStream<String> status = stream.map(new Function<String, String>() {
    public String call(String arg0) throws Exception {
        // Build an HDFS configuration with Kerberos authentication enabled.
        Configuration conf = new Configuration();
        conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
        conf.set("hadoop.security.authentication", "kerberos");
        conf.set("dfs.namenode.kerberos.principal", "hdfs/_HOST@REALM");
        UserGroupInformation.setConfiguration(conf);
        // Log in from the keytab and make that UGI the current login user.
        UserGroupInformation.setLoginUser(
                UserGroupInformation.loginUserFromKeytabAndReturnUGI("abc", "~/abc.ketyab"));
        System.out.println("Logged in successfully.");
        // List the HDFS root directory against the active namenode.
        FileSystem fs = FileSystem.get(new URI(activeNamenodeURI), conf);
        for (FileStatus fileStatus : fs.listStatus(new Path("/"))) {
            System.out.println(fileStatus.getPath().toString());
        }
        return "success";
    }
});
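For reference, here is the same login-and-list sequence reduced to a plain Java program, with the HDFS call wrapped in ugi.doAs(). This is only a sketch of what I am trying to do: the doAs() wrapper is a variation I have been experimenting with, and the principal, keytab path and namenode URI ("hdfs://active-namenode:8020") are placeholders for the values in my environment.

import java.net.URI;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosHdfsCheck {
    public static void main(String[] args) throws Exception {
        // Same Kerberos settings as in the map function above.
        final Configuration conf = new Configuration();
        conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
        conf.set("hadoop.security.authentication", "kerberos");
        conf.set("dfs.namenode.kerberos.principal", "hdfs/_HOST@REALM");
        UserGroupInformation.setConfiguration(conf);

        // Log in from the keytab (placeholder path) and run the HDFS call as that
        // user, so the RPC layer picks up the Kerberos credentials of this UGI.
        UserGroupInformation ugi =
                UserGroupInformation.loginUserFromKeytabAndReturnUGI("abc", "/home/abc/abc.keytab");
        ugi.doAs(new PrivilegedExceptionAction<Void>() {
            public Void run() throws Exception {
                FileSystem fs = FileSystem.get(new URI("hdfs://active-namenode:8020"), conf);
                for (FileStatus s : fs.listStatus(new Path("/"))) {
                    System.out.println(s.getPath().toString());
                }
                return null;
            }
        });
    }
}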
But I get the exception below:
User: @REALM (auth:KERBEROS) cause: java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "hostname1/0.0.0.0"; destination host is: "hostname2":8020;
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
    at org.apache.hadoop.ipc.Client.call(Client.java:1472)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy44.create(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:295)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy45.create(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1725)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1668)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1593)
    at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:397)
    at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
    at com..HdfsFileWriter.createOutputFile(HdfsFileWriter.java:354)
    ... 21 more
Caused by: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
    at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:680)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:643)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:730)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
    at org.apache.hadoop.ipc.Client.call(Client.java:1438)
    ... 43 more
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
    at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
    at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
    at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:553)
    at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:368)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:722)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:718)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)