Connecting to a remote HBase with Spark Scala

64jmpszr · asked 2021-05-29 · in Hadoop

I have Hadoop and Spark configured on Windows (my local machine), and a Cloudera VM installed on the same machine, which contains HBase.
I am trying to use Spark Streaming to extract data and write it into HBase inside the VM.
Is it possible to do this?
My attempt:
package hbase

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.ConnectionFactory

object Connect {

  def main(args: Array[String]): Unit = {
    // Point the client at the ZooKeeper quorum running inside the VM.
    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set("hbase.zookeeper.quorum", "192.168.117.133")
    hbaseConf.set("hbase.zookeeper.property.clientPort", "2181")

    val connection = ConnectionFactory.createConnection(hbaseConf)
    val admin = connection.getAdmin()

    // List the existing tables to verify the connection works.
    val listtables = admin.listTables()
    listtables.foreach(println)

    connection.close()
  }
}

Error:

18/08/08 21:05:09 INFO ZooKeeper: Initiating client connection, connectString=192.168.117.133:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$13/1357491107@12d1bfb1
18/08/08 21:05:15 INFO ClientCnxn: Opening socket connection to server 192.168.117.133/192.168.117.133:2181. Will not attempt to authenticate using SASL (unknown error)
18/08/08 21:05:15 INFO ClientCnxn: Socket connection established to 192.168.117.133/192.168.117.133:2181, initiating session
18/08/08 21:05:15 INFO ClientCnxn: Session establishment complete on server 192.168.117.133/192.168.117.133:2181, sessionid = 0x16518f57f950012, negotiated timeout = 40000
18/08/08 21:05:16 WARN ConnectionUtils: Can not resolve quickstart.cloudera, please check your network
java.net.UnknownHostException: quickstart.cloudera
    at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress$2.lookupAllHostAddr(Unknown Source)
    at java.net.InetAddress.getAddressesFromNameService(Unknown Source)
    at java.net.InetAddress.getAllByName0(Unknown Source)
    at java.net.InetAddress.getAllByName(Unknown Source)
    at java.net.InetAddress.getAllByName(Unknown Source)
    at java.net.InetAddress.getByName(Unknown Source)
    at org.apache.hadoop.hbase.client.ConnectionUtils.getStubKey(ConnectionUtils.java:233)
    at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStubNoRetries(ConnectionImplementation.java:1126)
    at org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionImplementation.java:1148)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1213)
    at org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1202)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3055)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3047)
    at org.apache.hadoop.hbase.client.HBaseAdmin.listTables(HBaseAdmin.java:460)
    at org.apache.hadoop.hbase.client.HBaseAdmin.listTables(HBaseAdmin.java:444)
    at azure.iothub$.main(iothub.scala:35)
    at azure.iothub.main(iothub.scala)

Answer 1 (sc4hvdpw):

Based on this error, you cannot use quickstart.cloudera, because the network stack is trying to resolve it via DNS, and the external router does not know about your VM.
You need to use localhost instead, and then make sure the VM is configured to forward the ports you need to connect on.
However, I believe ZooKeeper is returning that hostname to your code, so you will have to edit the hosts file on your host OS and add an entry.
For example:

127.0.0.1 localhost quickstart.cloudera
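After adding that entry, a quick way to confirm that the client machine can now resolve the master's hostname is a plain JVM lookup before re-running the Spark job. This is a minimal sketch; quickstart.cloudera is the hostname taken from the error above:

```scala
import java.net.InetAddress

object ResolveCheck {
  def main(args: Array[String]): Unit = {
    // Throws java.net.UnknownHostException (the same failure as in the
    // stack trace above) if the hosts entry is missing or wrong.
    val addr = InetAddress.getByName("quickstart.cloudera")
    println(s"quickstart.cloudera -> ${addr.getHostAddress}")
  }
}
```

If this prints the address you mapped in the hosts file, the HBase client should be able to reach the master under that name.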

Alternatively, you could go into zookeeper-shell, or into Cloudera Manager (under the HBase configuration), and change quickstart.cloudera so that it returns the address 192.168.117.133 instead.
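Once the connection succeeds, writing records into the table follows the same pattern as the listing code in the question. A hedged sketch: the table name comes from the question, but the row key, column family, and qualifier here are made-up placeholders, and the table must already exist with that column family (create it first in the HBase shell):

```scala
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes

object WriteExample {
  def main(args: Array[String]): Unit = {
    val conf = HBaseConfiguration.create()
    conf.set("hbase.zookeeper.quorum", "192.168.117.133")
    conf.set("hbase.zookeeper.property.clientPort", "2181")

    val connection = ConnectionFactory.createConnection(conf)
    try {
      // "cf" / "col" / "row1" are illustrative names, not from the question.
      val table = connection.getTable(TableName.valueOf("Acadgild_spark_Hbase"))
      val put = new Put(Bytes.toBytes("row1"))
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("value"))
      table.put(put)
      table.close()
    } finally {
      connection.close()
    }
  }
}
```

In a Spark Streaming job you would run this Put logic inside foreachRDD/foreachPartition, creating the connection per partition rather than per record.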
