Scala connecting to HBase fails with an unknown host

64jmpszr posted on 2021-05-29 in Hadoop

I wrote the following Scala code:

val config: Configuration = HBaseConfiguration.create()
config.set("hbase.zookeeper.property.clientPort", zooKeeperClientPort)
config.set("hbase.zookeeper.quorum", zooKeeperQuorum)
config.set("zookeeper.znode.parent", zooKeeperZNodeParent)
config.set("hbase.master", hbaseMaster)
config.addResource("hbase-site.xml")
config.addResource("hdfs-site.xml")
HBaseAdmin.checkHBaseAvailable(config)
val admin: HBaseAdmin = new HBaseAdmin(config)
// descriptor.addColumn(new HColumnDescriptor(Bytes.toBytes("cfbfeature")))
val conn = ConnectionFactory.createConnection(config)
table = conn.getTable(TableName.valueOf(outputTable))

Here is my full error log:
zooKeeperClientPort: 2181, zooKeeperQuorum: zk1.hbase.busdev.usw2.cmcm.com,zk2.hbase.busdev.usw2.cmcm.com,zk3.hbase.busdev.usw2.cmcm.com, zooKeeperZNodeParent: /hbase, outputTable: requestfeature, hbaseMaster: 10.2.2.62:60000
16/12/13 08:25:56 WARN util.HeapMemorySizeUtil: hbase.regionserver.global.memstore.upperLimit is deprecated by hbase.regionserver.global.memstore.size
16/12/13 08:25:56 WARN util.HeapMemorySizeUtil: hbase.regionserver.global.memstore.upperLimit is deprecated by hbase.regionserver.global.memstore.size
16/12/13 08:25:56 WARN util.HeapMemorySizeUtil: hbase.regionserver.global.memstore.upperLimit is deprecated by hbase.regionserver.global.memstore.size
16/12/13 08:25:57 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6ae9e162 connecting to ZooKeeper ensemble=zk2.hbase.busdev.usw2.cmcm.com:2181,zk1.hbase.busdev.usw2.cmcm.com:2181,zk3.hbase.busdev.usw2.cmcm.com:2181
16/12/13 08:25:57 WARN util.HeapMemorySizeUtil: hbase.regionserver.global.memstore.upperLimit is deprecated by hbase.regionserver.global.memstore.size
16/12/13 08:25:57 WARN util.DynamicClassLoader: Failed to identify the fs of dir hdfs://mycluster/hbase/lib, ignored
java.net.UnknownHostException: unknown host: mycluster
    at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:214)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1196)
    at org.apache.hadoop.ipc.Client.call(Client.java:1050)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at com.sun.proxy.$Proxy3.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
    at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:229)
    at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
    at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:86)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:833)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:623)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
    at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2508)
    at com.cmcm.datahero.streaming.actor.ToHbaseActor.preStart(ToHbaseActor.scala:51)
    at akka.actor.Actor$class.aroundPreStart(Actor.scala:472)
    at com.cmcm.datahero.streaming.actor.ToHbaseActor.aroundPreStart(ToHbaseActor.scala:16)
    at akka.actor.ActorCell.create(ActorCell.scala:580)
    at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:456)
    at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
    at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
16/12/13 08:25:57 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x356c1ee7cac04c8


anhgbhbe 1#

In the end, I put the HBase and HDFS XML config files under the subpath src/main/resources, then added those resources to the Hadoop configuration. But that was not the core of my problem: the versions of the HBase client jars must match the HBase version of the cluster. I fixed my build.sbt, shown below. I hope it helps anyone who runs into the same error.

libraryDependencies += "org.apache.hbase" % "hbase-client" % "1.0.0-cdh5.4.8"
libraryDependencies += "org.apache.hbase" % "hbase-common" % "1.0.0-cdh5.4.8"
libraryDependencies += "org.apache.hbase" % "hbase-server" % "1.0.0-cdh5.4.8"
libraryDependencies += "org.apache.hadoop" % "hadoop-core" % "2.6.0-mr1-cdh5.4.8"
libraryDependencies += "org.apache.hadoop" % "hadoop-hdfs" % "2.6.0-cdh5.4.8"
libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "2.6.0-cdh5.5.4"
// libraryDependencies += "org.apache.hbase" % "hbase-client" % "1.0.0-CDH"
// libraryDependencies += "org.apache.hbase" % "hbase-common" % "1.0.0"
// libraryDependencies += "org.apache.hbase" % "hbase-server" % "1.0.0"

//scalaSource in Compile := baseDirectory.value / "src/main/scala"
//resourceDirectory in Compile := baseDirectory.value / "src/main/resources"
unmanagedBase := baseDirectory.value / "lib"
//unmanagedResourceDirectories in Compile += baseDirectory.value / "conf"
packAutoSettings
resolvers += Resolver.sonatypeRepo("snapshots")
resolvers += "cloudera repo" at "https://repository.cloudera.com/content/repositories/releases/"
resolvers += "cloudera repo1" at "https://repository.cloudera.com/artifactory/cloudera-repos/"
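Note that the fix above only works because `config.addResource("hbase-site.xml")` resolves the name through the runtime classpath, and files under src/main/resources land there; a name that cannot be resolved typically contributes nothing to the configuration, which is how the HA nameservice `mycluster` ends up unresolvable. As a minimal sketch (plain Scala, no Hadoop dependency; `ClasspathCheck` is a hypothetical helper, and the file names are the ones from the question), you can verify the files are actually visible before building the HBase configuration:

```scala
object ClasspathCheck {
  /** Reports, for each config file name, whether it is visible on the
    * runtime classpath. If hbase-site.xml or hdfs-site.xml comes back
    * false here, addResource(String) will not pick it up. */
  def visible(names: Seq[String]): Map[String, Boolean] =
    names.map(n => n -> (getClass.getClassLoader.getResource(n) != null)).toMap

  def main(args: Array[String]): Unit =
    visible(Seq("hbase-site.xml", "hdfs-site.xml")).foreach {
      case (name, ok) => println(s"$name -> ${if (ok) "found" else "NOT on classpath"}")
    }
}
```

Running this before `HBaseConfiguration.create()` makes the failure mode obvious: if hdfs-site.xml is missing from the classpath, the client cannot map `hdfs://mycluster` to real NameNode hosts.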
