Spark writing data to Phoenix: cache of region boundaries is out of date

ifmq2ha2 · posted 2021-05-27 in Spark

HBase version: 1.4.13. Phoenix version: apache-phoenix-4.14.3-hbase-1.4-bin. Spark: standalone in IntelliJ IDEA, version 2.4.5.
I have already tested HBase operations with the plain HBase API: I can create tables, put data, and delete data.
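
For reference, that smoke test looked roughly like the following (a minimal sketch, not the exact code; the table name, column family, and row contents are placeholders, and the ZooKeeper quorum is assumed to be the same one the Spark job uses):

import org.apache.hadoop.hbase.{HBaseConfiguration, HColumnDescriptor, HTableDescriptor, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Delete, Put}
import org.apache.hadoop.hbase.util.Bytes

object HBaseSmokeTest {
  def main(args: Array[String]): Unit = {
    val conf = HBaseConfiguration.create()
    conf.set("hbase.zookeeper.quorum", "hadoop136,hadoop137,hadoop138")
    val conn = ConnectionFactory.createConnection(conf)
    try {
      val admin = conn.getAdmin
      val name = TableName.valueOf("SMOKE_TEST") // placeholder table name
      if (!admin.tableExists(name)) {
        // HTableDescriptor/HColumnDescriptor are the HBase 1.x-era admin API.
        val desc = new HTableDescriptor(name)
        desc.addFamily(new HColumnDescriptor("cf"))
        admin.createTable(desc)
      }
      val table = conn.getTable(name)
      val put = new Put(Bytes.toBytes("row1"))
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"))
      table.put(put)                                  // put data
      table.delete(new Delete(Bytes.toBytes("row1"))) // delete data
      table.close()
    } finally conn.close()
  }
}

These operations all succeed against the cluster, so basic HBase connectivity itself seems fine.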
The main exception, shown below, is one I have run into repeatedly after restarting HBase and ZooKeeper on my VMs and after some hbck operations:
org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 (XCL08): Cache of region boundaries are out of date.
Any ideas?
The full stack trace is shown below:

20/06/29 13:31:54 INFO ConnectionQueryServicesImpl: HConnection
 established. Stacktrace for informational purposes:
 hconnection-0xd60d615 java.lang.Thread.getStackTrace(Thread.java:1559)
     org.apache.phoenix.util.LogUtil.getCallerStackTrace(LogUtil.java:55)
     org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:431)
     org.apache.phoenix.query.ConnectionQueryServicesImpl.access$400(ConnectionQueryServicesImpl.java:269)
     org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2610)
     org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2586)
     org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
     org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2586)
     org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
     org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:143)
     org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
     java.sql.DriverManager.getConnection(DriverManager.java:664)
     java.sql.DriverManager.getConnection(DriverManager.java:208)
     org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:113)
     org.apache.phoenix.mapreduce.util.ConnectionUtil.getOutputConnection(ConnectionUtil.java:97)
     org.apache.phoenix.mapreduce.util.ConnectionUtil.getOutputConnection(ConnectionUtil.java:92)
     org.apache.phoenix.mapreduce.util.ConnectionUtil.getOutputConnection(ConnectionUtil.java:71)
     org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getUpsertColumnMetadataList(PhoenixConfigurationUtil.java:306)
     org.apache.phoenix.spark.ProductRDDFunctions$$anonfun$1.apply(ProductRDDFunctions.scala:41)
     org.apache.phoenix.spark.ProductRDDFunctions$$anonfun$1.apply(ProductRDDFunctions.scala:37)
     org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:800)
     org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:800)
     org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
     org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
     org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
     org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
     org.apache.spark.scheduler.Task.run(Task.scala:109)
     org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:344)
     java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     java.lang.Thread.run(Thread.java:748)

**20/06/29 13:32:33 INFO RpcRetryingCaller: Call exception, tries=10, retries=35, started=38474 ms ago, cancelled=false,
 msg=org.apache.hadoop.hbase.NotServingRegionException: Region
 hbase:meta,,1 is not online on hadoop138,16020,1593407163133**
        at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:3072)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1271)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:2681)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3015)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36804)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
      row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740,
 hostname=hadoop138,16020,1593228907567, seqNum=0
     20/06/29 13:32:43 INFO RpcRetryingCaller: Call exception, tries=11, retries=35, started=48489 ms ago, cancelled=false,
 msg=org.apache.hadoop.hbase.NotServingRegionException: Region
 hbase:meta,,1 is not online on hadoop138,16020,1593407163133
        at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:3072)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1271)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:2681)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3015)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36804)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
      row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740,
 hostname=hadoop138,16020,1593228907567, seqNum=0
     20/06/29 13:32:43 INFO ConnectionManager$HConnectionImplementation: Closing zookeeper
 sessionid=0x10000078986000c
     20/06/29 13:32:43 INFO ZooKeeper: Session: 0x10000078986000c closed
     20/06/29 13:32:43 INFO ClientCnxn: EventThread shut down
     20/06/29 13:32:43 INFO QueryLoggerDisruptor: Shutting down QueryLoggerDisruptor..
     20/06/29 13:32:43 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
   **org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 (XCL08): Cache of region boundaries are out of date.**
        at org.apache.phoenix.exception.SQLExceptionCode$14.newException(SQLExceptionCode.java:365)
        at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
        at org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:189)
        at org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:169)
        at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:140)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1282)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1576)
        at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2731)
        at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1115)
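
Note that the trace shows the exception being thrown while the Phoenix client is still initializing its connection (ConnectionQueryServicesImpl.init / ensureTableCreated for the SYSTEM tables), before any rows are written. The same code path can be exercised outside Spark with a bare JDBC connection. A minimal sketch, assuming the thick Phoenix driver (the phoenix-client jar) is on the classpath; the DDL is the one from the documented phoenix-spark example:

import java.sql.DriverManager

object PhoenixConnTest {
  def main(args: Array[String]): Unit = {
    // Opening the connection alone runs ConnectionQueryServicesImpl.init,
    // which is where the StaleRegionBoundaryCacheException above is raised.
    val conn = DriverManager.getConnection("jdbc:phoenix:hadoop136,hadoop137,hadoop138")
    try {
      // saveToPhoenix expects the target table to already exist on the
      // Phoenix side; this is the DDL from the phoenix-spark example docs.
      conn.createStatement().execute(
        "CREATE TABLE IF NOT EXISTS OUTPUT_TEST_TABLE " +
          "(id BIGINT NOT NULL PRIMARY KEY, col1 VARCHAR, col2 INTEGER)")
    } finally conn.close()
  }
}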

The code is not complicated; it is essentially the phoenix-spark integration example, albeit the deprecated one:

import org.apache.spark.SparkContext

object tp {
  def main(args: Array[String]): Unit = {
    // Brings the implicit saveToPhoenix extension on RDDs into scope.
    import org.apache.phoenix.spark._

    val sc = new SparkContext("local", "phoenix-test")
    val dataSet = List((1L, "1", 1), (2L, "2", 2), (3L, "3", 3))

    // Writes each tuple as an UPSERT into the Phoenix table; the columns
    // are matched positionally to the tuple fields.
    sc.parallelize(dataSet)
      .saveToPhoenix(
        "OUTPUT_TEST_TABLE",
        Seq("ID", "COL1", "COL2"),
        zkUrl = Some("hadoop136,hadoop137,hadoop138")
      )

    sc.stop()
  }
}
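
For what it's worth, the non-deprecated equivalent documented for phoenix-spark 4.x is the DataFrame write below. It goes through the same Phoenix connection setup, so presumably it would hit the same error until the region state is healthy (a sketch, not tested here):

import org.apache.spark.sql.{SaveMode, SparkSession}

object tpDataFrame {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local").appName("phoenix-test").getOrCreate()
    import spark.implicits._

    val df = Seq((1L, "1", 1), (2L, "2", 2), (3L, "3", 3)).toDF("ID", "COL1", "COL2")

    // "org.apache.phoenix.spark" is the DataSource registered by phoenix-spark;
    // only SaveMode.Overwrite is supported (writes are Phoenix UPSERTs).
    df.write
      .format("org.apache.phoenix.spark")
      .mode(SaveMode.Overwrite)
      .option("table", "OUTPUT_TEST_TABLE")
      .option("zkUrl", "hadoop136,hadoop137,hadoop138")
      .save()

    spark.stop()
  }
}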

No answers yet.
