Exception "java.lang.UnsupportedOperationException: empty.tail"

xwbd5t1u · posted 2021-06-10 in HBase

We are running HDP 2.4.2; Spark 1.6 was built against Scala 2.10.5, and the HBase version is 1.1.2.2.4.2.0-258.
The environment is a basic dev cluster (<10 nodes) with HBase and Spark running in cluster mode.
Trying to fetch some data from HBase into a Spark DataFrame using the Spark-HBase connector fails with the following error:

Exception in thread "main" java.lang.UnsupportedOperationException: empty.tail
    at scala.collection.TraversableLike$class.tail(TraversableLike.scala:445)
    at scala.collection.mutable.ArraySeq.scala$collection$IndexedSeqOptimized$super$tail(ArraySeq.scala:45)
    at scala.collection.IndexedSeqOptimized$class.tail(IndexedSeqOptimized.scala:123)
    at scala.collection.mutable.ArraySeq.tail(ArraySeq.scala:45)
    at org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog.initRowKey(HBaseTableCatalog.scala:150)
    at org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog.<init>(HBaseTableCatalog.scala:164)
    at org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog$.apply(HBaseTableCatalog.scala:239)
    at hbaseReaderHDPCon$.main(hbaseReaderHDPCon.scala:42)
    at hbaseReaderHDPCon.main(hbaseReaderHDPCon.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

This is happening at line 42 of my code:

val cat =
      s"""{
          |"table":{"namespace":"myTable", "name":"person", "tableCoder":"PrimitiveType"},
          |"rowkey":"ROW",
          |"columns":{
            |"col0":{"cf":"person", "col":"detail", "type":"string"}
          |}
          |}""".stripMargin
    val scon = new SparkConf()
    val sparkContext = new SparkContext(scon)

0aydgbwb1#

Judging from your code, I think the "columns" field of your catalog is missing the rowkey mapping. Below is an example that works for me. I am using Spark 2.0 (SparkSession), but it should work with Spark 1.6 as well:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog

    // Note that the rowkey ("id") is also declared under "columns",
    // mapped to the special column family "rowkey".
    val catalog =
        s"""{
            |"table":{"namespace":"default", "name":"person"},
            |"rowkey":"id",
            |"columns":{
            |"id":{"cf":"rowkey", "col":"id", "type":"string"},
            |"name":{"cf":"info", "col":"name", "type":"string"},
            |"age":{"cf":"info", "col":"age", "type":"string"}
            |}
            |}""".stripMargin

    val spark = SparkSession
        .builder()
        .appName("HbaseWriteTest")
        .getOrCreate()

    val df = spark
        .read
        .options(
            Map(
                HBaseTableCatalog.tableCatalog -> catalog
            )
        )
        .format("org.apache.spark.sql.execution.datasources.hbase")
        .load()
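
For reference, here is a minimal sketch of the catalog from the question with the missing rowkey mapping added (the namespace, table, and column names are copied from the original post; exposing the row key as a string column named "ROW" is an assumption):

    // Sketch: the same catalog as in the question, plus a rowkey entry.
    // The entry's name matches the "rowkey" field ("ROW") and its "cf" is the
    // literal string "rowkey"; without at least one such entry,
    // HBaseTableCatalog.initRowKey calls .tail on an empty collection,
    // which is exactly the empty.tail exception above.
    val fixedCat =
        s"""{
            |"table":{"namespace":"myTable", "name":"person", "tableCoder":"PrimitiveType"},
            |"rowkey":"ROW",
            |"columns":{
            |"ROW":{"cf":"rowkey", "col":"ROW", "type":"string"},
            |"col0":{"cf":"person", "col":"detail", "type":"string"}
            |}
            |}""".stripMargin

With a column mapped to the "rowkey" family present, initRowKey no longer operates on an empty column list, so the exception at HBaseTableCatalog.scala:150 should go away.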
