ERROR TableInputFormat: java.lang.NullPointerException at org.apache.hadoop.hbase.TableName.valueOf

6tqwzwtp · posted 2021-06-03 in Hadoop

I am trying to read data from HBase using Spark. The versions I am using are Spark 1.3.1 and HBase 1.1.1. I get the following error:

ERROR TableInputFormat: java.lang.NullPointerException
    at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:417)
    at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:159)
    at org.apache.hadoop.hbase.mapreduce.TableInputFormat.setConf(TableInputFormat.java:101)
    at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:91)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.ShuffleDependency.<init>(Dependency.scala:82)
    at org.apache.spark.rdd.ShuffledRDD.getDependencies(ShuffledRDD.scala:80)
    at org.apache.spark.rdd.RDD$$anonfun$dependencies$2.apply(RDD.scala:206)
    at org.apache.spark.rdd.RDD$$anonfun$dependencies$2.apply(RDD.scala:204)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.dependencies(RDD.scala:204)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal(DAGScheduler.scal

The code is as follows:

public static void main( String[] args )
{
    String TABLE_NAME = "Hello";
    HTable table=null;
    SparkConf sparkConf = new SparkConf();
    sparkConf.setAppName("Data Reader").setMaster("local[1]");
    sparkConf.set("spark.executor.extraClassPath", "$(hbase classpath)");

    JavaSparkContext sparkContext = new JavaSparkContext(sparkConf);

    Configuration hbConf = HBaseConfiguration.create();
    hbConf.set("zookeeper.znode.parent", "/hbase-unsecure");
    try {
         table = new HTable(hbConf, Bytes.toBytes(TABLE_NAME));

    } catch (IOException e) {

        e.printStackTrace();
    }

    JavaPairRDD<ImmutableBytesWritable, Result> hBaseRDD = sparkContext
            .newAPIHadoopRDD(
                    hbConf,
                    TableInputFormat.class,
                    org.apache.hadoop.hbase.io.ImmutableBytesWritable.class,
                    org.apache.hadoop.hbase.client.Result.class);
    hBaseRDD.coalesce(1, true);
    System.out.println("Count "+hBaseRDD.count());
    //.saveAsTextFile("hBaseRDD");
    try {
        table.close();
        sparkContext.close();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}

I cannot solve this problem. I am using the Hortonworks sandbox.


qyyhg6bp · answer #1

You wrote:

try {
     table = new HTable(hbConf, Bytes.toBytes(TABLE_NAME));

} catch (IOException e) {

     e.printStackTrace();
}

If you are using the 1.1.1 API:
In the devapidocs I can only see two constructors:
protected HTable(ClusterConnection conn, BufferedMutatorParams params) — for internal testing.
protected HTable(TableName tableName, ClusterConnection connection, TableConfiguration tableConfig, RpcRetryingCallerFactory rpcCallerFactory, RpcControllerFactory rpcControllerFactory, ExecutorService pool) — creates an object to access an HBase table.
The params argument of the first constructor is built with BufferedMutatorParams(TableName tableName), and TableName has no public constructor — you obtain one via TableName.valueOf.
So you have to initialize your HTable like this:

table = new HTable(hbConf, new BufferedMutatorParams(TableName.valueOf(TABLE_NAME)));
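
Both of those constructors are protected, though, so in application code the usual public route in the 1.x client is ConnectionFactory. A minimal sketch, reusing the question's configuration and table name (the class name HTableAccess is just illustrative):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

public class HTableAccess {
    public static void main(String[] args) throws IOException {
        Configuration hbConf = HBaseConfiguration.create();
        hbConf.set("zookeeper.znode.parent", "/hbase-unsecure");
        // ConnectionFactory is the public entry point in the 1.x client;
        // the remaining HTable constructors are protected.
        try (Connection connection = ConnectionFactory.createConnection(hbConf);
             Table table = connection.getTable(TableName.valueOf("Hello"))) {
            System.out.println("Opened table: " + table.getName());
        }
    }
}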

If you are using the 0.94 API:
The constructors of HTable are:
HTable(byte[] tableName, HConnection connection) — creates an object to access an HBase table.
HTable(byte[] tableName, HConnection connection, ExecutorService pool) — creates an object to access an HBase table.
HTable(org.apache.hadoop.conf.Configuration conf, byte[] tableName) — creates an object to access an HBase table.
HTable(org.apache.hadoop.conf.Configuration conf, byte[] tableName, ExecutorService pool) — creates an object to access an HBase table.
HTable(org.apache.hadoop.conf.Configuration conf, String tableName) — creates an object to access an HBase table.
So, using the last one, you only need to pass the table name as a String, not as a byte[]:

table = new HTable(hbConf, TABLE_NAME);

That should be fine.
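Independently of the constructor, note that the stack trace shows the NullPointerException thrown from TableInputFormat.setConf: the input format takes the table name from the Hadoop Configuration key TableInputFormat.INPUT_TABLE, not from the HTable you created. A minimal sketch that sets this key before building the RDD, mirroring the question's code (the class name HBaseReader is illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class HBaseReader {
    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf()
                .setAppName("Data Reader").setMaster("local[1]");
        JavaSparkContext sparkContext = new JavaSparkContext(sparkConf);

        Configuration hbConf = HBaseConfiguration.create();
        hbConf.set("zookeeper.znode.parent", "/hbase-unsecure");
        // TableInputFormat reads the table name from this configuration key
        // ("hbase.mapreduce.inputtable"); leaving it unset makes setConf pass
        // null to TableName.valueOf, which is exactly the NPE in the trace.
        hbConf.set(TableInputFormat.INPUT_TABLE, "Hello");

        JavaPairRDD<ImmutableBytesWritable, Result> hBaseRDD = sparkContext
                .newAPIHadoopRDD(hbConf, TableInputFormat.class,
                        ImmutableBytesWritable.class, Result.class);
        System.out.println("Count " + hBaseRDD.count());
        sparkContext.close();
    }
}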
