Connecting to HBase using the Hadoop configuration in Spark

uajslkp6 · posted 2021-05-29 in Hadoop

I am trying to create an HBase connection inside Spark's MapPartitionsFunction, but the job fails with:

Caused by: java.io.NotSerializableException: org.apache.hadoop.conf.Configuration

I tried the following code:

SparkConf conf = new SparkConf()
        .setAppName("EnterPrise Risk Score")
        .setMaster("local");
conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
conf.set("spark.kryo.registrationRequired", "true");
conf.registerKryoClasses(new Class<?>[] {
        Class.forName("org.apache.hadoop.conf.Configuration"),
        Class.forName("org.apache.hadoop.hbase.client.Table"),
        Class.forName("com.databricks.spark.avro.DefaultSource$SerializableConfiguration")});
SparkSession sparkSession = SparkSession.builder().config(conf)
        .getOrCreate();
Configuration hbaseConf = HBaseConfiguration.create(hadoopConf);

I am creating a Dataset with the SparkSession and passing hbaseConf to open the connection to HBase.
Is there a way to connect to HBase from inside the partitions?


eeq64g8w1#

You are probably capturing the HBase configuration implicitly in a Spark action, something like this:

Configuration hbaseConfiguration = HBaseConfiguration.create();
sc.hadoopFile(inDirTrails, AvroInputFormat.class, AvroWrapper.class, NullWritable.class).mapPartitions(i -> {
    Connection connection = ConnectionFactory.createConnection(hbaseConfiguration);
    // more valid code
});

Why not create the configuration inside the partition function instead, like this:

sc.hadoopFile(inDirTrails, AvroInputFormat.class, AvroWrapper.class, NullWritable.class).mapPartitions(i -> {
    Configuration hbaseConfiguration = HBaseConfiguration.create();
    hbaseConfiguration.set("hbase.zookeeper.quorum", HBASE_ZOOKEEPER_QUORUM);
    Connection connection = ConnectionFactory.createConnection(hbaseConfiguration);
    // more valid code
});
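
Since the question uses the Dataset API with a MapPartitionsFunction, here is a minimal sketch of the same idea adapted to that API. The ZooKeeper quorum, the table name "risk_scores", the column family/qualifier "cf:score", and the assumption that the row key is the first column of the Dataset are all placeholders for illustration, not part of the original question:

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.api.java.function.MapPartitionsFunction;
import org.apache.spark.sql.Row;

public class HBaseLookup implements MapPartitionsFunction<Row, String> {

    @Override
    public Iterator<String> call(Iterator<Row> rows) throws Exception {
        // Build the HBase configuration inside the partition so that no
        // non-serializable object is captured by the closure.
        Configuration hbaseConf = HBaseConfiguration.create();
        hbaseConf.set("hbase.zookeeper.quorum", "zk-host:2181"); // placeholder quorum

        List<String> out = new ArrayList<>();
        // One connection per partition, closed automatically once the
        // partition has been processed.
        try (Connection connection = ConnectionFactory.createConnection(hbaseConf);
             Table table = connection.getTable(TableName.valueOf("risk_scores"))) { // hypothetical table
            while (rows.hasNext()) {
                Row row = rows.next();
                String rowKey = row.getString(0); // assumes the key is the first column
                Result result = table.get(new Get(Bytes.toBytes(rowKey)));
                // hypothetical column family "cf" and qualifier "score"
                out.add(Bytes.toString(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("score"))));
            }
        }
        return out.iterator();
    }
}

// Usage: dataset.mapPartitions(new HBaseLookup(), Encoders.STRING());

The sketch buffers the partition into a list so the connection can be closed before the iterator is returned; for very large partitions you would stream the results instead and close the connection when the iterator is exhausted. The key point is the same as above: nothing non-serializable ever travels from the driver to the executors.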
