Unable to write a Spark Dataset to HBase from a spark-shell script

Asked by 5anewei6 on 2021-07-13, tagged Hbase

I am trying to write to an HBase table with Spark, following the hbase-spark connector example from the link. I start spark-shell with the following command:

$ spark-shell --jars /opt/cloudera/parcels/CDH/jars/hbase-spark-2.1.0-cdh6.2.1.jar,/opt/cloudera/parcels/CDH/jars/hbase-client-2.1.0-cdh6.2.1.jar

The code:

val sql = spark.sqlContext
import java.sql.Date

case class Person(name: String, email: String, birthDate: Date, height: Float)
val personDS = Seq(
    Person("alice", "alice@alice.com", Date.valueOf("2000-01-01"), 4.5f),
    Person("bob", "bob@bob.com", Date.valueOf("2001-10-17"), 5.1f)).
    toDS

personDS.write.format("org.apache.hadoop.hbase.spark").
    option("hbase.columns.mapping",
           "name STRING :key, email STRING c:email, birthDate DATE p:birthDate, height FLOAT p:height").
    option("hbase.table", "test").
    option("hbase.spark.use.hbasecontext", false).
    option("spark.hadoop.validateOutputSpecs", false).
    save()

The exception is:

java.lang.NullPointerException
  at org.apache.hadoop.hbase.spark.HBaseRelation.<init>(DefaultSource.scala:139)
  at org.apache.hadoop.hbase.spark.DefaultSource.createRelation(DefaultSource.scala:79)
  at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
  at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:668)
  at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:668)
  at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
  at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:668)
  at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:276)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:270)
  ... 49 elided

What is the cause of the exception, and how can I avoid it?


dbf7pr2w1#

I suspect the NPE happens because an HBaseContext should be properly initialized before the hbase-spark connector can look up the table you are referencing in hbase:meta and create a data source. I.e., follow the "Customizing HBase configuration" section from the link, e.g.:

import org.apache.hadoop.hbase.spark.HBaseContext
import org.apache.hadoop.hbase.HBaseConfiguration

// Constructing an HBaseContext also registers it, so the connector can pick it up later.
new HBaseContext(spark.sparkContext, new HBaseConfiguration())
...
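For completeness, a minimal sketch of the whole sequence in spark-shell: create the HBaseContext first, then rerun the write from the question. The hbase.spark.use.hbasecontext and validateOutputSpecs options are dropped here on the assumption that the defaults are fine once a context is registered.

import org.apache.hadoop.hbase.spark.HBaseContext
import org.apache.hadoop.hbase.HBaseConfiguration

// Register an HBaseContext before any connector read/write.
new HBaseContext(spark.sparkContext, new HBaseConfiguration())

// Same write as in the question; with a registered context it should no longer NPE.
personDS.write.format("org.apache.hadoop.hbase.spark").
    option("hbase.columns.mapping",
           "name STRING :key, email STRING c:email, birthDate DATE p:birthDate, height FLOAT p:height").
    option("hbase.table", "test").
    save()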

wnrlj8wa2#

There is another way to initialize the HBaseContext:

import org.apache.hadoop.fs.Path
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.spark.HBaseContext

val conf = HBaseConfiguration.create()
// use your actual path to hbase-site.xml
conf.addResource(new Path("/etc/hbase/conf.cloudera.hbase/hbase-site.xml"))
new HBaseContext(sc, conf)
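
Once the context is in place, the write from the question can be retried. As a quick sanity check, the same table can be read back through the connector with the same column mapping (a sketch based on the connector's documented read path, not verified on this cluster):

// Read the rows back to verify the write, reusing the mapping and table name from the question.
val readBack = spark.read.format("org.apache.hadoop.hbase.spark").
    option("hbase.columns.mapping",
           "name STRING :key, email STRING c:email, birthDate DATE p:birthDate, height FLOAT p:height").
    option("hbase.table", "test").
    load()

readBack.show(false)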
