Cannot load a DataFrame with phoenix-spark

Asked by 92dk7w1h on 2021-06-09 in Hbase

I created a table called "test" in Phoenix. I can query it from Phoenix and scan it from the HBase shell. I then tried to load it with the phoenix-spark library as shown below, but the DataFrame comes back empty:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.phoenix.spark._
val hadoopConf: Configuration = new Configuration()
val hbConf: Configuration = HBaseConfiguration.create(hadoopConf)
val df = sqlContext.phoenixTableAsDataFrame("TEST", Array("foo", "bar"), conf = hbConf)

Instead, this is what I get:

16/05/11 11:10:47 INFO MemoryStore: ensureFreeSpace(413840) called with curMem=0, maxMem=4445479895
16/05/11 11:10:47 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 404.1 KB, free 4.1 GB)
16/05/11 11:10:47 INFO MemoryStore: ensureFreeSpace(27817) called with curMem=413840, maxMem=4445479895
16/05/11 11:10:47 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 27.2 KB, free 4.1 GB)
16/05/11 11:10:47 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:39319 (size: 27.2 KB, free: 4.1 GB)
16/05/11 11:10:47 INFO SparkContext: Created broadcast 0 from newAPIHadoopRDD at PhoenixRDD.scala:41
16/05/11 11:10:47 INFO RecoverableZooKeeper: Process identifier=hconnection-0x72187492 connecting to ZooKeeper ensemble=localhost:2181
16/05/11 11:10:47 INFO ZooKeeper: Client environment:zookeeper.version=3.4.6-2950--1, built on 09/30/2015 17:44 GMT
16/05/11 11:10:47 INFO ZooKeeper: Client environment:host.name=some.server.com
16/05/11 11:10:47 INFO ZooKeeper: Client environment:java.version=1.8.0_40
16/05/11 11:10:47 INFO ZooKeeper: Client environment:java.vendor=Oracle Corporation
16/05/11 11:10:47 INFO ZooKeeper: Client environment:java.home=/usr/jdk64/jdk1.8.0_40/jre
16/05/11 11:10:47 INFO ZooKeeper: Client environment:java.class.path=/usr/hdp/2.3.2.0-2950/spark/conf/:/usr/hdp/2.3.2.0-2950/spark/lib/spark-assembly-1.4.1.2.3.2.0-2950-hadoop2.7.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/spark/lib/datanucleus-api-jdo-3.2.6.jar:/usr/hdp/2.3.2.0-2950/spark/lib/datanucleus-core-3.2.10.jar:/usr/hdp/2.3.2.0-2950/spark/lib/datanucleus-rdbms-3.2.9.jar:/usr/hdp/current/hadoop-client/conf/:/usr/hdp/current/hadoop-client/hadoop-azure.jar:/usr/hdp/current/hadoop-client/lib/azure-storage-2.2.0.jar
16/05/11 11:10:47 INFO ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
16/05/11 11:10:47 INFO ZooKeeper: Client environment:java.io.tmpdir=/tmp
16/05/11 11:10:47 INFO ZooKeeper: Client environment:java.compiler=<NA>
16/05/11 11:10:47 INFO ZooKeeper: Client environment:os.name=Linux
16/05/11 11:10:47 INFO ZooKeeper: Client environment:os.arch=amd64
16/05/11 11:10:47 INFO ZooKeeper: Client environment:os.version=3.10.0-327.10.1.el7.x86_64
16/05/11 11:10:47 INFO ZooKeeper: Client environment:user.name=dude
16/05/11 11:10:47 INFO ZooKeeper: Client environment:user.home=/home/dude
16/05/11 11:10:47 INFO ZooKeeper: Client environment:user.dir=/home/dude
16/05/11 11:10:47 INFO ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x721874920x0, quorum=localhost:2181, baseZNode=/hbase
16/05/11 11:10:47 INFO ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
16/05/11 11:10:47 INFO ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
16/05/11 11:10:47 INFO ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x25494c0cb650086, negotiated timeout = 40000
16/05/11 11:10:47 INFO Metrics: Initializing metrics system: phoenix
16/05/11 11:10:47 INFO MetricsConfig: loaded properties from hadoop-metrics2.properties
16/05/11 11:10:47 INFO MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
16/05/11 11:10:47 INFO MetricsSystemImpl: phoenix metrics system started
16/05/11 11:10:48 INFO RecoverableZooKeeper: Process identifier=hconnection-0xd2eddc2 connecting to ZooKeeper ensemble=localhost:2181
16/05/11 11:10:48 INFO ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0xd2eddc20x0, quorum=localhost:2181, baseZNode=/hbase
16/05/11 11:10:48 INFO ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
16/05/11 11:10:48 INFO ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
16/05/11 11:10:48 INFO ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x25494c0cb650087, negotiated timeout = 40000
16/05/11 11:11:36 INFO RpcRetryingCaller: Call exception, tries=10, retries=35, started=48168 ms ago, cancelled=false, msg=
16/05/11 11:11:56 INFO RpcRetryingCaller: Call exception, tries=11, retries=35, started=68312 ms ago, cancelled=false, msg=
16/05/11 11:12:16 INFO RpcRetryingCaller: Call exception, tries=12, retries=35, started=88338 ms ago, cancelled=false, msg=
16/05/11 11:12:36 INFO RpcRetryingCaller: Call exception, tries=13, retries=35, started=108450 ms ago, cancelled=false, msg=
16/05/11 11:12:56 INFO RpcRetryingCaller: Call exception, tries=14, retries=35, started=128530 ms ago, cancelled=false, msg=
16/05/11 11:13:16 INFO RpcRetryingCaller: Call exception, tries=15, retries=35, started=148547 ms ago, cancelled=false, msg=
16/05/11 11:13:37 INFO RpcRetryingCaller: Call exception, tries=16, retries=35, started=168741 ms ago, cancelled=false, msg=
16/05/11 11:13:57 INFO RpcRetryingCaller: Call exception, tries=17, retries=35, started=188856 ms ago, cancelled=false, msg=

I found this post, but I am already using that approach and passing in the HBase configuration. What am I doing wrong?

Interestingly, my ZooKeeper quorum is not localhost but a list of two servers, yet the INFO messages show localhost. I am not sure whether that is how it is supposed to behave. The hbase.zookeeper.quorum parameter is set correctly in hbase-site.xml, and it is listed when I inspect hbConf. Likewise, zookeeper.znode.parent is set to /hbase-unsecure, even though the messages show /hbase. Does phoenix-spark simply ignore these settings?!

I could use the HBase API directly, but Phoenix would be nicer because I could load the data as a DataFrame right away.
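If the client really is ignoring hbase-site.xml and falling back to localhost:2181, one workaround is to set the ZooKeeper properties on the Configuration explicitly before calling phoenixTableAsDataFrame. This is only a sketch; the host names are placeholders, and the property keys are the standard HBase client settings:

```scala
import org.apache.hadoop.hbase.HBaseConfiguration

val hbConf = HBaseConfiguration.create()

// Force the values that should have come from hbase-site.xml.
// Replace the quorum hosts with your actual ZooKeeper servers.
hbConf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com")
hbConf.set("hbase.zookeeper.property.clientPort", "2181")
hbConf.set("zookeeper.znode.parent", "/hbase-unsecure")
```

If the logs still show localhost after this, the configuration being passed is not the one the Phoenix connection actually uses, which would point at a classpath problem (a stray hbase-site.xml shadowing yours).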

Answer 1 (lyfkaqu1):

Damn it! The mistake was that the column names have to be uppercase: Phoenix upper-cases unquoted identifiers, so the columns are stored as FOO and BAR in its catalog. It would have been nice if Phoenix had told me the columns don't exist instead of hanging with nothing happening. I'm going to file this as a bug report!
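For reference, the fix amounts to upper-casing the column names in the original call. A minimal sketch against the same phoenix-spark API (sqlContext is assumed to come from the spark-shell, as in the question):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.phoenix.spark._

val hbConf: Configuration = HBaseConfiguration.create(new Configuration())

// Phoenix upper-cases unquoted identifiers at table-creation time,
// so columns created as foo/bar must be requested as FOO/BAR here.
val df = sqlContext.phoenixTableAsDataFrame("TEST", Array("FOO", "BAR"), conf = hbConf)
```

If you need case-sensitive names, the alternative is to quote the identifiers ("foo") when creating the table in Phoenix, and then request them with the same quoted spelling.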
