Spark with Hive reports pyspark.sql.utils.AnalysisException: u'Table not found:' when running on a cluster

sr4lhrrt asked on 2021-06-28, tagged Hive

I am trying to run a pyspark script on BigInsights on Cloud 4.2 Enterprise that accesses a Hive table.
First I create the Hive table:

[biadmin@bi4c-xxxxx-mastermanager ~]$ hive
hive> CREATE TABLE pokes (foo INT, bar STRING);
OK
Time taken: 2.147 seconds
hive> LOAD DATA LOCAL INPATH '/usr/iop/4.2.0.0/hive/doc/examples/files/kv1.txt' OVERWRITE INTO TABLE pokes;
Loading data to table default.pokes
Table default.pokes stats: [numFiles=1, numRows=0, totalSize=5812, rawDataSize=0]
OK
Time taken: 0.49 seconds
hive>

Then I create a simple pyspark script:

[biadmin@bi4c-xxxxxx-mastermanager ~]$ cat test_pokes.py
from pyspark import SparkContext

sc = SparkContext()

from pyspark.sql import HiveContext
hc = HiveContext(sc)

pokesRdd = hc.sql('select * from pokes')
print( pokesRdd.collect() )

I attempt to execute it with:

[biadmin@bi4c-xxxxxx-mastermanager ~]$ spark-submit \
    --master yarn-cluster \
    --deploy-mode cluster \
    --jars /usr/iop/4.2.0.0/hive/lib/datanucleus-api-jdo-3.2.6.jar, \
           /usr/iop/4.2.0.0/hive/lib/datanucleus-core-3.2.10.jar, \
           /usr/iop/4.2.0.0/hive/lib/datanucleus-rdbms-3.2.9.jar \
    test_pokes.py
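As a side note: `--jars` expects a single comma-separated list with no whitespace in it. With line continuations placed after the commas as above, the shell passes the second and third jars as separate arguments, and spark-submit will treat the first of them as the application file. A sketch of the same command with the jar list kept on one line (paths exactly as in the question; `--master yarn-cluster` already implies cluster deploy mode, so the separate `--deploy-mode` flag is dropped here):

```shell
spark-submit \
    --master yarn-cluster \
    --jars /usr/iop/4.2.0.0/hive/lib/datanucleus-api-jdo-3.2.6.jar,/usr/iop/4.2.0.0/hive/lib/datanucleus-core-3.2.10.jar,/usr/iop/4.2.0.0/hive/lib/datanucleus-rdbms-3.2.9.jar \
    test_pokes.py
```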

However, I hit the following error:

Traceback (most recent call last):
  File "test_pokes.py", line 8, in <module>
    pokesRdd = hc.sql('select * from pokes')
  File "/disk6/local/usercache/biadmin/appcache/application_1477084339086_0481/container_e09_1477084339086_0481_01_000001/pyspark.zip/pyspark/sql/context.py", line 580, in sql
  File "/disk6/local/usercache/biadmin/appcache/application_1477084339086_0481/container_e09_1477084339086_0481_01_000001/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
  File "/disk6/local/usercache/biadmin/appcache/application_1477084339086_0481/container_e09_1477084339086_0481_01_000001/pyspark.zip/pyspark/sql/utils.py", line 51, in deco
pyspark.sql.utils.AnalysisException: u'Table not found: pokes; line 1 pos 14'
End of LogType:stdout

If I run spark-submit standalone, I can see that the table exists ok:

[biadmin@bi4c-xxxxxx-mastermanager ~]$ spark-submit test_pokes.py
…
…
16/12/21 13:09:13 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 18962 bytes result sent to driver
16/12/21 13:09:13 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 168 ms on localhost (1/1)
16/12/21 13:09:13 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/12/21 13:09:13 INFO DAGScheduler: ResultStage 0 (collect at /home/biadmin/test_pokes.py:9) finished in 0.179 s
16/12/21 13:09:13 INFO DAGScheduler: Job 0 finished: collect at /home/biadmin/test_pokes.py:9, took 0.236558 s
[Row(foo=238, bar=u'val_238'), Row(foo=86, bar=u'val_86'), Row(foo=311, bar=u'val_311')
…
…

See my previous question related to this issue: hive spark yarn-cluster job fails with: "ClassNotFoundException: org.datanucleus.api.jdo.JDOPersistenceManagerFactory".
This question is similar to this other one: Spark can access Hive table from pyspark but not from spark-submit. However, unlike that question, I am using HiveContext.
Update: see here for the final solution https://stackoverflow.com/a/41272260/1033422

Answer 1 — umuewwlo

It looks like you are affected by this bug: https://issues.apache.org/jira/browse/SPARK-15345.
I had a similar issue with Spark 1.6.2 and 2.0.0 on HDP-2.5.0.0.
My goal was to create a DataFrame from a Hive SQL query, under these conditions:

- the Python API,
- cluster deploy mode (the driver running on one of the executor nodes),
- YARN managing the executor JVMs (instead of a standalone Spark master instance).

Initial tests gave these results:

1. spark-submit --deploy-mode client --master local ... => works
2. spark-submit --deploy-mode client --master yarn ... => works
3. spark-submit --deploy-mode cluster --master yarn ... => does not work
In case 3, the driver running on one of the executor nodes could not find the database. The error was:

pyspark.sql.utils.AnalysisException: 'Table or view not found: `database_name`.`table_name`; line 1 pos 14'

The answer by fokkodriesprong listed above worked for me.
With the command below, the driver running on an executor node was able to access Hive tables in a database other than default:

$ /usr/hdp/current/spark2-client/bin/spark-submit \
--deploy-mode cluster --master yarn \
--files /usr/hdp/current/spark2-client/conf/hive-site.xml \
/path/to/python/code.py

The Python code I used to test Spark 1.6.2 and Spark 2.0.0 is below (change SPARK_VERSION to 1 to test Spark 1.6.2, and make sure to update the paths in the spark-submit command accordingly):

SPARK_VERSION = 2
APP_NAME = 'spark-sql-python-test_SV,' + str(SPARK_VERSION)

def spark1():
    from pyspark.sql import HiveContext
    from pyspark import SparkContext, SparkConf

    conf = SparkConf().setAppName(APP_NAME)
    sc = SparkContext(conf=conf)
    hc = HiveContext(sc)

    query = 'select * from database_name.table_name limit 5'
    df = hc.sql(query)
    printout(df)

def spark2():
    from pyspark.sql import SparkSession
    spark = SparkSession.builder.appName(APP_NAME).enableHiveSupport().getOrCreate()
    query = 'select * from database_name.table_name limit 5'
    df = spark.sql(query)
    printout(df)

def printout(df):
    print('\n########################################################################')
    df.show()
    print(df.count())

    df_list = df.collect()
    print(df_list)
    print(df_list[0])
    print(df_list[1])
    print('########################################################################\n')

def main():
    if SPARK_VERSION == 1:
        spark1()
    elif SPARK_VERSION == 2:
        spark2()

if __name__ == '__main__':
    main()
Answer 2 — afdcj2ne

This is because the spark-submit job is unable to find hive-site.xml, so it cannot connect to the Hive metastore. Please add --files /usr/iop/4.2.0.0/hive/conf/hive-site.xml to your spark-submit command.
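Combining this with the datanucleus jars the asker was already shipping, the full command might look like the sketch below. The hive-site.xml path is the IOP 4.2 default from this answer and may differ on other clusters; the jar list must stay a single comma-separated argument:

```shell
spark-submit \
    --master yarn \
    --deploy-mode cluster \
    --files /usr/iop/4.2.0.0/hive/conf/hive-site.xml \
    --jars /usr/iop/4.2.0.0/hive/lib/datanucleus-api-jdo-3.2.6.jar,/usr/iop/4.2.0.0/hive/lib/datanucleus-core-3.2.10.jar,/usr/iop/4.2.0.0/hive/lib/datanucleus-rdbms-3.2.9.jar \
    test_pokes.py
```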
