Why does Spark SQL return nulls when querying a table, while Hive and Impala return the data correctly?

uhry853o · asked 2021-06-26 · in Impala
Follow (0) | Answers (2) | Views (549)

I have a table in Hive.

I query the same table in two ways. With Hive or Impala I get the expected result:

0: jdbc:hive2://cdh-master3:10000/> SELECT * FROM kafka_table.risk_order_user_level_info rouli WHERE rouli.month = '2019_01' AND rouli.day = '08' AND rouli.order_id > 0 LIMIT 5;
INFO  : OK
+-----------------+-------------------+------------+--------------+---------------+-------------------+-----------------------+---------------+---------------------+----------------------+-------------------+--------------+------------+--+
| rouli.order_id  | rouli.order_type  | rouli.uid  | rouli.po_id  | rouli.status  | rouli.user_level  | rouli.pre_user_level  | rouli.credit  | rouli.down_payment  | rouli.open_order_id  | rouli.createtime  | rouli.month  | rouli.day  |
+-----------------+-------------------+------------+--------------+---------------+-------------------+-----------------------+---------------+---------------------+----------------------+-------------------+--------------+------------+--+
| 39180235        | 2                 | 10526665   | -999         | 100           | 10                | 106                   | 27000         | 0              | -999                 | 1546887803138     | 2019_01      | 08         |
| 39180235        | 2                 | 10526665   | -999         | 100           | 10                | 106                   | 27000         | 0              | -999                 | 1546887805302     | 2019_01      | 08         |
| 39180235        | 2                 | 10526665   | -999         | 100           | 10                | 106                   | 27000         | 0              | -999                 | 1546887807457     | 2019_01      | 08         |
| 39180235        | 2                 | 10526665   | -999         | 100           | 10                | 106                   | 27000         | 0              | -999                 | 1546887809610     | 2019_01      | 08         |
| 39804907        | 2                 | 15022908   | -999         | 100           | -999              | -999                  | 0             | 85000              | -999                 | 1546887807461     | 2019_01      | 08         |
+-----------------+-------------------+------------+--------------+---------------+-------------------+-----------------------+---------------+---------------------+----------------------+-------------------+--------------+------------+--+

But using Spark, whether from Python or Scala, several columns come back null:

scala> spark.sql("SELECT * FROM kafka_table.risk_order_user_level_info WHERE month = '2019_01' AND day = '08'  limit 5").show()
+--------+----------+--------+-----+------+----------+--------------+-------+------------+-------------+-------------+-------+---+
|order_id|order_type|     uid|po_id|status|user_level|pre_user_level| credit|down_payment|open_order_id|   createTime|  month|day|
+--------+----------+--------+-----+------+----------+--------------+-------+------------+-------------+-------------+-------+---+
|    null|      null|14057428| null|    90|      null|          null|2705000|        null|         null|1546920940672|2019_01| 08|
|    null|      null| 5833953| null|    90|      null|          null|2197000|        null|         null|1546920941872|2019_01| 08|
|    null|      null|10408291| null|   100|      null|          null|1386000|        null|         null|1546920941979|2019_01| 08|
|    null|      null|  621761| null|   100|      null|          null| 100000|        null|         null|1546920942282|2019_01| 08|
|    null|      null|10408291| null|   100|      null|          null|1386000|        null|         null|1546920942480|2019_01| 08|
+--------+----------+--------+-----+------+----------+--------------+-------+------------+-------------+-------------+-------+---+

How can I make Spark SQL return the expected result?
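One way to diagnose this kind of problem (a hedged sketch; the warehouse path below is a guess based on the table name and must be adjusted to your environment) is to compare the schema Spark gets from the Hive metastore with the schema actually stored in the Parquet data files:

```scala
// Sketch: compare the metastore schema with the Parquet footer schema.
// If column names differ (e.g. only in casing, orderId vs order_id),
// Spark's by-name Parquet column resolution finds no match and returns
// null for those columns, while Hive/Impala resolve them by position/name
// case-insensitively.
val metastoreSchema = spark.table("kafka_table.risk_order_user_level_info").schema
println(metastoreSchema.fieldNames.mkString(", "))

// Read one partition directly, bypassing the metastore, to see the field
// names written into the files. The path here is an assumption.
val fileSchema = spark.read
  .parquet("/user/hive/warehouse/kafka_table.db/risk_order_user_level_info/month=2019_01/day=08")
  .schema
println(fileSchema.fieldNames.mkString(", "))
```

Any column that appears in one list but not the other (including case differences) is a candidate explanation for the nulls.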

PS: I ran the following SQL in both Spark and Hive and got different results:

SELECT * FROM kafka_table.risk_order_user_level_info rouli
WHERE rouli.month = '2019_01' AND rouli.day = '08'
and order_id IN (
 39906526,
 39870975,
 39832606,
 39889240,
 39836630
)

The two engines return different results.

That is what led me to post this question here.
I also checked the row count of the table with both methods, and the counts are the same.
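The count check described above can be run from the same spark-shell (and mirrored in beeline/impala-shell for comparison); a minimal sketch:

```scala
// Row-count sanity check: run the same COUNT(*) in Hive/Impala and compare.
// Matching counts while several columns come back null points at a
// column-name (schema) mismatch rather than missing data.
spark.sql("""
  SELECT COUNT(*) AS cnt
  FROM kafka_table.risk_order_user_level_info
  WHERE month = '2019_01' AND day = '08'
""").show()
```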


3npbholx1#

I solved it myself. The data in this table was written by Spark SQL, but the field names used in the Scala (Spark) code differ from those in the Hive CREATE TABLE statement.
For example: `orderId` (Scala) but `order_id` (SQL).
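A hedged sketch of one possible fix for the writer side: rename the DataFrame columns to the exact snake_case names declared in the Hive CREATE TABLE before writing, so the names in the Parquet footers match the metastore. `df` and the camelCase names below are illustrative, not taken from the post:

```scala
// Align DataFrame column names with the Hive DDL before writing, so
// Spark's case-sensitive by-name Parquet resolution can find them later.
val fixed = df
  .withColumnRenamed("orderId",    "order_id")
  .withColumnRenamed("orderType",  "order_type")
  .withColumnRenamed("userLevel",  "user_level")
  .withColumnRenamed("createTime", "createtime")

fixed.write.mode("append").insertInto("kafka_table.risk_order_user_level_info")
```

For data that has already been written, depending on the Spark version, setting `spark.sql.hive.caseSensitiveInferenceMode=INFER_AND_SAVE` may let Spark infer a case-sensitive schema from the files instead of trusting the lowercased metastore names; verify against your version's documentation.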


xzlaal3s2#

Include the `rouli.order_id > 0` condition in the Spark SQL query as well; you will then see non-null records in the Spark SQL output.
Note: LIMIT returns rows in no guaranteed order, so the results shown in the two scenarios above are simply different rows.
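Building on the note above: to compare the two engines row-for-row, pin down an ordering first. A sketch (the sort keys are illustrative); run the equivalent query in beeline to compare:

```scala
// LIMIT without ORDER BY may return any matching rows, so outputs from
// two engines are not directly comparable. An explicit ORDER BY makes
// the comparison deterministic.
spark.sql("""
  SELECT * FROM kafka_table.risk_order_user_level_info
  WHERE month = '2019_01' AND day = '08' AND order_id > 0
  ORDER BY order_id, createtime
  LIMIT 5
""").show(false)
```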
