Databricks Spark query with a SQL subquery throws TreeNodeException

Asked by 7gyucuyw on 2021-05-27 in Spark

I am running a very simple query with a subquery in a Databricks notebook:

select recorddate, count(*) 
from( select record_date as recorddate, column1 
      from table1 
      where record_date >= date_sub(current_date(), 1) 
    )t
group by recorddate
order by recorddate

I get the following exception: Error in SQL statement: package.TreeNodeException: Binding attribute, tree: recorddate
When I remove the ORDER BY clause, the query runs fine. I have seen posts about similar issues, but none exactly the same. Is this known behavior? Is there a workaround or fix?
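A variation that is sometimes worth trying with attribute-binding errors is to alias the aggregate and order by ordinal position instead of by the grouped column name; whether this actually avoids the TreeNodeException in this environment is untested:

select recorddate, count(*) as cnt
from( select record_date as recorddate, column1
      from table1
      where record_date >= date_sub(current_date(), 1)
    )t
group by recorddate
order by 1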

ih99xse1:

This works fine for me (Spark 2.4.5), so I think the problem lies elsewhere:

// Build a one-row table1 with today's date and register it as a temp view
val df = spark.sql("select current_date() as record_date, '1' column1")
df.show(false)
/**
  * +-----------+-------+
  * |record_date|column1|
  * +-----------+-------+
  * |2020-07-29 |1      |
  * +-----------+-------+
  */

df.createOrReplaceTempView("table1")

// Run the same query as in the question, including the ORDER BY clause
spark.sql(
  """
    |select recorddate, count(*)
    |from( select record_date as recorddate, column1
    |      from table1
    |      where record_date >= date_sub(current_date(), 1)
    |    )t
    |group by recorddate
    |order by recorddate
    |
  """.stripMargin)
  .show(false)

    /**
      * +----------+--------+
      * |recorddate|count(1)|
      * +----------+--------+
      * |2020-07-29|1       |
      * +----------+--------+
      */
