Databricks: Problem reading JSON files from Azure ADLS Gen2 with Autoloader

wkyowqbh · asked 2022-12-04 · Other

I am facing an issue when trying to read JSON files from Azure ADLS Gen2 using Autoloader. The issue occurs only with specific files. I have checked that the files are intact and not corrupted.

The error is as follows:

com.databricks.sql.io.FileReadException: Error while reading file /mnt/Source/kafka/customer_raw/filtered_data/year=2022/month=11/day=9/hour=15/part-00000-31413bcf-0a8f-480f-8d45-6970f4c4c9f7.c000.json.
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.logFileNameAndThrow(FileScanRDD.scala:598)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:422)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(null:-1)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.IllegalArgumentException: requirement failed: Literal must have a corresponding value to string, but class Integer found.
    at scala.Predef$.require(Predef.scala:281)
    at org.apache.spark.sql.catalyst.expressions.Literal$.validateLiteralValue(literals.scala:274)

I am using a Delta Live Tables pipeline. The code is as follows:

@dlt.table(
    name=tablename,
    comment="Create Bronze Table",
    table_properties={
        "quality": "bronze"
    }
)
def Bronze_Table_Create():
    return (
        spark
            .readStream
            .schema(schemapath)
            .format("cloudFiles")
            .option("cloudFiles.format", "json")
            .option("cloudFiles.schemaLocation", schemalocation)
            .option("cloudFiles.inferColumnTypes", "false")
            .option("cloudFiles.schemaEvolutionMode", "rescue")
            .load(sourcelocation)
    )
tf7tbtn2 answered:

I solved the problem. The issue was that our schema file mistakenly contained duplicate columns, which is what triggered this error. However, the error message is completely misleading, which is why we could not track it down at first.
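Since the Spark error gives no hint about duplicate columns, a quick sanity check on the schema file before passing it to Autoloader can save time. Below is a minimal sketch that scans a Spark StructType schema serialized as JSON (the format produced by `StructType.json()`) for repeated field names; the schema content and the `customer_id` column are hypothetical examples, not taken from the question.

```python
import json
from collections import Counter

def find_duplicate_fields(schema_json: str) -> list:
    """Return field names that appear more than once in a Spark
    StructType JSON schema (case-insensitive, since Spark SQL is
    case-insensitive by default)."""
    fields = json.loads(schema_json)["fields"]
    counts = Counter(f["name"].lower() for f in fields)
    return sorted(name for name, n in counts.items() if n > 1)

# Hypothetical schema JSON with an accidentally repeated column,
# mimicking the mistake described in the answer.
schema_json = '''
{
  "type": "struct",
  "fields": [
    {"name": "customer_id", "type": "integer", "nullable": true, "metadata": {}},
    {"name": "name",        "type": "string",  "nullable": true, "metadata": {}},
    {"name": "customer_id", "type": "integer", "nullable": true, "metadata": {}}
  ]
}
'''

print(find_duplicate_fields(schema_json))  # ['customer_id']
```

Running a check like this against the schema file at pipeline startup turns the misleading `Literal must have a corresponding value to string` failure into an explicit, actionable message.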
