Unable to preserve corrupt rows in PySpark with PERMISSIVE mode

m4pnthwp · asked 2021-07-12 · Spark

I have a CSV file that I need to run some cleanup tasks on using PySpark. Before cleaning, I perform a few schema validation checks. Here is my code.


# schema for the input data

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DateType


def get_input_schema():
    return StructType([StructField("Group ID", StringType(), True),
                       StructField("Start Date", DateType(), True),
                       StructField("Start Time", StringType(), True),
                       ...
                       StructField("malformed_rows", StringType(), True)
                       ])

# basic cleanup logic

def main(argv):
    spark = SparkSession.builder.appName('cleaner_job').getOrCreate()
    spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")
    df = spark.read.option("mode", "PERMISSIVE") \
        .option("dateFormat", "yyyy-MM-dd") \
        .option("columnNameOfCorruptRecord", "malformed_rows") \
        .schema(get_input_schema()) \
        .csv(input_path, header=True)

    # this is where the error is happening
    df_bad = df.filter(df["malformed_rows"].isNotNull())
    df_good = df.filter(df["malformed_rows"].isNull())

    df_good.write.csv(output_path, header=True)
    df_bad.write.csv(output_malformed_path, header=True)

I read the CSV in PERMISSIVE mode and then try to split the input DataFrame into two DataFrames (df_good and df_bad) based on whether the malformed_rows column is null. If I don't split the DataFrame and instead write it directly to CSV, I can see the malformed_rows column in the output CSV. But the code above throws this error:

ERROR Utils: Aborting task
java.lang.IllegalArgumentException: malformed_rows does not exist. Available: Group ID, Start Date, Start Time, ...,
    at org.apache.spark.sql.types.StructType.$anonfun$fieldIndex$1(StructType.scala:306)
    at scala.collection.MapLike.getOrElse(MapLike.scala:131)
    at scala.collection.MapLike.getOrElse$(MapLike.scala:129)
    at scala.collection.AbstractMap.getOrElse(Map.scala:63)
    at org.apache.spark.sql.types.StructType.fieldIndex(StructType.scala:305)
    at org.apache.spark.sql.catalyst.csv.CSVFilters.$anonfun$predicates$4(CSVFilters.scala:65)
    at org.apache.spark.sql.catalyst.csv.CSVFilters.$anonfun$predicates$4$adapted(CSVFilters.scala:65)
    at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
    at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
    at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
    at scala.collection.TraversableLike.map(TraversableLike.scala:238)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
    at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:198)
    at org.apache.spark.sql.catalyst.csv.CSVFilters.$anonfun$predicates$3(CSVFilters.scala:65)
    at org.apache.spark.sql.catalyst.csv.CSVFilters.$anonfun$predicates$3$adapted(CSVFilters.scala:54)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.spark.sql.catalyst.csv.CSVFilters.<init>(CSVFilters.scala:54)
    at org.apache.spark.sql.catalyst.csv.UnivocityParser.<init>(UnivocityParser.scala:101)
    at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.$anonfun$buildReader$1(CSVFileFormat.scala:138)
    at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
    at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:488)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:272)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1411)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:281)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:205)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:127)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
ERROR FileFormatWriter: Job job_20210302150943_0000 aborted.
ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.SparkException: Task failed while writing rows.
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:291)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:205)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:127)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)

I went through the Spark documentation, and it says that to preserve the corrupt-record column we need to define it in the schema, which I am doing. What confuses me is why it only fails when I try to filter on that column. Any help with this would be much appreciated.

vnzz0bqm — Answer #1

malformed_rows is the internal corrupt-record column, named _corrupt_record by default, which you renamed with:

.option("columnNameOfCorruptRecord", "malformed_rows")

But since Spark 2.3 you cannot query the data using only this column; as the documentation notes, you need to cache the DataFrame first:

"Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when the referenced columns only include the internal corrupt record column (named _corrupt_record by default). For example, spark.read.schema(schema).json(file).filter($"_corrupt_record".isNotNull).count() and spark.read.schema(schema).json(file).select("_corrupt_record").show(). Instead, you can cache or save the parsed results and then send the same query. For example, val df = spark.read.schema(schema).json(file).cache() and then df.filter($"_corrupt_record".isNotNull).count()."
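Applied to the code in the question, a minimal sketch of that workaround might look like the following. It reuses get_input_schema() and the path variables from the question, and simply adds a cache() on the parsed DataFrame before filtering, so the filter on malformed_rows runs against the cached rows instead of being pushed down into the CSV parser:

    # workaround sketch: cache the parsed DataFrame, then filter on the
    # corrupt-record column against the cached result
    df = spark.read.option("mode", "PERMISSIVE") \
        .option("dateFormat", "yyyy-MM-dd") \
        .option("columnNameOfCorruptRecord", "malformed_rows") \
        .schema(get_input_schema()) \
        .csv(input_path, header=True) \
        .cache()

    df_bad = df.filter(df["malformed_rows"].isNotNull())
    df_good = df.filter(df["malformed_rows"].isNull())

    df_good.write.csv(output_path, header=True)
    df_bad.write.csv(output_malformed_path, header=True)

As the quoted documentation says, saving the parsed result (for example to disk) and re-reading it before filtering would work as an alternative to caching.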
