scala - Manually declared nested schema imported from a package causes NullPointerException

ukqbszuj · asked 12 months ago · Scala

I'm trying to parse XML files into DataFrames using Databricks' spark-xml, with code like the following:

val xmlDF = spark
    .read
    .option("rowTag", "MeterReadingDocument")
    .option("valueTag", "foo") // meaningless, used to parse tags with no child elements
    .option("inferSchema", "false")
    .schema(schema)
    .xml(connectionString)

As you can see, I provide a schema to avoid the expensive schema-inference step. The schema is defined as

val schema = MyProjectUtils.Schemas.meterReadingDocumentSchema

where MyProjectUtils is a package containing an object Schemas that holds the schema definitions:

import org.apache.spark.sql.types._

object Schemas {
...
// nested schemas 
...

val meterReadingDocumentSchema = StructType(
    Array(
      StructField("ReadingStatusRefTable", readingStatusRefTableSchema, nullable = true),
      StructField("Header", headerSchema, nullable = true),
      StructField("ImportExportParameters", importExportParametersSchema, nullable = true),
      StructField("Channels", channelsSchema, nullable = true),
      StructField("_xmlns:xsd", StringType, nullable = true),
      StructField("_xmlns:xsi", StringType, nullable = true)
    )
  )
}

You'll notice readingStatusRefTableSchema, headerSchema, and the other custom schemas, which are StructTypes corresponding to nested elements in the XML. These are themselves nested in turn, for example:

val headerSchema = StructType(
    Array(
      StructField("Creation_Datetime", creationDatetimeSchema, nullable = true),
      StructField("Export_Template", exportTemplateSchema, nullable = true),
      StructField("System", SystemSchema, nullable = true),
      StructField("Path", pathSchema, nullable = true),
      StructField("Timezone", timezoneSchema, nullable = true)
    )
  )

val creationDatetimeSchema = StructType(
    Array(
      StructField("_Datetime", TimestampType, nullable = true),
      StructField("foo", StringType, nullable = true)
    )
  )

(I can share more details on how the schemas nest, if relevant.)
If I declare these nested schemas directly in a notebook, or in an object defined in the notebook that reads the data, this works and the data loads. But when I build a jar from the project and run it, I get the following stack trace:

INFO ApplicationMaster [shutdown-hook-0]: Unregistering ApplicationMaster with FAILED (diag message: User class threw exception: java.lang.NullPointerException
    at org.apache.spark.sql.types.ArrayType.existsRecursively(ArrayType.scala:102)
    at org.apache.spark.sql.types.StructType.$anonfun$existsRecursively$1(StructType.scala:508)
    at org.apache.spark.sql.types.StructType.$anonfun$existsRecursively$1$adapted(StructType.scala:508)
    at scala.collection.IndexedSeqOptimized.prefixLengthImpl(IndexedSeqOptimized.scala:41)
    at scala.collection.IndexedSeqOptimized.exists(IndexedSeqOptimized.scala:49)
    at scala.collection.IndexedSeqOptimized.exists$(IndexedSeqOptimized.scala:49)
    at scala.collection.mutable.ArrayOps$ofRef.exists(ArrayOps.scala:198)
    at org.apache.spark.sql.types.StructType.existsRecursively(StructType.scala:508)
    at org.apache.spark.sql.types.StructType.$anonfun$existsRecursively$1(StructType.scala:508)
    at org.apache.spark.sql.types.StructType.$anonfun$existsRecursively$1$adapted(StructType.scala:508)
    at scala.collection.IndexedSeqOptimized.prefixLengthImpl(IndexedSeqOptimized.scala:41)
    at scala.collection.IndexedSeqOptimized.exists(IndexedSeqOptimized.scala:49)
    at scala.collection.IndexedSeqOptimized.exists$(IndexedSeqOptimized.scala:49)
    at scala.collection.mutable.ArrayOps$ofRef.exists(ArrayOps.scala:198)
    at org.apache.spark.sql.types.StructType.existsRecursively(StructType.scala:508)
    at org.apache.spark.sql.catalyst.util.CharVarcharUtils$.hasCharVarchar(CharVarcharUtils.scala:56)
    at org.apache.spark.sql.catalyst.util.CharVarcharUtils$.failIfHasCharVarchar(CharVarcharUtils.scala:63)
    at org.apache.spark.sql.DataFrameReader.schema(DataFrameReader.scala:76)
    at com.mycompany.DataIngestion$.delayedEndpoint$com$mycompany$DataIngestion$1(DataIngestion.scala:44)
    at com.mycompany.DataIngestion$delayedInit$body.apply(DataIngestion.scala:10)
    at scala.Function0.apply$mcV$sp(Function0.scala:39)
    at scala.Function0.apply$mcV$sp$(Function0.scala:39)
    at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:17)
    at scala.App.$anonfun$main$1$adapted(App.scala:80)
    at scala.collection.immutable.List.foreach(List.scala:431)
    at scala.App.main(App.scala:80)
    at scala.App.main$(App.scala:78)
    at com.mycompany.DataIngestion$.main(DataIngestion.scala:10)
    at com.mycompany.DataIngestion.main(DataIngestion.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:739)
)

I also added another, simpler CSV file and created a schema for it in the same Schemas object. This schema has no nested structs, and I read the file with the schema from that same Schemas object like this:

val simplerDocSchema = MyProjectUtils.Schemas.anotherDocSchema

spark
      .read
      .schema(simplerDocSchema)
      .csv(connectionString)
object Schemas {
 ...
val anotherDocSchema: StructType = StructType(
    Array(
      StructField("ID", StringType, nullable = true),
      StructField("DATE", StringType, nullable = true),
      StructField("CODE", StringType, nullable = true),
      StructField("AD", StringType, nullable = true),
      StructField("ACCOUNT", StringType, nullable = true)
    )
  )
}

I expected this to fail as well, but it runs fine both in the compiled project and in the notebook.

whlutmcx1#

Although you don't say which Spark version you're using, this code appears not to have changed in 8 years:

override private[spark] def existsRecursively(f: (DataType) => Boolean): Boolean = {
    f(this) || elementType.existsRecursively(f)
  }

elementType is most likely null. Since you haven't provided the complete code, my guess is that you have an ArrayType(someVal, ...) where someVal is not yet defined at that point.
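To illustrate (a minimal sketch; the object and field names here are made up): vals in a Scala object initialize in declaration order, so a val referenced before its own declaration is still null when the referencing val is constructed. ArrayType is a plain case class, so ArrayType(null) constructs without complaint, and the NPE only surfaces later when the schema tree is walked:

import org.apache.spark.sql.types._

object NullInitDemo {
  // `inner` is declared below, so it is still null while `outer` is being
  // initialized. The ArrayType is built anyway, holding a null elementType.
  val outer: StructType = StructType(Array(
    StructField("items", ArrayType(inner), nullable = true)
  ))

  val inner: StructType = StructType(Array(
    StructField("x", StringType, nullable = true)
  ))
}

// Passing NullInitDemo.outer to spark.read.schema(...) then throws a
// NullPointerException inside existsRecursively, as in the stack trace above.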
Change your vals to defs and try again.
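For illustration, a minimal sketch of that fix applied to two of the schemas from the question (remaining fields elided; a lazy val, or simply declaring each val before it is referenced, should work as well):

import org.apache.spark.sql.types._

object Schemas {
  // As defs, these are evaluated on each call, after the object has been
  // fully constructed, so a forward reference can no longer observe a null val.
  def headerSchema: StructType = StructType(Array(
    StructField("Creation_Datetime", creationDatetimeSchema, nullable = true)
    // ... remaining Header fields as in the question
  ))

  def creationDatetimeSchema: StructType = StructType(Array(
    StructField("_Datetime", TimestampType, nullable = true),
    StructField("foo", StringType, nullable = true)
  ))
}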
