Spark map method throws a serialization exception

oipij1gg posted on 2021-06-02 in Hadoop

I'm new to Spark, and I'm running into a serialization problem with a map function. Here are the relevant parts of the code:

private Function<Row, String> SparkMap() throws IOException {
        return new Function<Row, String>() {
            public String call(Row row) throws IOException {
                /* some code */
            }
        };
    }

public static void main(String[] args) throws Exception {
        MyClass myClass = new MyClass();
        // sc is the JavaSparkContext (its creation is omitted from this snippet)
        SQLContext sqlContext = new SQLContext(sc);
        DataFrame df = sqlContext.load(args[0], "com.databricks.spark.avro");

        JavaRDD<String> output = df.javaRDD().map(myClass.SparkMap());
    }

Here is the error log:

Caused by: java.io.NotSerializableException: myPackage.MyClass
Serialization stack:
    - object not serializable (class: myPackage.MyClass, value: myPackage.MyClass@281c8380)
    - field (class: myPackage.MyClass$1, name: this$0, type: class myPackage.MyClass)
    - object (class myPackage.MyClass$1, myPackage.MyClass$1@28ef1bc8)
    - field (class: org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1, name: fun$1, type: interface org.apache.spark.api.java.function.Function)
    - object (class org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1, <function1>)
    at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:81)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:312)
    ... 12 more

If I declare the SparkMap method as static, then it runs. How is that possible?
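
For reference, a minimal sketch of the static variant being described, mirroring the snippet above (the body of call() is still a placeholder, not the original code):

private static Function<Row, String> SparkMap() throws IOException {
        return new Function<Row, String>() {
            @Override
            public String call(Row row) throws IOException {
                /* some code */
                return row.toString(); // placeholder so the sketch compiles
            }
        };
    }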

4xrmg8kj1#

The exception is pretty self-explanatory:

object not serializable (class: myPackage.MyClass, value: myPackage.MyClass@281c8380)

Simply make your MyClass implement Serializable and it will work.
It works when the method is static because in that case Spark only has to serialize the function itself, not the whole enclosing myClass object.
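
A minimal sketch of that first fix, assuming the rest of the class stays as posted (the call() body is still a placeholder):

import java.io.IOException;
import java.io.Serializable;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.Row;

// Making the enclosing class Serializable lets Spark serialize the captured
// this$0 reference (MyClass$1 -> MyClass) along with the anonymous Function.
public class MyClass implements Serializable {

    private Function<Row, String> SparkMap() throws IOException {
        return new Function<Row, String>() {
            @Override
            public String call(Row row) throws IOException {
                /* some code */
                return row.toString(); // placeholder so the sketch compiles
            }
        };
    }
}

Since the function only needs its Row argument, a Java 8 lambda would also avoid the problem: a lambda does not capture the enclosing instance unless it actually references instance members.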
