Cannot use Spark .NET UDFs with an HDInsight cluster

yshpjwxd · posted 2021-05-18 in Spark

I am trying to run a simple application in our prod environment, based on https://github.com/dotnet/spark/blob/master/examples/Microsoft.Spark.CSharp.Examples/Sql/Batch/Basic.cs. The application runs fine and writes output to stdout until it hits the first UDF, at which point it crashes. Thanks for any insight. The failing pattern looks roughly like the sketch below.
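For reference, a minimal sketch of the pattern in question, loosely modeled on Basic.cs (the input file and names here are illustrative, not the exact example source):

using System;
using Microsoft.Spark.Sql;
using static Microsoft.Spark.Sql.Functions;

class Program
{
    static void Main()
    {
        SparkSession spark = SparkSession.Builder().AppName("udf-repro").GetOrCreate();
        DataFrame df = spark.Read().Json("people.json"); // hypothetical input

        df.Show(); // plain DataFrame operations work fine

        // The first UDF: to run it, the executor-side .NET worker must locate and
        // load the application DLL in order to deserialize the lambda. That lookup
        // is what fails in the stack trace below.
        Func<Column, Column> greet = Udf<string, string>(name => "hello " + name);
        df.Select(greet(df["name"])).Show(); // crashes here

        spark.Stop();
    }
}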
Environment: the code is packaged with

dotnet publish -c Release -f netcoreapp3.1 -r ubuntu.16.04-x64

HDInsight cluster: HDI 4.0, Spark 2.4. The cluster was set up following the guidance at https://docs.microsoft.com/en-us/dotnet/spark/tutorials/hdinsight-deployment

spark-submit --master yarn --conf spark.yarn.appMasterEnv.DOTNET_ASSEMBLY_SEARCH_PATHS="./app/publish.zip" --archives wasbs://xxx@yyy.blob.core.windows.net/SparkJobs/publish.zip#mySparkApp --class org.apache.spark.deploy.dotnet.DotnetRunner wasbs://xxx@yyy.blob.core.windows.net/SparkJobs/microsoft-spark-2.4.x-0.12.1.jar wasbs://xxx@yyy.blob.core.windows.net/SparkJobs/publish.zip mySparkApp

(In desperation I tried all sorts of variations: --deploy-mode cluster, different paths, and so on. Nothing worked.)
stdout: ...

+---+-----+
|age| name|
+---+-----+
| 22|Ricky|
| 36| Jeff|
| 62|Geddy|
+---+-----+

[2020-10-28T09:15:10.1478641Z] [wn0-hdinsi] [Error] [JvmBridge] JVM method execution failed: Nonstatic method 'showString' failed for class '41' when called with 3 arguments ([Index=1, Type=Int32, Value=20], [Index=2, Type=Int32, Value=20], [Index=3, Type=Boolean, Value=False], )
[2020-10-28T09:15:10.1480587Z] [wn0-hdinsi] [Error] [JvmBridge] org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 16.0 failed 4 times, most recent failure: Lost task 0.3 in stage 16.0 (TID 210, wn0-hdinsi.xwccrqijnmqujdjghwrza0nzbb.fx.internal.cloudapp.net, executor 2): org.apache.spark.api.python.PythonException: System.NullReferenceException: Object reference not set to an instance of an object.
at Microsoft.Spark.Utils.UdfSerDe.<>c.<DeserializeType>b__10_0(TypeData td) in /_/src/csharp/Microsoft.Spark/Utils/UdfSerDe.cs:line 262
at System.Collections.Concurrent.ConcurrentDictionary`2.GetOrAdd(TKey key, Func`2 valueFactory)
at Microsoft.Spark.Utils.UdfSerDe.DeserializeType(TypeData typeData) in /_/src/csharp/Microsoft.Spark/Utils/UdfSerDe.cs:line 258
at Microsoft.Spark.Utils.UdfSerDe.Deserialize(UdfData udfData) in /_/src/csharp/Microsoft.Spark/Utils/UdfSerDe.cs:line 160
at Microsoft.Spark.Utils.CommandSerDe.DeserializeUdfs[T](UdfWrapperData data, Int32& nodeIndex, Int32& udfIndex) in /_/src/csharp/Microsoft.Spark/Utils/CommandSerDe.cs:line 333
at Microsoft.Spark.Utils.CommandSerDe.Deserialize[T](Stream stream, SerializedMode& serializerMode, SerializedMode& deserializerMode, String& runMode) in /_/src/csharp/Microsoft.Spark/Utils/CommandSerDe.cs:line 306
at Microsoft.Spark.Worker.Processor.CommandProcessor.ReadSqlCommands(PythonEvalType evalType, Stream stream) in D:\a\1\s\src\csharp\Microsoft.Spark.Worker\Processor\CommandProcessor.cs:line 188
at Microsoft.Spark.Worker.Processor.CommandProcessor.ReadSqlCommands(PythonEvalType evalType, Stream stream, Version version) in D:\a\1\s\src\csharp\Microsoft.Spark.Worker\Processor\CommandProcessor.cs:line 98
at Microsoft.Spark.Worker.Processor.CommandProcessor.Process(Stream stream) in D:\a\1\s\src\csharp\Microsoft.Spark.Worker\Processor\CommandProcessor.cs:line 43
at Microsoft.Spark.Worker.Processor.PayloadProcessor.Process(Stream stream) in D:\a\1\s\src\csharp\Microsoft.Spark.Worker\Processor\PayloadProcessor.cs:line 82
at Microsoft.Spark.Worker.TaskRunner.ProcessStream(Stream inputStream, Stream outputStream, Version version, Boolean& readComplete) in D:\a\1\s\src\csharp\Microsoft.Spark.Worker\TaskRunner.cs:line 143
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:456)
at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:81)
at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:64)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:410)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:255)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

------ My problem was indeed a path issue. For anyone else with the same problem: I got it working by listing the DLL that contains the UDFs (which can be the same DLL as the main Spark application) in --files. So essentially you need the zip file with the assemblies, plus a direct link to the DLL. There may be a smarter way, but this worked for me (running in cluster mode):

spark-submit --deploy-mode cluster --master yarn --files wasbs://xxx@yyy.blob.core.windows.net/SparkJobs/mySparkApp.dll --class org.apache.spark.deploy.dotnet.DotnetRunner wasbs://xxx@yyy.blob.core.windows.net/SparkJobs/microsoft-spark-2.4.x-0.12.1.jar wasbs://xxx@yyy.blob.core.windows.net/SparkJobs/publish.zip mySparkApp


mfuanj7w 1#

The error occurs because the DLL containing your code cannot be found.
Two things. First, YARN mode: the user's home directory is prepended to relative entries in DOTNET_ASSEMBLY_SEARCH_PATHS, so the path is not resolved as currentDirectory/app/publish.zip; if the home directory is different, the worker looks in the wrong place. (A quick way to see what the process actually resolves is sketched below.)
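A minimal diagnostic sketch (my addition, not part of the original answer): print what the process actually sees, so you can compare the resolved locations against where the zip really lands. Run it on the driver, or log the same values from inside a UDF to see the executor side:

using System;
using System.IO;

class PathProbe
{
    static void Main()
    {
        // The environment variable that .NET for Apache Spark consults when
        // resolving UDF assemblies.
        Console.WriteLine("DOTNET_ASSEMBLY_SEARCH_PATHS = "
            + Environment.GetEnvironmentVariable("DOTNET_ASSEMBLY_SEARCH_PATHS"));
        Console.WriteLine("Current directory = " + Directory.GetCurrentDirectory());
        Console.WriteLine("Home directory    = "
            + Environment.GetFolderPath(Environment.SpecialFolder.UserProfile));
    }
}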
Secondly, make sure that publish.zip does not contain a folder and that the DLLs with the UDFs sit at the top level of the zip (a small local check is sketched after this paragraph).
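A hypothetical local sanity check (not from the original answer): list the entries of publish.zip and flag any that sit inside a subfolder, since nested DLLs will not be found by the worker's assembly search:

using System;
using System.IO.Compression;

class ZipLayoutCheck
{
    static void Main()
    {
        using ZipArchive zip = ZipFile.OpenRead("publish.zip");
        foreach (ZipArchiveEntry entry in zip.Entries)
        {
            // Entries whose FullName contains '/' live in a subfolder.
            string verdict = entry.FullName.Contains('/') ? "NESTED (bad)" : "top level (ok)";
            Console.WriteLine(verdict + ": " + entry.FullName);
        }
    }
}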
I don't bother putting the zip in an app folder; I just use the current folder and don't worry about DOTNET_ASSEMBLY_SEARCH_PATHS at all.
For a walkthrough, make sure you follow the steps in:
https://docs.microsoft.com/en-us/dotnet/spark/tutorials/hdinsight-deployment
