Error when running PySpark collect() on Databricks (Azure)

Asked by qyswt5oh on 2021-05-29, tagged Spark

I have the following PySpark (Databricks on Azure) code:


# imports used below (spark is the session provided by the Databricks notebook)
import pycountry
from pyspark.sql.functions import udf, col, to_json, struct

# load exchange data
df_ex = spark.read.format("csv").load("xxx.csv", inferSchema=True, header=True)

# udf
get_country = udf(lambda x: pycountry.countries.get(alpha_2=x).name)

# clean exchange data
clean_df_ex = df_ex.select(["EQUITY EXCH CODE", "EQUITY EXCH NAME", "Composite Code", "ISO COUNTRY"])\
  .withColumn("COUNTRY", get_country(col("ISO COUNTRY")))

# convert 2 columns to a new json column
df_list_of_dict = clean_df_ex.withColumn("dict_value", to_json(struct(col("EQUITY EXCH CODE"), col("COUNTRY"))))

# final df, list of dicts
df_list = df_list_of_dict.select("dict_value")

So far everything works perfectly and I can call show() or take().
For example, if I run df_list.take(2), I get exactly the values I expect.
My main goal is to iterate over the new dataframe and build a Python list. Doing that with take() causes no problems:

mylist = [ i.dict_value for i in df_list.take(5) ]
mylist

The result is:
['{"equity exch code":"aj","country":"south africa"}', '{"equity exch code":"pf","country":"australia"}', '{"equity exch code":"up","country":"united states"}', '{"equity exch code":"aq","country":"australia"}', '{"equity exch code":"qe","country":"france"}']
However, if I try collect() instead of take(), I get the error below. Code:

mylist = [ i.dict_value for i in df_list.collect() ]
mylist

Error:

Py4JJavaError                             Traceback (most recent call last)
<command-3895085882512910> in <module>
      1 # this cod is the correct way to do it but it won't work
----> 2 for i in df_list.collect():
      3   print(i.dict_value)
      4 

/databricks/spark/python/pyspark/sql/dataframe.py in collect(self)
    552         # Default path used in OSS Spark / for non-DF-ACL clusters:
    553         with SCCallSiteSync(self._sc) as css:
--> 554             sock_info = self._jdf.collectToPython()
    555         return list(_load_from_socket(sock_info, BatchedSerializer(PickleSerializer())))
    556 

/databricks/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

/databricks/spark/python/pyspark/sql/utils.py in deco(*a,**kw)
     61     def deco(*a,**kw):
     62         try:
---> 63             return f(*a,**kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/databricks/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling o14499.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 157.0 failed 4 times, most recent failure: Lost task 0.3 in stage 157.0 (TID 330, 10.139.64.5, executor 1): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/databricks/spark/python/pyspark/worker.py", line 480, in main
    process()
  File "/databricks/spark/python/pyspark/worker.py", line 472, in process
    serializer.dump_stream(out_iter, outfile)
  File "/databricks/spark/python/pyspark/serializers.py", line 460, in dump_stream
    self.serializer.dump_stream(self._batched(iterator), stream)
  File "/databricks/spark/python/pyspark/serializers.py", line 150, in dump_stream
    for obj in iterator:
  File "/databricks/spark/python/pyspark/serializers.py", line 449, in _batched
    for item in iterator:
  File "<string>", line 1, in <lambda>
  File "/databricks/spark/python/pyspark/worker.py", line 87, in <lambda>
    return lambda *a: f(*a)
  File "/databricks/spark/python/pyspark/util.py", line 99, in wrapper
    return f(*args,**kwargs)
  File "<command-2765369177614916>", line 1, in <lambda>
AttributeError: 'NoneType' object has no attribute 'name'

    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:540)
    at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:81)
    at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:64)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:494)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.encodeUnsafeRows(UnsafeRowBatchUtils.scala:62)
    at org.apache.spark.sql.execution.collect.Collector$$anonfun$1.apply(Collector.scala:151)
    at org.apache.spark.sql.execution.collect.Collector$$anonfun$1.apply(Collector.scala:150)
    at org.apache.spark.SparkContext$$anonfun$41.apply(SparkContext.scala:2377)
    at org.apache.spark.SparkContext$$anonfun$41.apply(SparkContext.scala:2377)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.doRunTask(Task.scala:140)
    at org.apache.spark.scheduler.Task.run(Task.scala:113)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$13.apply(Executor.scala:537)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1541)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:543)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:2362)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:2350)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:2349)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2349)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:1102)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:1102)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1102)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2582)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2529)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2517)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:897)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2280)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2378)
    at org.apache.spark.sql.execution.collect.Collector.runSparkJobs(Collector.scala:245)
    at org.apache.spark.sql.execution.collect.Collector.collect(Collector.scala:280)
    at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:80)
    at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:86)
    at org.apache.spark.sql.execution.ResultCacheManager.getOrComputeResult(ResultCacheManager.scala:508)
    at org.apache.spark.sql.execution.ResultCacheManager.getOrComputeResult(ResultCacheManager.scala:480)
    at org.apache.spark.sql.execution.SparkPlan.executeCollectResult(SparkPlan.scala:328)
    at org.apache.spark.sql.Dataset$$anonfun$50.apply(Dataset.scala:3367)
    at org.apache.spark.sql.Dataset$$anonfun$50.apply(Dataset.scala:3366)
    at org.apache.spark.sql.Dataset$$anonfun$54.apply(Dataset.scala:3501)
    at org.apache.spark.sql.Dataset$$anonfun$54.apply(Dataset.scala:3496)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withCustomExecutionEnv$1$$anonfun$apply$1.apply(SQLExecution.scala:112)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:217)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withCustomExecutionEnv$1.apply(SQLExecution.scala:98)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:835)
    at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:74)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:169)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withAction(Dataset.scala:3496)
    at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3366)
    at sun.reflect.GeneratedMethodAccessor521.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
    at py4j.Gateway.invoke(Gateway.java:295)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:251)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/databricks/spark/python/pyspark/worker.py", line 480, in main
    process()
  File "/databricks/spark/python/pyspark/worker.py", line 472, in process
    serializer.dump_stream(out_iter, outfile)
  File "/databricks/spark/python/pyspark/serializers.py", line 460, in dump_stream
    self.serializer.dump_stream(self._batched(iterator), stream)
  File "/databricks/spark/python/pyspark/serializers.py", line 150, in dump_stream
    for obj in iterator:
  File "/databricks/spark/python/pyspark/serializers.py", line 449, in _batched
    for item in iterator:
  File "<string>", line 1, in <lambda>
  File "/databricks/spark/python/pyspark/worker.py", line 87, in <lambda>
    return lambda *a: f(*a)
  File "/databricks/spark/python/pyspark/util.py", line 99, in wrapper
    return f(*args,**kwargs)
  File "<command-2765369177614916>", line 1, in <lambda>
AttributeError: 'NoneType' object has no attribute 'name'

    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:540)
    at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:81)
    at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:64)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:494)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.encodeUnsafeRows(UnsafeRowBatchUtils.scala:62)
    at org.apache.spark.sql.execution.collect.Collector$$anonfun$1.apply(Collector.scala:151)
    at org.apache.spark.sql.execution.collect.Collector$$anonfun$1.apply(Collector.scala:150)
    at org.apache.spark.SparkContext$$anonfun$41.apply(SparkContext.scala:2377)
    at org.apache.spark.SparkContext$$anonfun$41.apply(SparkContext.scala:2377)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.doRunTask(Task.scala:140)
    at org.apache.spark.scheduler.Task.run(Task.scala:113)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$13.apply(Executor.scala:537)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1541)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:543)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    ... 1 more

Update: I can confirm that the problem occurs when collect() has to evaluate the udf, i.e. when the column produced by the udf (COUNTRY) is included in the SQL projection. The error is:

AttributeError: 'NoneType' object has no attribute 'name'

Basically it complains that it cannot find the attribute 'name' on an object coming from the third-party Python module I am using (pycountry). I can confirm that the attribute (name) does exist, for example:

pycountry.countries.get(alpha_2="DE").name  # outputs "Germany"
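
A plausible explanation (an assumption on my side, not verified against the actual data) is that some rows contain an ISO code that pycountry cannot resolve (null, blank, or non-standard), in which case pycountry.countries.get(alpha_2=...) returns None and .name then raises exactly this AttributeError. That would also explain why take() can succeed while collect() fails: take(n) only evaluates enough rows to return n results, whereas collect() runs the udf over every row. A minimal sketch of that failure mode:

import pycountry

print(pycountry.countries.get(alpha_2="DE").name)  # Germany

# "ZZ" is just a hypothetical unresolvable code for illustration;
# here get() returns None instead of a country object
bad = pycountry.countries.get(alpha_2="ZZ")
print(bad)   # None
# bad.name   # would raise: AttributeError: 'NoneType' object has no attribute 'name'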

As a workaround, I built a dictionary and then used it in my udf, which now seems to work.

country_dict = { i.alpha_2: i.name for i in list(pycountry.countries)}

Then used it as:

from pyspark.sql.types import StringType

udf_get_country = udf(lambda x: country_dict.get(x, "No Country"), StringType())
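
For reference, this is roughly how the workaround plugs back into the pipeline above (a sketch using the same column names as the original snippet; "No Country" is simply the fallback I chose for unresolvable codes):

# rebuild the COUNTRY column with the dictionary-backed udf
clean_df_ex = df_ex.select(["EQUITY EXCH CODE", "EQUITY EXCH NAME", "Composite Code", "ISO COUNTRY"])\
  .withColumn("COUNTRY", udf_get_country(col("ISO COUNTRY")))

df_list = clean_df_ex\
  .withColumn("dict_value", to_json(struct(col("EQUITY EXCH CODE"), col("COUNTRY"))))\
  .select("dict_value")

# collect() no longer fails, because unknown codes map to "No Country" instead of None
mylist = [i.dict_value for i in df_list.collect()]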

I would still very much like to know what is actually going on.
