I started using @pandas_udf with PySpark, and while testing the example from the documentation I ran into an error I cannot resolve.
The code I am running is:
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf, PandasUDFType

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [(1, 1.0), (1, 2.0), (2, 3.0), (2, 5.0), (2, 10.0)],
    ("id", "v"))

@pandas_udf("id long, v double", PandasUDFType.GROUPED_MAP)
def subtract_mean(pdf):
    # pdf is a pandas.DataFrame
    v = pdf.v
    return pdf.assign(v=v - v.mean())

df.groupby("id").apply(subtract_mean).show()
The error I get is:
Py4JJavaError: An error occurred while calling o53.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 44 in stage 7.0 failed 1 times, most recent failure: Lost task 44.0 in stage 7.0 (TID 132, localhost, executor driver): java.lang.IllegalArgumentException: capacity < 0: (-1 < 0)
I am using:
pyspark 2.4.5
py4j 0.10.7
pyarrow 0.15.1
1 Answer
This is a known incompatibility between PyArrow versions >= 0.15 and Spark 2.4.x. See this ticket for the fix: https://issues.apache.org/jira/browse/spark-29367. As described there, you can either downgrade pyarrow below 0.15, or set the environment variable ARROW_PRE_0_15_IPC_FORMAT=1 so that PyArrow falls back to the legacy IPC format that Spark 2.4.x expects.
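A minimal sketch of the environment-variable workaround from SPARK-29367, assuming local mode (on a cluster you would instead put the variable in conf/spark-env.sh so it also reaches the executors):

```python
import os

# SPARK-29367 workaround: make PyArrow >= 0.15 emit the legacy
# Arrow IPC format that Spark 2.4.x expects. Must be set before
# the SparkSession (and its executors) start.
os.environ["ARROW_PRE_0_15_IPC_FORMAT"] = "1"

# Then build the session and run the pandas_udf example as before:
# from pyspark.sql import SparkSession
# spark = SparkSession.builder.getOrCreate()
```

Setting the variable after the session has already started has no effect, so restart the Python process (or kernel) first.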