PySpark: too many WHEN statements?

e5nqia27 · asked 2022-11-01 in Spark

I ran into a strange phenomenon while using Spark v2.4.0:
I have a fairly simple dataframe function and apply it to a dataframe called "new":

from product_mapping import product_mapping

new2 = product_mapping(new)   
new2.show()

product_mapping (it lives in a separate Python script because of the length of the statement):

import pyspark.sql.functions as F

def product_mapping(df):

    df = df.withColumn('PRODUCT', F.when((df.var1 == "301") & (df.var2 == 0) & (df.var3 == 30), F.lit('101'))
                                   .when((df.var1 == "301") & (df.var2 == 1) & (df.var3 == 30), F.lit('102'))
                                   .when((df.var1 == "302") & (df.var2 == 0) & (df.var3 == 31), F.lit('103'))
                                   .when((df.var1 == "302") & (df.var2 == 1) & (df.var3 == 31), F.lit('104'))
                                   .when((df.var1 == "303") & (df.var2 == 0) & (df.var3 == 61), F.lit('105'))
                                   .when((df.var1 == "303") & (df.var2 == 0) & (df.var3 == 32), F.lit('106'))
                                   .when((df.var1 == "303") & (df.var2 == 1) & (df.var3 == 32), F.lit('107'))
                                   .when((df.var1 == "303") & (df.var2 == 1) & (df.var3 == 61), F.lit('108'))
                                   .when((df.var1 == "304") & (df.var2 == 0) & (df.var3 == 69), F.lit('109'))
                                   # ... many more .when(...) lines ...
                                   .when((df.var1 == "304") & (df.var2 == 1) & (df.var3 == 69), F.lit('205')))

    return df

In total I have >150 such lines, but the code does not seem to work; it throws this error:

py4j.protocol.Py4JJavaError: An error occurred while calling o1754.showString.
: java.lang.StackOverflowError
        at org.codehaus.janino.CodeContext.extract16BitValue(CodeContext.java:720)
        at org.codehaus.janino.CodeContext.flowAnalysis(CodeContext.java:561)

However, when I shorten the chain to five WHEN statements, the code runs fine... So is there a maximum number of WHEN statements, and how can I get around this?
Thanks

uqdfh47h #1

Here are my two cents (the idea here is to concatenate the three variables into one key and then map it against a dictionary of key-value pairs). There is no fixed maximum number of when() calls as such; the long chain compiles into one deeply nested expression, and it is the code generator recursing over that expression that overflows the stack, so the fix is to avoid the chain altogether.
1. Declare the function as follows:

import pyspark.sql.functions as fx
from pyspark.sql.types import IntegerType

def product_mapping(df):
    # Key-value pairs: 'var1-var2-var3' -> PRODUCT code
    conditions = {  '301-1-30' : '102',
                    '302-0-31' : '103',
                    '302-1-31' : '104',
                    '303-0-61' : '105',
                    '301-0-30' : '106',
                    '303-1-32' : '107',
                    '303-1-61' : '108',
                    '304-0-69' : '109',
                    '301-0-31' : '114'
                }

    # Build the lookup key, map it via replace(), then cast the result to int
    df = df.withColumn('PRODUCT', fx.concat(df.var1, fx.lit('-'), df.var2, fx.lit('-'), df.var3))
    df1 = df.replace(to_replace=conditions, subset=['PRODUCT'])\
            .withColumn('PRODUCT', fx.col('PRODUCT').cast(IntegerType()))
    return df1

2. Create a dataframe, say one with the three required columns - var1, var2 and var3:

new = spark.createDataFrame([(301,0,30), (301,1,30), (301,0,31)], schema=['var1','var2','var3'])

3. Call the function and show the returned dataframe:

new2 = product_mapping(new)
new2.show()

For the sample dataframe above, new2.show() should print something like:
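+----+----+----+-------+
|var1|var2|var3|PRODUCT|
+----+----+----+-------+
| 301|   0|  30|    106|
| 301|   1|  30|    102|
| 301|   0|  31|    114|
+----+----+----+-------+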

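If the mapping keeps growing, another way to avoid one giant generated expression is to keep the key-value pairs in a small lookup dataframe and broadcast-join it instead of calling replace(). This is a sketch, not from the original answer; it assumes the same conditions dictionary, an active spark session, and a hypothetical helper name product_mapping_join:

import pyspark.sql.functions as fx

def product_mapping_join(df, conditions):
    # Two-column lookup dataframe built from the key-value pairs
    # (assumes a SparkSession named `spark` is in scope)
    lookup = spark.createDataFrame(list(conditions.items()), schema=['key', 'PRODUCT'])
    # Same 'var1-var2-var3' key as above, cast to string explicitly
    key = fx.concat_ws('-', *[fx.col(c).cast('string') for c in ['var1', 'var2', 'var3']])
    # Broadcast the small lookup table so the join stays map-side
    return (df.withColumn('key', key)
              .join(fx.broadcast(lookup), on='key', how='left')
              .drop('key'))

The broadcast join sidesteps code generation over a huge expression entirely; note that PRODUCT comes back as a string here, so cast it as in the answer above if an integer is needed.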