Python: different results on every run (pyspark)

tquggr8v asked on 2023-04-28 in Python

I have a DataFrame that is the result of several joins, and I want to check whether it contains duplicates. But every time I investigate, the DataFrame looks different. In particular, the commands below return different IDs on each run, while the number of results stays the same.

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
import pyspark.sql.functions as f

# Create a Spark session
spark = SparkSession.builder.appName("CreateDataFrame").getOrCreate()

# User input for number of rows
n_a = 10
n_a_c = 5
n_a_c_d = 3
n_a_c_e = 4

# Define the schema for the DataFrame
schema_a = StructType([StructField("id1", StringType(), True)])
schema_a_b = StructType(
    [
        StructField("id1", StringType(), True),
        StructField("id2", StringType(), True),
        StructField("extra", StringType(), True),
    ]
)
schema_a_c = StructType(
    [
        StructField("id1", StringType(), True),
        StructField("id3", StringType(), True),
    ]
)
schema_a_c_d = StructType(
    [
        StructField("id3", StringType(), True),
        StructField("id4", StringType(), True),
    ]
)
schema_a_c_e = StructType(
    [
        StructField("id3", StringType(), True),
        StructField("id5", StringType(), True),
    ]
)

# Create a list of rows with increasing integer values for "id1" and a constant value of "1" for "id2"
rows_a = [(str(i),) for i in range(1, n_a + 1)]
rows_a_integers = [str(i) for i in range(1, n_a + 1)]
rows_a_b = [(str(i), str(1), "A") for i in range(1, n_a + 1)]

# Helper: pair every id in ids_part_1 with n_new_ids child ids of the form "<id>_<j>"
def get_2d_list(ids_part_1: list, n_new_ids: int):
    rows = [
        [
            (str(i), str(i) + "_" + str(j))
            for i in ids_part_1
            for j in range(1, n_new_ids + 1)
        ]
    ]
    return [item for sublist in rows for item in sublist]

rows_a_c = get_2d_list(ids_part_1=rows_a_integers, n_new_ids=n_a_c)
rows_a_c_d = get_2d_list(ids_part_1=[i[1] for i in rows_a_c], n_new_ids=n_a_c_d)
rows_a_c_e = get_2d_list(ids_part_1=[i[1] for i in rows_a_c], n_new_ids=n_a_c_e)

# Create the DataFrame
df_a = spark.createDataFrame(rows_a, schema_a)
df_a_b = spark.createDataFrame(rows_a_b, schema_a_b)
df_a_c = spark.createDataFrame(rows_a_c, schema_a_c)
df_a_c_d = spark.createDataFrame(rows_a_c_d, schema_a_c_d)
df_a_c_e = spark.createDataFrame(rows_a_c_e, schema_a_c_e)

# Join everything
df_join = (
    df_a.join(df_a_b, on="id1")
    .join(df_a_c, on="id1")
    .join(df_a_c_d, on="id3")
    .join(df_a_c_e, on="id3")
)

# Build a nested structure: collect id5 and id4 into arrays nested inside the
# id3 struct, then collect the id3 structs themselves into an array
df_nested = df_join.withColumn("id3", f.struct(f.col("id3")))

for i, index in enumerate([(5, 3), (4, 3), (3, None)]):
    remaining_columns = list(set(df_nested.columns).difference(set([f"id{index[0]}"])))
    df_nested = (
        df_nested.groupby(*remaining_columns)
        .agg(f.collect_list(f.col(f"id{index[0]}")).alias(f"id{index[0]}_tmp"))
        .drop(f"id{index[0]}")
        .withColumnRenamed(
            f"id{index[0]}_tmp",
            f"id{index[0]}",
        )
    )

    if index[1]:
        df_nested = df_nested.withColumn(
            f"id{index[1]}",
            f.struct(
                f.col(f"id{index[1]}.*"),
                f.col(f"id{index[0]}"),
            ).alias(f"id{index[1]}"),
        ).drop(f"id{index[0]}")

# Investigate for duplicates in id3 (should be unique)
df_test = df_nested.select("id2", "extra", f.explode(f.col("id3")["id3"]).alias("id3"))

for i in range(5):
    df_test.groupby("id3").count().filter(f.col("count") > 1).show()

In my case the last command prints one of two different results. Sometimes:

+---+-----+
|id3|count|
+---+-----+
|6_4|    2|
+---+-----+

and sometimes:

+---+-----+
|id3|count|
+---+-----+
|9_3|    2|
+---+-----+

In case it helps, I am using Databricks Runtime version 11.3 LTS (includes Apache Spark 3.3.0, Scala 2.12).
Also, based on how the code is constructed, there should not be any duplicates as far as I understand; the duplicates that are found look like a bug!?
Perhaps as potential evidence that the joins do not introduce any duplicates:

df_join.groupby("id3", "id4", "id5").count().filter(f.col("count") > 1).show()

returns no rows.
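
As a quick sanity check (my own sketch, not part of the original post), the parameters above imply that the join should contain n_a * n_a_c * n_a_c_d * n_a_c_e = 10 * 5 * 3 * 4 = 600 rows, all distinct on (id3, id4, id5), which can be confirmed directly:

# Sketch: cross-check the expected size and uniqueness of df_join.
# Each of the n_a values of id1 gets n_a_c id3 children; each id3 gets
# n_a_c_d id4 values and n_a_c_e id5 values, so the join should have
# n_a * n_a_c * n_a_c_d * n_a_c_e rows, all distinct on (id3, id4, id5).
expected_rows = n_a * n_a_c * n_a_c_d * n_a_c_e  # 10 * 5 * 3 * 4 = 600
print(df_join.count() == expected_rows)
print(df_join.select("id3", "id4", "id5").distinct().count() == expected_rows)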

bpzcxfmw 1#

"id3"的构造方式是随机的,所以每次执行都会得到不同的结果,你需要定义一个orderBy()来得到相同的结果,所以在该列上添加一个orderBy()后,如下所示:

df_nested = df_join.withColumn("id3", f.struct(f.col("id3"))).orderBy("id3")

Now you will always get the same result across multiple executions.
Keep in mind that Spark evaluation is lazy, so the DAG is rebuilt for every action, in this case show().
Therefore, if your code is not deterministic, it will produce different output each time.
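
If the goal is only to get stable output across the repeated show() calls within one session, another option (my own sketch, not part of this answer) is to cache the DataFrame so that every action reuses the same materialized data instead of re-running the non-deterministic plan:

# Sketch (assumption: stability within a single session is enough).
# cache() materializes one evaluation of the plan, so the repeated show()
# calls below all read the same data instead of recomputing it.
df_test_cached = df_test.cache()
df_test_cached.count()  # force materialization

for _ in range(5):
    df_test_cached.groupby("id3").count().filter(f.col("count") > 1).show()

This only pins one evaluation in memory; the orderBy() suggested above addresses the plan's non-determinism itself.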
