Scala: DataFrame-ified zipWithIndex

jtoj6r0c posted on 2022-11-09 in Scala

I'm struggling with the age-old problem of appending a sequence number to a dataset. I'm working with DataFrames, and there appears to be no DataFrame equivalent of RDD.zipWithIndex. On the other hand, the following works more or less the way I want it to:

val origDF = sqlContext.load(...)    

val seqDF= sqlContext.createDataFrame(
    origDF.rdd.zipWithIndex.map(ln => Row.fromSeq(Seq(ln._2) ++ ln._1.toSeq)),
    StructType(Array(StructField("seq", LongType, false)) ++ origDF.schema.fields)
)

In my actual application, origDF won't be loaded directly from a file -- it will be created by joining 2-3 other DataFrames together, and it will contain upwards of 100 million rows.
Is there a better way to do this? What can I do to optimize it?


vsnjm48y1#

The following is posted on behalf of David Griffin (edited faithfully).

Here's the all-singing, all-dancing dfZipWithIndex method. You can set the starting offset (which defaults to 1), the index column name (defaults to "id"), and place the column in front of or after the existing columns:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.types.{LongType, StructField, StructType}
import org.apache.spark.sql.Row

def dfZipWithIndex(
  df: DataFrame,
  offset: Int = 1,
  colName: String = "id",
  inFront: Boolean = true
) : DataFrame = {
  df.sqlContext.createDataFrame(
    df.rdd.zipWithIndex.map(ln =>
      Row.fromSeq(
        (if (inFront) Seq(ln._2 + offset) else Seq())
          ++ ln._1.toSeq ++
        (if (inFront) Seq() else Seq(ln._2 + offset))
      )
    ),
    StructType(
      (if (inFront) Array(StructField(colName,LongType,false)) else Array[StructField]()) 
        ++ df.schema.fields ++ 
      (if (inFront) Array[StructField]() else Array(StructField(colName,LongType,false)))
    )
  ) 
}
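
For illustration, a minimal usage sketch (it assumes origDF is the joined DataFrame from the question):

// prepend an "id" column starting at 1 (the defaults)
val withId = dfZipWithIndex(origDF)

// or append a "seq" column starting at 0 instead
val withSeq = dfZipWithIndex(origDF, offset = 0, colName = "seq", inFront = false)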

ia2d9nvy2#

Since Spark 1.6 there is a function called monotonically_increasing_id().
It generates a new column with a unique 64-bit monotonically increasing index for every row.
But it isn't consecutive: each partition starts a new range, so we must calculate each partition's offset before using it.
In an attempt to provide an "RDD-free" solution, I end up with some collect(), but it only collects the offsets, one value per partition, so it will not cause OOM.

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.LongType

def zipWithIndex(df: DataFrame, offset: Long = 1, indexName: String = "index") = {
    val dfWithPartitionId = df.withColumn("partition_id", spark_partition_id()).withColumn("inc_id", monotonically_increasing_id())

    val partitionOffsets = dfWithPartitionId
        .groupBy("partition_id")
        .agg(count(lit(1)) as "cnt", first("inc_id") as "inc_id")
        .orderBy("partition_id")
        .select(sum("cnt").over(Window.orderBy("partition_id")) - col("cnt") - col("inc_id") + lit(offset) as "cnt" )
        .collect()
        .map(_.getLong(0))
        .toArray

     dfWithPartitionId
        .withColumn("partition_offset", udf((partitionId: Int) => partitionOffsets(partitionId), LongType)(col("partition_id")))
        .withColumn(indexName, col("partition_offset") + col("inc_id"))
        .drop("partition_id", "partition_offset", "inc_id")
}
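
To make the window expression concrete: each partition's offset is (rows in all earlier partitions) - (the partition's first inc_id) + offset, so that partition_offset + inc_id becomes a global, consecutive index. Below is a plain-Scala sketch of the same arithmetic, using made-up partition sizes and the fact that monotonically_increasing_id currently starts each partition at partitionId << 33:

// Hypothetical example: three partitions with 3, 4 and 2 rows respectively.
// (partition_id, cnt, first inc_id in that partition)
val parts = Seq((0, 3L, 0L << 33), (1, 4L, 1L << 33), (2, 2L, 2L << 33))
val offset = 1L

// running sum of cnt, as in sum("cnt").over(Window.orderBy("partition_id"))
val runningSums = parts.map(_._2).scanLeft(0L)(_ + _).tail

// partition_offset = runningSum - cnt - first inc_id + offset
val partitionOffsets = parts.zip(runningSums).map { case ((_, cnt, firstIncId), runningSum) =>
  runningSum - cnt - firstIncId + offset
}
// adding each row's inc_id to its partition's offset yields the consecutive indices 1..9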

This solution doesn't repack the original rows and doesn't repartition the original huge dataframe, so it is quite fast in the real world: it reads 200 GB of CSV data (43 million rows with 150 columns), indexes it, and packs it to parquet in 2 minutes on 240 cores.
After testing my solution, I ran Kirk Broadhurst's solution and it was 20 seconds slower.
You may or may not want to use dfWithPartitionId.cache(), depending on the task.


bakd9h0s3#

Starting in Spark 1.5, Window expressions were added to Spark. Instead of having to convert the DataFrame to an RDD, you can now use row_number over a Window. Note that I found the performance of the dfZipWithIndex algorithm above to be significantly faster than the algorithm below. But I'm posting it because:
1. Someone else is going to be tempted to try it
2. Maybe someone can optimize the expression below
At any rate, here's what works for me:

import org.apache.spark.sql.expressions._
import org.apache.spark.sql.functions._

df.withColumn("row_num", row_number.over(Window.partitionBy(lit(1)).orderBy(lit(1))))

Note that I use lit(1) for both the partitioning and the ordering -- this makes everything end up in the same partition, and it seems to preserve the DataFrame's original order, but I suspect it is also what slows it down.
I tested it on a 4-column DataFrame with 7,000,000 rows, and the speed difference between it and the dfZipWithIndex above is significant (like I said, the RDD functions are much, much faster).


b4lqfgs44#

PySpark version:

from pyspark.sql.types import LongType, StructField, StructType

def dfZipWithIndex (df, offset=1, colName="rowId"):
    '''
        Enumerates dataframe rows in native order, like rdd.zipWithIndex(), but on a dataframe 
        and preserves the schema

        :param df: source dataframe
        :param offset: adjustment to zipWithIndex()'s index
        :param colName: name of the index column
    '''

    new_schema = StructType(
                    [StructField(colName,LongType(),True)]        # new added field in front
                    + df.schema.fields                            # previous schema
                )

    zipped_rdd = df.rdd.zipWithIndex()

    new_rdd = zipped_rdd.map(lambda (row,rowId): ([rowId +offset] + list(row)))

    return spark.createDataFrame(new_rdd, new_schema)

I also created a JIRA to add this functionality natively in Spark: https://issues.apache.org/jira/browse/SPARK-23074


cetgtptt5#

@Evgeny, your solution is interesting. Note that there is a bug when you have empty partitions (the array is missing those partition indexes, at least this was the case for me with Spark 1.6), so I converted the array into a Map(partitionId -> offset).
Additionally, I took out the source of monotonically_increasing_id so that "inc_id" starts from 0 in each partition.
Here is the updated version:

import org.apache.spark.sql.catalyst.expressions.LeafExpression
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.types.LongType
import org.apache.spark.sql.catalyst.expressions.Nondeterministic
import org.apache.spark.sql.catalyst.expressions.codegen.GeneratedExpressionCode
import org.apache.spark.sql.catalyst.expressions.codegen.CodeGenContext
import org.apache.spark.sql.types.DataType
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._
import org.apache.spark.sql.Column
import org.apache.spark.sql.expressions.Window

case class PartitionMonotonicallyIncreasingID() extends LeafExpression with Nondeterministic {

  /**
   * From org.apache.spark.sql.catalyst.expressions.MonotonicallyIncreasingID
   *
   * Record ID within each partition. By being transient, count's value is reset to 0 every time
   * we serialize and deserialize and initialize it.
   */
  @transient private[this] var count: Long = _

  override protected def initInternal(): Unit = {
    count = 1L // notice this starts at 1, not 0 as in org.apache.spark.sql.catalyst.expressions.MonotonicallyIncreasingID
  }

  override def nullable: Boolean = false

  override def dataType: DataType = LongType

  override protected def evalInternal(input: InternalRow): Long = {
    val currentCount = count
    count += 1
    currentCount
  }

  override def genCode(ctx: CodeGenContext, ev: GeneratedExpressionCode): String = {
    val countTerm = ctx.freshName("count")
    ctx.addMutableState(ctx.JAVA_LONG, countTerm, s"$countTerm = 1L;")
    ev.isNull = "false"
    s"""
      final ${ctx.javaType(dataType)} ${ev.value} = $countTerm;
      $countTerm++;
    """
  }
}

object DataframeUtils {
  def zipWithIndex(df: DataFrame, offset: Long = 0, indexName: String = "index") = {
    // from https://stackoverflow.com/questions/30304810/dataframe-ified-zipwithindex)
    val dfWithPartitionId = df.withColumn("partition_id", spark_partition_id()).withColumn("inc_id", new Column(PartitionMonotonicallyIncreasingID()))

    // collect each partition size, create the offset pages
    val partitionOffsets: Map[Int, Long] = dfWithPartitionId
      .groupBy("partition_id")
      .agg(max("inc_id") as "cnt") // in each partition, count(inc_id) is equal to max(inc_id) (I don't know which one would be faster)
      .select(col("partition_id"), sum("cnt").over(Window.orderBy("partition_id")) - col("cnt") + lit(offset) as "cnt")
      .collect()
      .map(r => (r.getInt(0) -> r.getLong(1)))
      .toMap

    def partition_offset(partitionId: Int): Long = partitionOffsets(partitionId)
    val partition_offset_udf = udf(partition_offset _)
    // and re-number the index
    dfWithPartitionId
      .withColumn("partition_offset", partition_offset_udf(col("partition_id")))
      .withColumn(indexName, col("partition_offset") + col("inc_id"))
      .drop("partition_id")
      .drop("partition_offset")
      .drop("inc_id")
  }
}
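
A brief usage sketch (the input DataFrame df is assumed to already exist):

// add a consecutive "index" column starting at 0 (the defaults)
val indexedDF = DataframeUtils.zipWithIndex(df)

// or start at 1 with a custom column name
val indexedFrom1 = DataframeUtils.zipWithIndex(df, offset = 1, indexName = "row_id")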

lhcgjxsq6#

I have modified @Tagar's version so that it runs on Python 3.7 and wanted to share it:

from pyspark.sql.types import LongType, StructField, StructType

def dfZipWithIndex (df, offset=1, colName="rowId"):
    '''
        Enumerates dataframe rows in native order, like rdd.zipWithIndex(), but on a dataframe
        and preserves the schema

        :param df: source dataframe
        :param offset: adjustment to zipWithIndex()'s index
        :param colName: name of the index column
    '''

    new_schema = StructType(
                    [StructField(colName,LongType(),True)]        # new added field in front
                    + df.schema.fields                            # previous schema
                )

    zipped_rdd = df.rdd.zipWithIndex()

    # Python 3+: the (row, rowId) tuple arrives as a single argument, so read its
    # elements with args[0] / args[1] instead of tuple-parameter unpacking
    new_rdd = zipped_rdd.map(lambda args: ([args[1] + offset] + list(args[0])))
    return spark.createDataFrame(new_rdd, new_schema)

myzjeezk7#

Spark Java API version:
I implemented @Evgeny's solution for performing zipWithIndex on DataFrames in Java and wanted to share the code.
It also contains the improvements offered by @fylb in his solution. I can confirm for Spark 2.4 that the execution fails when the entries returned by spark_partition_id() do not start with 0 or do not increase sequentially. As this function is documented to be non-deterministic, it is very likely that one of the above cases will occur. One example is triggered by increasing the partition count.
The Java implementation follows:

// imports needed (at the top of the enclosing class's file):
import static org.apache.spark.sql.functions.*;

import java.util.HashMap;
import java.util.Map;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.expressions.UserDefinedFunction;
import org.apache.spark.sql.expressions.Window;
import org.apache.spark.sql.types.DataTypes;

public static Dataset<Row> zipWithIndex(Dataset<Row> df, Long offset, String indexName) {
        Dataset<Row> dfWithPartitionId = df
                .withColumn("partition_id", spark_partition_id())
                .withColumn("inc_id", monotonically_increasing_id());

        Object partitionOffsetsObject = dfWithPartitionId
                .groupBy("partition_id")
                .agg(count(lit(1)).alias("cnt"), first("inc_id").alias("inc_id"))
                .orderBy("partition_id")
                .select(col("partition_id"), sum("cnt").over(Window.orderBy("partition_id")).minus(col("cnt")).minus(col("inc_id")).plus(lit(offset)).alias("cnt"))
                .collect();
        Row[] partitionOffsetsArray = ((Row[]) partitionOffsetsObject);
        Map<Integer, Long> partitionOffsets = new HashMap<>();
        for (int i = 0; i < partitionOffsetsArray.length; i++) {
            partitionOffsets.put(partitionOffsetsArray[i].getInt(0), partitionOffsetsArray[i].getLong(1));
        }

        UserDefinedFunction getPartitionOffset = udf(
                (partitionId) -> partitionOffsets.get((Integer) partitionId), DataTypes.LongType
        );

        return dfWithPartitionId
                .withColumn("partition_offset", getPartitionOffset.apply(col("partition_id")))
                .withColumn(indexName, col("partition_offset").plus(col("inc_id")))
                .drop("partition_id", "partition_offset", "inc_id");
    }

7nbnzgx98#

Here is my proposal, the advantages of which are:

  • It does not involve any serialization/deserialization[1] of the DataFrame's InternalRows.
  • Its logic is minimalist, relying only on RDD.zipWithIndex.

Its major downsides are:

  • It is impossible to use it directly from non-JVM APIs (pySpark, SparkR).
  • It has to live under package org.apache.spark.sql.

The imports:

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.execution.LogicalRDD
import org.apache.spark.sql.functions.lit
/**
  * Optimized Spark SQL equivalent of RDD.zipWithIndex.
  *
  * @param df
  * @param indexColName
  * @return `df` with a column named `indexColName` of consecutive unique ids.
  */
def zipWithIndex(df: DataFrame, indexColName: String = "index"): DataFrame = {
  import df.sparkSession.implicits._

  val dfWithIndexCol: DataFrame = df
    .drop(indexColName)
    .select(lit(0L).as(indexColName), $"*")

  val internalRows: RDD[InternalRow] = dfWithIndexCol
    .queryExecution
    .toRdd
    .zipWithIndex()
    .map {
      case (internalRow: InternalRow, index: Long) =>
        internalRow.setLong(0, index)
        internalRow
    }

  Dataset.ofRows(
    df.sparkSession,
    LogicalRDD(dfWithIndexCol.schema.toAttributes, internalRows)(df.sparkSession)
  )
}

[1]: (from/to InternalRow's underlying byte array <--> GenericRow's underlying collection of JVM objects, Array[Any]).
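
Because the function relies on internals such as Dataset.ofRows and LogicalRDD that are not public outside org.apache.spark.sql, it has to be compiled into that package in your own project. A minimal packaging sketch (the file path and object name are illustrative assumptions):

// e.g. src/main/scala/org/apache/spark/sql/ZipWithIndexOps.scala
package org.apache.spark.sql

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.execution.LogicalRDD
import org.apache.spark.sql.functions.lit

object ZipWithIndexOps {
  // place the zipWithIndex function from above here
}

// elsewhere in the application:
// val indexed = org.apache.spark.sql.ZipWithIndexOps.zipWithIndex(df, "index")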


eeq64g8w9#

I have ported @canberker's suggestion to Python 3 (PySpark).
Also, instead of using a UDF with a hash map, I used a broadcast join, which slightly improved performance during testing.
Note: this solution still suffers from gaps caused by empty partitions.

from pyspark.sql import Window
from pyspark.sql import functions as F

def zip_with_index(df, offset: int = 1, index_name: str = "id"):
    df_with_partition_id = (
        df
        .withColumn("partition_id", F.spark_partition_id())
        .withColumn("inc_id", F.monotonically_increasing_id())
    )
    partition_offsets_df = (
        df_with_partition_id
        .groupBy("partition_id")
        .agg(F.count(F.lit(1)).alias("cnt"), F.first("inc_id").alias("inc_id"))
        .orderBy("partition_id")
        .select(
            F.col("partition_id"),
            (
                F.sum("cnt").over(Window.orderBy("partition_id"))
                - F.col("cnt") - F.col("inc_id") + F.lit(offset)
            ).alias("partition_offset")
        )
    )

    res = (
        df_with_partition_id
        .join(partition_offsets_df.hint("broadcast"), on="partition_id")
        .withColumn(index_name, F.col("partition_offset") + F.col("inc_id"))
        .drop("partition_id", "partition_offset", "inc_id")
    )
    return res
