Joining a Spark RDD with a Cassandra table

Asked by nsc4cvqm on 2021-06-10 in Cassandra

I want to join a Spark RDD with a Cassandra table (as a lookup), but there are a few things I don't understand:

Will Spark pull all records between range_start and range_end from the Cassandra table and then join them with the RDD in Spark memory, or will it push the values from the RDD down to Cassandra and perform the join there?
Where will limit(1) be applied (in Cassandra or in Spark)?
Will Spark always pull the same number of records from Cassandra regardless of the limit applied (1 or 1000)?

Here is the code:

//creating dataframe with fields required for join with cassandra table
//and converting same to rdd
val df_for_join = src_df.select(src_df("col1"),src_df("col2"))
val rdd_for_join = df_for_join.rdd

val result_rdd = rdd_for_join
.joinWithCassandraTable("my_keyspace", "my_table"
,selectedColumns = SomeColumns("col1","col2","col3","col4")
,joinColumns = SomeColumns("col1", "col2")
).where("created_at > 'range_start' and created_at <= 'range_end'")
.clusteringOrder(Ascending).limit(1)

Cassandra table details:

PRIMARY KEY ((col1, col2), created_at) WITH CLUSTERING ORDER BY (created_at ASC)

Answer 1 (ds97pgxw):

joinWithCassandraTable extracts the partition/primary key values from the RDD you pass in and turns them into individual requests against the matching partitions in Cassandra. On top of that, SCC may apply additional filtering, such as the where condition. If I remember correctly (I may be wrong), the limit is not pushed down to Cassandra as a global limit; it may still fetch up to limit rows per partition, as sketched below.
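To make that concrete, here is a rough sketch of the query shape and of the two limit operators. The keyspace, table, column names and the range_start/range_end placeholders come from the question; perPartitionLimit is only an assumption that your SCC version and Cassandra 3.6+ support the PER PARTITION LIMIT clause.

import com.datastax.spark.connector._

// For every (col1, col2) key taken from the RDD, SCC issues a single-partition
// SELECT against my_keyspace.my_table, roughly of the shape:
//   SELECT col1, col2, col3, col4 FROM my_keyspace.my_table
//   WHERE col1 = ? AND col2 = ?
//     AND created_at > 'range_start' AND created_at <= 'range_end'
//   LIMIT 1
// Because the LIMIT sits inside each per-key query, limit(1) caps the rows
// returned per request (per Cassandra partition), not for the whole RDD.
val limited = rdd_for_join
  .joinWithCassandraTable("my_keyspace", "my_table",
    selectedColumns = SomeColumns("col1", "col2", "col3", "col4"),
    joinColumns = SomeColumns("col1", "col2"))
  .where("created_at > 'range_start' and created_at <= 'range_end'")
  .limit(1)

// If the intent is "at most one row per Cassandra partition", the
// PER PARTITION LIMIT clause expresses that directly:
val onePerPartition = rdd_for_join
  .joinWithCassandraTable("my_keyspace", "my_table",
    selectedColumns = SomeColumns("col1", "col2", "col3", "col4"),
    joinColumns = SomeColumns("col1", "col2"))
  .where("created_at > 'range_start' and created_at <= 'range_end'")
  .perPartitionLimit(1)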
You can check what actually happens by running result_rdd.toDebugString. For my code:

val df_for_join = Seq((2, 5),(5, 2)).toDF("col1", "col2")
val rdd_for_join = df_for_join.rdd

val result_rdd = rdd_for_join
.joinWithCassandraTable("test", "jt"
,selectedColumns = SomeColumns("col1","col2", "v")
,joinColumns = SomeColumns("col1", "col2")
).where("created_at >'2020-03-13T00:00:00Z' and created_at<= '2020-03-14T00:00:00Z'")
.limit(1)

This gives the following:

scala> result_rdd.toDebugString
res7: String =
(2) CassandraJoinRDD[14] at RDD at CassandraRDD.scala:19 []
 |  MapPartitionsRDD[2] at rdd at <console>:45 []
 |  MapPartitionsRDD[1] at rdd at <console>:45 []
 |  ParallelCollectionRDD[0] at rdd at <console>:45 []

If you perform a "normal" join instead, you get this:

scala> val rdd1 = sc.parallelize(Seq((2, 5),(5, 2)))
rdd1: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[21] at parallelize at <console>:44
scala> val ct = sc.cassandraTable[(Int, Int)]("test", "jt").select("col1", "col2")
ct: com.datastax.spark.connector.rdd.CassandraTableScanRDD[(Int, Int)] = CassandraTableScanRDD[31] at RDD at CassandraRDD.scala:19

scala> rdd1.join(ct)
res15: org.apache.spark.rdd.RDD[(Int, (Int, Int))] = MapPartitionsRDD[34] at join at <console>:49
scala> rdd1.join(ct).toDebugString
res16: String =
(6) MapPartitionsRDD[37] at join at <console>:49 []
 |  MapPartitionsRDD[36] at join at <console>:49 []
 |  CoGroupedRDD[35] at join at <console>:49 []
 +-(3) ParallelCollectionRDD[21] at parallelize at <console>:44 []
 +-(6) CassandraTableScanRDD[31] at RDD at CassandraRDD.scala:19 []

Notice the difference in the lineages: joinWithCassandraTable produces a single CassandraJoinRDD with no shuffle, while the normal join does a full table scan (CassandraTableScanRDD) followed by a shuffle (CoGroupedRDD). See the corresponding section of the SCC documentation for more information.
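As a quick empirical check on the limit question (a sketch, assuming the joining RDD contains more than one distinct (col1, col2) key), simply count the joined rows:

val joinedCount = result_rdd.count()
// joinedCount can still be greater than 1 even with .limit(1), because the
// limit is applied inside each per-key query rather than to the RDD as a whole.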
