Spark Cassandra connector with Java for read

oxf4rvwz, posted 2021-05-27 in Spark

Requirement: I have data saved in Cassandra, and every hour I need to compute some scores based on updates to the records. When I call show() on the Dataset I can see the data correctly.
Below is the code that reads the data:

Dataset<DealFeedSchema> dealFeedSchemaDataset = session.read()
     .format(Constants.SPARK_CASSANDRA_SOURCE_PATH)
     .option(Constants.KEY_SPACE, Constants.CASSANDRA_KEY_SPACE)
     .option(Constants.TABLE, Constants.CASSANDRA_DEAL_TABLE_SPACE)
     .option(Constants.DATE_FORMAT, "yyyy-MM-dd HH:mm:ss")
     .schema(DealFeedSchema.getDealFeedSchema())
     .load()
     .as(Encoders.bean(DealFeedSchema.class));
dealFeedSchemaDataset.show();
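
(The Constants referenced above are not shown in the question. As a rough sketch, with the Spark Cassandra Connector data source they would typically resolve to the literals below; the project-specific values are assumptions and are omitted.)

// Hypothetical values behind the constants used in the snippet above; with the
// Spark Cassandra Connector the data-source name and the keyspace/table option
// keys are usually these literals.
public final class Constants {
    public static final String SPARK_CASSANDRA_SOURCE_PATH = "org.apache.spark.sql.cassandra";
    public static final String KEY_SPACE = "keyspace";
    public static final String TABLE = "table";
    // CASSANDRA_KEY_SPACE, CASSANDRA_DEAL_TABLE_SPACE and DATE_FORMAT are
    // project-specific and not shown in the question, so they are not reproduced here.
}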

The output of show() is as follows:

+-------+----------+-------------+--------------------+-----------+------------+----------+------------------------+---------------+-----------+-------------------+-------------------+----------+------------+-------------------+----------------+-------------------+-------------+----------+--------------------+----------+-------------------------+---------------+----------------+---------------+--------------+--------------+-----+
|deal_id| deal_name|deal_category|           deal_tags|growth_tags|deal_tag_ids|deal_price|deal_discount_percentage|deal_group_size|deal_active|    deal_start_time|        deal_expiry|product_id|product_name|product_description|product_category|product_category_id|product_price|hero_image|      product_images| video_url|video_thumbnail_image_url|deal_like_count|deal_share_count|deal_view_count|deal_buy_count|weighted_score|boost|
+-------+----------+-------------+--------------------+-----------+------------+----------+------------------------+---------------+-----------+-------------------+-------------------+----------+------------+-------------------+----------------+-------------------+-------------+----------+--------------------+----------+-------------------------+---------------+----------------+---------------+--------------+--------------+-----+
|      4|7h12349961|          mqw|[under999, under3...|         []|          []|    4969.0|                    null|       95166551|          1|2020-07-08 14:48:57|2020-07-18 14:48:57|4725457233|  kao62ggnm7|         32h64e356z|      jnnh29zr1f|               null|       6651.0|86kk7s34yr|[dSt4P79, i4WXOHb...|d6tag27924|               4j1l36lp17|           null|            null|           null|          null|          null| null|

The strange thing is that when I use map/foreach on dealFeedSchemaDataset, the data looks wrong: the value of the deal_start_time column comes back as the current system time, as shown below, and I'm not sure how it gets changed.
Even the following line shows the same problem:

dealFeedSchemaDataset.select(functions.col("deal_start_time"))
    .as(Encoders.bean(DateTime.class))
    .collectAsList()
    .forEach(schema -> System.out.println(schema));

which prints:

2020-07-10T20:21:47.895+05:30

Can anyone tell me what I'm doing wrong?


4ktjp1zp 1#

In the DealFeedSchema bean, map the Cassandra columns to the matching Java types:

Use java.sql.Timestamp for columns whose format includes a time part (such as deal_start_time).

Use java.sql.Date for columns that carry only a date.
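
As a minimal sketch (the actual DealFeedSchema class is not posted in the question, so the fields below are assumptions that mirror the column names shown by show()), the timestamp columns would be declared like this:

import java.io.Serializable;
import java.sql.Timestamp;

// Hypothetical excerpt of the DealFeedSchema bean. Declaring the columns that
// carry a time part as java.sql.Timestamp lets Encoders.bean(DealFeedSchema.class)
// deserialize the stored value; bean property names must match the DataFrame
// column names.
public class DealFeedSchema implements Serializable {
    private Integer deal_id;
    private Timestamp deal_start_time;  // "yyyy-MM-dd HH:mm:ss", date + time
    private Timestamp deal_expiry;      // "yyyy-MM-dd HH:mm:ss", date + time
    // remaining fields omitted

    public Integer getDeal_id() { return deal_id; }
    public void setDeal_id(Integer deal_id) { this.deal_id = deal_id; }

    public Timestamp getDeal_start_time() { return deal_start_time; }
    public void setDeal_start_time(Timestamp deal_start_time) { this.deal_start_time = deal_start_time; }

    public Timestamp getDeal_expiry() { return deal_expiry; }
    public void setDeal_expiry(Timestamp deal_expiry) { this.deal_expiry = deal_expiry; }
}

If you only need the one column, an alternative to wrapping DateTime in a bean encoder is Spark's built-in timestamp encoder, for example dealFeedSchemaDataset.select(functions.col("deal_start_time")).as(Encoders.TIMESTAMP()), which yields java.sql.Timestamp values directly.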
