regex - Split variable in Pyspark

bttbmeg0 posted on 2022-11-18 in Spark

I am trying to split the UTC offset out of timestamp_value into a new column called utc. I tried to do it with a Python regex, but I could not get it to work. Thanks for your answers!
This is my DataFrame:

+--------+----------------------------+
|machine |timestamp_value             |
+--------+----------------------------+
|1       |2022-01-06T07:47:37.319+0000|
|2       |2022-01-06T07:47:37.319+0000|
|3       |2022-01-06T07:47:37.319+0000|
+--------+----------------------------+
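
A minimal sketch to reproduce this sample DataFrame (assuming an existing local SparkSession named spark):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Recreate the sample data shown above.
df = spark.createDataFrame(
    [(1, "2022-01-06T07:47:37.319+0000"),
     (2, "2022-01-06T07:47:37.319+0000"),
     (3, "2022-01-06T07:47:37.319+0000")],
    ["machine", "timestamp_value"],
)
df.show(truncate=False)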

It should look like this:

+--------+----------------------------+-----+
|machine |timestamp_value             |utc  |
+--------+----------------------------+-----+
|1       |2022-01-06T07:47:37.319     |+0000|
|2       |2022-01-06T07:47:37.319     |+0000|
|3       |2022-01-06T07:47:37.319     |+0000|
+--------+----------------------------+-----+

xtfmy6hx 1#

You can do this with regexp_extract and regexp_replace, respectively:

import pyspark.sql.functions as F

(df
 # Pull the '+0000' offset out of the original string into a new column.
 .withColumn('utc', F.regexp_extract('timestamp_value', r'.*(\+.*)', 1))
 # Strip the offset from the original column, leaving only the timestamp.
 .withColumn('timestamp_value', F.regexp_replace('timestamp_value', r'\+(.*)', ''))
).show(truncate=False)

+-------+-----------------------+-----+
|machine|timestamp_value        |utc  |
+-------+-----------------------+-----+
|1      |2022-01-06T07:47:37.319|+0000|
|2      |2022-01-06T07:47:37.319|+0000|
|3      |2022-01-06T07:47:37.319|+0000|
+-------+-----------------------+-----+

For a better understanding of what the regex means, take a look at an online regex tool.
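
As a rough illustration, the two patterns can also be tried with Python's re module on a single sample string (Spark itself uses Java's regex engine, but these simple patterns behave the same way in both):

import re

s = "2022-01-06T07:47:37.319+0000"

# '.*(\+.*)': the greedy '.*' backtracks until the capture group can
# match the '+' and everything after it.
print(re.match(r'.*(\+.*)', s).group(1))   # +0000

# '\+(.*)': matches from the '+' to the end, so replacing the match
# with '' leaves only the timestamp part.
print(re.sub(r'\+(.*)', '', s))            # 2022-01-06T07:47:37.319

If the offset always starts with '+', a single split on a zero-width lookahead is a possible alternative that produces both parts in one pass (only a sketch, not what the answer above uses):

import pyspark.sql.functions as F

# Split at the position just before '+', so the '+' stays with the offset.
parts = F.split('timestamp_value', r'(?=\+)')

(df
 .withColumn('utc', parts.getItem(1))               # '+0000'
 .withColumn('timestamp_value', parts.getItem(0))   # '2022-01-06T07:47:37.319'
).show(truncate=False)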
