Checking the schema of data ingested with PySpark

vsnjm48y asked on 2021-05-27 in Spark

I have a file whose schema changes frequently.
For example, in the sample below, the invoice date can arrive in different formats, and so can the invoice amount (sometimes with a $, sometimes without, sometimes with another currency symbol).
I would like to be able to scan the DataFrame column by column and say: if the "invoice value" column contains non-numeric characters (apart from a decimal point), then ... and from that decide which schema to apply to the file.
Is that possible?

rqmkfv5c 1#

Check the code below. Here the invoice amount is extracted as a float and the invoice date is normalized as an example; you can change the formats as needed.

from pyspark.sql.functions import col, regexp_extract, unix_timestamp, to_date, date_format

df = sc.parallelize([["ThoughtStorm", "11/23/2019", "$6.09", "true"],
                     ["Talane", "3/28/2019", "£7.20", "true"]]) \
       .toDF(("company_name", "invoice_date", "invoice_value", "paid"))

# Keep only the numeric part of the amount (digits with an optional
# decimal point), dropping any currency symbol.
df = df.withColumn("invoice_value",
                   regexp_extract(col("invoice_value"), r"([0-9]*[.])?[0-9]+", 0))

# Normalize the date to yyyy-MM-dd. The parse pattern must be "MM/dd/yyyy":
# uppercase MM is the month; lowercase mm would mean minutes.
df = df.withColumn("invoice_date",
                   date_format(to_date(unix_timestamp(col("invoice_date"), "MM/dd/yyyy").cast("timestamp")),
                               "yyyy-MM-dd"))
df.show()
+------------+------------+-------------+----+
|company_name|invoice_date|invoice_value|paid|
+------------+------------+-------------+----+
|ThoughtStorm|  2019-11-23|         6.09|true|
|      Talane|  2019-03-28|         7.20|true|
+------------+------------+-------------+----+
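Going back to the original question (deciding, per column, whether the raw strings are safely numeric before fixing a schema), the check itself is just a regular expression and can be sketched in plain Python. The helper name `needs_cleaning` and the exact pattern are illustrative, not part of either answer:

```python
import re

# Matches strings that are purely numeric: digits with an optional
# single decimal part (e.g. "6.09", "100"). Anything else, such as a
# currency symbol, fails the match.
NUMERIC = re.compile(r"\d+(\.\d+)?")

def needs_cleaning(values):
    """Return True if any value contains more than a bare number."""
    return any(NUMERIC.fullmatch(v) is None for v in values)

print(needs_cleaning(["6.09", "7.20"]))    # already numeric
print(needs_cleaning(["$6.09", "£7.20"]))  # currency symbols present
```

If the check returns True, the column can be routed through a cleaning step like the regexp_extract above before the schema is applied.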
lokaqttq 2#

For InvoiceValue you can use regexp_extract. The regex isn't perfect, but it shows the idea.

import org.apache.spark.sql.functions._

val data = List(
  "$50.60",
  "$5.60",
  "£500.400",
  "100",
  "100.20"
).toDF("InvoiceValue")

data.show

// Capture the numeric part after any leading non-digit characters; the
// decimal part is optional so that plain integers like "100" also match
// (with a mandatory decimal part, regexp_extract would return "" for "100").
val newdata = data.withColumn("value",
  regexp_extract($"InvoiceValue", """\D*(\d{1,4}(\.\d{1,4})?).*""", 1))
newdata.show

Output

data: org.apache.spark.sql.DataFrame = [InvoiceValue: string]
+------------+
|InvoiceValue|
+------------+
|      $50.60|
|       $5.60|
|    £500.400|
|         100|
|      100.20|
+------------+

newdata: org.apache.spark.sql.DataFrame = [InvoiceValue: string, value: string]
+------------+-------+
|InvoiceValue|  value|
+------------+-------+
|      $50.60|  50.60|
|       $5.60|   5.60|
|    £500.400|500.400|
|         100|    100|
|      100.20| 100.20|
+------------+-------+
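The extraction can also be tried outside Spark, since Python's re accepts essentially the same syntax. The pattern below follows the answer's regex, with the decimal part optional so integers like "100" are captured too; group 1 plays the role of regexp_extract's group index:

```python
import re

# Leading non-digits (currency symbols) are skipped by \D*; group 1
# captures up to four digits with an optional decimal part.
pattern = re.compile(r"\D*(\d{1,4}(?:\.\d{1,4})?).*")

for raw in ["$50.60", "$5.60", "£500.400", "100", "100.20"]:
    print(pattern.match(raw).group(1))
```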
