Performing NGram on a Spark DataFrame

drnojrws · published 2021-05-27 · in Spark

I am using Spark 2.3.1, and I have a Spark DataFrame like this:

+----------+
|    values|
+----------+
|embodiment|
|   present|
| invention|
|   include|
|   pairing|
|       two|
|  wireless|
|    device|
|   placing|
|     least|
|       one|
|       two|
+----------+

I want to apply the Spark ML NGram feature like this:

from pyspark.ml.feature import NGram

bigram = NGram(n=2, inputCol="values", outputCol="bigrams")

bigramDataFrame = bigram.transform(tokenized_df)

This line raises the following error:

bigramDataFrame = bigram.transform(tokenized_df)
pyspark.sql.utils.IllegalArgumentException: 'requirement failed: Input type must be ArrayType(StringType) but got StringType.'

So I changed my code:

from pyspark.sql.functions import array

df_new = tokenized_df.withColumn("testing", array(tokenized_df["values"]))

bigram = NGram(n=2, inputCol="testing", outputCol="bigrams")

bigramDataFrame = bigram.transform(df_new)

bigramDataFrame.show()

So my final DataFrame looks like this:

+----------+------------+-------+
|    values|     testing|bigrams|
+----------+------------+-------+
|embodiment|[embodiment]|     []|
|   present|   [present]|     []|
| invention| [invention]|     []|
|   include|   [include]|     []|
|   pairing|   [pairing]|     []|
|       two|       [two]|     []|
|  wireless|  [wireless]|     []|
|    device|    [device]|     []|
|   placing|   [placing]|     []|
|     least|     [least]|     []|
|       one|       [one]|     []|
|       two|       [two]|     []|
+----------+------------+-------+

Why are my bigrams column values empty?
I expect the bigrams column output to be:

+------------------+
|bigrams           |
+------------------+
|embodiment present|
|present invention |
|invention include |
|include pairing   |
|pairing two       |
|two wireless      |
|wireless device   |
|device placing    |
|placing least     |
|least one         |
|one two           |
+------------------+
vsdwdz23 #1

Your bigrams column values are empty because no single row of your "values" column contains a bigram.
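That behaviour is easy to see in plain Python. NGram slides a window of size n over each row's token array, so a one-element array has no adjacent pair to join. This is a sketch of that logic, not Spark API (the `bigrams` helper is hypothetical):

```python
def bigrams(tokens, n=2):
    """Slide a window of size n over a token list and join each window
    with spaces, mirroring what pyspark.ml.feature.NGram produces per row."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Each row's "testing" column held a single-element array, so no bigrams:
print(bigrams(["embodiment"]))                          # []
# An array with several tokens does produce bigrams:
print(bigrams(["embodiment", "present", "invention"]))  # ['embodiment present', 'present invention']
```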
If the values in your input DataFrame looked like this instead:

+--------------------------------------------+
|values                                      |
+--------------------------------------------+
|embodiment present invention include pairing|
|two wireless device placing                 |
|least one two                               |
+--------------------------------------------+

then you would get bigram output like this:

+--------------------------------------------+--------------------------------------------------+---------------------------------------------------------------------------+
|values                                      |testing                                           |ngrams                                                                     |
+--------------------------------------------+--------------------------------------------------+---------------------------------------------------------------------------+
|embodiment present invention include pairing|[embodiment, present, invention, include, pairing]|[embodiment present, present invention, invention include, include pairing]|
|two wireless device placing                 |[two, wireless, device, placing]                  |[two wireless, wireless device, device placing]                            |
|least one two                               |[least, one, two]                                 |[least one, one two]                                                       |
+--------------------------------------------+--------------------------------------------------+---------------------------------------------------------------------------+

The Scala Spark code to do this is:

import org.apache.spark.ml.feature.NGram
import org.apache.spark.sql.functions.split

val df_new = df.withColumn("testing", split(df("values"), " "))
val ngram = new NGram().setN(2).setInputCol("testing").setOutputCol("ngrams")
val ngramDataFrame = ngram.transform(df_new)
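A PySpark version of the same pipeline would use pyspark.sql.functions.split on the "values" column followed by NGram(n=2, inputCol="testing", outputCol="ngrams"). The per-row effect of that split-then-NGram chain can be sketched in plain Python (hypothetical helper, not Spark API):

```python
def split_then_bigrams(sentence):
    # Mirrors split(col, " ") followed by NGram(n=2): tokenize on spaces,
    # then join every adjacent pair of tokens with a single space.
    tokens = sentence.split(" ")
    return [" ".join(tokens[i:i + 2]) for i in range(len(tokens) - 1)]

print(split_then_bigrams("embodiment present invention include pairing"))
# ['embodiment present', 'present invention', 'invention include', 'include pairing']
```

This reproduces the first row of the output table above.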

A bigram is a sequence of two adjacent elements from a string of tokens, typically letters, syllables, or words.
In your input DataFrame, however, each row holds only a single token, so no bigrams can be formed within a row.
For your problem, you can therefore do the following:

Input: df1
+----------+
|values    |
+----------+
|embodiment|
|present   |
|invention |
|include   |
|pairing   |
|two       |
|wireless  |
|device    |
|placing   |
|least     |
|one       |
|two       |
+----------+

Output: ngramDataFrameInRows
+------------------+
|ngrams            |
+------------------+
|embodiment present|
|present invention |
|invention include |
|include pairing   |
|pairing two       |
|two wireless      |
|wireless device   |
|device placing    |
|placing least     |
|least one         |
|one two           |
+------------------+

The Spark Scala code:

import org.apache.spark.ml.feature.NGram
import org.apache.spark.sql.functions.{col, collect_list, explode}

// Note: collect_list does not guarantee row order; add an explicit ordering if needed.
val df_new = df1.agg(collect_list("values").alias("testing"))
val ngram = new NGram().setN(2).setInputCol("testing").setOutputCol("ngrams")
val ngramDataFrame = ngram.transform(df_new)
val ngramDataFrameInRows = ngramDataFrame.select(explode(col("ngrams")).alias("ngrams"))
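The collect_list / NGram / explode chain above can be sketched step by step in plain Python (variable names chosen to mirror the Scala columns; this is an illustration, not Spark API):

```python
values = ["embodiment", "present", "invention", "include", "pairing",
          "two", "wireless", "device", "placing", "least", "one", "two"]

# collect_list("values"): gather all rows into a single array column.
testing = list(values)

# NGram(n=2): join each adjacent pair of tokens with a space.
ngrams = [" ".join(testing[i:i + 2]) for i in range(len(testing) - 1)]

# explode(col("ngrams")): emit one bigram per output row.
for row in ngrams:
    print(row)
```

Twelve input rows yield eleven bigrams, matching the ngramDataFrameInRows output shown above.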
