How to split data in Spark?

zyfwsgd6 · posted 2021-05-29 · in Hadoop

I have data in an RDD that looks like this:

scala> c_data
res31: org.apache.spark.rdd.RDD[String] = /home/t_csv MapPartitionsRDD[26] at textFile at <console>:25

scala> c_data.count()
res29: Long = 45212                                                             

scala> c_data.take(2).foreach(println)
age;job;marital;education;default;balance;housing;loan;contact;day;month;duration;campaign;pdays;previous;poutcome;y
58;management;married;tertiary;no;2143;yes;no;unknown;5;may;261;1;-1;0;unknown;no

I want to split the data into another RDD, and I am using:

scala> val csv_data = c_data.map{x=>
 | val w = x.split(";")
 | val age = w(0)
 | val job = w(1)
 | val marital_stat = w(2)
 | val education = w(3)
 | val default = w(4)
 | val balance = w(5)
 | val housing = w(6)
 | val loan = w(7)
 | val contact = w(8)
 | val day = w(9)
 | val month = w(10)
 | val duration = w(11)
 | val campaign = w(12)
 | val pdays = w(13)
 | val previous = w(14)
 | val poutcome = w(15)
 | val Y = w(16)
 | }

The result is:

csv_data: org.apache.spark.rdd.RDD[Unit] = MapPartitionsRDD[28] at map at <console>:27

When I query csv_data, it returns Array((), …). How can I get the first row as the header and the remaining rows as data? Where am I going wrong?
Thanks in advance.

umuewwlo

Your map function returns Unit, so you end up with an RDD[Unit]: the last expression in the block is a val definition, which has no value. You can fix this by making the block return the parsed fields, for example:

val csv_data = c_data.map{x=>
   val w = x.split(";")
   ...
   val Y = w(16)
   (w, age, job, marital_stat, education, default, balance, housing, loan, contact, day, month, duration, campaign, pdays, previous, poutcome, Y)
}
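To also handle the header row (the second part of the question), one common approach is to grab the first line and filter it out before parsing. The sketch below assumes the `c_data` RDD shown above; the `Record` case class and its field selection are illustrative, not part of the original code:

```scala
// Sketch: separate the header line from the data rows, then parse.
// Assumes c_data: RDD[String] as in the session above.
val header = c_data.first()                        // "age;job;marital;...;y"
val rows   = c_data.filter(line => line != header) // drop the header row

// Illustrative case class covering the first few columns; extend as needed.
case class Record(age: String, job: String, marital: String, education: String)

val csv_data = rows.map { x =>
  val w = x.split(";")
  Record(w(0), w(1), w(2), w(3)) // the case class instance is the return value
}
```

Note that `filter(_ != header)` would also drop any later line that happens to be identical to the header. If a DataFrame is acceptable instead of an RDD, `spark.read.option("header", "true").option("sep", ";").csv(path)` handles the header and splitting for you.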
