Unable to create a DataFrame from a text file using a case class in Spark Scala

4ngedf3f · posted 2021-07-13 in Spark

I have a dataset in text-file format and I'm trying to create a DataFrame from it with a case class, but I get the error mentioned below:
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: The number of columns doesn't match. Old column names (1): value New column names (4): Name, Age, Department, Salary
Here are the first three lines of my dataset:

Name,Age,Department,Salary
 Sohom,30,TD,9000000
 Aminul,32,AC,10000000

The code I'm using is below:

import org.apache.log4j.Logger
import org.apache.log4j.Level
import org.apache.spark.sql.SparkSession
case class Record(Name: String, Age :Int, Department: String, Salary: Int)
object airportDetails {

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("Spark SQL val basic example")
      .config("spark.master", "local")
      .getOrCreate()
    spark.sparkContext.setLogLevel("OFF")
    Logger.getLogger("org").setLevel(Level.OFF)
    Logger.getLogger("akka").setLevel(Level.OFF)
    import spark.implicits._

    val input = spark.sparkContext.textFile("file:///C:/Users/USER/Desktop/SparkDocuments/airport_dataset.txt")
      .map(line => line.split(",").map(x => Record(x(0).toString, x(1).toInt, x(2).toString, x(3).toInt)))
    val input1 = input.toDF("Name", "Age", "Department", "Salary")

    input1.show()
  }
}
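For reference, the column-count mismatch in the error comes from the inner `map`: `line.split(",").map(x => Record(...))` builds one `Record` per comma-separated token, so the RDD's element type becomes `Array[Record]` and `toDF` sees a single `value` column instead of four. A minimal sketch of the intended per-line parsing in plain Scala (no Spark; the header and sample rows are copied from the question, and the `.trim` calls are an assumption to cope with the leading spaces in the data rows):

```scala
case class Record(Name: String, Age: Int, Department: String, Salary: Int)

object ParseSketch {
  // Header plus the two sample rows shown in the question.
  val lines: Seq[String] = Seq(
    "Name,Age,Department,Salary",
    " Sohom,30,TD,9000000",
    " Aminul,32,AC,10000000"
  )

  // Skip the header, split each remaining line once, and build ONE Record
  // per line. (The question's code instead mapped over the split tokens,
  // producing one Record per token and hence a single-column result.)
  val records: Seq[Record] = lines
    .filter(_ != "Name,Age,Department,Salary")
    .map { line =>
      val x = line.split(",")
      Record(x(0).trim, x(1).trim.toInt, x(2).trim, x(3).trim.toInt)
    }

  def main(args: Array[String]): Unit =
    records.foreach(println)
}
```

With an RDD, the same shape applies: one `Record` per line in the outer `map`, and the header line filtered out before parsing.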
Answer #1 (r7s23pms):

You can simply use the Spark DataFrame CSV reader and convert the result to a Dataset of type Record:

case class Record(Name: String, Age: Int, Department: String, Salary: Int)

val ds = spark.read
  .option("header", true)
  .option("inferSchema", true)
  .csv("file:///C:/Users/USER/Desktop/SparkDocuments/airport_dataset.txt")
  .as[Record]

If you want a DataFrame, you can use toDF:

val df = ds.toDF("Name", "Age", "Department", "Salary")
