Saving Spark data as a text file on HDFS

bmvo0sr5  posted 2021-06-28  in  Hive

I am using pySpark with sqlContext and the following query:

(sqlContext.sql("select LastUpdate, Count(1) as Count from temp_t group by LastUpdate")
           .rdd.coalesce(1).saveAsTextFile("/apps/hive/warehouse/Count"))

It stores the output in the following format (the string representation of each Row object):

Row(LastUpdate=u'2016-03-14 12:27:55.01', Count=1)
Row(LastUpdate=u'2016-02-18 11:56:54.613', Count=1)
Row(LastUpdate=u'2016-04-13 13:53:32.697', Count=1)
Row(LastUpdate=u'2016-02-22 17:43:37.257', Count=5)

But I want to store the data in a Hive table, like this:

LastUpdate                           Count

2016-03-14 12:27:55.01                   1
.                                        .
.                                        .

Here is how the table is created in Hive:

CREATE TABLE Data_Count(LastUpdate string, Count int )
ROW FORMAT DELIMITED fields terminated by '|';

I have tried many options but have not been successful. Please help me resolve this.
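Since the table is declared with fields terminated by `'|'`, one likely fix (a sketch, not from the original post; the helper name is hypothetical) is to map each Row to a pipe-joined string before `saveAsTextFile`, so the text file matches the table's delimiter instead of containing `Row(...)` representations:

```python
# Hypothetical helper: format one aggregated row as the pipe-delimited
# line that the Data_Count table's ROW FORMAT expects.
def to_pipe_line(last_update, count):
    return u"{0}|{1}".format(last_update, count)

# In the pySpark job (sketch):
# (sqlContext.sql("select LastUpdate, Count(1) as Count from temp_t group by LastUpdate")
#            .rdd.map(lambda r: to_pipe_line(r.LastUpdate, r.Count))
#            .coalesce(1)
#            .saveAsTextFile("/apps/hive/warehouse/Count"))
```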

zbdgwd5y1#

Why not load the data into Hive directly, instead of going through the process of saving a file and then loading it into Hive?

from datetime import datetime, timedelta
from pyspark.sql import HiveContext, Row

hiveCtx = HiveContext(sc)

# Create sample data

currTime = datetime.now()
currRow = Row(LastUpdate=currTime)
delta = timedelta(days=1)
futureTime = currTime + delta
futureRow = Row(LastUpdate=futureTime)
lst = [currRow, currRow, futureRow, futureRow, futureRow]

# parallelize the list and convert to dataframe

myRdd = sc.parallelize(lst)
df = myRdd.toDF()
df.registerTempTable("temp_t")
aggRDD = hiveCtx.sql("select LastUpdate,Count(1) as Count from temp_t group by LastUpdate")
aggRDD.saveAsTable("Data_Count")
8gsdolmq2#

You created the table; now you need to populate it with the data you generated.
I believe this can be done from a Spark context:

LOAD DATA INPATH '/apps/hive/warehouse/Count' INTO TABLE Data_Count
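From pySpark, that statement can be issued through the SQL interface rather than the Hive CLI. A minimal sketch (the helper is hypothetical, shown only to make the statement explicit):

```python
# Hypothetical helper: build the Hive LOAD DATA statement for a given
# HDFS path and target table.
def load_data_stmt(path, table):
    return "LOAD DATA INPATH '{0}' INTO TABLE {1}".format(path, table)

# In a live pySpark shell with a HiveContext (sketch):
# sqlContext.sql(load_data_stmt('/apps/hive/warehouse/Count', 'Data_Count'))
```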

Alternatively, you may want to build an external table over the data:

CREATE EXTERNAL TABLE IF NOT EXISTS Data_Count(
    LastUpdate STRING,
    Count INT
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
STORED AS TEXTFILE
LOCATION '/apps/hive/warehouse/Count';
