I created the following DataFrame:
```
// `spark` is the active SparkSession; the import enables .toDF
import spark.implicits._

val df = Seq(
  (1, 27, "bob",  2020, 9, 3),
  (1, 27, "jack", 2020, 9, 3),
  (3, 31, "tom",  2020, 9, 4)
).toDF("id", "age", "nom", "year", "month", "day")
```
which gives the following DataFrame:
```
+---+---+----+----+-----+---+
|id |age|nom |year|month|day|
+---+---+----+----+-----+---+
|1  |27 |bob |2020|9    |3  |
|1  |27 |jack|2020|9    |3  |
|3  |31 |tom |2020|9    |4  |
+---+---+----+----+-----+---+
```
Then I write df to HDFS, partitioned by year, month, and day:
```
import org.apache.spark.sql.SaveMode

df.write
  .mode(SaveMode.Append)
  .partitionBy("year", "month", "day")
  .parquet(s"$outputPath/test_hive")
```
The data ends up at the following HDFS paths:
```
/outputPath/test_hive/year=2020/month=9/day=3
/outputPath/test_hive/year=2020/month=9/day=4
```
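For reference, the layout can be confirmed programmatically. A minimal sketch, assuming `spark` is the active SparkSession and `outputPath` is in scope as above:

```
// Sketch only: list the partition directories Spark wrote.
import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
fs.listStatus(new Path(s"$outputPath/test_hive/year=2020/month=9"))
  .foreach(status => println(status.getPath))
// expected output: .../month=9/day=3 and .../month=9/day=4
```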
I would like to know how to create an external Hive table at location outputPath/test_hive that picks up the year, month, and day subdirectories as partitions.
I tried the following CREATE TABLE statements, but none of them work:
```
CREATE EXTERNAL TABLE test1(id int, age int, nom string, year int, month int, day int)
STORED AS PARQUET
LOCATION 'outputPath/test_hive'
```
```
+-----------+------------+------------+--+
| test1.id  | test1.age  | test1.nom  |
+-----------+------------+------------+--+
| 1         | 27         | bob        |
| 1         | 27         | jack       |
| 3         | 31         | tom        |
+-----------+------------+------------+--+
```
and
```
CREATE EXTERNAL TABLE test2(id int, age int, nom string)
PARTITIONED BY (year INT, month int, day INT)
STORED AS PARQUET
LOCATION 'outputPath/test_hive'
```
```
+-----------+------------+------------+-------------+--------------+------------+--+
| test2.id  | test2.age  | test2.nom  | test2.year  | test2.month  | test2.day  |
+-----------+------------+------------+-------------+--------------+------------+--+
+-----------+------------+------------+-------------+--------------+------------+--+
```
and
```
CREATE EXTERNAL TABLE test3(id int, age int, nom string) STORED AS PARQUET LOCATION 'outputPath/test_hive' PARTITIONED BY(year INT, month int , day INT);
```

Error while compiling statement: FAILED: ParseException line 1:138 missing EOF at 'PARTITIONED' near ''/outputPath/test_hive'' (state=42000,code=40000)
1 Answer (fd3cxomn1)
Run `msck repair table` or add the partitions to the table explicitly. Your test2 DDL is correct, but the table returns no rows because the partition directories exist on HDFS without being registered in the Hive metastore. (test3 fails to parse because in Hive DDL the PARTITIONED BY clause must come before STORED AS and LOCATION.) Example:
From Hive:

```
hive> msck repair table test3;
-- or
hive> ALTER TABLE test3 ADD PARTITION (year='2020', month='9', day='3') LOCATION '/outputPath/test_hive/year=2020/month=9/day=3';
```

From Spark:

```
spark.sql("ALTER TABLE test3 ADD PARTITION (year='2020', month='9', day='3') location '/outputPath/test_hive/year=2020/month=9/day=3'")
// or
spark.sql("msck repair table test3")
```