I have generated some Parquet files and am now trying to load them into an Impala table.
I created the table as follows:
CREATE EXTERNAL TABLE `user_daily` (
`user_id` BIGINT COMMENT 'User ID',
`master_id` BIGINT,
`walletAgency` BOOLEAN,
`zone_id` BIGINT COMMENT 'Zone ID',
`day` STRING COMMENT 'The stats are aggregated for single days',
`clicks` BIGINT COMMENT 'The number of clicks',
`impressions` BIGINT COMMENT 'The number of impressions',
`avg_position` BIGINT COMMENT 'The average position * 100',
`money` BIGINT COMMENT 'The cost of the clicks, in hellers',
`web_id` BIGINT COMMENT 'Web ID',
`discarded_clicks` BIGINT COMMENT 'Number of discarded clicks from column "clicks"',
`impression_money` BIGINT COMMENT 'The cost of the impressions, in hellers'
)
PARTITIONED BY (
year BIGINT,
month BIGINT
)
STORED AS PARQUET
LOCATION '/warehouse/impala/contextstat.db/user_daily/';
Then I copied in files with this schema:
parquet-tools schema user_daily/year\=2016/month\=8/part-r-00001-fd77e1cd-c824-4ebd-9328-0aca5a168d11.snappy.parquet
message spark_schema {
optional int32 user_id;
optional int32 web_id (INT_16);
optional int32 zone_id;
required int32 master_id;
required boolean walletagency;
optional int64 impressions;
optional int64 clicks;
optional int64 money;
optional int64 avg_position;
optional double impression_money;
required binary day (UTF8);
}
When I try to query the table with
SELECT * FROM user_daily;
I get
File 'hdfs://.../warehouse/impala/contextstat.db/user_daily/year=2016/month=8/part-r-00000-fd77e1cd-c824-4ebd-9328-0aca5a168d11.snappy.parquet'
has an incompatible Parquet schema for column 'contextstat.user_daily.user_id'.
Column type: BIGINT, Parquet schema:
optional int32 user_id [i:0 d:1 r:0]
Do you know how to fix this? I thought BIGINT and int32 were the same thing. Should I change the table schema, or the way the Parquet files are generated?
2 Answers
yhived7q1#
BIGINT is int64, which is why Impala complains. You don't have to work out the matching types yourself, though; Impala can do it for you. Just use the CREATE TABLE LIKE PARQUET variant:
The variation CREATE TABLE ... LIKE PARQUET 'hdfs_path_of_parquet_file' lets you skip the column definitions of the CREATE TABLE statement. The column names and data types are automatically configured based on the organization of the specified Parquet data file, which must already reside in HDFS.
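A minimal sketch of what that could look like for this table, pointing LIKE PARQUET at one of the data files already in HDFS (the new table name is illustrative; the partition columns and location are taken from the original statement):
-- Sketch only: let Impala derive the column names and types from an existing
-- Parquet data file instead of declaring them by hand.
CREATE EXTERNAL TABLE user_daily_like_parquet
LIKE PARQUET '/warehouse/impala/contextstat.db/user_daily/year=2016/month=8/part-r-00001-fd77e1cd-c824-4ebd-9328-0aca5a168d11.snappy.parquet'
PARTITIONED BY (
  year BIGINT,
  month BIGINT
)
STORED AS PARQUET
LOCATION '/warehouse/impala/contextstat.db/user_daily/';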
pzfprimi2#
I used CAST(... AS BIGINT) when generating the files, which turns the int32 columns in the Parquet schema into int64. Then I had to reorder the columns, because Impala does not match them up by name. That did the trick.
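For reference, a minimal sketch of that fix on the writer side, assuming the files are produced by Spark SQL (the spark_schema root in the parquet-tools output suggests so); the source relation name is hypothetical and the column list only covers the fields shown in that output:
-- Sketch only: cast the 32-bit columns to 64-bit and emit the columns in the
-- same order as the Impala table definition, since Impala matches Parquet
-- columns by position rather than by name.
SELECT
  CAST(user_id AS BIGINT)   AS user_id,
  CAST(master_id AS BIGINT) AS master_id,
  walletagency,
  CAST(zone_id AS BIGINT)   AS zone_id,
  day,
  clicks,
  impressions,
  avg_position,
  money,
  CAST(web_id AS BIGINT)    AS web_id
FROM user_daily_source;  -- user_daily_source is a placeholder name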