Hive error when querying an external table fed by a Flume stream

pxyaymoc  posted on 2021-05-29 in Hadoop

On CDH 5.4 I am trying to build a Twitter analytics demo using:
Flume to capture tweets into an HDFS folder
Hive with a JSON SerDe to query the tweets
Step 1 works: I can see tweets being captured and landed correctly in the desired HDFS folder. I noticed that a temporary file is created first and then renamed to its permanent name:

-rw-r--r--   3 root hadoop       7548 2015-10-06 06:39 /user/flume/tweets/FlumeData.1444127932782
-rw-r--r--   3 root hadoop      10034 2015-10-06 06:39 /user/flume/tweets/FlumeData.1444127932783.tmp

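For context, the relevant part of a Flume HDFS sink configuration for this kind of pipeline looks roughly like the sketch below; the agent and sink names (TwitterAgent, HDFS) are placeholders, not taken from the question:

# HDFS sink: streams events into the table's LOCATION directory.
# While a file is open it carries the default in-use suffix ".tmp";
# when it rolls, it is renamed to its final FlumeData.<timestamp> name.
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://master.ds.com:8020/user/flume/tweets
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.rollInterval = 600
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
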
I use the following table declaration:

CREATE EXTERNAL TABLE tweets (
    id bigint,
    created_at string,
    lang string,
    source string,
    favorited boolean,
    retweet_count int,
    retweeted_status struct<text:string,user:struct<screen_name:string,name:string>>,
    entities struct<urls:array<struct<expanded_url:string>>,
                    user_mentions:array<struct<screen_name:string,name:string>>,
                    hashtags:array<struct<text:string>>>,
    text string,
    user struct<location:string,geo_enabled:string,screen_name:string,name:string,friends_count:int,followers_count:int,statuses_count:int,verified:boolean,utc_offset:int,time_zone:string>,
    in_reply_to_screen_name string)
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 'hdfs://master.ds.com:8020/user/flume/tweets';
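
One prerequisite worth noting: com.cloudera.hive.serde.JSONSerDe is not bundled with Hive, so its jar must be registered in the session (or placed in Hive's aux lib path) before the table is queried. The jar path below is an assumption; adjust it to wherever the SerDe jar actually lives:

ADD JAR /usr/local/hive/lib/hive-serdes-1.0-SNAPSHOT.jar;  -- path is hypothetical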

But when I query this table, I get the following error:

hive> select count(*) from tweets;

Ended Job = job_1443526273848_0140 with errors
...
Diagnostic Messages for this Task:
Error: java.io.IOException: java.lang.reflect.InvocationTargetException
        at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreation
        ... 11 more

Caused by: java.io.FileNotFoundException: File does not exist: /user/flume/tweets/FlumeData.1444128601078.tmp
        at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
        ...

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:

Stage-Stage-1: Map: 2  Reduce: 1   Cumulative CPU: 1.19 sec   HDFS Read: 10492 HDFS Write: 0 FAIL

I think the problem is related to the temporary file: the MapReduce job launched by the Hive query lists the .tmp file when planning its input splits, but Flume renames it to its permanent name while the job is running, so the mapper hits a FileNotFoundException when it tries to open it. Is there a workaround or configuration change that handles this cleanly?


axzmvihb1#

I had the same problem and solved it by adding the following HDFS sink settings to the Flume configuration file: hdfs.inUsePrefix = . and hdfs.inUseSuffix = .temp on the sink. Hope this helps.
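Spelled out with full Flume property paths, the fix looks like the sketch below; the agent and sink names (some_agent, hdfssink) follow the answer and should be replaced with your own:

# Prefix in-progress files with "." so they become hidden files.
# Hadoop's FileInputFormat skips paths starting with "." or "_",
# so the Hive job never plans a split over a file Flume may rename mid-query.
some_agent.sinks.hdfssink.hdfs.inUsePrefix = .
some_agent.sinks.hdfssink.hdfs.inUseSuffix = .temp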
