ClassCastException: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat cannot be cast to org.apache.hadoop.hive.ql.io.AcidOutputFormat

Asked by 6uxekuva on 2021-06-21, in Storm

I am trying to save data to a Hive table using DelimitedRecordHiveMapper, and I am getting the following exception:

69145 [Thread-12-b-0] INFO  hive.metastore - Trying to connect to metastore with URI thrift://slc05zcx.us.oracle.com:9083
69146 [Thread-12-b-0] INFO  hive.metastore - Connected to metastore.
69152 [Thread-12-b-0] WARN  org.apache.storm.hive.trident.HiveState - hive streaming failed.
java.lang.ClassCastException: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat cannot be cast to org.apache.hadoop.hive.ql.io.AcidOutputFormat
        at org.apache.hive.hcatalog.streaming.AbstractRecordWriter.<init>(AbstractRecordWriter.java:73) ~[hive-hcatalog-streaming-0.14.0.2.2.0.0-2041.jar:0.14.0.2.2.0.0-2041]
        at org.apache.hive.hcatalog.streaming.DelimitedInputWriter.<init>(DelimitedInputWriter.java:114) ~[hive-hcatalog-streaming-0.14.0.2.2.0.0-2041.jar:0.14.0.2.2.0.0-2041]
        at org.apache.hive.hcatalog.streaming.DelimitedInputWriter.<init>(DelimitedInputWriter.java:91) ~[hive-hcatalog-streaming-0.14.0.2.2.0.0-2041.jar:0.14.0.2.2.0.0-2041]
        at org.apache.hive.hcatalog.streaming.DelimitedInputWriter.<init>(DelimitedInputWriter.java:72) ~[hive-hcatalog-streaming-0.14.0.2.2.0.0-2041.jar:0.14.0.2.2.0.0-2041]
        at org.apache.storm.hive.bolt.mapper.DelimitedRecordHiveMapper.createRecordWriter(DelimitedRecordHiveMapper.java:78) ~[storm-hive-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
        at org.apache.storm.hive.common.HiveWriter.<init>(HiveWriter.java:71) ~[storm-hive-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
        at org.apache.storm.hive.common.HiveUtils.makeHiveWriter(HiveUtils.java:44) ~[storm-hive-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
        at org.apache.storm.hive.trident.HiveState.getOrCreateWriter(HiveState.java:201) ~[storm-hive-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
        at org.apache.storm.hive.trident.HiveState.writeTuples(HiveState.java:125) ~[storm-hive-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
        at org.apache.storm.hive.trident.HiveState.updateState(HiveState.java:109) ~[storm-hive-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
        at org.apache.storm.hive.trident.HiveUpdater.updateState(HiveUpdater.java:12) [storm-hive-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
        at org.apache.storm.hive.trident.HiveUpdater.updateState(HiveUpdater.java:9) [storm-hive-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
        at storm.trident.planner.processor.PartitionPersistProcessor.finishBatch(PartitionPersistProcessor.java:98) [storm-core-0.9.3.jar:0.9.3]
        at storm.trident.planner.SubtopologyBolt.finishBatch(SubtopologyBolt.java:152) [storm-core-0.9.3.jar:0.9.3]
        at storm.trident.topology.TridentBoltExecutor.finishBatch(TridentBoltExecutor.java:252) [storm-core-0.9.3.jar:0.9.3]
        at storm.trident.topology.TridentBoltExecutor.checkFinish(TridentBoltExecutor.java:285) [storm-core-0.9.3.jar:0.9.3]
        at storm.trident.topology.TridentBoltExecutor.execute(TridentBoltExecutor.java:359) [storm-core-0.9.3.jar:0.9.3]
        at backtype.storm.daemon.executor$fn__3441$tuple_action_fn__3443.invoke(executor.clj:633) [storm-core-0.9.3.jar:0.9.3]
        at backtype.storm.daemon.executor$mk_task_receiver$fn__3364.invoke(executor.clj:401) [storm-core-0.9.3.jar:0.9.3]
        at backtype.storm.disruptor$clojure_handler$reify__1447.onEvent(disruptor.clj:58) [storm-core-0.9.3.jar:0.9.3]
        at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:125) [storm-core-0.9.3.jar:0.9.3]
        at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99) [storm-core-0.9.3.jar:0.9.3]
        at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) [storm-core-0.9.3.jar:0.9.3]
        at backtype.storm.daemon.executor$fn__3441$fn__3453$fn__3500.invoke(executor.clj:748) [storm-core-0.9.3.jar:0.9.3]
        at backtype.storm.util$async_loop$fn__464.invoke(util.clj:463) [storm-core-0.9.3.jar:0.9.3]
        at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
        at java.lang.Thread.run(Thread.java:662) [na:1.6.0_31]
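
From the stack trace, AbstractRecordWriter casts the table's OutputFormat to AcidOutputFormat, so as far as I can tell the streaming API (hive-hcatalog-streaming) only accepts bucketed, transactional ORC tables; my table appears to have been created as a plain text table, which uses HiveIgnoreKeyTextOutputFormat. A sketch of the kind of DDL I understand the streaming writer expects (the column names match my data; the bucketing column and bucket count are just illustrative):

-- A bucketed, transactional ORC table; a plain text table fails the cast above.
CREATE TABLE stormFirstTable (
  id INT,
  name STRING,
  age INT
)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional' = 'true');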

The code is:

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.generated.StormTopology;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;
import org.apache.storm.hive.bolt.mapper.DelimitedRecordHiveMapper;
import org.apache.storm.hive.common.HiveOptions;
import org.apache.storm.hive.trident.HiveStateFactory;
import org.apache.storm.hive.trident.HiveUpdater;
import storm.trident.Stream;
import storm.trident.TridentState;
import storm.trident.TridentTopology;
import storm.trident.state.StateFactory;
import storm.trident.testing.FixedBatchSpout;

public class FirstTry {

    public static void main(String[] args) {
        String metastoreURI = "thrift://localhost:9083";
        String dbName = "default";
        String tableName = "stormFirstTable";

        String[] colNames = {"id", "name", "age"};

        // Emits the same three CSV lines in a cycle, in batches of 3 tuples.
        @SuppressWarnings("unchecked")
        FixedBatchSpout spout = new FixedBatchSpout(new Fields("sentence"), 3,
                new Values("101,name1,18"),
                new Values("102,name2,19"),
                new Values("103,name3,20"));
        spout.setCycle(true);

        // Maps the "sentence" field of each tuple onto the Hive table columns.
        DelimitedRecordHiveMapper mapper = new DelimitedRecordHiveMapper()
                .withColumnFields(new Fields("sentence"));
        HiveOptions hiveOptions = new HiveOptions(metastoreURI, dbName, tableName, mapper)
                .withTxnsPerBatch(10)
                .withBatchSize(1000)
                .withIdleTimeout(10);
        StateFactory factory = new HiveStateFactory().withOptions(hiveOptions);

        TridentTopology topology = new TridentTopology();
        // Split is a custom BaseFunction that splits the CSV line into id/name/age.
        Stream stream = topology.newStream("sample-spout", spout)
                .each(new Fields("sentence"), new Split(), new Fields(colNames));
        TridentState state = stream.partitionPersist(factory,
                new Fields("sentence"), new HiveUpdater(), new Fields());
        StormTopology stormTopology = topology.build();

        Config conf = new Config();
        conf.setDebug(true);
        conf.setMaxSpoutPending(1);

        LocalCluster localCluster = new LocalCluster();
        localCluster.submitTopology("storm-trident-topology", conf, stormTopology);
    }
}
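
As I understand it, ACID also has to be enabled on the Hive side before such a table can be written at all; the settings below are the ones documented for Hive 0.14 (a sketch only; these would normally go in hive-site.xml rather than be set per session):

-- Hive 0.14 ACID prerequisites (illustrative values)
SET hive.support.concurrency=true;
SET hive.enforce.bucketing=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
SET hive.compactor.initiator.on=true;
SET hive.compactor.worker.threads=1;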

Please tell me whether this is a problem with my code or with the Hive API.

No answers yet.
