Kafka stream to topic

Asked by mftmpeh8 on 2021-06-07, in Kafka

I need to compute an average over a Kafka stream. The producer publishes with Avro, so I deserialize accordingly and receive a GenericRecord containing a JSON string that I then have to process.
As support I use a user-defined type:

private class Tuple {

    public int occ;
    public int sum;

    public Tuple (int occ, int sum) {
        this.occ = occ;
        this.sum = sum;
    }

    public void sum (int toAdd) {
        this.sum += toAdd;
        this.occ ++;
    }

    public Double getAverage () {
        // cast before dividing: plain int division would truncate the decimals
        return (double) this.sum / this.occ;
    }

    public String toString() {
        return "occorrenze: " + this.occ + ", somma: " + sum + ", media -> " + getAverage();
    }

}

Now the processing:

StreamsBuilder builder = new StreamsBuilder();
KStream<GenericRecord, GenericRecord> source = builder.stream(topic);

// split the stream: branches[0] holds the records selected by partition()
KStream<GenericRecord, GenericRecord>[] branches = source.branch(
        (key, value) -> partition(value.toString()),
        (key, value) -> true
);

KGroupedStream<String, String> groupedStream = branches[0]
        .mapValues(value -> createJson(value.toString()))
        .map((key, value) -> KeyValue.pair("T_DUR_CICLO", value.getNumberValue("payload", "T_DUR_CICLO")))
        .groupByKey(Serialized.with(
                Serdes.String(),    /* key (note: type was modified) */
                Serdes.String()));  /* value */

branches[0].foreach((key, value) -> System.out.println(key + " " + value));

KTable<String, Tuple> aggregatedStream = groupedStream.aggregate(
        () -> new Tuple(0, 0), // initializer
        (aggKey, newValue, aggValue) -> new Tuple(aggValue.occ + 1, aggValue.sum + Integer.parseInt(newValue)),
        // note: Materialized.as(...).with(...) resolves to the *static* with() and
        // silently discards the store name, which is why the exception below reports
        // the auto-generated name KSTREAM-AGGREGATE-STATE-STORE-0000000007
        Materialized.as("aggregate-state-store").with(Serdes.String(), new MySerde()));

aggregatedStream
        .toStream()
        .foreach((key, value) -> System.out.println(key + ": " + value));

KStream<String, Double> average = aggregatedStream
        .mapValues(v -> v.getAverage())
        .toStream();

The problem arises when I write the stream to a topic:

average.to("average");

The exception is the following:

Exception in thread "streamtest-6d743b83-ce22-435e-aee5-76a745ce3571-StreamThread-1" org.apache.kafka.streams.errors.ProcessorStateException: task [1_0] Failed to flush state store KSTREAM-AGGREGATE-STATE-STORE-0000000007
    at org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:242)
    at org.apache.kafka.streams.processor.internals.AbstractTask.flushState(AbstractTask.java:202)
    at org.apache.kafka.streams.processor.internals.StreamTask.flushState(StreamTask.java:420)
    at org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:394)
    at org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:382)
    at org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:67)
    at org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:362)
    at org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:352)
    at org.apache.kafka.streams.processor.internals.TaskManager.commitAll(TaskManager.java:401)
    at org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:1042)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:845)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736)
Caused by: org.apache.kafka.streams.errors.StreamsException: A serializer (key: io.confluent.kafka.streams.serdes.avro.GenericAvroSerializer / value: io.confluent.kafka.streams.serdes.avro.GenericAvroSerializer) is not compatible to the actual key or value type (key type: java.lang.String / value type: java.lang.Double). Change the default Serdes in StreamConfig or provide correct Serdes via method parameters.
    at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:94)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:143)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:126)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:90)
    at org.apache.kafka.streams.kstream.internals.KStreamMapValues$KStreamMapProcessor.process(KStreamMapValues.java:41)
    at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:50)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.runAndMeasureLatency(ProcessorNode.java:244)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:133)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:143)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:126)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:90)
    at org.apache.kafka.streams.kstream.internals.KTableMapValues$KTableMapValuesProcessor.process(KTableMapValues.java:106)
    at org.apache.kafka.streams.kstream.internals.KTableMapValues$KTableMapValuesProcessor.process(KTableMapValues.java:83)
    at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:50)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.runAndMeasureLatency(ProcessorNode.java:244)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:133)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:143)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:129)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:90)
    at org.apache.kafka.streams.kstream.internals.ForwardingCacheFlushListener.apply(ForwardingCacheFlushListener.java:42)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore.putAndMaybeForward(CachingKeyValueStore.java:101)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore.access$000(CachingKeyValueStore.java:38)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore$1.apply(CachingKeyValueStore.java:83)
    at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:141)
    at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:99)
    at org.apache.kafka.streams.state.internals.ThreadCache.flush(ThreadCache.java:125)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore.flush(CachingKeyValueStore.java:123)
    at org.apache.kafka.streams.state.internals.InnerMeteredKeyValueStore.flush(InnerMeteredKeyValueStore.java:284)
    at org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore.flush(MeteredKeyValueBytesStore.java:149)
    at org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:239)
    ... 12 more
Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.avro.generic.GenericRecord
    at io.confluent.kafka.streams.serdes.avro.GenericAvroSerializer.serialize(GenericAvroSerializer.java:39)
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:156)
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:101)
    at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:89)
    ... 41 more

----- edit -----
I added classes for serialization and deserialization.
Serializer:

private class TupleSerializer implements Serializer<Tuple> {

    @Override
    public void configure(Map<String, ?> map, boolean bln) {
    }

    @Override
    public byte[] serialize(String string, Tuple t) {
         ByteBuffer buffer = ByteBuffer.allocate(4 + 4);
         buffer.putInt(t.occ);
         buffer.putInt(t.sum);
         return buffer.array();
    }

    @Override
    public void close() {
    }

}

Deserializer:

private class TupleDeserializer implements Deserializer<Tuple> {

    @Override
    public void configure(Map<String, ?> map, boolean bln) {
    }

    @Override
    public void close() {
    }

    @Override
    public Tuple deserialize(String string, byte[] bytes) {
         ByteBuffer buffer = ByteBuffer.wrap(bytes);
         int occ = buffer.getInt();
         int sum = buffer.getInt();
         return new Tuple (occ, sum);
    }

}

MySerde:

private class MySerde implements Serde<Tuple> {

    @Override
    public void configure(Map<String, ?> map, boolean bln) {
    }

    @Override
    public void close() {
    }

    @Override
    public Serializer<Tuple> serializer() {
        return new TupleSerializer ();
    }

    @Override
    public Deserializer<Tuple> deserializer() {
        return new TupleDeserializer ();
    }

}
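
As a side note, the same serde can also be obtained without a dedicated class by pairing the serializer and deserializer with Kafka's Serdes.serdeFrom helper. A minimal sketch, assuming TupleSerializer and TupleDeserializer are reachable as static or top-level classes:

    // equivalent to MySerde: wrap the existing serializer/deserializer pair
    Serde<Tuple> tupleSerde = Serdes.serdeFrom(new TupleSerializer(), new TupleDeserializer());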

Answer 1 (t1rydlwq):

You have to override the default serde types in the .to() method:

average.to("average", Produced.with(Serdes.String(), Serdes.Double()));

For more details see:
https://docs.confluent.io/current/streams/developer-guide/dsl-api.html#writing-streams-back-to-kafka
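
Put together with the code from the question, the tail of the topology becomes (a sketch; Produced comes from org.apache.kafka.streams.kstream):

    // the sink needs serdes matching the String key and Double average value,
    // since the default (Avro) serdes from StreamsConfig no longer apply here
    KStream<String, Double> average = aggregatedStream
            .mapValues(v -> v.getAverage())
            .toStream();

    average.to("average", Produced.with(Serdes.String(), Serdes.Double()));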
