Why does CEP, when using processing time, only print the first event after I send the second event?

Asked by 5anewei6 on 2021-06-21, tagged Flink

I sent an event with isStart = true to Kafka and let Flink consume it, with the time characteristic set to ProcessingTime and the pattern timeout set to Time.seconds(5). I therefore expected CEP to print the event 5 seconds after I sent it, but it did not: the first event was only printed after I sent a second event to Kafka. Why does CEP only print the first event after I send a second one? When using processing time, shouldn't the first event be printed 5 seconds after I send it?
Here is the code:

import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternFlatSelectFunction;
import org.apache.flink.cep.PatternFlatTimeoutFunction;
import org.apache.flink.cep.PatternStream;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks;
import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;
import org.apache.flink.streaming.api.watermark.Watermark;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

import javax.annotation.Nullable;
import java.util.List;
import java.util.Map;
import java.util.Properties;

// TaxiRide and TaxiRideSchema come from the flink-training exercises project.
public class LongRidesWithKafka {
private static final String LOCAL_ZOOKEEPER_HOST = "localhost:2181";
private static final String LOCAL_KAFKA_BROKER = "localhost:9092";
private static final String RIDE_SPEED_GROUP = "rideSpeedGroup";
private static final int MAX_EVENT_DELAY = 60; // rides are at most 60 sec out-of-order.

public static void main(String[] args) throws Exception {
    final int popThreshold = 1; // threshold for popular places
    // set up streaming execution environment
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
    Properties kafkaProps = new Properties();
    //kafkaProps.setProperty("zookeeper.connect", LOCAL_ZOOKEEPER_HOST);
    kafkaProps.setProperty("bootstrap.servers", LOCAL_KAFKA_BROKER);
    kafkaProps.setProperty("group.id", RIDE_SPEED_GROUP);
    // always read the Kafka topic from the start
    kafkaProps.setProperty("auto.offset.reset", "earliest");

    // create a Kafka consumer
    FlinkKafkaConsumer011<TaxiRide> consumer = new FlinkKafkaConsumer011<>(
            "flinktest",
            new TaxiRideSchema(),
            kafkaProps);
    // assign a timestamp extractor to the consumer
    //consumer.assignTimestampsAndWatermarks(new CustomWatermarkExtractor());
    DataStream<TaxiRide> rides = env.addSource(consumer);

    DataStream<TaxiRide> keyedRides = rides.keyBy("rideId");
    // A complete taxi ride has a START event followed by an END event
    Pattern<TaxiRide, TaxiRide> completedRides =
            Pattern.<TaxiRide>begin("start")
                    .where(new SimpleCondition<TaxiRide>() {
                        @Override
                        public boolean filter(TaxiRide ride) throws Exception {
                            return ride.isStart;
                        }
                    })
                    .next("end")
                    .where(new SimpleCondition<TaxiRide>() {
                        @Override
                        public boolean filter(TaxiRide ride) throws Exception {
                            return !ride.isStart;
                        }
                    });

    // We want to find rides that have NOT been completed within the timeout
    // (5 seconds here, to make testing easy)
    PatternStream<TaxiRide> patternStream = CEP.pattern(keyedRides, completedRides.within(Time.seconds(5)));

    OutputTag<TaxiRide> timedout = new OutputTag<TaxiRide>("timedout") {};
    SingleOutputStreamOperator<TaxiRide> longRides = patternStream.flatSelect(
            timedout,
            new TaxiRideTimedOut(),        // defined below; there is no LongRides class in this file
            new FlatSelectNothing<TaxiRide>()
    );
    longRides.getSideOutput(timedout).print();
    env.execute("Long Taxi Rides");
}

public static class TaxiRideTimedOut implements PatternFlatTimeoutFunction<TaxiRide, TaxiRide> {
    @Override
    public void timeout(Map<String, List<TaxiRide>> map, long l, Collector<TaxiRide> collector) throws Exception {
        TaxiRide rideStarted = map.get("start").get(0);
        collector.collect(rideStarted);
    }
}

public static class FlatSelectNothing<T> implements PatternFlatSelectFunction<T, T> {
    @Override
    public void flatSelect(Map<String, List<T>> pattern, Collector<T> collector) {
    }
}

private static class TaxiRideTSExtractor extends AscendingTimestampExtractor<TaxiRide> {
    private static final long serialVersionUID = 1L;

    @Override
    public long extractAscendingTimestamp(TaxiRide ride) {

        //  Watermark Watermark = getCurrentWatermark();

        if (ride.isStart) {
            return ride.startTime.getMillis();
        } else {
            return ride.endTime.getMillis();
        }
    }
}

private static class CustomWatermarkExtractor implements AssignerWithPeriodicWatermarks<TaxiRide> {

    private static final long serialVersionUID = -742759155861320823L;

    private long currentTimestamp = Long.MIN_VALUE;

    @Override
    public long extractTimestamp(TaxiRide ride, long previousElementTimestamp) {
        // the inputs are assumed to be of format (message,timestamp)

        if (ride.isStart) {
            this.currentTimestamp = ride.startTime.getMillis();
            return ride.startTime.getMillis();
        } else {
            this.currentTimestamp = ride.endTime.getMillis();
            return ride.endTime.getMillis();
        }
    }

    @Nullable
    @Override
    public Watermark getCurrentWatermark() {
        return new Watermark(currentTimestamp == Long.MIN_VALUE ? Long.MIN_VALUE : currentTimestamp - 1);
    }
}

}

Answer #1, by mqkwyuun:

The reason is that Flink's CEP library currently only checks timeouts when another element arrives and is processed. The underlying assumption is that you have a steady stream of events.
I think this is a limitation of Flink's CEP library. To work correctly, Flink would have to register a processing-time timer for arrivalTime + timeout that triggers the pattern timeout if no further event arrives.
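The behavior can be reproduced without Flink. The sketch below is a hypothetical simplification (a single queue instead of Flink's per-key NFA state; all names are made up): `onEvent` mirrors how the CEP operator only advances its clock inside element processing, and `onTimer` shows what the suggested arrivalTime + timeout timer would change.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class CepTimeoutSimulation {

    private static final class PendingStart {
        final String rideId;
        final long deadline; // arrival time + timeout
        PendingStart(String rideId, long deadline) {
            this.rideId = rideId;
            this.deadline = deadline;
        }
    }

    private final long timeoutMs;
    private final Deque<PendingStart> pending = new ArrayDeque<>();
    final List<String> timedOut = new ArrayList<>();

    CepTimeoutSimulation(long timeoutMs) {
        this.timeoutMs = timeoutMs;
    }

    // Mirrors the CEP operator: the clock only advances when an element
    // is processed, so expired partial matches are emitted here and
    // nowhere else -- a lone START event is never timed out on its own.
    void onEvent(String rideId, boolean isStart, long processingTime) {
        expire(processingTime);
        if (isStart) {
            pending.addLast(new PendingStart(rideId, processingTime + timeoutMs));
        } else {
            // An END event completes the oldest pending START (simplified:
            // real CEP matches per rideId, this sketch keeps one queue).
            pending.pollFirst();
        }
    }

    // The suggested fix: a processing-time timer service would call this
    // at (arrivalTime + timeout) even when no further element arrives.
    void onTimer(long processingTime) {
        expire(processingTime);
    }

    private void expire(long now) {
        while (!pending.isEmpty() && pending.peekFirst().deadline <= now) {
            timedOut.add(pending.pollFirst().rideId);
        }
    }
}
```

With a 5-second timeout, a START at t=0 produces no output at t=5s; only a second call to `onEvent` (the second Kafka message) flushes the timed-out first ride, which is exactly what the question observes.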
