Spring Kafka, embedded Kafka testing

w1jd8yoj · published 2021-06-08 in Kafka

We are observing strange behavior in a service test that uses embedded Kafka.
The test is a Spock test; we use the JUnit rule KafkaEmbedded and propagate brokersAsString as follows:

@ClassRule
@Shared
KafkaEmbedded embeddedKafka = new KafkaEmbedded(1)

@Autowired
KafkaListenerEndpointRegistry endpointRegistry

def setupSpec() {
    System.setProperty("kafka.bootstrapServers",  embeddedKafka.getBrokersAsString())
}
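The application under test can then resolve this system property in its configuration. A minimal illustrative sketch (the property name `kafka.bootstrapServers` matches the one set in `setupSpec()`; the surrounding Spring Boot key is an assumption about the application's setup):

```properties
# src/test/resources/application.properties (illustrative)
# Resolves the system property set in setupSpec() at context startup
spring.kafka.bootstrap-servers=${kafka.bootstrapServers}
```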

Inspecting the Spring Kafka code, the constructor KafkaEmbedded(int count) produces one Kafka server with two partitions per topic.
To deal with partition assignment and server/client synchronization in the test, we follow the strategy used in Spring Kafka's ContainerTestUtils class:

public static void waitForAssignment(KafkaMessageListenerContainer<String, String> container, int partitions)
        throws Exception {

    log.info("Waiting for " + container.getContainerProperties().getTopics()
            + " to connect to " + partitions + " partitions.")

    int n = 0
    int count = 0
    while (n++ < 600 && count < partitions) {
        count = 0
        // getAssignedPartitions() may return null before the first rebalance completes,
        // so check for null before iterating
        if (container.getAssignedPartitions() != null) {
            container.getAssignedPartitions().each { TopicPartition it ->
                log.info(it.topic() + ":" + it.partition() + "; ")
            }
            count = container.getAssignedPartitions().size()
        }
        if (count < partitions) {
            Thread.sleep(100)
        }
    }
}
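The busy-wait pattern above generalizes to a small helper that polls any count supplier until a target is reached or the attempts run out. This is a sketch for illustration, not part of Spring Kafka (spring-kafka-test's own `ContainerTestUtils.waitForAssignment` implements the same idea for listener containers):

```java
import java.util.function.IntSupplier;

public class WaitUtil {

    /**
     * Polls the supplier every pollMillis until it returns at least target,
     * or maxAttempts polls have been made.
     *
     * @return true if the target count was reached
     */
    public static boolean waitForCount(IntSupplier supplier, int target,
                                       int maxAttempts, long pollMillis)
            throws InterruptedException {
        for (int n = 0; n < maxAttempts; n++) {
            if (supplier.getAsInt() >= target) {
                return true;
            }
            Thread.sleep(pollMillis);
        }
        // One final check after the last sleep
        return supplier.getAsInt() >= target;
    }
}
```

In the test above one would pass `() -> container.getAssignedPartitions() == null ? 0 : container.getAssignedPartitions().size()` as the supplier, which also removes the null-handling from the loop body.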

Observing the logs, we notice the following pattern:

2016-07-29 11:24:02.600  WARN 1160 --- [afka-consumer-1] org.apache.kafka.clients.NetworkClient   : Error while fetching metadata with correlation id 1 : {deliveryZipCode_v1=LEADER_NOT_AVAILABLE}
2016-07-29 11:24:02.600  WARN 1160 --- [afka-consumer-1] org.apache.kafka.clients.NetworkClient   : Error while fetching metadata with correlation id 1 : {staggering=LEADER_NOT_AVAILABLE}
2016-07-29 11:24:02.600  WARN 1160 --- [afka-consumer-1] org.apache.kafka.clients.NetworkClient   : Error while fetching metadata with correlation id 1 : {moa=LEADER_NOT_AVAILABLE}
2016-07-29 11:24:02.696  WARN 1160 --- [afka-consumer-1] org.apache.kafka.clients.NetworkClient   : Error while fetching metadata with correlation id 3 : {staggering=LEADER_NOT_AVAILABLE}
2016-07-29 11:24:02.699  WARN 1160 --- [afka-consumer-1] org.apache.kafka.clients.NetworkClient   : Error while fetching metadata with correlation id 3 : {moa=LEADER_NOT_AVAILABLE}
2016-07-29 11:24:02.699  WARN 1160 --- [afka-consumer-1] org.apache.kafka.clients.NetworkClient   : Error while fetching metadata with correlation id 3 : {deliveryZipCode_v1=LEADER_NOT_AVAILABLE}
2016-07-29 11:24:02.807  WARN 1160 --- [afka-consumer-1] org.apache.kafka.clients.NetworkClient   : Error while fetching metadata with correlation id 5 : {deliveryZipCode_v1=LEADER_NOT_AVAILABLE}
2016-07-29 11:24:02.811  WARN 1160 --- [afka-consumer-1] org.apache.kafka.clients.NetworkClient   : Error while fetching metadata with correlation id 5 : {staggering=LEADER_NOT_AVAILABLE}
2016-07-29 11:24:02.812  WARN 1160 --- [afka-consumer-1] org.apache.kafka.clients.NetworkClient   : Error while fetching metadata with correlation id 5 : {moa=LEADER_NOT_AVAILABLE}
2016-07-29 11:24:03.544  INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer    : partitions revoked:[]
2016-07-29 11:24:03.544  INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer    : partitions revoked:[]
2016-07-29 11:24:03.544  INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer    : partitions revoked:[]
2016-07-29 11:24:03.602  INFO 1160 --- [afka-consumer-1] o.a.k.c.c.internals.AbstractCoordinator  : SyncGroup for group timeslot-service-group-06x failed due to coordinator rebalance, rejoining the group
2016-07-29 11:24:03.637  INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned:[]
2016-07-29 11:24:03.637  INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned:[]
2016-07-29 11:24:04.065  INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned:[staggering-0]
2016-07-29 11:24:04.066  INFO 1160 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 50810 (http)
2016-07-29 11:24:04.073  INFO 1160 --- [           main] .t.s.AllocationsDeliveryZonesServiceSpec : Started AllocationsDeliveryZonesServiceSpec in 20.616 seconds (JVM running for 25.456)
2016-07-29 11:24:04.237  INFO 1160 --- [           main] org.eclipse.jetty.server.Server          : jetty-9.2.17.v20160517
2016-07-29 11:24:04.265  INFO 1160 --- [           main] o.e.jetty.server.handler.ContextHandler  : Started o.e.j.s.ServletContextHandler@6a8598e7{/__admin,null,AVAILABLE}
2016-07-29 11:24:04.270  INFO 1160 --- [           main] o.e.jetty.server.handler.ContextHandler  : Started o.e.j.s.ServletContextHandler@104ea372{/,null,AVAILABLE}
2016-07-29 11:24:04.279  INFO 1160 --- [           main] o.eclipse.jetty.server.ServerConnector   : Started ServerConnector@3c9b416a{HTTP/1.1}{0.0.0.0:50811}
2016-07-29 11:24:04.430  INFO 1160 --- [           main] o.eclipse.jetty.server.ServerConnector   : Started ServerConnector@7c214597{SSL-http/1.1}{0.0.0.0:50812}
2016-07-29 11:24:04.430  INFO 1160 --- [           main] org.eclipse.jetty.server.Server          : Started @25813ms
2016-07-29 11:24:04.632  INFO 1160 --- [           main] .t.s.AllocationsDeliveryZonesServiceSpec : waiting...
2016-07-29 11:24:04.662  INFO 1160 --- [           main] .t.s.AllocationsDeliveryZonesServiceSpec : Waiting for [moa] to connect to 2 partitions.
2016-07-29 11:24:13.644  INFO 1160 --- [afka-consumer-1] o.a.k.c.c.internals.AbstractCoordinator  : Attempt to heart beat failed since the group is rebalancing, try to re-join group.
2016-07-29 11:24:13.644  INFO 1160 --- [afka-consumer-1] o.a.k.c.c.internals.AbstractCoordinator  : Attempt to heart beat failed since the group is rebalancing, try to re-join group.
2016-07-29 11:24:13.644  INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer    : partitions revoked:[]
2016-07-29 11:24:13.644  INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer    : partitions revoked:[]
2016-07-29 11:24:13.655  INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned:[staggering-0]
2016-07-29 11:24:13.655  INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned:[moa-0]
2016-07-29 11:24:13.655  INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned:[deliveryZipCode_v1-0]
2016-07-29 11:24:13.740  INFO 1160 --- [           main] .t.s.AllocationsDeliveryZonesServiceSpec : moa:0;
[...]
2016-07-29 11:24:16.644  INFO 1160 --- [           main] .t.s.AllocationsDeliveryZonesServiceSpec : moa:0;
2016-07-29 11:24:16.666  INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer    : partitions revoked:[staggering-0]
2016-07-29 11:24:16.750  INFO 1160 --- [           main] .t.s.AllocationsDeliveryZonesServiceSpec : moa:0;
[...]
2016-07-29 11:24:23.559  INFO 1160 --- [           main] .t.s.AllocationsDeliveryZonesServiceSpec : moa:0;
2016-07-29 11:24:23.660  INFO 1160 --- [afka-consumer-1] o.a.k.c.c.internals.AbstractCoordinator  : Attempt to heart beat failed since the group is rebalancing, try to re-join group.
2016-07-29 11:24:23.660  INFO 1160 --- [afka-consumer-1] o.a.k.c.c.internals.AbstractCoordinator  : Attempt to heart beat failed since the group is rebalancing, try to re-join group.
2016-07-29 11:24:23.662  INFO 1160 --- [           main] .t.s.AllocationsDeliveryZonesServiceSpec : moa:0;
2016-07-29 11:24:23.686  INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer    : partitions revoked:[moa-0]
2016-07-29 11:24:23.686  INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer    : partitions revoked:[deliveryZipCode_v1-0]
2016-07-29 11:24:23.695  INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned:[moa-0]
2016-07-29 11:24:23.695  INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned:[staggering-0]
2016-07-29 11:24:23.695  INFO 1160 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned:[deliveryZipCode_v1-0]

Note that […] indicates omitted lines.
We set metadata.max.age.ms low so that the client refreshes its metadata frequently.
What puzzles us now is this: if we wait for two partitions to be assigned, the wait times out. Only when we wait for a single partition does everything eventually run successfully.
Are we misreading the code, i.e. does embedded Kafka really create two partitions per topic? And is it normal that only one of them is assigned to our listener?

ih99xse1:

I can't explain the flakiness you are seeing; and yes, by default each topic gets 2 partitions. I just ran one of the framework container tests and see this...

09:24:06.139 INFO  [testSlow3-kafka-consumer-1][org.springframework.kafka.listener.KafkaMessageListenerContainer] partitions revoked:[]
09:24:06.611 INFO  [testSlow3-kafka-consumer-1][org.springframework.kafka.listener.KafkaMessageListenerContainer] partitions assigned:[testTopic3-1, testTopic3-0]

kx5bkwkv:

For tests, set spring.kafka.consumer.auto-offset-reset=earliest to avoid race conditions (ordering or timing between consumer and producer); see https://docs.spring.io/spring-kafka/reference/html/#junit

Starting with version 2.5, the consumerProps methods set ConsumerConfig.AUTO_OFFSET_RESET_CONFIG to earliest. This is because, in most cases, you want the consumer to consume any messages sent in a test case. The ConsumerConfig default is latest, which means that messages already sent by a test before the consumer starts will not be received. To revert to the previous behavior, set the property to latest after calling the method.
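In a Spring Boot test this is a one-line property (the key follows the Spring Kafka documentation linked above):

```properties
# Let the test consumer read records produced before its first poll
spring.kafka.consumer.auto-offset-reset=earliest
```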
