Spring Boot / Kafka application: how to prevent the application from shutting down?

iyr7buue · asked 2021-06-07 · in Kafka

I am working on a Spring Boot 2.1.1 application that needs to communicate with Kafka. The application starts, connects to Kafka, reads/writes some messages, and then exits. The goal is to keep the application running and listening on some Kafka topics without exiting.
Here is the main application file:

package com.example.springbootstarter;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.TopicPartition;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.kafka.support.SendResult;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.messaging.handler.annotation.Payload;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

@SpringBootApplication
public class SpringBootStarterApplication {

    public static void main(String[] args) throws Exception {

        ConfigurableApplicationContext context = SpringApplication.run(SpringBootStarterApplication.class, args);

        MessageProducer producer = context.getBean(MessageProducer.class);
        MessageListener listener = context.getBean(MessageListener.class);
        /*
         * Sending a Hello World message to topic 'baeldung'.
         * Must be received by both listeners, with groups foo
         * and bar using containerFactory fooKafkaListenerContainerFactory
         * and barKafkaListenerContainerFactory respectively.
         * It will also be received by the listener with
         * headersKafkaListenerContainerFactory as container factory.
         */
        producer.sendMessage("Hello, World!");
        listener.latch.await(10, TimeUnit.SECONDS);

        /*
         * Sending messages to a topic with 5 partitions,
         * each message to a different partition. But as per the
         * listener configuration, only the messages from
         * partitions 0 and 3 will be consumed.
         */
        for (int i = 0; i < 5; i++) {
            producer.sendMessageToPartion("Hello To Partioned Topic!", i);
        }
        listener.partitionLatch.await(10, TimeUnit.SECONDS);

        /*
         * Sending messages to the 'filtered' topic. As per the
         * listener configuration, all messages containing the
         * character sequence 'World' will be discarded.
         */
        producer.sendMessageToFiltered("Hello Baeldung!");
        producer.sendMessageToFiltered("Hello World!");
        listener.filterLatch.await(10, TimeUnit.SECONDS);

        /*
         * Sending a message to the 'greeting' topic. This sends
         * and receives a Java object with the help of
         * greetingKafkaListenerContainerFactory.
         */
        producer.sendGreetingMessage(new Greeting("Greetings", "World!"));
        listener.greetingLatch.await(10, TimeUnit.SECONDS);

        context.close();
    }

    @Bean
    public MessageProducer messageProducer() {
        return new MessageProducer();
    }

    @Bean
    public MessageListener messageListener() {
        return new MessageListener();
    }

    public static class MessageProducer {

        @Autowired
        private KafkaTemplate<String, String> kafkaTemplate;

        @Autowired
        private KafkaTemplate<String, Greeting> greetingKafkaTemplate;

        @Value(value = "${message.topic.name}")
        private String topicName;

        @Value(value = "${partitioned.topic.name}")
        private String partionedTopicName;

        @Value(value = "${filtered.topic.name}")
        private String filteredTopicName;

        @Value(value = "${greeting.topic.name}")
        private String greetingTopicName;

        public void sendMessage(String message) {

            ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topicName, message);

            future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {

                @Override
                public void onSuccess(SendResult<String, String> result) {
                    System.out.println("Sent message=[" + message + "] with offset=[" + result.getRecordMetadata().offset() + "]");
                }
                @Override
                public void onFailure(Throwable ex) {
                    System.out.println("Unable to send message=[" + message + "] due to : " + ex.getMessage());
                }
            });
        }

        public void sendMessageToPartion(String message, int partition) {
            kafkaTemplate.send(partionedTopicName, partition, null, message);
        }

        public void sendMessageToFiltered(String message) {
            kafkaTemplate.send(filteredTopicName, message);
        }

        public void sendGreetingMessage(Greeting greeting) {
            greetingKafkaTemplate.send(greetingTopicName, greeting);
        }
    }

    public static class MessageListener {

        private CountDownLatch latch = new CountDownLatch(3);

        private CountDownLatch partitionLatch = new CountDownLatch(2);

        private CountDownLatch filterLatch = new CountDownLatch(2);

        private CountDownLatch greetingLatch = new CountDownLatch(1);

        @KafkaListener(topics = "${message.topic.name}", groupId = "foo", containerFactory = "fooKafkaListenerContainerFactory")
        public void listenGroupFoo(String message) {
            System.out.println("Received Messasge in group 'foo': " + message);
            latch.countDown();
        }

        @KafkaListener(topics = "${message.topic.name}", groupId = "bar", containerFactory = "barKafkaListenerContainerFactory")
        public void listenGroupBar(String message) {
            System.out.println("Received Messasge in group 'bar': " + message);
            latch.countDown();
        }

        @KafkaListener(topics = "${message.topic.name}", containerFactory = "headersKafkaListenerContainerFactory")
        public void listenWithHeaders(@Payload String message, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {
            System.out.println("Received Messasge: " + message + " from partition: " + partition);
            latch.countDown();
        }

        @KafkaListener(topicPartitions = @TopicPartition(topic = "${partitioned.topic.name}", partitions = { "0", "3" }))
        public void listenToParition(@Payload String message, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {
            System.out.println("Received Message: " + message + " from partition: " + partition);
            this.partitionLatch.countDown();
        }

        @KafkaListener(topics = "${filtered.topic.name}", containerFactory = "filterKafkaListenerContainerFactory")
        public void listenWithFilter(String message) {
            System.out.println("Recieved Message in filtered listener: " + message);
            this.filterLatch.countDown();
        }

        @KafkaListener(topics = "${greeting.topic.name}", containerFactory = "greetingKafkaListenerContainerFactory")
        public void greetingListener(Greeting greeting) {
            System.out.println("Recieved greeting message: " + greeting);
            this.greetingLatch.countDown();
        }

    }

}
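
For completeness: the @Value placeholders above are resolved from application.properties. A minimal sketch of an assumed configuration follows. Only the property keys come from the code; 'test' and 'greetings' appear as assigned partitions in the log below and the bootstrap address appears in the AdminClientConfig dump, while everything else (including the use of Spring Boot's spring.kafka.bootstrap-servers key rather than a custom configuration class) is an assumption:

# application.properties (assumed; keys taken from the @Value annotations above)
# Bootstrap address as seen in the AdminClientConfig dump in the log.
spring.kafka.bootstrap-servers=127.0.0.1:9092
message.topic.name=test
greeting.topic.name=greetings
# Hypothetical names for the remaining topics.
partitioned.topic.name=partitioned
filtered.topic.name=filtered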

Here is an abbreviated log output from the application run:

2019-01-11 13:51:51.728  INFO 40885 --- [           main] c.e.s.SpringBootStarterApplication       : Starting SpringBootStarterApplication on CHIMAC11592-2.local with PID 40885 (/Users/e602684/Documents/dev/spring-boot-mongo-crud/target/classes started by e602684 in /Users/e602684/Documents/dev/spring-boot-mongo-crud)
2019-01-11 13:51:51.731  INFO 40885 --- [           main] c.e.s.SpringBootStarterApplication       : No active profile set, falling back to default profiles: default
2019-01-11 13:51:52.181  INFO 40885 --- [           main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data repositories in DEFAULT mode.
2019-01-11 13:51:52.223  INFO 40885 --- [           main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 39ms. Found 1 repository interfaces.
2019-01-11 13:51:52.389  INFO 40885 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.kafka.annotation.KafkaBootstrapConfiguration' of type [org.springframework.kafka.annotation.KafkaBootstrapConfiguration$$EnhancerBySpringCGLIB$$6ef5e3a3] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2019-01-11 13:51:52.666  INFO 40885 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8080 (http)
2019-01-11 13:51:52.684  INFO 40885 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2019-01-11 13:51:52.684  INFO 40885 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet Engine: Apache Tomcat/9.0.13
2019-01-11 13:51:52.689  INFO 40885 --- [           main] o.a.catalina.core.AprLifecycleListener   : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/Users/e602684/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.]
2019-01-11 13:51:52.773  INFO 40885 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2019-01-11 13:51:52.774  INFO 40885 --- [           main] o.s.web.context.ContextLoader            : Root WebApplicationContext: initialization completed in 1012 ms
2019-01-11 13:51:52.993  INFO 40885 --- [           main] org.mongodb.driver.cluster               : Cluster created with settings {hosts=[localhost:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
2019-01-11 13:51:52.993  INFO 40885 --- [           main] org.mongodb.driver.cluster               : Adding discovered server localhost:27017 to client view of cluster
2019-01-11 13:51:53.031  INFO 40885 --- [localhost:27017] org.mongodb.driver.connection            : Opened connection [connectionId{localValue:1, serverValue:12}] to localhost:27017
2019-01-11 13:51:53.034  INFO 40885 --- [localhost:27017] org.mongodb.driver.cluster               : Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 0, 4]}, minWireVersion=0, maxWireVersion=7, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=1632137}
2019-01-11 13:51:53.035  INFO 40885 --- [localhost:27017] org.mongodb.driver.cluster               : Discovered cluster type of STANDALONE
2019-01-11 13:51:53.412  INFO 40885 --- [           main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2019-01-11 13:51:53.574  INFO 40885 --- [           main] o.a.k.clients.admin.AdminClientConfig    : AdminClientConfig values: 
    bootstrap.servers = [127.0.0.1:9092]
    client.id = 
    connections.max.idle.ms = 300000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 120000
    retries = 5
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS

2019-01-11 13:51:53.968  INFO 40885 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka version : 2.0.1
2019-01-11 13:51:53.968  INFO 40885 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId : fa14705e51bd2ce5
2019-01-11 13:51:53.973  INFO 40885 --- [ad | producer-1] org.apache.kafka.clients.Metadata        : Cluster ID: QW5A9DYxTlSZVC2M8pxAsg
Sent message=[Hello, World!] with offset=[9]
2019-01-11 13:51:56.858  INFO 40885 --- [ntainer#2-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-2, groupId=headers] Successfully joined group with generation 9
2019-01-11 13:51:56.859  INFO 40885 --- [ntainer#2-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-2, groupId=headers] Setting newly assigned partitions [test-0]
2019-01-11 13:51:56.862  INFO 40885 --- [ntainer#2-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned: [test-0]
Received Messasge: Hello, World! from partition: 0
2019-01-11 13:51:56.895  INFO 40885 --- [ntainer#4-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-6, groupId=filter] Successfully joined group with generation 9
2019-01-11 13:51:56.896  INFO 40885 --- [ntainer#4-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-6, groupId=filter] Setting newly assigned partitions [test-0]
2019-01-11 13:51:56.898  INFO 40885 --- [ntainer#4-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned: [test-0]
2019-01-11 13:51:56.915  INFO 40885 --- [ntainer#5-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-8, groupId=greeting] Successfully joined group with generation 9
2019-01-11 13:51:56.916  INFO 40885 --- [ntainer#5-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-8, groupId=greeting] Setting newly assigned partitions [greetings-0]
2019-01-11 13:51:56.919  INFO 40885 --- [ntainer#5-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned: [greetings-0]
2019-01-11 13:51:56.930  INFO 40885 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-10, groupId=foo] Successfully joined group with generation 9
2019-01-11 13:51:56.931  INFO 40885 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-10, groupId=foo] Setting newly assigned partitions [test-0]
2019-01-11 13:51:56.933  INFO 40885 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned: [test-0]
Recieved greeting message: Greetings, World!!
Received Messasge in group 'foo': Hello, World!
2019-01-11 13:51:56.944  INFO 40885 --- [ntainer#1-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-12, groupId=bar] Successfully joined group with generation 9
2019-01-11 13:51:56.944  INFO 40885 --- [ntainer#1-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-12, groupId=bar] Setting newly assigned partitions [test-0]
2019-01-11 13:51:56.946  INFO 40885 --- [ntainer#1-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned: [test-0]
Received Messasge in group 'bar': Hello, World!
Received Message: Hello To Partioned Topic! from partition: 0
Received Message: Hello To Partioned Topic! from partition: 3
Received Messasge: Hello Baeldung! from partition: 0
Received Messasge: Hello World! from partition: 0
Received Messasge in group 'bar': Hello Baeldung!
Received Messasge in group 'bar': Hello World!
Received Messasge in group 'foo': Hello Baeldung!
Received Messasge in group 'foo': Hello World!
Recieved Message in filtered listener: Hello Baeldung!
2019-01-11 13:52:06.968  INFO 40885 --- [           main] o.a.k.clients.producer.ProducerConfig    : ProducerConfig values: 
    acks = 1
    batch.size = 16384
    bootstrap.servers = [127.0.0.1:9092]
    buffer.memory = 33554432
    client.id = 
    compression.type = none
    connections.max.idle.ms = 540000
    enable.idempotence = false
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 0
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 0
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer

2019-01-11 13:52:06.971  INFO 40885 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka version : 2.0.1
2019-01-11 13:52:06.971  INFO 40885 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId : fa14705e51bd2ce5
2019-01-11 13:52:06.984  INFO 40885 --- [ad | producer-2] org.apache.kafka.clients.Metadata        : Cluster ID: QW5A9DYxTlSZVC2M8pxAsg
2019-01-11 13:52:07.002  INFO 40885 --- [ntainer#3-0-C-1] o.s.s.c.ThreadPoolTaskScheduler          : Shutting down ExecutorService
2019-01-11 13:52:07.003  INFO 40885 --- [ntainer#0-0-C-1] o.s.s.c.ThreadPoolTaskScheduler          : Shutting down ExecutorService
2019-01-11 13:52:07.003  INFO 40885 --- [ntainer#2-0-C-1] o.s.s.c.ThreadPoolTaskScheduler          : Shutting down ExecutorService
2019-01-11 13:52:07.003  INFO 40885 --- [ntainer#1-0-C-1] o.s.s.c.ThreadPoolTaskScheduler          : Shutting down ExecutorService
2019-01-11 13:52:07.003  INFO 40885 --- [ntainer#4-0-C-1] o.s.s.c.ThreadPoolTaskScheduler          : Shutting down ExecutorService
2019-01-11 13:52:07.005  INFO 40885 --- [ntainer#5-0-C-1] o.s.s.c.ThreadPoolTaskScheduler          : Shutting down ExecutorService
2019-01-11 13:52:07.007  INFO 40885 --- [ntainer#3-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
2019-01-11 13:52:07.014  INFO 40885 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
2019-01-11 13:52:07.014  INFO 40885 --- [ntainer#2-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
2019-01-11 13:52:07.021  INFO 40885 --- [ntainer#1-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
2019-01-11 13:52:07.021  INFO 40885 --- [ntainer#4-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
2019-01-11 13:52:07.021  INFO 40885 --- [ntainer#5-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
2019-01-11 13:52:07.022  INFO 40885 --- [           main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Shutting down ExecutorService 'applicationTaskExecutor'
2019-01-11 13:52:07.022  INFO 40885 --- [           main] o.a.k.clients.producer.KafkaProducer     : [Producer clientId=producer-2] Closing the Kafka producer with timeoutMillis = 30000 ms.
2019-01-11 13:52:07.024  INFO 40885 --- [           main] o.a.k.clients.producer.KafkaProducer     : [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 30000 ms.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.catalina.loader.WebappClassLoaderBase (file:/Users/e602684/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.13/tomcat-embed-core-9.0.13.jar) to field java.io.ObjectStreamClass$Caches.localDescs
WARNING: Please consider reporting this to the maintainers of org.apache.catalina.loader.WebappClassLoaderBase
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Disconnected from the target VM, address: '127.0.0.1:57175', transport: 'socket'

Process finished with exit code 0

What should I change in the main application file to keep this application from exiting?

mfuanj7w · Answer #1:

Since the context assignment is part of the run statement:

ConfigurableApplicationContext context =
            SpringApplication.run(SpringBootStarterApplication.class, args);

All I had to do to keep the application from shutting down was to comment out context.close():

//context.close();
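
With the context left open, the @KafkaListener container threads and the embedded Tomcat server (both visible in the log above) keep the JVM alive, so the listeners keep consuming. A minimal sketch of the revised main method, assuming the demo producer calls are still wanted at startup (they could be dropped entirely if the goal is only to listen):

    public static void main(String[] args) throws Exception {

        ConfigurableApplicationContext context = SpringApplication.run(SpringBootStarterApplication.class, args);

        // Optionally still send the demo messages once at startup.
        MessageProducer producer = context.getBean(MessageProducer.class);
        producer.sendMessage("Hello, World!");

        // Do NOT close the context. Leaving it open keeps the listener
        // containers and the embedded web server running, so the
        // application stays up and keeps listening on its topics.
        // context.close();
    }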
