Unit tests fail to start with the latest EmbeddedKafkaRule after upgrading Kafka to 2.x

siotufzp, posted 2021-06-06 in Kafka

My test has the following Kafka dependencies:

|    +--- org.apache.kafka:kafka_2.11:2.0.0 -> 2.0.1
|    |    +--- org.apache.kafka:kafka-clients:2.0.1 -> 2.1.1-cp1
|    +--- org.apache.kafka:kafka-clients:2.0.0 -> 2.1.1-cp1 (*)
|    +--- io.confluent:kafka-avro-serializer:5.1.2
|    |    +--- io.confluent:kafka-schema-registry-client:5.1.2
|    |    |    +--- org.apache.kafka:kafka-clients:2.1.1-cp1 (*)
|    +--- io.confluent:kafka-schema-registry-client:5.1.2 (*)
+--- org.springframework.kafka:spring-kafka:2.2.4.RELEASE
|    \--- org.apache.kafka:kafka-clients:2.0.1 -> 2.1.1-cp1 (*)
+--- org.springframework.kafka:spring-kafka-test:2.2.4.RELEASE
|    +--- org.apache.kafka:kafka-clients:2.0.1 -> 2.1.1-cp1 (*)
|    \--- org.apache.kafka:kafka_2.11:2.0.1 (*)

I added the rule to my JUnit entry-point class:

@ClassRule
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true,
        SENDER_TOPICS.toArray(new String[0]));
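
For reference, the three constructor arguments here are the number of brokers to start, whether to use controlled shutdown, and the topics to create, following the EmbeddedKafkaRule(int count, boolean controlledShutdown, String... topics) constructor in spring-kafka-test.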

But the unit test fails to start with the following exception:

08:11:06.122 [main] ERROR kafka.server.BrokerMetadataCheckpoint - Failed to read meta.properties file under dir C:\Users\shanh\AppData\Local\Temp\kafka-2116758262206608778\meta.properties due to C:\Users\shanh\AppData\Local\Temp\kafka-2116758262206608778\meta.properties
08:11:06.125 [main] ERROR kafka.server.KafkaServer - Fail to read meta.properties under log directory C:\Users\shanh\AppData\Local\Temp\kafka-2116758262206608778
java.nio.file.NoSuchFileException: C:\Users\shanh\AppData\Local\Temp\kafka-2116758262206608778\meta.properties
    at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79) ~[?:1.8.0_172]
    at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) ~[?:1.8.0_172]
    at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) ~[?:1.8.0_172]
    at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(WindowsFileSystemProvider.java:230) ~[?:1.8.0_172]
    at java.nio.file.Files.newByteChannel(Files.java:361) ~[?:1.8.0_172]
    at java.nio.file.Files.newByteChannel(Files.java:407) ~[?:1.8.0_172]
    at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384) ~[?:1.8.0_172]
    at java.nio.file.Files.newInputStream(Files.java:152) ~[?:1.8.0_172]
    at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:560) ~[kafka-clients-2.1.1-cp1.jar:?]
    at kafka.server.BrokerMetadataCheckpoint.liftedTree2$1(BrokerMetadataCheckpoint.scala:63) ~[kafka_2.11-2.0.0.jar:?]
    at kafka.server.BrokerMetadataCheckpoint.read(BrokerMetadataCheckpoint.scala:62) ~[kafka_2.11-2.0.0.jar:?]
    at kafka.server.KafkaServer$$anonfun$getBrokerIdAndOfflineDirs$1.apply(KafkaServer.scala:665) [kafka_2.11-2.0.0.jar:?]
    at kafka.server.KafkaServer$$anonfun$getBrokerIdAndOfflineDirs$1.apply(KafkaServer.scala:663) [kafka_2.11-2.0.0.jar:?]
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33) [scala-library-2.11.8.jar:?]
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35) [scala-library-2.11.8.jar:?]
    at kafka.server.KafkaServer.getBrokerIdAndOfflineDirs(KafkaServer.scala:663) [kafka_2.11-2.0.0.jar:?]
    at kafka.server.KafkaServer.startup(KafkaServer.scala:209) [kafka_2.11-2.0.0.jar:?]
    at kafka.utils.TestUtils$.createServer(TestUtils.scala:132) [kafka_2.11-2.0.1-test.jar:?]
    at kafka.utils.TestUtils.createServer(TestUtils.scala) [kafka_2.11-2.0.1-test.jar:?]
    at org.springframework.kafka.test.EmbeddedKafkaBroker.afterPropertiesSet(EmbeddedKafkaBroker.java:223) [spring-kafka-test-2.2.4.RELEASE.jar:2.2.4.RELEASE]
    at org.springframework.kafka.test.rule.EmbeddedKafkaRule.before(EmbeddedKafkaRule.java:109) [spring-kafka-test-2.2.4.RELEASE.jar:2.2.4.RELEASE]
    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:46) [junit-4.12.jar:4.12]
    at org.junit.rules.RunRules.evaluate(RunRules.java:20) [junit-4.12.jar:4.12]
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
    at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86) [.cp/:?]
    at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) [.cp/:?]
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:538) [.cp/:?]
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:760) [.cp/:?]
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:460) [.cp/:?]
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:206) [.cp/:?]
08:11:06.455 [main] ERROR kafka.server.LogDirFailureChannel - Failed to create or validate data directory C:\Users\shanh\AppData\Local\Temp\kafka-2116758262206608778
java.io.IOException: Failed to load C:\Users\shanh\AppData\Local\Temp\kafka-2116758262206608778 during broker startup
    at kafka.log.LogManager$$anonfun$createAndValidateLogDirs$1.apply(LogManager.scala:152) [kafka_2.11-2.0.0.jar:?]
    at kafka.log.LogManager$$anonfun$createAndValidateLogDirs$1.apply(LogManager.scala:149) [kafka_2.11-2.0.0.jar:?]
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) [scala-library-2.11.8.jar:?]
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) [scala-library-2.11.8.jar:?]
    at kafka.log.LogManager.createAndValidateLogDirs(LogManager.scala:149) [kafka_2.11-2.0.0.jar:?]
    at kafka.log.LogManager.<init>(LogManager.scala:80) [kafka_2.11-2.0.0.jar:?]
    at kafka.log.LogManager$.apply(LogManager.scala:953) [kafka_2.11-2.0.0.jar:?]
    at kafka.server.KafkaServer.startup(KafkaServer.scala:237) [kafka_2.11-2.0.0.jar:?]
    at kafka.utils.TestUtils$.createServer(TestUtils.scala:132) [kafka_2.11-2.0.1-test.jar:?]
    at kafka.utils.TestUtils.createServer(TestUtils.scala) [kafka_2.11-2.0.1-test.jar:?]
    at org.springframework.kafka.test.EmbeddedKafkaBroker.afterPropertiesSet(EmbeddedKafkaBroker.java:223) [spring-kafka-test-2.2.4.RELEASE.jar:2.2.4.RELEASE]
    at org.springframework.kafka.test.rule.EmbeddedKafkaRule.before(EmbeddedKafkaRule.java:109) [spring-kafka-test-2.2.4.RELEASE.jar:2.2.4.RELEASE]
    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:46) [junit-4.12.jar:4.12]
    at org.junit.rules.RunRules.evaluate(RunRules.java:20) [junit-4.12.jar:4.12]
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
    at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86) [.cp/:?]
    at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) [.cp/:?]
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:538) [.cp/:?]
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:760) [.cp/:?]
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:460) [.cp/:?]
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:206) [.cp/:?]
08:11:06.458 [main] ERROR kafka.log.LogManager - Shutdown broker because none of the specified log dirs from C:\Users\shanh\AppData\Local\Temp\kafka-2116758262206608778 can be created or validated

Before this I was using org.apache.kafka:kafka_2.11:0.10.1.1 with spring-kafka-test:1.1.3.RELEASE and everything worked fine. I don't know whether this is a problem with the new dependencies or a bug in spring-kafka-test.

z2acfund (answer #1)

The problem was that the Kafka version in my dependency tree had been bumped to 2.1.1 (org.apache.kafka:kafka-clients:2.0.1 -> 2.1.1-cp1). After using forcedModules to pin it back to org.apache.kafka:kafka-clients:2.0.1, my unit tests run without problems.
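
For reference, a minimal sketch of such a pin in the Gradle build script (this assumes a Groovy-DSL build.gradle; take the exact coordinates and versions from your own dependency tree, and note that pinning kafka_2.11 as well is an extra assumption, to keep the embedded broker and the clients on the same 2.0.x line):

configurations.all {
    resolutionStrategy {
        // Pin kafka-clients back to 2.0.1 so it matches the embedded kafka_2.11 2.0.x broker
        force 'org.apache.kafka:kafka-clients:2.0.1'
        // Assumption: also keep the broker artifact on the same minor version
        force 'org.apache.kafka:kafka_2.11:2.0.1'
    }
}

The same thing can be expressed through resolutionStrategy.forcedModules instead of the force calls.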
