I'm having a problem with TopologyTestDriver, because the following property must be set: KafkaAvroSerializerConfig.SCHEMA_REGISTRY_URL_CONFIG, "url"
When the topology tries to publish records to a topic, it connects to the "url" provided in the configuration. How can I mock that access so it points to a MockSchemaRegistryClient instead?
To write Avro records to a topic, I use mockSchemaRegistryClient.register.
One more question: how can I load a state store into the topology? I create the state store on initialization (the topic is already created).
My dependencies:
testImplementation("org.junit.jupiter:junit-jupiter-api:5.3.1")
testRuntimeOnly("org.junit.jupiter:junit-jupiter-engine:5.3.1")
testCompile 'org.mockito:mockito-core:2.18.3'
testCompile 'org.assertj:assertj-core:3.9.1'
testCompile ("org.mockito:junit-jupiter:2.20.0")
testCompile 'org.skyscreamer:jsonassert:1.5.0'
testCompile group: 'org.springframework', name: 'spring-test', version: '5.0.8.RELEASE'
testCompile 'org.apache.kafka:kafka-streams-test-utils:2.0.0'
And here is my code:
@ExtendWith(SpringExtension.class)
@Import({KafkaStreamsCdlcfMapperConfiguration.class, KafkaStreamsCdlcfMapperSpecificConfiguration.class,
        CdlcfStreamsTopologyImpl.class, CdlcfMappingProcessor.class, CdlcfMappingServiceImpl.class,
        RecordParserServiceImpl.class, FormatFileFromJarImpl.class})
@TestPropertySource(locations = "../application.properties")
public class SyncronizerIntegrationTest {

    String schemaRegistryUrl = "http://mock:8081";

    @Autowired
    private CdlcfStreamsTopology cdlcfStreamsTopology;

    private GenericDatumWriter<GenericRecord> datumWriter;

    MockSchemaRegistryClient mockSchemaRegistryClient = new MockSchemaRegistryClient();

    @Value("${cdlcf-mapper.topics.unmapped-cdlcf}") String unmappedCdlcfTopic;
    @Value("${cdlcf-mapper.topics.mapped-cdlcf}") String mappedCdlcfTopic;
    @Value("${cdlcf.topics.logs}") String logsTopic;

    @Test
    void integrationTest() throws Exception {

        Properties fakeProps = new Properties();
        fakeProps.setProperty(StreamsConfig.APPLICATION_ID_CONFIG, "streamsTest");
        fakeProps.setProperty(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");
        fakeProps.setProperty(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        fakeProps.setProperty(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, GenericAvroSerde.class.getName());
        fakeProps.setProperty("value.serializer", KafkaAvroSerializer.class.getName());
        fakeProps.setProperty(KafkaAvroSerializerConfig.SCHEMA_REGISTRY_URL_CONFIG, "url"); // if records are produced, the serializer will try to register the schema in the schema registry

        StreamsBuilder kStreamBuilder = new StreamsBuilder();

        int idSchema = mockSchemaRegistryClient.register(getSubjectName("topic", false), Tracking.getClassSchema());
        Serde<GenericRecord> avroSerde = getAvroSerde(mockSchemaRegistryClient);

        ConsumerRecordFactory<String, String> recordFactory = new ConsumerRecordFactory<>(new StringSerializer(), new StringSerializer());
        String lineContent = "lineContent";

        TopologyTestDriver testDriver = new TopologyTestDriver(cdlcfStreamsTopology.getTopology(), fakeProps);
        testDriver.pipeInput(recordFactory.create(unmappedCdlcfTopic, "CDLCF_20180903_125115009", lineContent));
    }
}
This throws the following exception (obviously, because no Schema Registry is running):
org.apache.kafka.streams.errors.StreamsException: Exception caught in
process. taskId=0_0, processor=KSTREAM-SOURCE-0000000002, topic=test, partition=0, offset=0
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:304)
at org.apache.kafka.streams.TopologyTestDriver.pipeInput(TopologyTestDriver.java:393)
Caused by: org.apache.kafka.common.errors.SerializationException: Error serializing Avro message
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1220)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:984)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1334)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1309)
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:172)
at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:229)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:320)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:312)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:307)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:114)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:153)
at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:79)
at io.confluent.kafka.serializers.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:53)
at io.confluent.kafka.streams.serdes.avro.SpecificAvroSerializer.serialize(SpecificAvroSerializer.java:65)
at io.confluent.kafka.streams.serdes.avro.SpecificAvroSerializer.serialize(SpecificAvroSerializer.java:38)
at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:154)
at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:98)
at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:89)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:143)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:129)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:90)
at org.apache.kafka.streams.kstream.internals.KStreamPassThrough$KStreamPassThroughProcessor.process(KStreamPassThrough.java:33)
at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:50)
at org.apache.kafka.streams.processor.internals.ProcessorNode.runAndMeasureLatency(ProcessorNode.java:244)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:133)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:143)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:122)
at org.apache.kafka.streams.kstream.internals.KStreamBranch$KStreamBranchProcessor.process(KStreamBranch.java:48)
at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:50)
at org.apache.kafka.streams.processor.internals.ProcessorNode.runAndMeasureLatency(ProcessorNode.java:244)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:133)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:143)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:126)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:90)
at org.apache.kafka.streams.kstream.internals.KStreamFlatMap$KStreamFlatMapProcessor.process(KStreamFlatMap.java:42)
at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:50)
at org.apache.kafka.streams.processor.internals.ProcessorNode.runAndMeasureLatency(ProcessorNode.java:244)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:133)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:143)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:129)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:90)
at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:87)
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:288)
at org.apache.kafka.streams.TopologyTestDriver.pipeInput(TopologyTestDriver.java:393)
3 Answers
Answer 1
OK, thanks to Matthias I found a working solution, but I'll point out some tricks to improve on it, because I had to add a method used only for testing (which I don't like). As Matthias pointed out, the problem is that the Serdes generated inside the topology don't point to the mocked schema registry. So I wrote a Serdes setter on the topology and set the "mocked Serdes".
Here is the solution code:
A setter for the topology (this should be changed…):
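The original snippet did not survive in this copy; a minimal sketch of such a test-only setter, assuming CdlcfStreamsTopologyImpl keeps its Serdes in fields (the field and method names below are hypothetical):

import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;

public class CdlcfStreamsTopologyImpl implements CdlcfStreamsTopology {

    private Serde<String> keySerde = Serdes.String();
    private Serde<GenericRecord> valueSerde; // normally built from the real schema registry config

    // Test-only hook: lets the test inject Serdes backed by a MockSchemaRegistryClient
    void setSerdes(Serde<String> keySerde, Serde<GenericRecord> valueSerde) {
        this.keySerde = keySerde;
        this.valueSerde = valueSerde;
    }
}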
Create the Avro Serdes with the mocked schema registry:
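That code block is also missing here; a sketch of a getAvroSerde helper, assuming the public KafkaAvroSerializer/KafkaAvroDeserializer constructors that take a SchemaRegistryClient:

import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;
import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;

@SuppressWarnings({"unchecked", "rawtypes"})
private static Serde<GenericRecord> getAvroSerde(SchemaRegistryClient client) {
    // Both serializer and deserializer talk to the mock client instead of
    // opening an HTTP connection to a real schema registry
    return (Serde) Serdes.serdeFrom(
            new KafkaAvroSerializer(client),
            new KafkaAvroDeserializer(client));
}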
I had to create this OwnSpecificAvroSerde so I could pass the mocked schema registry client through the constructor. To do that, I had to create a local package with the same name as the Avro serde library's package, in order to access the constructor of the default class that takes a schema registry client.
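The class itself was not preserved; a rough sketch of the idea, assuming the parent SpecificAvroSerde exposes a constructor taking a SchemaRegistryClient to classes in its own package (only the name OwnSpecificAvroSerde comes from the answer, the body is a guess):

// Declared in the library's own package so package-visible members are reachable
package io.confluent.kafka.streams.serdes.avro;

import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;
import org.apache.avro.specific.SpecificRecord;

public class OwnSpecificAvroSerde<T extends SpecificRecord> extends SpecificAvroSerde<T> {

    // Forward the mocked client to the parent serde; assumes such a
    // constructor is visible from this package in the library version used
    public OwnSpecificAvroSerde(SchemaRegistryClient client) {
        super(client);
    }
}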
Importantly, register the schemas in the mock schema registry:
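For example, as in the test above (Tracking is the generated Avro class; the subject name has to match the one the serializer will look up):

mockSchemaRegistryClient.register(getSubjectName("your-topic", false), Tracking.getClassSchema());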
You also have to import getSubjectName from the mocked schema registry library, so that you generate the same subject keys it uses internally to look up the schema id.

Answer 2
Here is an example of using MockSchemaRegistryClient:
https://objectpartners.com/2018/08/21/testing-with-spring-kafka-and-mockschemaregistryclient/

Answer 3
I ran into exactly the same problem with Kafka Streams, TopologyTestDriver and the mock schema registry. I always fixed it by explicitly providing the key and value Serdes to the Kafka Streams topology I was trying to test.
Example:
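The answer's code blocks were lost in this copy; a plausible sketch, passing the Serdes explicitly on the output side (the topic and serde names are placeholders):

stream.to(mappedCdlcfTopic, Produced.with(Serdes.String(), avroSerde));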
or:
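or on the input side (again a sketch with placeholder names):

KStream<String, GenericRecord> stream =
        builder.stream(unmappedCdlcfTopic, Consumed.with(Serdes.String(), avroSerde));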
This way it also handles internal topics and state stores, because you can configure your custom Serde with the MockSchemaRegistryClient during the test phase. You don't even have to register the topics in the MockSchemaRegistryClient. You just have to configure the Serde like this:
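Presumably along these lines (a sketch; the exact config key lives in AbstractKafkaAvroSerDeConfig in the Confluent versions of that era):

Map<String, String> serdeConfig = Collections.singletonMap(
        AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://mock:8081");
Serde<GenericRecord> avroSerde = getAvroSerde(mockSchemaRegistryClient); // e.g. the helper sketched above
avroSerde.configure(serdeConfig, false); // false = value serde, not key serde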