I am writing data to a Kafka topic using an Avro schema. Initially, everything worked fine. After adding a new field (scan_app_id) to the Avro file, I am facing this error.
Avro file:

{
  "type": "record",
  "name": "initiate_scan",
  "namespace": "avro",
  "doc": "avro schema registry for initiate_scan",
  "fields": [
    {
      "name": "app_id",
      "type": "string",
      "doc": "3-digit application id"
    },
    {
      "name": "app_name",
      "type": "string",
      "doc": "application name"
    },
    {
      "name": "dev_stage",
      "type": "string",
      "doc": "development stage"
    },
    {
      "name": "scan_app_id",
      "type": "string",
      "doc": "unique scan id for an app in Veracode"
    },
    {
      "name": "scan_name",
      "type": "string",
      "doc": "scan details"
    },
    {
      "name": "seq_num",
      "type": "int",
      "doc": "unique number"
    },
    {
      "name": "result_flg",
      "type": "string",
      "doc": "Y indicates results of scan available",
      "default": "Y"
    },
    {
      "name": "request_id",
      "type": "int",
      "doc": "unique id"
    },
    {
      "name": "scan_number",
      "type": "int",
      "doc": "number of scans"
    }
  ]
}
Error:

Caused by: org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: {"type":"record","name":"initiate_scan","namespace":"avro","doc":"avro schema registry for initiate_scan","fields":[{"name":"app_id","type":{"type":"string","avro.java.string":"String"},"doc":"3-digit application id"},{"name":"app_name","type":{"type":"string","avro.java.string":"String"},"doc":"application name"},{"name":"dev_stage","type":{"type":"string","avro.java.string":"String"},"doc":"development stage"},{"name":"scan_app_id","type":{"type":"string","avro.java.string":"String"},"doc":"unique scan id for an app in Veracode"},{"name":"scan_name","type":{"type":"string","avro.java.string":"String"},"doc":"scan details"},{"name":"seq_num","type":"int","doc":"unique number"},{"name":"result_flg","type":{"type":"string","avro.java.string":"String"},"doc":"Y indicates results of scan available","default":"Y"},{"name":"request_id","type":"int","doc":"unique id"},{"name":"scan_number","type":"int","doc":"number of scans"}]}
INFO Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer:1017)
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Register operation timed out; error code: 50002
    at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:182)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:203)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:292)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:284)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:279)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:61)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:93)
    at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:72)
    at io.confluent.kafka.serializers.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:54)
    at org.apache.kafka.common.serialization.ExtendedSerializer$Wrapper.serialize(ExtendedSerializer.java:65)
    at org.apache.kafka.common.serialization.ExtendedSerializer$Wrapper.serialize(ExtendedSerializer.java:55)
    at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:768)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:745)
    at com.ssc.svc.svds.initiate.InitiateProducer.initiateScanData(InitiateProducer.java:146)
    at com.ssc.svc.svds.initiate.InitiateProducer.topicsData(InitiateProducer.java:41)
    at com.ssc.svc.svds.initiate.InputData.main(InputData.java:31)
I went through the Confluent documentation on error 50002, which says the schema being registered should be compatible with the previously registered schema.

Does this mean I cannot change/update an existing schema?

How do I resolve this?
1 Answer
Actually, the linked page says 50002 -- Operation timed out. If the schema were genuinely incompatible, the response would actually say so. In any case, when you add a new field, you need to define a default value for it. That way, any consumer using the newer schema definition that reads an older message knows what value to fill in for that field.
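For example, the newly added field could carry a default so that records written with the old schema remain readable. A sketch of how the scan_app_id entry might look (the empty-string default here is an assumption; use whatever sentinel your consumers agree on):

```json
{
  "name": "scan_app_id",
  "type": "string",
  "doc": "unique scan id for an app in Veracode",
  "default": ""
}
```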
One straightforward list of allowed Avro changes I have found is provided by Oracle. The possible errors include:

A field is added without a default value
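That failure mode can be sanity-checked locally before registering a new version by diffing the two schema definitions and flagging newly added fields that lack a default. A minimal sketch using only the Python standard library (the abbreviated schemas and the fields_missing_defaults helper are illustrative, not part of any Avro library or the Schema Registry's actual compatibility checker):

```python
import json

# Old (already registered) schema, abbreviated to two fields for brevity.
OLD = json.loads("""
{"type": "record", "name": "initiate_scan", "fields": [
  {"name": "app_id", "type": "string"},
  {"name": "app_name", "type": "string"}
]}
""")

# New schema: scan_app_id added WITHOUT a default -> breaks compatibility.
NEW = json.loads("""
{"type": "record", "name": "initiate_scan", "fields": [
  {"name": "app_id", "type": "string"},
  {"name": "app_name", "type": "string"},
  {"name": "scan_app_id", "type": "string"}
]}
""")

def fields_missing_defaults(old_schema, new_schema):
    """Names of fields that exist only in new_schema and carry no default."""
    old_names = {f["name"] for f in old_schema["fields"]}
    return [f["name"] for f in new_schema["fields"]
            if f["name"] not in old_names and "default" not in f]

print(fields_missing_defaults(OLD, NEW))  # -> ['scan_app_id']
```

Adding "default": "" to the scan_app_id entry makes the check come back empty, which is the shape of change the registry will accept under backward compatibility.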