I have a Kafka Connect pipeline streaming data end to end from MongoDB -> Kafka Connect -> Elasticsearch, but the payload document ends up JSON-encoded as a string. Here is my MongoDB source document:
{
  "_id": "1541527535911",
  "enabled": true,
  "price": 15.99,
  "style": {
    "color": "blue"
  },
  "tags": [
    "shirt",
    "summer"
  ]
}
Here is my MongoDB source connector config:
{
  "name": "redacted",
  "config": {
    "connector.class": "com.teambition.kafka.connect.mongo.source.MongoSourceConnector",
    "databases": "redacted.redacted",
    "initial.import": "true",
    "topic.prefix": "redacted",
    "tasks.max": "8",
    "batch.size": "1",
    "key.serializer": "org.apache.kafka.common.serialization.StringSerializer",
    "value.serializer": "org.apache.kafka.common.serialization.JSONSerializer",
    "key.serializer.schemas.enable": false,
    "value.serializer.schemas.enable": false,
    "compression.type": "none",
    "mongo.uri": "mongodb://redacted:27017/redacted",
    "analyze.schema": false,
    "schema.name": "__unused__",
    "transforms": "RenameTopic",
    "transforms.RenameTopic.type": "org.apache.kafka.connect.transforms.RegexRouter",
    "transforms.RenameTopic.regex": "redacted.redacted_Redacted",
    "transforms.RenameTopic.replacement": "redacted"
  }
}
In Elasticsearch, the resulting document looks like this:
{
  "_index" : "redacted",
  "_type" : "kafka-connect",
  "_id" : "{\"schema\":{\"type\":\"string\",\"optional\":true},\"payload\":\"1541527535911\"}",
  "_score" : 1.0,
  "_source" : {
    "ts" : 1541527536,
    "inc" : 2,
    "id" : "1541527535911",
    "database" : "redacted",
    "op" : "i",
    "object" : "{ \"_id\" : \"1541527535911\", \"price\" : 15.99, \"enabled\" : true, \"tags\" : [\"shirt\", \"summer\"], \"style\" : { \"color\" : \"blue\" } }"
  }
}
I would like to use two single message transforms: ExtractField to grab object, which is a string of JSON, and then something that parses that JSON into an object, or just lets the normal JsonConverter handle it, as long as it ends up properly structured in Elasticsearch.
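Concretely, the ExtractField half of that idea would look something like this in the sink config (just a sketch, using the same payload and object field names that appear in my working config further down):

"transforms": "ExtractFieldPayload,ExtractFieldObject",
"transforms.ExtractFieldPayload.type": "org.apache.kafka.connect.transforms.ExtractField$Value",
"transforms.ExtractFieldPayload.field": "payload",
"transforms.ExtractFieldObject.type": "org.apache.kafka.connect.transforms.ExtractField$Value",
"transforms.ExtractFieldObject.field": "object"

The missing piece is the second step that turns the extracted JSON string back into a structured value.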
I tried using ExtractField in my sink config, but I see this error logged by Kafka Connect:
kafka-connect_1 | org.apache.kafka.connect.errors.ConnectException:
Bulk request failed: [{"type":"mapper_parsing_exception",
"reason":"failed to parse",
"caused_by":{"type":"not_x_content_exception",
"reason":"Compressor detection can only be called on some xcontent bytes or
compressed xcontent bytes"}}]
Here is my Elasticsearch sink connector config. In this version I have it working, but I had to write a custom ParseJson SMT. It works well, but if there is a better way, or a way to do this with some combination of built-in pieces (converters, SMTs, whatever works), I would love to see it.
{
  "name": "redacted",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "batch.size": 1,
    "connection.url": "http://redacted:9200",
    "key.converter.schemas.enable": true,
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "schema.ignore": true,
    "tasks.max": "1",
    "topics": "redacted",
    "transforms": "ExtractFieldPayload,ExtractFieldObject,ParseJson,ReplaceId",
    "transforms.ExtractFieldPayload.type": "org.apache.kafka.connect.transforms.ExtractField$Value",
    "transforms.ExtractFieldPayload.field": "payload",
    "transforms.ExtractFieldObject.type": "org.apache.kafka.connect.transforms.ExtractField$Value",
    "transforms.ExtractFieldObject.field": "object",
    "transforms.ParseJson.type": "reaction.kafka.connect.transforms.ParseJson",
    "transforms.ReplaceId.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
    "transforms.ReplaceId.renames": "_id:id",
    "type.name": "kafka-connect",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": false
  }
}
1 Answer
I am not sure about your Mongo connector; I don't recognize the class or those configs... Most people probably use the Debezium Mongo connector.
In any case, here is how I would set it up.
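Something along these lines, on both the source and the sink connector (a sketch, not a full config; these are the standard Connect converter properties rather than the serializer settings used in the source config above):

"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"key.converter.schemas.enable": "true",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": "true"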
The schemas.enable setting is important so that the internal Connect data classes know how to convert to/from other formats. Then, in the sink, you need to use the JSON deserializer again (via the converter) so that it creates a full object rather than the plaintext string you are currently seeing in Elasticsearch ({\"schema\":{\"type\":\"string\" ...). If that doesn't work, you may need to manually create your index mapping in Elasticsearch ahead of time so it knows how to actually parse the strings you are sending it.
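For example, a mapping created up front for the sample document might look like this (a sketch; the field types and the kafka-connect mapping type are assumptions based on the document and the sink config above, and the exact syntax depends on your Elasticsearch version):

PUT /redacted
{
  "mappings": {
    "kafka-connect": {
      "properties": {
        "id":      { "type": "keyword" },
        "price":   { "type": "double" },
        "enabled": { "type": "boolean" },
        "tags":    { "type": "keyword" },
        "style": {
          "properties": {
            "color": { "type": "keyword" }
          }
        }
      }
    }
  }
}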