Kafka Connect Elasticsearch sink not consuming topics

oug3syen  asked on 2021-06-04 in Kafka

Here are my Kafka Connect worker properties:


##
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

# This file contains some of the configurations for the Kafka Connect distributed worker. This file is intended
# to be used with the examples, and some settings may differ from those used in a production system, especially
# the `bootstrap.servers` and those specifying replication factors.

# A list of host/port pairs to use for establishing the initial connection to the Kafka cluster.
bootstrap.servers=localhost:9092

# unique name for the cluster, used in forming the Connect cluster group. Note that this must not conflict with consumer group IDs
group.id=connect-cluster

# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter

# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply
# it to
key.converter.schemas.enable=false
value.converter.schemas.enable=false

# Topic to use for storing offsets. This topic should have many partitions and be replicated and compacted.
# Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value.
# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able
# to run this example on a single-broker cluster and so here we instead set the replication factor to 1.
offset.storage.topic=__connect_offsets
offset.storage.replication.factor=1
# offset.storage.partitions=25

# Topic to use for storing connector and task configurations; note that this should be a single partition, highly replicated,
# and compacted topic. Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value.
# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able
# to run this example on a single-broker cluster and so here we instead set the replication factor to 1.
config.storage.topic=__connect_configs
config.storage.replication.factor=1

# Topic to use for storing statuses. This topic can have multiple partitions and should be replicated and compacted.
# Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value.
# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able
# to run this example on a single-broker cluster and so here we instead set the replication factor to 1.
status.storage.topic=__connect_status
status.storage.replication.factor=1
# status.storage.partitions=5

# Flush much faster than normal, which is useful for testing/debugging
# offset.flush.interval.ms=10000

# These are provided to inform the user about the presence of the REST host and port configs
# Hostname & Port for the REST API to listen on. If this is set, it will bind to the interface used to listen to requests.
# rest.host.name=
# rest.port=8083

# The Hostname & Port that will be given out to other workers to connect to, i.e. URLs that are routable from other servers.
# rest.advertised.host.name=
# rest.advertised.port=

# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
# (connectors, converters, transformations). The list should consist of top level directories that include
# any combination of:
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
# Examples:
# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
plugin.path=/usr/share/java
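
For reference, a quick way to confirm that the distributed worker is up and reachable is its REST API (assuming the default rest.port of 8083, which the properties above leave commented out):

curl -s http://localhost:8083/
curl -s http://localhost:8083/connectors

The first call returns the worker's version info; the second lists the connectors currently registered on the cluster.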

And here is the POST body I use to create the Elasticsearch sink:

{
 "name" : "test-distributed-connector",
 "config" : {
  "connector.class" : "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
  "tasks.max" : "2",
  "topics.regex" : "^test[0-9A-Za-z-_]*(?<!-raw$)$", 
  "connection.url" : "http://elasticsearch:9200",
  "connection.username": "admin",
  "connection.password": "admin",
  "type.name" : "_doc",
  "key.ignore" : "true",
  "schema.ignore" : "true",
  "transforms": "TimestampRouter",
  "transforms.TimestampRouter.type": "org.apache.kafka.connect.transforms.TimestampRouter",
  "transforms.TimestampRouter.topic.format": "${topic}-${timestamp}",
  "transforms.TimestampRouter.timestamp.format": "YYYY.MM.dd",
  "batch.size": "100",
  "offset.flush.interval.ms":"60000",
  "offset.flush.timeout.ms": "15000",
  "read.timeout.ms": "15000",
  "connection.timeout.ms": "10000",
  "max.buffered.records": "1500"
 }
}
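
For completeness, a body like this is normally submitted to the worker's REST API and then polled for status; a sketch, assuming the worker runs on localhost:8083 and the JSON above is saved as es-sink.json (a hypothetical filename):

curl -s -X POST -H "Content-Type: application/json" \
     --data @es-sink.json http://localhost:8083/connectors

curl -s http://localhost:8083/connectors/test-distributed-connector/status

The status call shows, per task, whether it is RUNNING or FAILED, which helps distinguish a crashed task from one that is merely stuck rebalancing.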

The problem I am running into is that sometimes this sink works, sends data to Elasticsearch, and logs:

[2020-09-15 20:27:05,904] INFO WorkerSinkTask{id=test-distributed-connector-0} Committing offsets asynchronously using sequence number 1 ...

But most of the time it just gets stuck, repeating this part:

[2020-09-15 20:24:29,458] INFO [Consumer clientId=consumer-4, groupId=connect-test-distributed-connector] Group coordinator kafka:9092 (id: 2147483543 rack: null) is unavailable or invalid, will attempt rediscovery (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:706)
[2020-09-15 20:24:29,560] INFO [Consumer clientId=consumer-4, groupId=connect-test-distributed-connector] Discovered group coordinator kafka:9092 (id: 2147483543 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:654)
[2020-09-15 20:24:29,561] INFO [Consumer clientId=consumer-4, groupId=connect-test-distributed-connector] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:486)
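
The repeating lines above show the consumer losing and re-finding its group coordinator. One broker-side sanity check is to confirm that the internal offsets topic, which the coordinator depends on, is healthy (this assumes the standard Kafka CLI tools, Kafka 2.2+ for the --bootstrap-server flag):

kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic __consumer_offsets

Under-replicated or offline partitions there would explain a coordinator that keeps becoming "unavailable or invalid".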

My suspicion is that this Elasticsearch sink has to read a large number of topics and a large amount of messages/data, and that this is why it runs into problems when it tries to consume those topics from Kafka, because I have another Elasticsearch sink that is basically identical to this one and it works fine.
Is there any way/tuning to make this Elasticsearch sink work?
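
One way to test that suspicion is to inspect the sink's consumer group directly. Connect names the group connect-<connector name> (the logs above show groupId=connect-test-distributed-connector), so with the bootstrap server from the worker config it would be something like:

kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group connect-test-distributed-connector

A group whose members keep rebalancing, or whose LAG column grows steadily, would support the theory that this sink cannot keep up with the volume of its matched topics.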

######## UPDATE ########

Sometimes (often) I see this log:

[2020-09-16 09:51:18,189] WARN [Consumer clientId=consumer-6, groupId=connect-test-distributed-connector] Close timed out with 1 pending requests to coordinator, terminating client connections (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:769)

[2020-09-16 10:17:43,369] WARN [Consumer clientId=consumer-16, groupId=connect-test-distributed-connector-] Close timed out with 1 pending requests to coordinator, terminating client connections (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:769)
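
These warnings again point at the sink's consumer struggling to keep a stable connection to the group coordinator. If tuning is the question, one avenue (a sketch, not a tested fix) is the per-connector consumer overrides available since Kafka 2.3 / KIP-458; the worker must first opt in:

# added to the worker properties, so connectors may override client configs
connector.client.config.override.policy=All

The overrides then go in the sink's "config" block; the values below are illustrative assumptions, not recommended settings:

  "consumer.override.session.timeout.ms": "30000",
  "consumer.override.heartbeat.interval.ms": "10000",
  "consumer.override.max.poll.records": "500"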
