Apache Kafka with SSL is working, but SSL errors for localhost appear in the Kafka logs (driving me crazy)

Asked by xjreopfe on 2021-06-04 in Kafka

I have a problem with Kafka that is driving me crazy.
We have a 4-node cluster. During development we usually run without SSL --> no problem.
For the release we enabled SSL on both listeners --> everything works (applications + Kafka Manager CMAK + monitoring).
But in every environment (test, release, production) there is an error in the Kafka broker logs, and I don't know what it is or where to look.
First this:

[2020-10-16 10:50:27,866] INFO AdminClientConfig values:
        bootstrap.servers = [127.0.0.1:10092]
        client.dns.lookup = default
        client.id =
        connections.max.idle.ms = 300000
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        receive.buffer.bytes = 65536
        reconnect.backoff.max.ms = 1000
        reconnect.backoff.ms = 50
        request.timeout.ms = 120000
        retries = 5
        retry.backoff.ms = 100
        sasl.client.callback.handler.class = null
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.mechanism = GSSAPI
        security.protocol = PLAINTEXT
        security.providers = null
        send.buffer.bytes = 131072
        ssl.cipher.suites = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.endpoint.identification.algorithm = https
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLS
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
 (org.apache.kafka.clients.admin.AdminClientConfig)

Then the SSL errors, over and over:

[2020-10-16 10:48:11,799] INFO [SocketServer brokerId=2] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2020-10-16 10:48:13,141] INFO [SocketServer brokerId=2] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2020-10-16 10:48:14,476] INFO [SocketServer brokerId=2] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)

Then timeouts:

[2020-10-16 10:48:20,890] INFO [AdminClient clientId=adminclient-25] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager)
org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[2020-10-16 10:48:20,892] INFO [AdminClient clientId=adminclient-25] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager)
org.apache.kafka.common.errors.TimeoutException: The AdminClient thread has exited.

After 1-2 minutes the whole cycle starts again.
Our broker configuration:


# Maintained by Ansible

zookeeper.connect=ZOOKEEPER1:2181,ZOOKEEPER2:2181,ZOOKEEPER3:2181
log.dirs=KAFKKALOGDIR
broker.id=2

confluent.license.topic.replication.factor=3
log.segment.bytes=1073741824
socket.receive.buffer.bytes=102400
socket.send.buffer.bytes=102400
offsets.topic.replication.factor=3
num.network.threads=8
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2
confluent.support.metrics.enable=False
zookeeper.connection.timeout.ms=18000
num.io.threads=16
socket.request.max.bytes=104857600
log.retention.check.interval.ms=300000
group.initial.rebalance.delay.ms=0
confluent.metadata.topic.replication.factor=3
num.recovery.threads.per.data.dir=2
default.replication.factor=3
num.partitions=10
log.retention.hours=168
confluent.support.customer.id=anonymous

listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL
listeners=INTERNAL://:10091,EXTERNAL://:10092
advertised.listeners=INTERNAL://BROKERURL:10091,EXTERNAL://BROKERURL:10092

## Inter Broker Listener Configuration

inter.broker.listener.name=INTERNAL

listener.name.internal.ssl.truststore.location=LOCATION
listener.name.internal.ssl.truststore.password=PASSWORD
listener.name.internal.ssl.keystore.location=LOCATION
listener.name.internal.ssl.keystore.password=PASSWORD
listener.name.internal.ssl.key.password=PASSWORD

listener.name.external.ssl.truststore.location=LOCATION
listener.name.external.ssl.truststore.password=PASSWORD
listener.name.external.ssl.keystore.location=LOCATION
listener.name.external.ssl.keystore.password=PASSWORD
listener.name.external.ssl.key.password=PASSWORD

## Metrics Reporter Configuration

confluent.metrics.reporter.security.protocol=SSL
confluent.metrics.reporter.ssl.truststore.location=LOCATION
confluent.metrics.reporter.ssl.truststore.password=PASSWORD

What I did so far:
- Disabled our monitoring agent (thinking that agent was polling without SSL) --> changed nothing
- Added an extra PLAINTEXT listener on 127.0.0.1 --> ran into lots of problems with the error "no leader matching topic xy" (a sketch of that listener change follows below)
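For reference, a minimal sketch of what such an extra localhost listener would look like in this broker config. The listener name LOCAL and port 10090 are placeholders of mine, not values from the actual setup:

# Hypothetical sketch only: a PLAINTEXT listener bound to loopback,
# alongside the two existing SSL listeners. LOCAL and 10090 are invented.
listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL,LOCAL:PLAINTEXT
listeners=INTERNAL://:10091,EXTERNAL://:10092,LOCAL://127.0.0.1:10090

One plausible explanation for the "no leader" errors after this change: if the new listener is also added to advertised.listeners, clients connecting through it receive 127.0.0.1 in the cluster metadata for every broker, and then cannot reach the real partition leaders on the other nodes.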
So I have no idea how to proceed; maybe someone has an idea.
Many thanks in advance!

Answer by 2ledvvac:

The AdminClientConfig shows security.protocol = PLAINTEXT, which doesn't look right given that you want SSL enabled. https://kafka.apache.org/11/javadoc/org/apache/kafka/common/security/auth/securityprotocol.html shows the possible options for that setting.
You also have sasl.jaas.config = null, which I don't think is right either.
https://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption/ provides a good walkthrough of how to set up security for Kafka.
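As a rough sketch (paths and passwords below are placeholders, not values from the question), a client or AdminClient properties file for talking to an SSL-only listener would look something like this:

# Hypothetical client-side settings for an SSL listener; all values are placeholders.
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=CHANGEME
# Only needed if the broker enforces client authentication (ssl.client.auth=required):
# ssl.keystore.location=/path/to/client.keystore.jks
# ssl.keystore.password=CHANGEME
# ssl.key.password=CHANGEME

Whatever local tool is polling 127.0.0.1:10092 would need to be pointed at a file like this (for the Kafka CLI tools that is typically the --command-config option) instead of defaulting to PLAINTEXT.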
Edited to add (answering the follow-up question): AdminClient is a Java class that gets instantiated from the connect-distributed.properties file. When you search your service log file for AdminClient, you will see something like this:

[2020-09-16 02:58:14,180] INFO AdminClientConfig values:
        bootstrap.servers = [servernames-elided:9092]
        client.dns.lookup = default
        client.id =
        connections.max.idle.ms = 300000
        default.api.timeout.ms = 60000
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        receive.buffer.bytes = 65536
        reconnect.backoff.max.ms = 1000
        reconnect.backoff.ms = 50
        request.timeout.ms = 20000
        retries = 2147483647
        retry.backoff.ms = 500
        sasl.client.callback.handler.class = null
        sasl.jaas.config = [hidden]
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.mechanism = PLAIN
        security.protocol = SASL_SSL
        security.providers = null
        send.buffer.bytes = 131072
        ssl.cipher.suites = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.endpoint.identification.algorithm = https
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLS
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
 (org.apache.kafka.clients.admin.AdminClientConfig:347)

Note the sasl.jaas.config = [hidden] entry; that is because the username and password used to access the cluster are stored directly in the properties file, like this:

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=\"someUsernameThisIs\" password=\"notMyRealPassword\";

Note that escaping the double quotes is required for the config parser.
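Putting those pieces together, the relevant client-side block (in connect-distributed.properties or any other client properties file) would look roughly like this; the credential values and truststore path are of course placeholders:

# Hypothetical SASL_SSL client settings; username/password/paths are placeholders.
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=\"someUsernameThisIs\" password=\"notMyRealPassword\";
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=CHANGEME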
