I am using Filebeat to push my logs to Elasticsearch via Logstash, and this setup had been working fine for me. I am now getting a "Failed to publish events" error.
filebeat | 2020-06-20T06:26:03.832969730Z 2020-06-20T06:26:03.832Z INFO log/harvester.go:254 Harvester started for file: /logs/app-service.log
filebeat | 2020-06-20T06:26:04.837664519Z 2020-06-20T06:26:04.837Z ERROR logstash/async.go:256 Failed to publish events caused by: write tcp YY.YY.YY.YY:40912->XX.XX.XX.XX:5044: write: connection reset by peer
filebeat | 2020-06-20T06:26:05.970506599Z 2020-06-20T06:26:05.970Z ERROR pipeline/output.go:121 Failed to publish events: write tcp YY.YY.YY.YY:40912->XX.XX.XX.XX:5044: write: connection reset by peer
filebeat | 2020-06-20T06:26:05.970749223Z 2020-06-20T06:26:05.970Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://xx.com:5044))
filebeat | 2020-06-20T06:26:05.972790871Z 2020-06-20T06:26:05.972Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://xx.com:5044)) established
Logstash pipeline
02-beats-input.conf
input {
  beats {
    port => 5044
  }
}
10-syslog-filter.conf
filter {
  json {
    source => "message"
  }
}
30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "index-%{+YYYY.MM.dd}"
  }
}
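Not part of the original post, but if you want to rule out a pipeline syntax problem before chasing network issues, Logstash can validate the configuration without starting the pipeline. The paths below assume a typical package install with the pipeline files in /etc/logstash/conf.d/:

/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/

If the three files above are valid, this prints "Configuration OK" and exits.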
My Filebeat configuration, located at /usr/share/filebeat/filebeat.yml:
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /logs/*

#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["xx.com:5044"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
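Not in the original question, but a quick way to check this setup from the Filebeat side (assuming the config path above) is Filebeat's built-in self-tests, which validate the YAML and attempt a connection to the configured Logstash output:

filebeat test config -c /usr/share/filebeat/filebeat.yml
filebeat test output -c /usr/share/filebeat/filebeat.yml

The output test reports whether the TCP (and, if enabled, TLS) handshake to xx.com:5044 succeeds.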
When I telnet to xx.xx 5044, this is what I see in the terminal:
Trying X.X.X.X...
Connected to xx.xx.
Escape character is '^]'
2 Answers

Answer 1:
I had the same problem. Here are some steps that may help you get to the root of the issue. First, I tested a setup like this: filebeat (localhost) -> logstash (localhost) -> elastic -> kibana, with every service on the same machine.
In my /etc/logstash/conf.d/ config file I deliberately disabled ssl (in my case this turned out to be the main cause of the problem, even though the certificates were correct - go figure). After that, don't forget to restart Logstash and test with the
sudo filebeat -e
command. If everything is fine, you will no longer see the "connection reset by peer" error.
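The answerer's actual conf.d file was not included above; as a rough sketch only, a beats input with SSL explicitly disabled, matching the port used in the question, might look like this:

input {
  beats {
    port => 5044
    # explicitly turn off TLS between Filebeat and Logstash while debugging
    ssl => false
  }
}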
Answer 2:
I ran into the same problem, and starting Filebeat as a sudo user worked for me.
I made some changes to the input plugin configuration, such as specifying
ssl => false
, but it still would not work unless Filebeat was started as a sudo-privileged user or as root. To start Filebeat as a sudo user, the filebeat.yml file must be owned by root. Running
sudo chown -R some_sudo_user:some_group filebeat-7.15.0-linux-x86_64/
changes ownership of the whole Filebeat folder to the sudo-privileged user, and then
chown root filebeat.yml
changes ownership of that file.
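To make the ownership requirement from this answer concrete, a sketch of the check-and-run sequence from inside the extracted folder (the tarball layout used above; adjust paths to your install):

ls -l filebeat.yml                    # should show root as the owner of the config
sudo ./filebeat -e -c filebeat.yml    # run in the foreground with sudo and watch for publish errors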