Logstash configuration error: Expected one of [ \t\r\n], "#", "{"

z6psavjg · posted 2021-06-14 in ElasticSearch

I get the following error when I try to start Logstash:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/fortigate_test_bkb.conf

Error:

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: Unknown module: org.jruby.dist specified to --add-opens
WARNING: Unknown module: org.jruby.dist specified to --add-opens
WARNING: Unknown module: org.jruby.dist specified to --add-opens
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2020-10-15 12:34:57.813 [main] runner - Starting Logstash {"logstash.version"=>"7.9.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10-post-Ubuntu-0ubuntu120.04 on 11.0.8+10-post-Ubuntu-0ubuntu120.04 +indy +jit [linux-x86_64]"}
[WARN ] 2020-10-15 12:34:59.045 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[ERROR] 2020-10-15 12:35:01.531 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"{\" at line 9, column 6 (byte 68) after input {\nport ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:183:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:69:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:44:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:357:in `block in converge_state'"]}
[INFO ] 2020-10-15 12:35:02.083 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2020-10-15 12:35:06.868 [LogStash::Runner] runner - Logstash shut down.

Here is my config file /etc/logstash/conf.d/.conf:


# input {
#   udp {
#     port => 514
#     type => firewall
#   }
# }

input {
  port => 514
  type => firewall
  # port => 5044
  # ssl => true
  ssl_enable => true
  # ssl_key => "/etc/logstash/config/certs/lnxlogstash01.pkcs8.key"
  # ssl_certificate => "/etc/logstash/config/certs/lnxlogstash01.crt"
}

filter {
  if [type] == "firewall" {
    mutate {
      add_tag => ["fortigate"]
    }
    grok {
      break_on_match => false
      match => [ "message", "%{SYSLOG5424PRI:syslog_index}%{GREEDYDATA:message}" ]
      overwrite => [ "message" ]
      tag_on_failure => [ "failure_grok_fortigate" ]
    }

    kv { }

    if [msg] {
      mutate {
        replace => [ "message", "%{msg}" ]
      }
    }
    mutate {
      convert => { "duration" => "integer" }
      convert => { "rcvdbyte" => "integer" }
      convert => { "rcvdpkt" => "integer" }
      convert => { "sentbyte" => "integer" }
      convert => { "sentpkt" => "integer" }
      convert => { "cpu" => "integer" }
      convert => { "disk" => "integer" }
      convert => { "disklograte" => "integer" }
      convert => { "fazlograte" => "integer" }
      convert => { "mem" => "integer" }
      convert => { "totalsession" => "integer" }
    }
    mutate {
      add_field => [ "fgtdatetime", "%{date} %{time}" ]
      add_field => [ "loglevel", "%{level}" ]
      replace => [ "fortigate_type", "%{type}" ]
      replace => [ "fortigate_subtype", "%{subtype}" ]
      remove_field => [ "msg", "message", "date", "time", "eventtime" ]
    }
    date {
      match => [ "fgtdatetime", "YYYY-MM-dd HH:mm:ss" ]
      # locale => "en"
      # timezone => "NZ"
      remove_field => [ "fgtdatetime" ]
    }

    # geoip {
    #   source => "srcip"
    #   target => "geosrcip"
    #   add_field => [ "[geosrcip][coordinates]", "%{[geosrcip][longitude]}" ]
    #   add_field => [ "[geosrcip][coordinates]", "%{[geosrcip][latitude]}" ]
    # }

    # geoip {
    #   source => "dstip"
    #   target => "geodstip"
    #   add_field => [ "[geodstip][coordinates]", "%{[geodstip][longitude]}" ]
    #   add_field => [ "[geodstip][coordinates]", "%{[geodstip][latitude]}" ]
    # }

    # mutate {
    #   convert => [ "[geoip][coordinates]", "float" ]
    # }

  }
}

output {
  # if [host] == "10.a.b.c" {
  elasticsearch {
    hosts => "https://192.168.2.51:9200"
    index => "fortinet-%{+YYYY.MM.dd}"
  }
}

# output {
#   elasticsearch {
#     hosts => "192.168.2.51:9200"
#     cacert => '/etc/logstash/config/certs/ca.crt'
#     user => 'logstash_writer'
#     password => '##local01'
#     index => "fortinet-%{+YYYY.MM.dd}"
#   }
# }

The problem appeared after I enabled the security features on the ELK cluster; the new input and output sections of the Logstash config now trigger the error above. I went through other threads with similar problems and checked the syntax, missing brackets and so on, but everything looks fine until I actually try to start it.
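
For reference, the parser error ("Expected one of [ \t\r\n], \"#\", \"{\" ... after input {\nport") seems to point at the structure of the input block rather than at the security settings: in a Logstash pipeline config, settings such as port, type and ssl_enable can only appear inside a plugin block like udp { ... } or tcp { ... }, which is how the old, commented-out section was written. A minimal sketch of that shape, assuming a tcp input is intended (ssl_enable is a tcp-input option; the udp input does not support TLS) and reusing the values from the question:

input {
  tcp {
    port => 514
    type => firewall
    ssl_enable => true
    # certificate/key options for the tcp input would go here as well
  }
}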
The old section:


# input {
#   udp {
#     port => 514
#     type => firewall
#   }
# }

...

output {
  elasticsearch {
    hosts => "https://xxx:9200"
    index => "fortinet-%{+YYYY.MM.dd}"
  }
}

That handled everything for us without errors (before applying the Elastic security features) and is based mainly on this post: https://www.elastic.co/de/blog/configuring-ssl-tls-and-https-to-secure-elasticsearch-kibana-beats-and-logstash
My logstash.yml contains the following:

path.data: /var/lib/logstash
pipeline.ordered: auto
path.logs: /var/log/logstash
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: 'xxx'
xpack.monitoring.elasticsearch.hosts: [ 'https://<ip-of-elasticsearch-server>:9200' ]
xpack.monitoring.elasticsearch.ssl.certificate_authority: /etc/logstash/config/certs/ca.crt

Can anyone help me? If you need more information, please let me know.

No answers yet.
