I am trying to set up a monitoring solution with AWS ElasticSearch for applications running on Kubernetes.
I am shipping logs with Filebeat -> Logstash -> AWS ElasticSearch, and so far it is proving to be a nightmare :(
To ship logs from Logstash I need to use the amazon_es output plugin, but I keep getting various errors.
Below is the manifest file I am using for Logstash:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: kube-system
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    # all input will come from filebeat, no local logs
    input {
      beats {
        port => 5044
      }
    }
    filter {
      if [message] =~ /^\{.*\}$/ {
        json {
          source => "message"
        }
      }
      if [ClientHost] {
        geoip {
          source => "ClientHost"
        }
      }
    }
    output {
      amazon_es {
        hosts => [ "https://vpc-eks***********.es.amazonaws.com:443" ]
        region => "eu-west-1"
        index => "devtest-logs-%{+YYYY.MM.dd}"
      }
    }
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: logstash-deployment
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.1.0
        ports:
        - containerPort: 5044
        volumeMounts:
        - name: config-volume
          mountPath: /usr/share/logstash/config
        - name: logstash-pipeline-volume
          mountPath: /usr/share/logstash/pipeline
      volumes:
      - name: config-volume
        configMap:
          name: logstash-configmap
          items:
          - key: logstash.yml
            path: logstash.yml
      - name: logstash-pipeline-volume
        configMap:
          name: logstash-configmap
          items:
          - key: logstash.conf
            path: logstash.conf
---
kind: Service
apiVersion: v1
metadata:
  name: logstash-service
  namespace: kube-system
spec:
  selector:
    app: logstash
  ports:
  - protocol: TCP
    port: 5044
    targetPort: 5044
  type: ClusterIP
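For reference, the Filebeat side of this pipeline just points at the Service above. A minimal sketch of the relevant output section in filebeat.yml, assuming Filebeat runs in-cluster and the default cluster DNS domain is used:

# filebeat.yml (sketch) -- send events to the Logstash Service on port 5044
output.logstash:
  # <service>.<namespace>.svc.cluster.local, assuming the default cluster domain
  hosts: ["logstash-service.kube-system.svc.cluster.local:5044"]

The Logstash pod itself, however, crash-loops with the following errors: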
[INFO ] 2019-08-27 13:20:50.114 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"fb22e6cb-d7bb-4735-a05a-da4a2e4dabde", :path=>"/usr/share/logstash/data/uuid"}
[ERROR] 2019-08-27 13:20:55.290 [Converge PipelineAction::Create<main>] registry - Tried to load a plugin's code, but failed. {:exception=>#<LoadError: no such file to load -- logstash/outputs/amazon_es>, :path=>"logstash/outputs/amazon_es", :type=>"output", :name=>"amazon_es"}
[ERROR] 2019-08-27 13:20:55.295 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::PluginLoadingError", :message=>"Couldn't find any output plugin named 'amazon_es'. Are you sure this is correct? Trying to load the amazon_es output plugin resulted in this error: no such file to load -- logstash/outputs/amazon_es", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/plugins/registry.rb:211:in `lookup_pipeline_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/plugin.rb:137:in `lookup'", "org/logstash/plugins/PluginFactoryExt.java:200:in `plugin'", "org/logstash/plugins/PluginFactoryExt.java:137:in `buildOutput'", "org/logstash/execution/JavaBasePipelineExt.java:50:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:23:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:325:in `block in converge_state'"]}
So far I have not been able to find anything useful, since the container just goes into a crash loop. Any help would be greatly appreciated.
1 Answer
You need to install the amazon_es plugin to fix this.
Include this in your Dockerfile:
RUN logstash-plugin install logstash-output-amazon_es
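
A minimal sketch of such a Dockerfile, assuming the same Logstash 7.1.0 base image used in the Deployment above:

# Dockerfile (sketch): bake the amazon_es output plugin into the Logstash image
FROM docker.elastic.co/logstash/logstash:7.1.0
RUN logstash-plugin install logstash-output-amazon_es

Build and push that image, then point the image: field of the logstash container in the Deployment at it instead of the stock docker.elastic.co/logstash/logstash:7.1.0. Once the pod restarts, you can confirm the plugin is present by running logstash-plugin list inside the container and looking for logstash-output-amazon_es.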