Kibana/Filebeat not collecting logs from a pod

jvlzgdj9  posted on 2023-08-01 in Kibana

First of all, my English is not very good. I use Filebeat to collect the logs of a Spring application. The logs are sent via Elasticsearch to Kibana, where I can analyze them. This works fine, but now I also want to collect logs directly from a single pod, which contains a file named "application-logs{date}.log". The problem is that Filebeat does not see this file, so those logs never show up in Kibana.
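(As a quick sanity check, it can help to first confirm that the file really exists inside the application pod at the expected path; the pod name and path below are only placeholders, not the real names:)

kubectl exec -n default <spring-app-pod> -- ls -l /path/to/logs/
kubectl exec -n default <spring-app-pod> -- tail -n 5 /path/to/logs/application-logs{date}.log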

apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: default #kube-system
  labels:
    k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
      - nodes
    verbs:
      - get
      - watch
      - list
  - apiGroups: ["apps"]
    resources:
      - replicasets
    verbs: ["get", "list", "watch"]
  - apiGroups: ["batch"]
    resources:
      - jobs
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat
  # should be the namespace where filebeat is running
  namespace: default #kube-system
  labels:
    k8s-app: filebeat
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat-kubeadm-config
  namespace: default #kube-system
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""]
    resources:
      - configmaps
    resourceNames:
      - kubeadm-config
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: default #kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat
  namespace: default #kube-system
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: default #kube-system
roleRef:
  kind: Role
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat-kubeadm-config
  namespace: default #kube-system
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: default #kube-system
roleRef:
  kind: Role
  name: filebeat-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: default #kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      enabled: true
      paths:
        - /*.log
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
    
    filebeat.autodiscover:
      providers:
        - type: containers
          node: ${NODE_NAME}
          templates:
            - condition:
                not:
                  contains:
                    kubernetes.container.name: "application-log"
              config:
                - type: container
                  paths:
                    - "/*.log"
            - condition:
                or:
                  - contains:
                      kubernetes.container.name: "application-log"
                  - contains:
                      kubernetes.container.name: "filebeat"
              config:
                - type: container
                  paths:
                    - "/*.log"
                  json.keys_under_root: true
                  json.add_error_key: true
                  json.message_key: message
    
    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      node: ${NODE_NAME}
    #      hints.enabled: true
    #      hints.default_config:
    #        type: container
    #        paths:
    #          - /var/log/containers/*${data.kubernetes.container.id}.log

    processors:
      - add_cloud_metadata:
      - add_host_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      #ssl.verification_mode: none
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: default #kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:8.7.1
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          env:
            - name: ELASTICSEARCH_HOST
              value: https://elastic.staging.mmos.dev
            - name: ELASTICSEARCH_PORT
              value: "443"
            - name: ELASTICSEARCH_USERNAME
              value: elastic
            - name: ELASTICSEARCH_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: quickstart-es-elastic-user
                  key: elastic
#            - name: ELASTIC_CLOUD_ID
#              value:
#            - name: ELASTIC_CLOUD_AUTH
#              value:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            #privileged: true
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: config
          configMap:
            defaultMode: 0640
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
        - name: data
          hostPath:
            # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate

---

The above is my Filebeat YAML.
And here is what application-logs{date}.log looks like:

...
{"@timestamp":"2023-07-03T20:40:51.169Z","timestamp":"2023-07-03T20:40:51.169+0000","severity":"INFO","service":"springAppName_IS_UNDEFINED","trace":"","span":"","exportable":"","pid":"1","thread":"http-nio-8080-exec-8","class":
"online.minimuenchen.mmos.jobcenterservice.exceptions.NoRaffleTicketOfUserFound","rest":"User#64a3179ec0dc8d2cc25e1daa: No raffleTicket"}
...


I have checked the logs of Filebeat and Elasticsearch and there are no errors or warnings. I have also checked the path and verified that Filebeat has permission to read this file.
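(One way to double-check this from the Filebeat side, using the DaemonSet deployed above; the pod name is a placeholder:)

# list what the Filebeat container can actually see under the mounted host paths
kubectl exec -n default <filebeat-pod-name> -- ls -l /var/log/containers/
kubectl exec -n default <filebeat-pod-name> -- ls -l /var/lib/docker/containers/
# Filebeat's own log normally mentions every file it starts reading (harvester messages)
kubectl logs -n default <filebeat-pod-name> | grep -i harvester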


bpzcxfmw1#

It looks like you are using Filebeat to collect logs from a Spring application running in a pod.
To troubleshoot this, here are a few suggestions:
1. If you are using Kibana to inspect the log data, make sure you are looking at the correct time range. Also double-check that the index exists, either in the Kibana UI under Index Management or with a curl call to Elasticsearch's _cat/indices API (if data is arriving, the index should be created automatically); see the curl example after this list.
2. Check the Filebeat pod's log and make sure it is producing log output: kubectl logs <filebeat-pod-name> -n <namespace>
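(A rough example of the index check from suggestion 1; the host and user come from the environment variables in the question, the password is a placeholder:)

curl -u elastic:<password> "https://elastic.staging.mmos.dev:443/_cat/indices?v"

If no Filebeat index or data stream shows up in that output, the data is not reaching Elasticsearch at all.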
