Multiline JSON data into Logstash

7ajki6be · posted 2023-04-27 in Logstash

I have the following JSON structure that I would like to parse with Logstash:

{
  "messages" : [ {
    "remoteReferenceId" : "133883",
    "sender" : {
      "name" : "User1"
    }
  }, {
    "remoteReferenceId" : "133894",
    "sender" : {
      "name" : "User2"
    }
  } ],
  "timestamp" : "2021-03-20T15:36:01.868+02:00"
}

My configuration looks like this:

input {
  file {
    mode => "read"
    path => "/path-to-data/*.json"
    file_completed_action => "delete"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    type => "json"
    codec => multiline {
        pattern => "remoteReferenceId"
        negate => true
        what => previous
        auto_flush_interval => 1
        multiline_tag => ""
     }
  }
}
filter{
    json{
        source => "messages"
        target => "parsedJson"
    }
    mutate {
      add_field => {
        "remoteReferenceId" => "%{[remoteReferenceId]}"
          }
        }
}
output {
    elasticsearch{
        hosts => ["http://localhost:9200/"]
        index => "index_messages"
    }
    stdout { codec => rubydebug }
}

The JSON files are imported into Elasticsearch. My problem is that I want to add a new field remoteReferenceId containing the value of that node (for example "133883" above), but in Elastic the field holds the literal string "%{[remoteReferenceId]}" instead of the real value.
What is going wrong? Many thanks!

hi3rlvi2 1#

TL;DR

I believe this pipeline can never work as written. The json filter fails because the string your multiline codec extracts is not valid JSON, so a remoteReferenceId field never exists on the event; and when a sprintf reference like %{[remoteReferenceId]} cannot be resolved, Logstash keeps the placeholder text literally, which is exactly what you see in Elasticsearch.
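
To see why, consider the original multiline settings (pattern => "remoteReferenceId", negate => true, what => previous): every line containing remoteReferenceId starts a new event, and every other line is appended to the previous one. The codec therefore hands the json filter fragments roughly like this one (a sketch based on your sample document):

    "remoteReferenceId" : "133883",
    "sender" : {
      "name" : "User1"
    }
  }, {

None of these fragments is parseable JSON (and the filter was reading source => "messages", while the multiline codec writes the combined text into the message field), so the parse can never succeed.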

Solution:

Could you try this pipeline instead?

input {
  file {
    mode => "read"
    path => "/tmp/*.json"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    codec => multiline
    {
        pattern => '^\{'
        negate => true
        what => previous                
    }
  }
}

filter {
    json {
        source => "message"
    }

    split { field => "[messages]" }

    mutate {
        add_field => {
            "remoteReferenceId" => "%{[messages][remoteReferenceId]}"
        }
    }
}

output {
    stdout { codec => rubydebug }
}
