Kibana - how to export search results

d4so4syb · published 2023-03-04 in Kibana
Follow (0) | Answers (8) | Views (463)

We recently migrated our centralized logging from Splunk to an ELK solution, and we need to export search results. Is there a way to do this in Kibana 4.1? If there is, it's not obvious...
Thanks!

idv4meu8 #1

This is a very old thread, but I think people are still looking for a good answer.
You can easily export your search from Kibana Discover.
Click Save first, then click Share.

Click CSV Reports.

Then click Generate CSV.

After a few minutes, you'll get the download option at the bottom right.

lmvvr0a8 #2

This works for Kibana v7.2.0: export the query results to a local JSON file. I assume you have Chrome here; a similar approach probably works with Firefox.

  1. Chrome - open Developer Tools / Network
  2. Kibana - execute your query
  3. Chrome - right-click the network call and choose Copy / Copy as cURL
  4. Command line - execute [cURL from step 3] > query_result.json. The query response data is now stored in query_result.json

**Edit:** to drill down into the source nodes in the resulting JSON file using jq:

jq '.responses | .[]  | .hits  | .hits | .[]._source ' query_result.json
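The dump-then-filter flow above can be tried end to end without a live server. The response shape below is a stand-in for a typical `_msearch` reply, and the `message`/`level` fields are made up for illustration:

```shell
# Create a stand-in for query_result.json with the _msearch response
# shape that Kibana's Discover query returns (fields are hypothetical)
cat > query_result.json <<'EOF'
{"responses":[{"hits":{"hits":[{"_source":{"message":"hello","level":"info"}}]}}]}
EOF

# Same idea as the jq filter above: keep only the _source documents,
# emitting one compact JSON object per line
jq -c '.responses[] | .hits.hits[]._source' query_result.json
```

With real data, redirect the cURL command copied in step 3 into query_result.json first, then run the jq filter on that file.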

dzhpxtsq #3

If you want to export the logs (not just the timestamps and counts), you have a couple of options (tylerjl answered this question very well on the Kibana forums):
If you want to actually export logs from Elasticsearch, you probably want to save them somewhere, so viewing them in the browser probably isn't the best way to look at hundreds or thousands of logs. There are a couple of options here:

  • In the "Discover" tab, you can click the arrow tab near the bottom to see the raw request and response, and click "Request" to use that as a query to ES with curl (or something similar) to get the logs you want.
  • You can use logstash or stream2es to dump out the contents of an index (with possible query parameters to get the specific documents you want).
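The first bullet can be sketched with curl. The host, index, and query body below are placeholders of my own, not values from the original answer; in practice you would paste the JSON from Discover's "Request" tab:

```shell
# Placeholder connection details; substitute your own ES host and index
ES_URL="http://localhost:9200"
INDEX="logstash-*"

# Stand-in for the JSON body copied from Discover's "Request" tab
QUERY='{"query":{"match_all":{}},"size":500}'

# Send the Discover query straight to Elasticsearch and save the hits
curl -s -m 5 -H 'Content-Type: application/json' \
     -XPOST "$ES_URL/$INDEX/_search" \
     -d "$QUERY" > discover_hits.json
```

Note that a plain `_search` like this returns at most the first page of hits (10,000 by default in recent ES versions); the scroll-based script in the next answer handles larger result sets.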

k4aesqcs #4

@Sean's answer is right, but lacks specifics.
Here is a quick-and-dirty script that grabs all the logs from ElasticSearch via httpie, parses and writes them out via jq, and uses a scroll cursor to iterate the query so that more than the first 500 entries can be captured (unlike the other solutions on this page).
The script is implemented with httpie (the http command) and fish shell, but can easily be adapted to more standard tools like bash and curl.
The query is set up per @Sean's answer:
In the "Discover" tab, you can click the arrow tab near the bottom to see the raw request and response, and click "Request" to use that as a query to ES with curl (or something similar) to get the logs you want.

set output logs.txt
set query '<paste value from Discover tab here>'
set es_url http://your-es-server:port
set index 'filebeat-*'

function process_page
  # You can do anything with each page of results here
  # but writing to a TSV file isn't a bad example -- note
  # the jq expression here extracts a kubernetes pod name and
  # the message field, but can be modified to suit
  echo $argv | \
    jq -r '.hits.hits[]._source | [.kubernetes.pod.name, .message] | @tsv' \
    >> $output
end

function summarize_string
  echo (echo $argv | string sub -l 10)"..."(echo $argv | string sub -s -10 -l 10)
end

set response (echo $query | http POST $es_url/$index/_search\?scroll=1m)
set scroll_id (echo $response | jq -r ._scroll_id)
set hits_count (echo $response | jq -r '.hits.hits | length')
set hits_so_far $hits_count
echo "Got initial response with $hits_count hits and scroll ID "(summarize_string $scroll_id)

process_page $response

while test "$hits_count" != "0"
  set response (echo "{ \"scroll\": \"1m\", \"scroll_id\": \"$scroll_id\" }" | http POST $es_url/_search/scroll)
  set scroll_id (echo $response | jq -r ._scroll_id)
  set hits_count (echo $response | jq -r '.hits.hits | length')
  set hits_so_far (math $hits_so_far + $hits_count)
  echo "Got response with $hits_count hits (hits so far: $hits_so_far) and scroll ID "(summarize_string $scroll_id)

  process_page $response
end

echo Done!

The end result is all of the logs matching the query in Kibana, written to the output file specified at the top of the script, transformed by the code in the process_page function.
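The jq expression inside process_page can be exercised on its own before pointing the script at a real cluster. The sample hit below is fabricated to match the kubernetes/filebeat field layout the script assumes:

```shell
# One fake Elasticsearch response page with a single hit (field names
# follow the kubernetes/filebeat schema the script expects)
RESPONSE='{"hits":{"hits":[{"_source":{"kubernetes":{"pod":{"name":"web-1"}},"message":"pod started"}}]}}'

# The same filter process_page applies: pod name and message as TSV
echo "$RESPONSE" | jq -r '.hits.hits[]._source | [.kubernetes.pod.name, .message] | @tsv'
```

If your documents don't have the kubernetes fields, adjust the jq path list to whatever `_source` fields you want in the output file.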

ej83mcc0 #5

If you have trouble making your own request with curl, or you don't need an automated program to extract logs from Kibana, just click 'Response' and grab what you need.
After running into trouble like 'xsrf token missing' when using curl, I found this way easier and simpler!
As others have said, the Request button appears after you click the arrow tab near the bottom.

mefy6pfw #6

Only the Timestamp and the count of messages at that time are exported, not the log information:

Raw:

1441240200000,1214
1441251000000,1217
1441261800000,1342
1441272600000,1452
1441283400000,1396
1441294200000,1332
1441305000000,1332
1441315800000,1334
1441326600000,1337
1441337400000,1215
1441348200000,12523
1441359000000,61897

Formatted:

"September 3rd 2015, 06:00:00.000","1,214"
"September 3rd 2015, 09:00:00.000","1,217"
"September 3rd 2015, 12:00:00.000","1,342"
"September 3rd 2015, 15:00:00.000","1,452"
"September 3rd 2015, 18:00:00.000","1,396"
"September 3rd 2015, 21:00:00.000","1,332"
"September 4th 2015, 00:00:00.000","1,332"
"September 4th 2015, 03:00:00.000","1,334"
"September 4th 2015, 06:00:00.000","1,337"
"September 4th 2015, 09:00:00.000","1,215"
"September 4th 2015, 12:00:00.000","12,523"
"September 4th 2015, 15:00:00.000","61,897"

6ss1mwsb #7

I tried the scripts, but I always ran into problems with whitespace or hidden characters.
I inspected the network tab (while the Kibana UI was showing only the raw event logs), copied the request as cURL, converted it to Python on a random website, and then added the logic to extract and update search_after so I could get more than one page of results.
Note that this is somewhat specific to CVAT (a computer-vision image-labeling tool that uses Kibana to store its event data), but only insofar as some API endpoints differ from other Kibana instances.
It was too painful to figure out not to leave something behind.

import requests

cookies = {
    'PGADMIN_LANGUAGE': 'en',
    'sessionid': 'gqnwizma4m088siz93q7uafjygkbd1b3',
    'csrftoken': 'khLc0XNgkESvVxoPHyOyCIJ2dXzv2tHWTIoOcxqN6X6CR75E6VTzis6jRxNmVI43',
}

headers = {
    'Accept': 'application/json, text/plain, */*',
    'Accept-Language': 'en-GB,en',
    'Connection': 'keep-alive',
    'Origin': '<kibana-address>',
    'Referer': '<kibana-address>/analytics/app/kibana',
    'Sec-GPC': '1',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36',
    'content-type': 'application/x-ndjson',
    'kbn-version': '6.8.23',
}

params = {
    'rest_total_hits_as_int': 'true',
    'ignore_throttled': 'true',
}

# https://www.elastic.co/guide/en/elasticsearch/reference/8.6/search-multi-search.html
# https://stackoverflow.com/questions/68127892/how-does-search-after-work-in-elastic-search

results = []
for i in range(0, 500):
    
    if i == 0: 
        data = '{"index":"cvat*", "ignore_unavailable":true,"preference":1676572620990}\n{"version":true,"size":500, "from": ' + str(i*500) + ', "sort":[{"@timestamp":{"order":"desc","unmapped_type":"boolean"}}],"_source":{"excludes":[]},"aggs":{"2":{"date_histogram":{"field":"@timestamp","interval":"1M","time_zone":"Europe/London","min_doc_count":1}}},"stored_fields":["*"],"script_fields":{},"docvalue_fields":[{"field":"@timestamp","format":"date_time"}],"query":{"bool":{"must":[{"range":{"@timestamp":{"gte":1673308800000,"lte":1676591999999,"format":"epoch_millis"}}}],"filter":[{"match_all":{}}],"should":[],"must_not":[]}},"highlight":{"pre_tags":["@kibana-highlighted-field@"],"post_tags":["@/kibana-highlighted-field@"],"fields":{"*":{}},"fragment_size":2147483647},"timeout":"30000ms"}\n'
    else: 
        search_after = f'"search_after": {str(search_after)}'
        print(search_after)
        data = '{"index":"cvat*", "ignore_unavailable":true,"preference":1676572620990}\n{"version":true,"size":500, ' + search_after + ', "sort":[{"@timestamp":{"order":"desc","unmapped_type":"boolean"}}],"_source":{"excludes":[]},"aggs":{"2":{"date_histogram":{"field":"@timestamp","interval":"1M","time_zone":"Europe/London","min_doc_count":1}}},"stored_fields":["*"],"script_fields":{},"docvalue_fields":[{"field":"@timestamp","format":"date_time"}],"query":{"bool":{"must":[{"range":{"@timestamp":{"gte":1673308800000,"lte":1676591999999,"format":"epoch_millis"}}}],"filter":[{"match_all":{}}],"should":[],"must_not":[]}},"highlight":{"pre_tags":["@kibana-highlighted-field@"],"post_tags":["@/kibana-highlighted-field@"],"fields":{"*":{}},"fragment_size":2147483647},"timeout":"30000ms"}\n'
        
    #print(data)
    
    response = requests.post(
        f'<kibana-address>/analytics/elasticsearch/_msearch', #?from={str(i*500)}',
        params=params,
        cookies=cookies,
        headers=headers,
        data=data,
        verify=False,
    )
    
    print(i, response.status_code)
    
    if response.status_code == 500:
        break
        
    hits = response.json()['responses'][0]['hits']['hits']
    if not hits:
        break  # no more pages to fetch
    results.extend(hits)
    # the sort values of the last hit seed the next page's search_after
    search_after = hits[-1]['sort']

biswetbf #8

Sure, you can export from Kibana's Discover (Kibana 4.x+). 1. On the Discover page, click the 'up arrow' tab here:

2. Now, at the bottom of the page, you will have two options for exporting search results

At logz.io (the company I work for), we publish scheduled reports based on specific searches.
