Elasticsearch: map the result of collapse/aggregations performed on grouped documents

pdtvr36n · asked 2022-11-22 · ElasticSearch

There is a list of conversations, and every conversation has a list of messages. Each message has various fields plus an action field. We need to take into account that in the first message of a conversation action A was used, a few messages later action A.1 was used, some time after that A.1.1, and so on (it is a list of chatbot intents).
Grouping a conversation's message actions looks like: A > A > A > A.1 > A > A.1 > A.1.1 ...

The problem:

I need to create a report with Elasticsearch that returns the actions group of each conversation; next I need to group the similar actions groups and add a count; the end result would be a Map<actionsGroup, count>, e.g. 'A > A.1 > A > A.1 > A.1.1', 3.
While building the actions group I need to eliminate the consecutive duplicates in each group; I need A > A.1 > A > A.1 > A.1.1 instead of A > A > A > A.1 > A > A.1 > A.1.1.
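The consecutive-duplicate elimination itself is a simple step if it can be done outside of Elasticsearch; a minimal TypeScript sketch (function name and separator are my own choice):

// Collapse consecutive duplicate actions:
// A > A > A > A.1 > A > A.1 > A.1.1  becomes  A > A.1 > A > A.1 > A.1.1
function buildActionsGroup(actions: string[], separator: string = ' > '): string {
  return actions
    .filter((action, i) => i === 0 || action !== actions[i - 1])
    .join(separator);
}

// Example: returns "A > A.1 > A > A.1 > A.1.1"
buildActionsGroup(['A', 'A', 'A', 'A.1', 'A', 'A.1', 'A.1.1']);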

The step I started with:

{
   "collapse":{
      "field":"context.conversationId",
      "inner_hits":{
         "name":"logs",
         "size": 10000,
         "sort":[
            {
               "@timestamp":"asc"
            }
         ]
      }
   },
   "aggs":{
   }
}

What I need next:

1. I need to map the collapsed results into a single result such as A > A.1 > A > A.1 > A.1.1. I have seen that with aggregations it is possible to use scripts on the result and build a list of actions like the one I need, but an aggregation runs over all messages, not only over the messages grouped inside the collapse. Is it possible to use an aggregation inside a collapse, or is there a similar solution?
2. I need to group all the collapsed result values (A > A.1 > A > A.1 > A.1.1), add a count, and obtain the Map<actionsGroup, count>.

Or, alternatively:

1. Group the conversation messages by the conversationId field using an aggregation (I don't know how to do this).
2. Use a script to iterate over all the values and build the actions group of each conversation (not sure this is possible).
3. Use another aggregation over all the values and group the duplicates, returning the Map<actionsGroup, count>; a client-side fallback for this is sketched after this list.
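
If the grouping and counting may happen outside of Elasticsearch, one straightforward fallback is to iterate over the collapsed hits on the client and build the map there; a rough TypeScript sketch under that assumption (it relies on the collapse request above with inner_hits named "logs", and on the buildActionsGroup helper sketched earlier):

// Build Map<actionsGroup, count> from a collapse response; hits is
// response.hits.hits of the search that used "collapse" with inner_hits "logs".
interface MessageHit { _source: { context: { action: string } } }

function countActionsGroups(hits: any[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const hit of hits) {
    // inner_hits "logs" contains the conversation's messages sorted by @timestamp
    const messages: MessageHit[] = hit.inner_hits.logs.hits.hits;
    const actions = messages.map(m => m._source.context.action);
    const group = buildActionsGroup(actions); // drops consecutive duplicates
    counts.set(group, (counts.get(group) ?? 0) + 1);
  }
  return counts; // e.g. Map { "A > A.1 > A.1.1" => 2, "B > B.1" => 1 }
}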
The mapping:

"mappings":{
  "properties":{
     "@timestamp":{
        "type":"date",
        "format": "epoch_millis"
     },
     "context":{
        "properties":{
           "action":{
              "type":"keyword"
           },
           "conversationId":{
              "type":"keyword"
           }
        }
     }
  }
}

Sample documents of the conversations:

Conversation 1.
{
    "@timestamp": 1579632745000,
    "context": {
        "action": "A",
        "conversationId": "conv_id1",
    }
},
{
    "@timestamp": 1579632745001,
    "context": {
        "action": "A.1",
        "conversationId": "conv_id1",
    }
},
{
    "@timestamp": 1579632745002,
    "context": {
        "action": "A.1.1",
        "conversationId": "conv_id1",
    }
}

Conversation 2.
{
    "@timestamp": 1579632745000,
    "context": {
        "action": "A",
        "conversationId": "conv_id2",
    }
},
{
    "@timestamp": 1579632745001,
    "context": {
        "action": "A.1",
        "conversationId": "conv_id2",
    }
},
{
    "@timestamp": 1579632745002,
    "context": {
        "action": "A.1.1",
        "conversationId": "conv_id2",
    }
}

Conversation 3.
{
    "@timestamp": 1579632745000,
    "context": {
        "action": "B",
        "conversationId": "conv_id3",
    }
},
{
    "@timestamp": 1579632745001,
    "context": {
        "action": "B.1",
        "conversationId": "conv_id3",
    }
}

Expected result:

{
    "A -> A.1 -> A.1.1": 2,
    "B -> B.1": 1
}
Something similar to this, in this or any other format, would work.
7gcisfzg (Answer 1)

I solved it with Elasticsearch's scripted_metric aggregation. Note that the index has changed from its initial state (which is why the field names below differ from the mapping above).
The query:

{
   "size": 0,
   "aggs": {
        "intentPathsCountAgg": {
            "scripted_metric": {
                "init_script": "state.messagesList = new ArrayList();",
                "map_script": "long currentMessageTime = doc['messageReceivedEvent.context.timestamp'].value.millis; Map currentMessage = ['conversationId': doc['messageReceivedEvent.context.conversationId.keyword'], 'time': currentMessageTime, 'intentsPath': doc['brainQueryRequestEvent.brainQueryRequest.user_data.intentsHistoryPath.keyword'].value]; state.messagesList.add(currentMessage);",  
                "combine_script": "return state",
                "reduce_script": "List messages = new ArrayList(); Map conversationsMap = new HashMap(); Map intentsMap = new HashMap(); String[] ifElseWorkaround = new String[1]; for (state in states) { messages.addAll(state.messagesList);} messages.stream().forEach((message) -> { Map existingMessage = conversationsMap.get(message.conversationId); if(existingMessage == null || message.time > existingMessage.time) { conversationsMap.put(message.conversationId, ['time': message.time, 'intentsPath': message.intentsPath]); } else { ifElseWorkaround[0] = ''; } }); conversationsMap.entrySet().forEach(conversation -> { if (intentsMap.containsKey(conversation.getValue().intentsPath)) { long intentsCount = intentsMap.get(conversation.getValue().intentsPath) + 1; intentsMap.put(conversation.getValue().intentsPath, intentsCount); } else {intentsMap.put(conversation.getValue().intentsPath, 1L);} }); return intentsMap.entrySet().stream().map(intentPath -> [intentPath.getKey().toString(): intentPath.getValue()]).collect(Collectors.toSet()) "
            }
        }
    }
}

The scripts formatted (for better readability, kept in a .ts file):

scripted_metric: {
  init_script: 'state.messagesList = new ArrayList();',
  map_script: `
    long currentMessageTime = doc['messageReceivedEvent.context.timestamp'].value.millis;
    Map currentMessage = [
      'conversationId': doc['messageReceivedEvent.context.conversationId.keyword'],
      'time': currentMessageTime,
      'intentsPath': doc['brainQueryRequestEvent.brainQueryRequest.user_data.intentsHistoryPath.keyword'].value
    ];
    state.messagesList.add(currentMessage);`,
  combine_script: 'return state',
  reduce_script: `
    List messages = new ArrayList();
    Map conversationsMap = new HashMap();
    Map intentsMap = new HashMap();
    boolean[] ifElseWorkaround = new boolean[1];

    for (state in states) {
      messages.addAll(state.messagesList);
    }

    messages.stream().forEach(message -> {
      Map existingMessage = conversationsMap.get(message.conversationId);
      if(existingMessage == null || message.time > existingMessage.time) {
        conversationsMap.put(message.conversationId, ['time': message.time, 'intentsPath': message.intentsPath]);
      } else {
        ifElseWorkaround[0] = true;
      }
    });

    conversationsMap.entrySet().forEach(conversation -> {
      if (intentsMap.containsKey(conversation.getValue().intentsPath)) {
        long intentsCount = intentsMap.get(conversation.getValue().intentsPath) + 1;
        intentsMap.put(conversation.getValue().intentsPath, intentsCount);
      } else {
        intentsMap.put(conversation.getValue().intentsPath, 1L);
      }
    });

    return intentsMap.entrySet().stream().map(intentPath -> [
      'path': intentPath.getKey().toString(),
      'count': intentPath.getValue()
    ]).collect(Collectors.toSet())`
}

The response:

{
    "took": 2,
    "timed_out": false,
    "_shards": {
        "total": 5,
        "successful": 5,
        "skipped": 0,
        "failed": 0
    },
    "hits": {
        "total": {
            "value": 11,
            "relation": "eq"
        },
        "max_score": null,
        "hits": []
    },
    "aggregations": {
        "intentPathsCountAgg": {
            "value": [
                {
                    "smallTalk.greet -> smallTalk.greet2 -> smallTalk.greet3": 2
                },
                {
                    "smallTalk.greet -> smallTalk.greet2 -> smallTalk.greet3  -> smallTalk.greet4": 1
                },
                {
                    "smallTalk.greet -> smallTalk.greet2": 1
                }
            ]
        }
    }
}
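
Since the reduce_script returns a set of single-entry objects, merging them into one Map<intentsPath, count> on the client is trivial; a small TypeScript sketch based on the response shape above (the function name is my own):

// Merge [{ "path A": 2 }, { "path B": 1 }, ...] into a single Map.
function toIntentPathCounts(value: Array<Record<string, number>>): Map<string, number> {
  const counts = new Map<string, number>();
  for (const entry of value) {
    for (const [path, count] of Object.entries(entry)) {
      counts.set(path, count);
    }
  }
  return counts;
}

// Usage: toIntentPathCounts(response.aggregations.intentPathsCountAgg.value)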
eni9jsuy (Answer 2)

Using a script in a terms aggregation, we can create buckets on the first character of "context.action". With a similar terms sub-aggregation we can get all of the "context.action" values under the parent bucket, for example A -> A.1 -> A.1.1 ...
Query:

{
  "size": 0,
  "aggs": {
    "conversations": {
      "terms": {
        "script": {
          "source": "def term=doc['context.action'].value; return term.substring(0,1);"
        },
        "size": 10
      },
      "aggs": {
        "sub_conversations": {
          "terms": {
            "script": {
              "source": "if(doc['context.action'].value.length()>1) return doc['context.action'];"
            },
            "size": 10
          }
        },
        "count": {
          "cardinality": {
            "script": {
              "source": "if(doc['context.action'].value.length()>1) return doc['context.action'];"
            }
          }
        }
      }
    }
  }
}

The script in "conversations" returns the first character (A, B, C, etc.); the script in "sub_conversations" returns every context.action under that parent bucket (the length check skips the bare parent action such as [A]); the "count" cardinality script counts the distinct context.action values under the parent.

Since it is not possible to join different documents in Elasticsearch, you will have to build the combined key on the client side by iterating over the aggregation buckets (a rough sketch of that iteration follows the result below).
Result:

"aggregations" : {
    "conversations" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [
        {
          "key" : "A",
          "doc_count" : 6,
          "sub_conversations" : {
            "doc_count_error_upper_bound" : 0,
            "sum_other_doc_count" : 0,
            "buckets" : [
              {
                "key" : "A.1",
                "doc_count" : 2
              },
              {
                "key" : "A.1.1",
                "doc_count" : 2
              }
            ]
          },
          "count" : {
            "value" : 2
          }
        },
        {
          "key" : "B",
          "doc_count" : 2,
          "sub_conversations" : {
            "doc_count_error_upper_bound" : 0,
            "sum_other_doc_count" : 0,
            "buckets" : [
              {
                "key" : "B.1",
                "doc_count" : 1
              }
            ]
          },
          "count" : {
            "value" : 1
          }
        }
      ]
    }
  }
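
A rough TypeScript sketch of that client-side iteration, based on the sample result above (types and function name are my own, and a real implementation may need to sort the sub-keys before joining them):

// Turn the "conversations" terms buckets into { "A -> A.1 -> A.1.1": 2, ... }.
interface TermsBucket { key: string; doc_count: number }
interface ConversationBucket extends TermsBucket {
  sub_conversations: { buckets: TermsBucket[] };
  count: { value: number };
}

function toActionsGroupCounts(buckets: ConversationBucket[]): Record<string, number> {
  const result: Record<string, number> = {};
  for (const bucket of buckets) {
    // parent key ("A") followed by every sub-action under it ("A.1", "A.1.1")
    const path = [bucket.key, ...bucket.sub_conversations.buckets.map(b => b.key)].join(' -> ');
    result[path] = bucket.count.value; // cardinality of the sub-actions, as in the sample result
  }
  return result; // e.g. { "A -> A.1 -> A.1.1": 2, "B -> B.1": 1 }
}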
