ElasticSearch - query only searches 5 characters

csbfibhn asked on 2021-06-14 in ElasticSearch

I have a problem where, no matter what value I send in the query, I never get any results past the fifth character of the search term.
Example:
{"match": {"name": "benjami"}} - returns no results
{"match": {"name": "benja"}} - returns results named benja...
{"match": {"name": "benjamin"}} - returns results named benjamin
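
For context, here is a minimal sketch of how one of these match queries would be sent; the index name issuersearch (taken from the settings dump further down), the host, and the use of the Python client elasticsearch-py (suggested by the u'...' result dump below) are all assumptions:

# Minimal sketch: run one of the match queries above with elasticsearch-py.
# The index name "issuersearch" and the host are assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="issuersearch",
    body={"query": {"match": {"name": "benjami"}}},
)
print(resp["hits"]["total"])  # unexpectedly 0 once the term is longer than 5 characters
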
Index:
"name": {"type": "string", "analyzer": "edge_ngram_analyzer"}
Settings:

"analyzer": {
    "edge_ngram_analyzer":{
        "type": "custom", "tokenizer": "standard", "filter": ["lowercase","edge_ngram_filter"]}},
"filter": {
    "edge_ngram_filter":{
        "type": "edge_ngram", "min_gram": 1, "max_gram": 40}}

Using term vectors I can see that the field is indexed correctly. The problem is that Elasticsearch is not searching with the full query value. Does anyone know why this happens? Any help is greatly appreciated. I am using ElasticSearch version 5.6!
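
For reference, a sketch of the term-vectors call used for that check; the document type and id are placeholders, not values from the original post:

# Sketch: inspect how one stored document was tokenized via the term vectors API.
# doc_type and id are placeholders; substitute a real document.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

tv = es.termvectors(
    index="issuersearch",
    doc_type="doc",    # placeholder mapping type
    id="1",            # placeholder document id
    fields="name",
)
print(sorted(tv["term_vectors"]["name"]["terms"].keys()))
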
Index

"properties" : { "searchid": {"type": "string", "index": "not_analyzed"},
        "otherId": {"type": "string", "analyzer": "edge_ngram_analyzer"},
        "name": {"type": "string", "analyzer": "edge_ngram_analyzer"},
}

Settings

"settings": {
        "number_of_replicas": 0,
        "analysis": {
            "filter": {"edge_ngram_filter": {"type": "edge_ngram", "min_gram": 2, "max_gram": 80}},
            "analyzer": {
                "edge_ngram_analyzer": {
                    "type": "custom",
                    "tokenizer": "my_tokenizer",
                    "filter": ["lowercase", "edge_ngram_filter"],
                },
                "short_edge_ngram_analyzer": {
                    "type": "custom",
                    "tokenizer": "standard",
                    "filter": ["lowercase", "edge_ngram_filter"],
                },
                "case_sensitive": {"type": "custom", "tokenizer": "whitespace", "filter": []}
            },
            "tokenizer": {
                "my_tokenizer": {
                  "type": "edge_ngram",
                  "min_gram": 2,
                  "max_gram": 40,
                  "token_chars": [
                    "letter","digit"
                  ]
                }
        },
        },
    },
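
For completeness, here is a sketch of how these settings and the mapping above would be applied when (re)creating the index with the Python client; the mapping type name is a placeholder, and the fields are written as "text", the ES 5.x replacement for the deprecated "string" type:

# Sketch: recreate the index with the analysis settings above (ES 5.6).
# The mapping type "doc" is a placeholder; index name and host are assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(
    index="issuersearch",
    body={
        "settings": {
            "number_of_replicas": 0,
            "analysis": {
                "filter": {
                    "edge_ngram_filter": {"type": "edge_ngram", "min_gram": 2, "max_gram": 80}
                },
                "analyzer": {
                    "edge_ngram_analyzer": {
                        "type": "custom",
                        "tokenizer": "my_tokenizer",
                        "filter": ["lowercase", "edge_ngram_filter"],
                    }
                },
                "tokenizer": {
                    "my_tokenizer": {
                        "type": "edge_ngram",
                        "min_gram": 2,
                        "max_gram": 40,
                        "token_chars": ["letter", "digit"],
                    }
                },
            },
        },
        "mappings": {
            "doc": {  # placeholder type name; ES 5.x still requires a mapping type
                "properties": {
                    "otherId": {"type": "text", "analyzer": "edge_ngram_analyzer"},
                    "name": {"type": "text", "analyzer": "edge_ngram_analyzer"},
                }
            }
        },
    },
)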

Query

{'query':
    {'function_score':
        {'query':
            {'bool': {'should': [{'multi_match': {'query': 'A162412350', 'fields': ['otherId']}}]}},
         'functions': [{'field_value_factor': {'field': 'positionOrActive', 'modifier': 'none', 'missing': '0', 'factor': '1.1'}}],
         'score_mode': 'sum',
         'boost_mode': 'sum'}},
 'size': 25}

Document results

[{u'otherId': u'A1624903499',
  u'positionOrActive': 0,
  'searchScore': 18.152431,
  u'id': 35631},
 {u'otherId': u'A1624903783',
  u'positionOrActive': 0,
  'searchScore': 18.152431,
  u'id': 35632},
 {u'otherId': u'A1624904100',
  u'positionOrActive': 0,
  'searchScore': 18.152431,
  u'id': 35633}]

Settings (as returned by the index)

{
  "issuersearch": {
    "settings": {
      "index": {
        "refresh_interval": "1s",
        "number_of_shards": "1",
        "provided_name": "issuersearch",
        "creation_date": "1602687790617",
        "analysis": {
          "filter": {
            "edge_ngram_filter": {
              "type": "edge_ngram",
              "min_gram": "2",
              "max_gram": "80"
            }
          },
          "analyzer": {
            "edge_ngram_analyzer": {
              "filter": Array[2][
                "lowercase",
                "edge_ngram_filter"
              ],
              "type": "custom",
              "tokenizer": "my_tokenizer"
            },
            "short_edge_ngram_analyzer": {
              "filter": Array[2][
                "lowercase",
                "edge_ngram_filter"
              ],
              "type": "custom",
              "tokenizer": "standard"
            },
            "case_sensitive": {
              "type": "custom",
              "tokenizer": "whitespace"
            }
          },
          "tokenizer": {
            "my_tokenizer": {
              "token_chars": Array[2][
                "letter",
                "digit"
              ],
              "min_gram": "2",
              "type": "edge_ngram",
              "max_gram": "40"
            }
          }
        },
        "number_of_replicas": "0",
        "uuid": "dexqFx32RXy-AC3HHpfElA",
        "version": {
          "created": "5060599"
        }
      }
    }
  }
}

luaexgnf1#

Probably because the standard tokenizer splits tokens on whitespace. You need to provide a complete example (the full index mapping, sample documents, and the actual results of your search query) so this can be confirmed.
Also, hopefully you are not using any search analyzer on your name field.
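
If it helps, a quick sketch for pulling the live mapping to confirm which analyzers are actually attached to the name field (index name and host assumed from the settings dump above):

# Sketch: fetch the live mapping and print the "name" field definition,
# to confirm whether a separate search_analyzer is configured on it.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

mapping = es.indices.get_mapping(index="issuersearch")
for doc_type, m in mapping["issuersearch"]["mappings"].items():
    print(doc_type, m["properties"]["name"])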
