How to search for a name with the keyword "With" using Lucene/Hibernate Search?

fcy6dtqo · posted on 2022-11-07 · in Lucene

The name of the person to search for is "Suleman Kumar With", where "With" is the last name. Searching works for every other name, but not for this English keyword.
Here is how I create the Lucene index:

@Fields({
    @Field(index = Index.YES, store = Store.NO),
    @Field(name = "LastName_Sort", index = Index.YES, analyzer = @Analyzer(definition = "sortAnalyzer"))
})
@Column(name = "LASTNAME", length = 50)
public String getLastName() {
    return lastName;
}

The sortAnalyzer has the following configuration:

@AnalyzerDef(name = "sortAnalyzer",
    tokenizer = @TokenizerDef(factory = KeywordTokenizerFactory.class),
    filters = {
        @TokenFilterDef(factory = LowerCaseFilterFactory.class),
        @TokenFilterDef(factory = PatternReplaceFilterFactory.class, params = {
            @Parameter(name = "pattern", value = "('-&\\.,\\(\\))"),
            @Parameter(name = "replacement", value = " "),
            @Parameter(name = "replace", value = "all")
        }),
        @TokenFilterDef(factory = PatternReplaceFilterFactory.class, params = {
            @Parameter(name = "pattern", value = "([^0-9\\p{L} ])"),
            @Parameter(name = "replacement", value = ""),
            @Parameter(name = "replace", value = "all")
        })
    }
)

Searching is done on the last name and on the primary key (ID), and that is where I get a token mismatch error.

ljsrvy3e 1#

I implemented it using my own custom analyzer:

import java.io.Reader;

import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.ReusableAnalyzerBase;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.StopwordAnalyzerBase;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.util.Version;

public class IgnoreStopWordsAnalyzer extends StopwordAnalyzerBase {

    public IgnoreStopWordsAnalyzer() {
        // A null stop-word set becomes an empty set, so no term is treated as a stop word.
        super(Version.LUCENE_36, null);
    }

    @Override
    protected ReusableAnalyzerBase.TokenStreamComponents createComponents(final String fieldName, final Reader reader) {
        final StandardTokenizer src = new StandardTokenizer(Version.LUCENE_36, reader);
        TokenStream tok = new StandardFilter(Version.LUCENE_36, src);
        tok = new LowerCaseFilter(Version.LUCENE_36, tok);
        // Effectively a no-op here, because this.stopwords is empty.
        tok = new StopFilter(Version.LUCENE_36, tok, this.stopwords);
        return new ReusableAnalyzerBase.TokenStreamComponents(src, tok);
    }
}

Apply this analyzer to the field and stop words will be ignored, i.e. they are no longer stripped out during indexing.
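
For illustration, a minimal mapping sketch that reuses the field definitions from the question; only the analyzer reference on the default field is new, and it assumes IgnoreStopWordsAnalyzer is on the classpath:

@Fields({
    // reference the custom analyzer directly so a term like "with" is indexed instead of dropped
    @Field(index = Index.YES, store = Store.NO, analyzer = @Analyzer(impl = IgnoreStopWordsAnalyzer.class)),
    @Field(name = "LastName_Sort", index = Index.YES, analyzer = @Analyzer(definition = "sortAnalyzer"))
})
@Column(name = "LASTNAME", length = 50)
public String getLastName() {
    return lastName;
}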

dvtswwa3 2#

For Hibernate Search version 5, you can use a custom analyzer like this:

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.core.StopFilter;
import org.apache.lucene.analysis.standard.StandardFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.util.StopwordAnalyzerBase;

public class IgnoreStopWordsAnalyzer extends StopwordAnalyzerBase {

    public IgnoreStopWordsAnalyzer() {
        // A null argument results in an empty stop-word set,
        // so the StopFilter below removes nothing.
        super(null);
    }

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        final Tokenizer source = new StandardTokenizer();
        TokenStream tokenStream = new StandardFilter(source);
        tokenStream = new LowerCaseFilter(tokenStream);
        tokenStream = new StopFilter(tokenStream, this.stopwords);
        return new TokenStreamComponents(source, tokenStream);
    }

}
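
As a quick check that a stop word such as "with" now matches, a Hibernate Search 5 query could look roughly like the sketch below; the Person entity and the lastName field name are placeholders, and the analyzer above is assumed to be applied to that field:

import java.util.List;

import javax.persistence.EntityManager;

import org.apache.lucene.search.Query;
import org.hibernate.search.jpa.FullTextEntityManager;
import org.hibernate.search.jpa.Search;
import org.hibernate.search.query.dsl.QueryBuilder;

public class PersonSearch {

    // "Person" and "lastName" are placeholder names; the field is assumed to be
    // mapped with @Analyzer(impl = IgnoreStopWordsAnalyzer.class).
    public List<?> findByLastName(EntityManager entityManager, String lastName) {
        FullTextEntityManager ftem = Search.getFullTextEntityManager(entityManager);

        QueryBuilder qb = ftem.getSearchFactory()
                .buildQueryBuilder()
                .forEntity(Person.class)
                .get();

        Query luceneQuery = qb.keyword()
                .onField("lastName")
                .matching(lastName)   // "with" is no longer dropped as a stop word
                .createQuery();

        return ftem.createFullTextQuery(luceneQuery, Person.class).getResultList();
    }
}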
