Usage of the org.apache.lucene.analysis.Analyzer.getPositionIncrementGap() method, with code examples


This article collects code examples of the Java method org.apache.lucene.analysis.Analyzer.getPositionIncrementGap() and shows how it is used in practice. The examples are drawn from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, and should make useful references. Details of Analyzer.getPositionIncrementGap() follow:
Package path: org.apache.lucene.analysis.Analyzer
Class name: Analyzer
Method name: getPositionIncrementGap

About Analyzer.getPositionIncrementGap

Invoked before indexing an IndexableField instance if terms have already been added to that field. This allows custom analyzers to place an automatic position increment gap between IndexableField instances that share the same field name. The default position increment gap is 0. With a 0 position increment gap and the typical default token position increment of 1, all terms in a field, including terms across IndexableField instances, occupy successive positions, allowing exact PhraseQuery matches, for instance, across IndexableField instance boundaries.
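
A minimal sketch of how a custom analyzer can use this hook (not taken from the article's examples; assumes the Lucene 5+ Analyzer API): overriding getPositionIncrementGap with a large value keeps a PhraseQuery from matching across two values of a multi-valued field.

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;

// Hypothetical example analyzer: whitespace tokenization plus a fixed
// position gap of 100 between values sharing the same field name.
public class GapAnalyzer extends Analyzer {

 @Override
 protected TokenStreamComponents createComponents(String fieldName) {
  Tokenizer source = new WhitespaceTokenizer();
  return new TokenStreamComponents(source);
 }

 @Override
 public int getPositionIncrementGap(String fieldName) {
  // The default is 0: terms of consecutive IndexableField instances sit
  // in adjacent positions. With a gap of 100, a phrase like "end start"
  // no longer matches across the values "one end" and "start two".
  return 100;
 }
}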

Code examples

Code example source: org.apache.lucene/lucene-core

@Override
public int getPositionIncrementGap(String fieldName) {
 return getWrappedAnalyzer(fieldName).getPositionIncrementGap(fieldName);
}
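
In Lucene this delegating override lives on AnalyzerWrapper, so wrappers such as PerFieldAnalyzerWrapper resolve the gap from the analyzer registered for the field. A hypothetical usage sketch (GapAnalyzer is the assumed class from the sketch above):

import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

// "body" uses GapAnalyzer (gap of 100); all other fields fall back to
// StandardAnalyzer, whose gap is the default 0.
Map<String, Analyzer> perField = new HashMap<>();
perField.put("body", new GapAnalyzer());
Analyzer wrapper = new PerFieldAnalyzerWrapper(new StandardAnalyzer(), perField);
int gap = wrapper.getPositionIncrementGap("body"); // returns 100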

Code example source: tjake/Solandra

// Fragment: if terms were already added to this field, advance the position
// by the analyzer's gap before indexing the next field value.
if (position > 0)
  position += analyzer.getPositionIncrementGap(field.name());

Code example source: org.apache.lucene/lucene-analyzers-common

@Override
public int getPositionIncrementGap(String fieldName) {
 // use default from Analyzer base class if null
 return (posIncGap == null) ? super.getPositionIncrementGap(fieldName) : posIncGap.intValue();
}
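
This null-guarded override matches Lucene's CustomAnalyzer, where the gap is an optional builder setting. A hypothetical configuration sketch (assuming the CustomAnalyzer builder API; build() declares IOException):

import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.custom.CustomAnalyzer;

// If withPositionIncrementGap() is never called, posIncGap stays null and
// the override above falls back to the Analyzer base-class default.
Analyzer custom = CustomAnalyzer.builder()
  .withTokenizer("whitespace")
  .withPositionIncrementGap(100)
  .build();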

Code example source: org.apache.lucene/lucene-core

// Fragment from the indexing chain: between two instances of the same field,
// apply both the analyzer's position gap and its offset gap.
invertState.position += docState.analyzer.getPositionIncrementGap(fieldInfo.name);
invertState.offset += docState.analyzer.getOffsetGap(fieldInfo.name);

Code example source: com.atlassian.jira/jira-core

public int getPositionIncrementGap(String fieldName)
{
  return analyzer.getPositionIncrementGap(fieldName);
}

Code example source: org.apache.lucene/lucene-benchmark

@Override
public int getPositionIncrementGap(String fieldName) {
 return null == positionIncrementGap ? super.getPositionIncrementGap(fieldName) : positionIncrementGap;
}

Code example source: org.elasticsearch/elasticsearch

// Fragment of the analyze() helper shown in full below: after the stream
// is exhausted, add the final position increment plus the per-field gaps.
lastPosition += posIncr.getPositionIncrement();
lastPosition += analyzer.getPositionIncrementGap(field);
lastOffset += analyzer.getOffsetGap(field);

Code example source: org.compass-project/compass

public int getPositionIncrementGap(String fieldName) {
  return analyzer.getPositionIncrementGap(fieldName);
}

Code example source: org.elasticsearch/elasticsearch

private void analyze(TokenStream stream, Analyzer analyzer, String field, Set<String> includeAttributes) {
  try {
    stream.reset();
    CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
    PositionIncrementAttribute posIncr = stream.addAttribute(PositionIncrementAttribute.class);
    OffsetAttribute offset = stream.addAttribute(OffsetAttribute.class);
    TypeAttribute type = stream.addAttribute(TypeAttribute.class);
    PositionLengthAttribute posLen = stream.addAttribute(PositionLengthAttribute.class);
    while (stream.incrementToken()) {
      int increment = posIncr.getPositionIncrement();
      if (increment > 0) {
        lastPosition = lastPosition + increment;
      }
      tokens.add(new AnalyzeResponse.AnalyzeToken(term.toString(), lastPosition, lastOffset + offset.startOffset(),
        lastOffset + offset.endOffset(), posLen.getPositionLength(), type.type(),
        extractExtendedAttributes(stream, includeAttributes)));
    }
    stream.end();
    lastOffset += offset.endOffset();
    lastPosition += posIncr.getPositionIncrement();
    lastPosition += analyzer.getPositionIncrementGap(field);
    lastOffset += analyzer.getOffsetGap(field);
  } catch (IOException e) {
    throw new ElasticsearchException("failed to analyze", e);
  } finally {
    IOUtils.closeWhileHandlingException(stream);
  }
}
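
Note the order at the end of the method: the gaps are added only after stream.end(), so lastPosition and lastOffset carry over to the next call for the same field, and the next value's first token is positioned after the configured gaps rather than adjacent to the previous value's last token.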

Code example source: org.apache.lucene/lucene-core-jfrog

/** Return the positionIncrementGap from the analyzer assigned to fieldName */
public int getPositionIncrementGap(String fieldName) {
 Analyzer analyzer = (Analyzer) analyzerMap.get(fieldName);
 if (analyzer == null)
  analyzer = defaultAnalyzer;
 return analyzer.getPositionIncrementGap(fieldName);
}

Code example source: org.apache.lucene/com.springsource.org.apache.lucene

/** Return the positionIncrementGap from the analyzer assigned to fieldName */
public int getPositionIncrementGap(String fieldName) {
 Analyzer analyzer = (Analyzer) analyzerMap.get(fieldName);
 if (analyzer == null)
  analyzer = defaultAnalyzer;
 return analyzer.getPositionIncrementGap(fieldName);
}

Code example source: org.infinispan/infinispan-embedded-query

@Override
public int getPositionIncrementGap(String fieldName) {
 // use default from Analyzer base class if null
 return (posIncGap == null) ? super.getPositionIncrementGap(fieldName) : posIncGap.intValue();
}

Code example source: org.dspace.dependencies.solr/dspace-solr-core

@Override
public int getPositionIncrementGap(String fieldName) {
 return getAnalyzer(fieldName).getPositionIncrementGap(fieldName);
}

Code example source: org.apache.servicemix.bundles/org.apache.servicemix.bundles.lucene

@Override
public int getPositionIncrementGap(String fieldName) {
 return getWrappedAnalyzer(fieldName).getPositionIncrementGap(fieldName);
}

Code example source: org.infinispan/infinispan-embedded-query

@Override
public int getPositionIncrementGap(String fieldName) {
 return getWrappedAnalyzer(fieldName).getPositionIncrementGap(fieldName);
}

Code example source: gncloud/fastcatsearch

@Override
public final int getPositionIncrementGap(String fieldName) {
 return getWrappedAnalyzer(fieldName).getPositionIncrementGap(fieldName);
}

Code example source: org.apache.lucene/lucene-memory

/**
 * Convenience method; Tokenizes the given field text and adds the resulting
 * terms to the index; Equivalent to adding an indexed non-keyword Lucene
 * {@link org.apache.lucene.document.Field} that is tokenized, not stored,
 * termVectorStored with positions (or termVectorStored with positions and offsets).
 * 
 * @param fieldName
 *            a name to be associated with the text
 * @param text
 *            the text to tokenize and index.
 * @param analyzer
 *            the analyzer to use for tokenization
 */
public void addField(String fieldName, String text, Analyzer analyzer) {
 if (fieldName == null)
  throw new IllegalArgumentException("fieldName must not be null");
 if (text == null)
  throw new IllegalArgumentException("text must not be null");
 if (analyzer == null)
  throw new IllegalArgumentException("analyzer must not be null");
 
 TokenStream stream = analyzer.tokenStream(fieldName, text);
 storeTerms(getInfo(fieldName, defaultFieldType), stream,
   analyzer.getPositionIncrementGap(fieldName), analyzer.getOffsetGap(fieldName));
}
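
A hypothetical usage sketch of this convenience method (field name, text, and query term are made up for illustration): index a single document in memory and score a query against it.

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.search.TermQuery;

MemoryIndex index = new MemoryIndex();
Analyzer analyzer = new StandardAnalyzer();
// addField passes the analyzer's position increment gap and offset gap
// through to storeTerms, as shown above.
index.addField("content", "quick brown fox", analyzer);
float score = index.search(new TermQuery(new Term("content", "fox")));
// a score greater than 0 means the in-memory document matched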

Code example source: johtani/elasticsearch-extended-analyze

private void analyze(TokenStream stream, Analyzer analyzer, String field, Set<String> includeAttributes, boolean shortAttrName) {
  try {
    stream.reset();
    CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
    PositionIncrementAttribute posIncr = stream.addAttribute(PositionIncrementAttribute.class);
    OffsetAttribute offset = stream.addAttribute(OffsetAttribute.class);
    TypeAttribute type = stream.addAttribute(TypeAttribute.class);
    while (stream.incrementToken()) {
      int increment = posIncr.getPositionIncrement();
      if (increment > 0) {
        lastPosition = lastPosition + increment;
      }
      tokens.add(new ExtendedAnalyzeResponse.ExtendedAnalyzeToken(term.toString(), lastPosition, lastOffset + offset.startOffset(),
        lastOffset + offset.endOffset(), type.type(), extractExtendedAttributes(stream, includeAttributes, shortAttrName)));
    }
    stream.end();
    lastOffset += offset.endOffset();
    lastPosition += posIncr.getPositionIncrement();
    lastPosition += analyzer.getPositionIncrementGap(field);
    lastOffset += analyzer.getOffsetGap(field);
  } catch (IOException e) {
    throw new ElasticsearchException("failed to analyze", e);
  } finally {
    IOUtils.closeWhileHandlingException(stream);
  }
}

Code example source: com.strapdata.elasticsearch/elasticsearch

private void analyze(TokenStream stream, Analyzer analyzer, String field, Set<String> includeAttributes) {
  try {
    stream.reset();
    CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
    PositionIncrementAttribute posIncr = stream.addAttribute(PositionIncrementAttribute.class);
    OffsetAttribute offset = stream.addAttribute(OffsetAttribute.class);
    TypeAttribute type = stream.addAttribute(TypeAttribute.class);
    PositionLengthAttribute posLen = stream.addAttribute(PositionLengthAttribute.class);
    while (stream.incrementToken()) {
      int increment = posIncr.getPositionIncrement();
      if (increment > 0) {
        lastPosition = lastPosition + increment;
      }
      tokens.add(new AnalyzeResponse.AnalyzeToken(term.toString(), lastPosition, lastOffset + offset.startOffset(),
        lastOffset + offset.endOffset(), posLen.getPositionLength(), type.type(), extractExtendedAttributes(stream, includeAttributes)));
    }
    stream.end();
    lastOffset += offset.endOffset();
    lastPosition += posIncr.getPositionIncrement();
    lastPosition += analyzer.getPositionIncrementGap(field);
    lastOffset += analyzer.getOffsetGap(field);
  } catch (IOException e) {
    throw new ElasticsearchException("failed to analyze", e);
  } finally {
    IOUtils.closeWhileHandlingException(stream);
  }
}

Code example source: apache/servicemix-bundles

private void analyze(TokenStream stream, Analyzer analyzer, String field, Set<String> includeAttributes) {
  try {
    stream.reset();
    CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
    PositionIncrementAttribute posIncr = stream.addAttribute(PositionIncrementAttribute.class);
    OffsetAttribute offset = stream.addAttribute(OffsetAttribute.class);
    TypeAttribute type = stream.addAttribute(TypeAttribute.class);
    PositionLengthAttribute posLen = stream.addAttribute(PositionLengthAttribute.class);
    while (stream.incrementToken()) {
      int increment = posIncr.getPositionIncrement();
      if (increment > 0) {
        lastPosition = lastPosition + increment;
      }
      tokens.add(new AnalyzeResponse.AnalyzeToken(term.toString(), lastPosition, lastOffset + offset.startOffset(),
        lastOffset + offset.endOffset(), posLen.getPositionLength(), type.type(), extractExtendedAttributes(stream, includeAttributes)));
    }
    stream.end();
    lastOffset += offset.endOffset();
    lastPosition += posIncr.getPositionIncrement();
    lastPosition += analyzer.getPositionIncrementGap(field);
    lastOffset += analyzer.getOffsetGap(field);
  } catch (IOException e) {
    throw new ElasticsearchException("failed to analyze", e);
  } finally {
    IOUtils.closeWhileHandlingException(stream);
  }
}
