This article collects code examples of the Java method org.apache.lucene.analysis.Analyzer.getOffsetGap() and shows how Analyzer.getOffsetGap() is used in practice. The examples were extracted from selected open-source projects found on GitHub, Stack Overflow, Maven, and similar platforms, so they serve as reasonably representative references. Details of Analyzer.getOffsetGap():
Package: org.apache.lucene.analysis
Class: Analyzer
Method: getOffsetGap
Javadoc: Just like #getPositionIncrementGap, except for Token offsets instead. By default this returns 1. This method is only called if the field produced at least one token for indexing.
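To make the effect of the offset gap concrete, here is a minimal, self-contained sketch (plain Java, not the real Lucene classes; `OffsetGapDemo` and its method are invented for illustration) of how the gap separates the character offsets of consecutive values of a multi-valued field:

```java
// Minimal model of how Lucene accumulates offsets across the values of a
// multi-valued field. OffsetGapDemo and valueStartOffsets are invented for
// illustration; they are not part of the Lucene API.
public class OffsetGapDemo {

    // Returns the absolute start offset of each value when the values are
    // indexed one after another with `offsetGap` inserted between them.
    public static int[] valueStartOffsets(String[] values, int offsetGap) {
        int[] starts = new int[values.length];
        int lastOffset = 0;
        for (int i = 0; i < values.length; i++) {
            starts[i] = lastOffset;
            // After each value is consumed, the running offset advances by the
            // value's end offset plus the analyzer's offset gap.
            lastOffset += values[i].length() + offsetGap;
        }
        return starts;
    }

    public static void main(String[] args) {
        // With the default gap of 1, the second value "world" starts at 5 + 1 = 6.
        int[] starts = valueStartOffsets(new String[] {"hello", "world"}, 1);
        System.out.println(starts[0] + " " + starts[1]); // prints "0 6"
    }
}
```

This mirrors the `lastOffset += offset.endOffset(); lastOffset += analyzer.getOffsetGap(field);` accumulation that appears in several of the examples below.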
Code example source: org.apache.lucene/lucene-core

@Override
public int getOffsetGap(String fieldName) {
  return getWrappedAnalyzer(fieldName).getOffsetGap(fieldName);
}
Code example source: org.elasticsearch/elasticsearch

@Override
public int getOffsetGap(String field) {
  if (offsetGap < 0) {
    return super.getOffsetGap(field);
  }
  return this.offsetGap;
}
Code example source: org.apache.lucene/lucene-analyzers-common

@Override
public int getOffsetGap(String fieldName) {
  // use default from Analyzer base class if null
  return (offsetGap == null) ? super.getOffsetGap(fieldName) : offsetGap.intValue();
}
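The nullable-override pattern above (a configured value that falls back to the base class default when unset) can be sketched without any Lucene dependency as follows; all class and method names here are invented for illustration:

```java
// Self-contained sketch of the nullable-override pattern: a subclass holds an
// optional configured gap and falls back to the inherited default when none
// was configured. GapConfigDemo and its nested classes are illustrative only.
public class GapConfigDemo {

    static class BaseAnalyzer {
        // Mirrors the Analyzer.getOffsetGap() default, which returns 1.
        public int getOffsetGap(String fieldName) {
            return 1;
        }
    }

    static class ConfigurableAnalyzer extends BaseAnalyzer {
        private final Integer offsetGap; // null means "not configured"

        ConfigurableAnalyzer(Integer offsetGap) {
            this.offsetGap = offsetGap;
        }

        @Override
        public int getOffsetGap(String fieldName) {
            // Use the base class default when no gap was configured.
            return (offsetGap == null) ? super.getOffsetGap(fieldName) : offsetGap;
        }
    }

    public static void main(String[] args) {
        System.out.println(new ConfigurableAnalyzer(null).getOffsetGap("body")); // prints 1
        System.out.println(new ConfigurableAnalyzer(100).getOffsetGap("body"));  // prints 100
    }
}
```

Using `Integer` rather than `int` lets the wrapper distinguish "not configured" (null) from a legitimate configured value of 0, which is why several of the examples in this article take that shape.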
Code example source: org.apache.lucene/lucene-core

invertState.offset += docState.analyzer.getOffsetGap(fieldInfo.name);
Code example sources: com.strapdata.elasticsearch/elasticsearch and apache/servicemix-bundles (identical to the org.elasticsearch/elasticsearch override above; both are repackaged distributions).
Code example source: org.apache.lucene/lucene-benchmark

@Override
public int getOffsetGap(String fieldName) {
  return null == offsetGap ? super.getOffsetGap(fieldName) : offsetGap;
}
Code example source: org.apache.servicemix.bundles/org.apache.servicemix.bundles.elasticsearch (identical to the org.elasticsearch/elasticsearch override above).
Code example source: org.elasticsearch/elasticsearch

private void analyze(TokenStream stream, Analyzer analyzer, String field, Set<String> includeAttributes) {
  try {
    stream.reset();
    CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
    PositionIncrementAttribute posIncr = stream.addAttribute(PositionIncrementAttribute.class);
    OffsetAttribute offset = stream.addAttribute(OffsetAttribute.class);
    TypeAttribute type = stream.addAttribute(TypeAttribute.class);
    PositionLengthAttribute posLen = stream.addAttribute(PositionLengthAttribute.class);
    while (stream.incrementToken()) {
      int increment = posIncr.getPositionIncrement();
      if (increment > 0) {
        lastPosition = lastPosition + increment;
      }
      tokens.add(new AnalyzeResponse.AnalyzeToken(term.toString(), lastPosition, lastOffset + offset.startOffset(),
          lastOffset + offset.endOffset(), posLen.getPositionLength(), type.type(),
          extractExtendedAttributes(stream, includeAttributes)));
    }
    stream.end();
    lastOffset += offset.endOffset();
    lastPosition += posIncr.getPositionIncrement();
    lastPosition += analyzer.getPositionIncrementGap(field);
    lastOffset += analyzer.getOffsetGap(field);
  } catch (IOException e) {
    throw new ElasticsearchException("failed to analyze", e);
  } finally {
    IOUtils.closeWhileHandlingException(stream);
  }
}
Code example source: org.infinispan/infinispan-embedded-query (identical to the org.apache.lucene/lucene-analyzers-common override above).
Code example sources: org.apache.servicemix.bundles/org.apache.servicemix.bundles.lucene, gncloud/fastcatsearch, and org.infinispan/infinispan-embedded-query — the same delegating override as the org.apache.lucene/lucene-core example above (fastcatsearch additionally declares the method final).
Code example source: org.apache.lucene/lucene-memory

/**
 * Convenience method; Tokenizes the given field text and adds the resulting
 * terms to the index; Equivalent to adding an indexed non-keyword Lucene
 * {@link org.apache.lucene.document.Field} that is tokenized, not stored,
 * termVectorStored with positions (or termVectorStored with positions and offsets).
 *
 * @param fieldName a name to be associated with the text
 * @param text the text to tokenize and index
 * @param analyzer the analyzer to use for tokenization
 */
public void addField(String fieldName, String text, Analyzer analyzer) {
  if (fieldName == null)
    throw new IllegalArgumentException("fieldName must not be null");
  if (text == null)
    throw new IllegalArgumentException("text must not be null");
  if (analyzer == null)
    throw new IllegalArgumentException("analyzer must not be null");
  TokenStream stream = analyzer.tokenStream(fieldName, text);
  storeTerms(getInfo(fieldName, defaultFieldType), stream,
      analyzer.getPositionIncrementGap(fieldName), analyzer.getOffsetGap(fieldName));
}
Code example source: johtani/elasticsearch-extended-analyze

private void analyze(TokenStream stream, Analyzer analyzer, String field, Set<String> includeAttributes, boolean shortAttrName) {
  try {
    stream.reset();
    CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
    PositionIncrementAttribute posIncr = stream.addAttribute(PositionIncrementAttribute.class);
    OffsetAttribute offset = stream.addAttribute(OffsetAttribute.class);
    TypeAttribute type = stream.addAttribute(TypeAttribute.class);
    while (stream.incrementToken()) {
      int increment = posIncr.getPositionIncrement();
      if (increment > 0) {
        lastPosition = lastPosition + increment;
      }
      tokens.add(new ExtendedAnalyzeResponse.ExtendedAnalyzeToken(term.toString(), lastPosition, lastOffset + offset.startOffset(),
          lastOffset + offset.endOffset(), type.type(), extractExtendedAttributes(stream, includeAttributes, shortAttrName)));
    }
    stream.end();
    lastOffset += offset.endOffset();
    lastPosition += posIncr.getPositionIncrement();
    lastPosition += analyzer.getPositionIncrementGap(field);
    lastOffset += analyzer.getOffsetGap(field);
  } catch (IOException e) {
    throw new ElasticsearchException("failed to analyze", e);
  } finally {
    IOUtils.closeWhileHandlingException(stream);
  }
}
Code example sources: com.strapdata.elasticsearch/elasticsearch and apache/servicemix-bundles — the same analyze(...) method as the org.elasticsearch/elasticsearch example above.