

Java Token.setOffset Method Code Examples

This article collects typical usage examples of the Java method org.apache.lucene.analysis.Token.setOffset. If you are wondering what Token.setOffset does, how to call it, or what it looks like in real code, the curated method examples below may help. You can also explore further usage examples of the enclosing class, org.apache.lucene.analysis.Token.


The following presents 10 code examples of the Token.setOffset method, sorted by popularity by default.
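Before the individual examples, here is a minimal, self-contained sketch of the basic contract of Token.setOffset: it records where the token's text started (inclusive) and ended (exclusive) in the original input. The class name TokenOffsetDemo and the literal term and offsets are illustrative assumptions, not taken from any of the projects below; the sketch assumes a Lucene 4.x-era Token API.

import org.apache.lucene.analysis.Token;

public class TokenOffsetDemo {
  public static void main(String[] args) {
    // Hypothetical token: the term "lucene" found at characters 10..16 of the source text.
    Token token = new Token();
    token.setEmpty().append("lucene");   // set the term text
    token.setOffset(10, 16);             // start offset (inclusive), end offset (exclusive)
    token.setPositionIncrement(1);

    System.out.println(token.startOffset()); // 10
    System.out.println(token.endOffset());   // 16
  }
}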

Example 1: analyze

import org.apache.lucene.analysis.Token; // import the class/package this method depends on
protected void analyze(Collection<Token> result, String text, int offset, int flagsAttValue) throws IOException {
  TokenStream stream = analyzer.tokenStream("", text);
  // TODO: support custom attributes
  CharTermAttribute termAtt = stream.addAttribute(CharTermAttribute.class);
  TypeAttribute typeAtt = stream.addAttribute(TypeAttribute.class);
  PayloadAttribute payloadAtt = stream.addAttribute(PayloadAttribute.class);
  PositionIncrementAttribute posIncAtt = stream.addAttribute(PositionIncrementAttribute.class);
  OffsetAttribute offsetAtt = stream.addAttribute(OffsetAttribute.class);
  stream.reset();
  while (stream.incrementToken()) {      
    Token token = new Token();
    token.copyBuffer(termAtt.buffer(), 0, termAtt.length());
    token.setOffset(offset + offsetAtt.startOffset(), 
                    offset + offsetAtt.endOffset());
    token.setFlags(flagsAttValue); //overwriting any flags already set...
    token.setType(typeAtt.type());
    token.setPayload(payloadAtt.getPayload());
    token.setPositionIncrement(posIncAtt.getPositionIncrement());
    result.add(token);
  }
  stream.end();
  stream.close();
}
 
Developer ID: europeana, Project: search, Lines: 24, Source: SpellingQueryConverter.java

Example 2: getNextPrefixInputToken

import org.apache.lucene.analysis.Token; // import the class/package this method depends on
private Token getNextPrefixInputToken(Token token) throws IOException {
  if (!prefix.incrementToken()) return null;
  token.copyBuffer(p_termAtt.buffer(), 0, p_termAtt.length());
  token.setPositionIncrement(p_posIncrAtt.getPositionIncrement());
  token.setFlags(p_flagsAtt.getFlags());
  token.setOffset(p_offsetAtt.startOffset(), p_offsetAtt.endOffset());
  token.setType(p_typeAtt.type());
  token.setPayload(p_payloadAtt.getPayload());
  return token;
}
 
Developer ID: lamsfoundation, Project: lams, Lines: 11, Source: PrefixAwareTokenFilter.java

Example 3: getNextSuffixInputToken

import org.apache.lucene.analysis.Token; // import the class/package this method depends on
private Token getNextSuffixInputToken(Token token) throws IOException {
  if (!suffix.incrementToken()) return null;
  token.copyBuffer(termAtt.buffer(), 0, termAtt.length());
  token.setPositionIncrement(posIncrAtt.getPositionIncrement());
  token.setFlags(flagsAtt.getFlags());
  token.setOffset(offsetAtt.startOffset(), offsetAtt.endOffset());
  token.setType(typeAtt.type());
  token.setPayload(payloadAtt.getPayload());
  return token;
}
 
Developer ID: lamsfoundation, Project: lams, Lines: 11, Source: PrefixAwareTokenFilter.java

Example 4: addToken

import org.apache.lucene.analysis.Token; // import the class/package this method depends on
void addToken(float score) {
  if (numTokens < MAX_NUM_TOKENS_PER_GROUP) {
    int termStartOffset = offsetAtt.startOffset();
    int termEndOffset = offsetAtt.endOffset();
    if (numTokens == 0) {
      startOffset = matchStartOffset = termStartOffset;
      endOffset = matchEndOffset = termEndOffset;
      tot += score;
    } else {
      startOffset = Math.min(startOffset, termStartOffset);
      endOffset = Math.max(endOffset, termEndOffset);
      if (score > 0) {
        if (tot == 0) {
          matchStartOffset = offsetAtt.startOffset();
          matchEndOffset = offsetAtt.endOffset();
        } else {
          matchStartOffset = Math.min(matchStartOffset, termStartOffset);
          matchEndOffset = Math.max(matchEndOffset, termEndOffset);
        }
        tot += score;
      }
    }
    Token token = new Token();
    token.setOffset(termStartOffset, termEndOffset);
    token.setEmpty().append(termAtt);
    tokens[numTokens] = token;
    scores[numTokens] = score;
    numTokens++;
  }
}
 
Developer ID: europeana, Project: search, Lines: 31, Source: TokenGroup.java

Example 5: createToken

import org.apache.lucene.analysis.Token; // import the class/package this method depends on
private static Token createToken(String term, int start, int offset, int positionIncrement) {
  Token token = new Token();
  token.setOffset(start, offset);
  token.copyBuffer(term.toCharArray(), 0, term.length());
  token.setPositionIncrement(positionIncrement);
  return token;
}
 
Developer ID: europeana, Project: search, Lines: 10, Source: ShingleFilterTest.java

Example 6: makeToken

import org.apache.lucene.analysis.Token; // import the class/package this method depends on
private Token makeToken(String text, int posIncr, int startOffset, int endOffset) {
  final Token t = new Token();
  t.append(text);
  t.setPositionIncrement(posIncr);
  t.setOffset(startOffset, endOffset);
  return t;
}
 
Developer ID: europeana, Project: search, Lines: 8, Source: TestPostingsOffsets.java

Example 7: convert

import org.apache.lucene.analysis.Token; // import the class/package this method depends on
@Override
public Collection<Token> convert(String origQuery) {
  Collection<Token> result = new HashSet<>();
  WhitespaceAnalyzer analyzer = new WhitespaceAnalyzer();
  
  TokenStream ts = null;
  try {
    ts = analyzer.tokenStream("", origQuery);
    // TODO: support custom attributes
    CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);
    OffsetAttribute offsetAtt = ts.addAttribute(OffsetAttribute.class);
    TypeAttribute typeAtt = ts.addAttribute(TypeAttribute.class);
    FlagsAttribute flagsAtt = ts.addAttribute(FlagsAttribute.class);
    PayloadAttribute payloadAtt = ts.addAttribute(PayloadAttribute.class);
    PositionIncrementAttribute posIncAtt = ts.addAttribute(PositionIncrementAttribute.class);

    ts.reset();

    while (ts.incrementToken()) {
      Token tok = new Token();
      tok.copyBuffer(termAtt.buffer(), 0, termAtt.length());
      tok.setOffset(offsetAtt.startOffset(), offsetAtt.endOffset());
      tok.setFlags(flagsAtt.getFlags());
      tok.setPayload(payloadAtt.getPayload());
      tok.setPositionIncrement(posIncAtt.getPositionIncrement());
      tok.setType(typeAtt.type());
      result.add(tok);
    }
    ts.end();      
    return result;
  } catch (IOException e) {
    throw new RuntimeException(e);
  } finally {
    IOUtils.closeWhileHandlingException(ts);
  }
}
 
Developer ID: europeana, Project: search, Lines: 37, Source: SimpleQueryConverter.java

Example 8: updateInputToken

import org.apache.lucene.analysis.Token; // import the class/package this method depends on
public Token updateInputToken(Token inputToken, Token lastPrefixToken) {
  inputToken.setOffset(lastPrefixToken.endOffset() + inputToken.startOffset(), 
                       lastPrefixToken.endOffset() + inputToken.endOffset());
  return inputToken;
}
 
Developer ID: lamsfoundation, Project: lams, Lines: 6, Source: PrefixAndSuffixAwareTokenFilter.java

Example 9: updateSuffixToken

import org.apache.lucene.analysis.Token; // import the class/package this method depends on
public Token updateSuffixToken(Token suffixToken, Token lastInputToken) {
  suffixToken.setOffset(lastInputToken.endOffset() + suffixToken.startOffset(),
                        lastInputToken.endOffset() + suffixToken.endOffset());
  return suffixToken;
}
 
Developer ID: lamsfoundation, Project: lams, Lines: 6, Source: PrefixAndSuffixAwareTokenFilter.java

Example 10: updateSuffixToken

import org.apache.lucene.analysis.Token; // import the class/package this method depends on
/**
 * The default implementation adds last prefix token end offset to the suffix token start and end offsets.
 *
 * @param suffixToken a token from the suffix stream
 * @param lastPrefixToken the last token from the prefix stream
 * @return consumer token
 */
public Token updateSuffixToken(Token suffixToken, Token lastPrefixToken) {
  suffixToken.setOffset(lastPrefixToken.endOffset() + suffixToken.startOffset(),
                        lastPrefixToken.endOffset() + suffixToken.endOffset());
  return suffixToken;
}
 
Developer ID: lamsfoundation, Project: lams, Lines: 13, Source: PrefixAwareTokenFilter.java
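
To make the offset arithmetic in examples 8-10 concrete, the following small, self-contained sketch mirrors the updateSuffixToken logic shown above; the token texts, offsets, and the class name SuffixOffsetShiftDemo are illustrative assumptions. A suffix token's offsets are relative to the suffix stream's own text, so adding the last prefix token's end offset shifts them into the coordinate space of the concatenated prefix + suffix text.

import org.apache.lucene.analysis.Token;

public class SuffixOffsetShiftDemo {
  // Mirrors the updateSuffixToken logic: shift the suffix token's offsets
  // by the end offset of the last token seen in the prefix stream.
  static Token updateSuffixToken(Token suffixToken, Token lastPrefixToken) {
    suffixToken.setOffset(lastPrefixToken.endOffset() + suffixToken.startOffset(),
                          lastPrefixToken.endOffset() + suffixToken.endOffset());
    return suffixToken;
  }

  public static void main(String[] args) {
    // The prefix stream produced "hello", covering characters 0..5 of its text.
    Token lastPrefix = new Token();
    lastPrefix.setEmpty().append("hello");
    lastPrefix.setOffset(0, 5);

    // The suffix stream produced "world" at 0..5 of its *own* text.
    Token suffix = new Token();
    suffix.setEmpty().append("world");
    suffix.setOffset(0, 5);

    updateSuffixToken(suffix, lastPrefix);
    System.out.println(suffix.startOffset() + "-" + suffix.endOffset()); // prints 5-10
  }
}

After the shift, the suffix token reports offsets 5..10, i.e. its position in the combined text rather than in the suffix stream alone.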


Note: The org.apache.lucene.analysis.Token.setOffset method examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets are selected from open-source projects contributed by their respective authors, and copyright of the source code remains with those authors. Please consult each project's license before distributing or using the code; do not reproduce without permission.