

Java IndexReader.numDocs Method Code Examples

This article collects typical usage examples of the Java method org.apache.lucene.index.IndexReader.numDocs. If you have been wondering what IndexReader.numDocs does, how to call it, or what working code using it looks like, the curated examples below may help. You can also explore further usage examples of the containing class, org.apache.lucene.index.IndexReader.


Below are 12 code examples of the IndexReader.numDocs method, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
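
Before diving into the examples, a quick orientation may help: numDocs() returns the number of live (non-deleted) documents, whereas maxDoc() counts all document slots, including deletions. The following minimal sketch illustrates the difference; the index path is a hypothetical placeholder, and the DirectoryReader/FSDirectory calls assume Lucene 5 or later.

import java.nio.file.Paths;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.FSDirectory;

public class NumDocsDemo {
    public static void main(String[] args) throws Exception {
        // "/tmp/demo-index" is a hypothetical path to an existing Lucene index.
        try (IndexReader reader = DirectoryReader.open(FSDirectory.open(Paths.get("/tmp/demo-index")))) {
            System.out.println("live docs: " + reader.numDocs());        // non-deleted documents only
            System.out.println("doc slots: " + reader.maxDoc());         // includes deleted documents
            System.out.println("deleted:   " + reader.numDeletedDocs()); // maxDoc() - numDocs()
        }
    }
}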

Example 1: rewrite

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
@Override
public Query rewrite(IndexReader reader) throws IOException {
    if (getBoost() != 1.0F) {
        return super.rewrite(reader);
    }
    if (reader instanceof DirectoryReader) {
        String joinField = ParentFieldMapper.joinField(parentType);
        IndexSearcher indexSearcher = new IndexSearcher(reader);
        indexSearcher.setQueryCache(null);
        indexSearcher.setSimilarity(similarity);
        IndexParentChildFieldData indexParentChildFieldData = parentChildIndexFieldData.loadGlobal((DirectoryReader) reader);
        MultiDocValues.OrdinalMap ordinalMap = ParentChildIndexFieldData.getOrdinalMap(indexParentChildFieldData, parentType);
        return JoinUtil.createJoinQuery(joinField, innerQuery, toQuery, indexSearcher, scoreMode, ordinalMap, minChildren, maxChildren);
    } else {
        if (reader.leaves().isEmpty() && reader.numDocs() == 0) {
            // asserting reader passes down a MultiReader during rewrite which makes this
            // blow up since for this query to work we have to have a DirectoryReader otherwise
            // we can't load global ordinals - for this to work we simply check if the reader has no leaves
            // and rewrite to match nothing
            return new MatchNoDocsQuery();
        }
        throw new IllegalStateException("can't load global ordinals for reader of type: " + reader.getClass() + " must be a DirectoryReader");
    }
}
 
Developer ID: baidu, Project: Elasticsearch, Lines: 25, Source: HasChildQueryParser.java

Example 2: buildEmptyAggregation

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
@Override
public SignificantStringTerms buildEmptyAggregation() {
    // We need to account for the significance of a miss in our global stats - provide corpus size as context
    ContextIndexSearcher searcher = context.searcher();
    IndexReader topReader = searcher.getIndexReader();
    int supersetSize = topReader.numDocs();
    return new SignificantStringTerms(name, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getMinDocCount(),
            pipelineAggregators(), metaData(), format, 0, supersetSize, significanceHeuristic, emptyList());
}
 
Developer ID: justor, Project: elasticsearch_my, Lines: 10, Source: SignificantStringTermsAggregator.java

Example 3: buildEmptyAggregation

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
@Override
public SignificantLongTerms buildEmptyAggregation() {
    // We need to account for the significance of a miss in our global stats - provide corpus size as context
    ContextIndexSearcher searcher = context.searcher();
    IndexReader topReader = searcher.getIndexReader();
    int supersetSize = topReader.numDocs();
    return new SignificantLongTerms(name, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getMinDocCount(),
            pipelineAggregators(), metaData(), format, 0, supersetSize, significanceHeuristic, emptyList());
}
 
Developer ID: justor, Project: elasticsearch_my, Lines: 10, Source: SignificantLongTermsAggregator.java

Example 4: rewrite

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
@Override
public Query rewrite(IndexReader reader) throws IOException {
    Query rewritten = super.rewrite(reader);
    if (rewritten != this) {
        return rewritten;
    }
    if (reader instanceof DirectoryReader) {
        String joinField = ParentFieldMapper.joinField(parentType);
        IndexSearcher indexSearcher = new IndexSearcher(reader);
        indexSearcher.setQueryCache(null);
        indexSearcher.setSimilarity(similarity);
        IndexParentChildFieldData indexParentChildFieldData = parentChildIndexFieldData.loadGlobal((DirectoryReader) reader);
        MultiDocValues.OrdinalMap ordinalMap = ParentChildIndexFieldData.getOrdinalMap(indexParentChildFieldData, parentType);
        return JoinUtil.createJoinQuery(joinField, innerQuery, toQuery, indexSearcher, scoreMode,
                ordinalMap, minChildren, maxChildren);
    } else {
        if (reader.leaves().isEmpty() && reader.numDocs() == 0) {
            // asserting reader passes down a MultiReader during rewrite which makes this
            // blow up since for this query to work we have to have a DirectoryReader otherwise
            // we can't load global ordinals - for this to work we simply check if the reader has no leaves
            // and rewrite to match nothing
            return new MatchNoDocsQuery();
        }
        throw new IllegalStateException("can't load global ordinals for reader of type: " +
                reader.getClass() + " must be a DirectoryReader");
    }
}
 
Developer ID: justor, Project: elasticsearch_my, Lines: 28, Source: HasChildQueryBuilder.java

Example 5: getNumberOfDocuments

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
public int getNumberOfDocuments() throws IOException
{
    IndexReader reader = getMainIndexReferenceCountingReadOnlyIndexReader();
    try
    {
        return reader.numDocs();
    }
    finally
    {
        reader.close();
    }
}
 
Developer ID: Alfresco, Project: alfresco-repository, Lines: 13, Source: IndexInfo.java
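
A note on the pattern in Example 5: IndexReader implements Closeable, so on Java 7+ the try/finally block can be condensed into try-with-resources. A hedged equivalent sketch, reusing the accessor method from the example above:

public int getNumberOfDocuments() throws IOException
{
    // try-with-resources closes the reader automatically, even when an exception is thrown
    try (IndexReader reader = getMainIndexReferenceCountingReadOnlyIndexReader())
    {
        return reader.numDocs();
    }
}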

Example 6: getImagesOf

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
private List<Image> getImagesOf(Collection collection) {

    List<Image> results = new ArrayList<>();

    try {
        Path path = indexPath(collection);
        if (!Files.exists(path)) return results;

        IndexReader ir = DirectoryReader.open(FSDirectory.open(path));

        int num = ir.numDocs();
        for (int i = 0; i < num; i++) {
            Document d = ir.document(i);
            String imagePath = d.getField(DocumentBuilder.FIELD_NAME_IDENTIFIER).stringValue();
            String thumbnailPath = collectionUtils.getThumbnailPathFromImagePath(collection, imagePath);
            Image image = new Image(imagePath, thumbnailPath);
            image.setDocId(i);
            results.add(image);
        }
        ir.close();

    } catch (IOException e) {
        throw new LireLabException("Could not read index", e);
    }

    return results;
}
 
Developer ID: AntonioGabrielAndrade, Project: LIRE-Lab, Lines: 29, Source: CollectionAssembler.java
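
One caveat about Example 6: looping document IDs from 0 to numDocs() is only safe when the index contains no deletions, because numDocs() counts live documents while document IDs range up to maxDoc(). A hedged sketch of a deletion-safe loop (MultiBits.getLiveDocs is the Lucene 8+ helper; older versions expose the same check via MultiFields.getLiveDocs):

import java.io.IOException;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiBits;
import org.apache.lucene.util.Bits;

class LiveDocIteration {
    // Visits only live (non-deleted) documents; safe even after deletions.
    static void visitLiveDocs(IndexReader reader) throws IOException {
        Bits liveDocs = MultiBits.getLiveDocs(reader); // null when the index has no deletions
        for (int i = 0; i < reader.maxDoc(); i++) {
            if (liveDocs != null && !liveDocs.get(i)) {
                continue; // skip a deleted document slot
            }
            // it is now safe to load the stored document for id i, e.g. reader.document(i)
        }
    }
}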

Example 7: buildEmptyAggregation

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
@Override
public SignificantStringTerms buildEmptyAggregation() {
    // We need to account for the significance of a miss in our global stats - provide corpus size as context
    ContextIndexSearcher searcher = context.searchContext().searcher();
    IndexReader topReader = searcher.getIndexReader();
    int supersetSize = topReader.numDocs();
    return new SignificantStringTerms(0, supersetSize, name, bucketCountThresholds.getRequiredSize(),
            bucketCountThresholds.getMinDocCount(), termsAggFactory.getSignificanceHeuristic(),
            Collections.<InternalSignificantTerms.Bucket> emptyList(), pipelineAggregators(), metaData());
}
 
Developer ID: baidu, Project: Elasticsearch, Lines: 11, Source: SignificantStringTermsAggregator.java

Example 8: buildEmptyAggregation

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
@Override
public SignificantLongTerms buildEmptyAggregation() {
    // We need to account for the significance of a miss in our global stats - provide corpus size as context
    ContextIndexSearcher searcher = context.searchContext().searcher();
    IndexReader topReader = searcher.getIndexReader();
    int supersetSize = topReader.numDocs();
    return new SignificantLongTerms(0, supersetSize, name, formatter, bucketCountThresholds.getRequiredSize(),
            bucketCountThresholds.getMinDocCount(), termsAggFactory.getSignificanceHeuristic(),
            Collections.<InternalSignificantTerms.Bucket> emptyList(), pipelineAggregators(), metaData());
}
 
Developer ID: baidu, Project: Elasticsearch, Lines: 11, Source: SignificantLongTermsAggregator.java

Example 9: performSearch

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
private TopDocs performSearch(IndexSearcher searcher, Query query, IndexReader reader, Integer maxResultsCount,
                              Sort sort) throws IOException {
    final TopDocs docs;
    int resultsCount = maxResultsCount == null ? reader.numDocs() : maxResultsCount;
    if (sort == null) {
        docs = searcher.search(query, resultsCount);
    } else {
        docs = searcher.search(query, resultsCount, sort);
    }

    return docs;
}
 
Developer ID: react-dev26, Project: NGB-master, Lines: 13, Source: FeatureIndexDao.java

Example 10: getTopDocs

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
/**
 * Executes the given {@link Query} but returns lucene's {@link TopDocs}.
 * @param query the query to execute
 * @param options the additional options to execute the query.
 * @return {@link TopDocs} or null if an error occurred.
 */
public TopDocs getTopDocs(final Query query, final SearchOptions options) {
	TopDocs topDocs = null;

	final Index index = IndexManager.getInstance().getIndex();
	final IndexReader reader = index.getIndexReader();
	final IndexSearcher searcher = new IndexSearcher(reader);

	// stopwatch to check performance of search
	final StopWatch stopWatch = new StopWatch();

	try {
		int maxResults = options.getMaxResults();
		if(maxResults <= 0) {
			maxResults = reader.numDocs();
		}

		stopWatch.start();
		if(options.getSort() == null) {
			if(options.getAfterScoreDoc() == null) {
				topDocs = searcher.search(query, maxResults);
			}
			else {
				topDocs = searcher.searchAfter(options.getAfterScoreDoc(), query, maxResults);
			}
		}
		else {
			if(options.getAfterScoreDoc() == null) {
				topDocs = searcher.search(query, maxResults, options.getSort());
			}
			else {
				topDocs = searcher.searchAfter(options.getAfterScoreDoc(), query, maxResults, options.getSort());
			}
		}

		stopWatch.stop();
		LOGGER.info("Query execution used {}ms {}.", stopWatch.getTime(), query);
	}
	catch (final IOException e) {
		LOGGER.error("Can't execute search because of an IOException.", e);
	}

	return topDocs;
}
 
Developer ID: XMBomb, Project: InComb, Lines: 50, Source: IndexSearch.java

Example 11: explain

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
public Explanation explain(IndexReader reader, int doc)
  throws IOException {

  ComplexExplanation result = new ComplexExplanation();
  result.setDescription("weight("+getQuery()+" in "+doc+"), product of:");

  Explanation idfExpl =
    new Explanation(idf, "idf(docFreq=" + reader.docFreq(term) +
        ", numDocs=" + reader.numDocs() + ")");

  // explain query weight
  Explanation queryExpl = new Explanation();
  queryExpl.setDescription("queryWeight(" + getQuery() + "), product of:");

  Explanation boostExpl = new Explanation(getBoost(), "boost");
  if (getBoost() != 1.0f)
    queryExpl.addDetail(boostExpl);
  queryExpl.addDetail(idfExpl);

  Explanation queryNormExpl = new Explanation(queryNorm,"queryNorm");
  queryExpl.addDetail(queryNormExpl);

  queryExpl.setValue(boostExpl.getValue() *
                     idfExpl.getValue() *
                     queryNormExpl.getValue());

  result.addDetail(queryExpl);

  // explain field weight
  String field = term.field();
  ComplexExplanation fieldExpl = new ComplexExplanation();
  fieldExpl.setDescription("fieldWeight("+term+" in "+doc+
                           "), product of:");

  Explanation tfExpl = scorer(reader).explain(doc);
  fieldExpl.addDetail(tfExpl);
  fieldExpl.addDetail(idfExpl);

  Explanation fieldNormExpl = new Explanation();
  byte[] fieldNorms = reader.norms(field);
  float fieldNorm =
    fieldNorms!=null ? Similarity.decodeNorm(fieldNorms[doc]) : 0.0f;
  fieldNormExpl.setValue(fieldNorm);
  fieldNormExpl.setDescription("fieldNorm(field="+field+", doc="+doc+")");
  fieldExpl.addDetail(fieldNormExpl);
  
  fieldExpl.setMatch(Boolean.valueOf(tfExpl.isMatch()));
  fieldExpl.setValue(tfExpl.getValue() *
                     idfExpl.getValue() *
                     fieldNormExpl.getValue());

  result.addDetail(fieldExpl);
  result.setMatch(fieldExpl.getMatch());
  
  // combine them
  result.setValue(queryExpl.getValue() * fieldExpl.getValue());

  if (queryExpl.getValue() == 1.0f)
    return fieldExpl;

  return result;
}
 
Developer ID: Alfresco, Project: alfresco-repository, Lines: 63, Source: TermQuery.java

Example 12: QueryAutoStopWordAnalyzer

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
/**
 * Creates a new QueryAutoStopWordAnalyzer with stopwords calculated for the
 * given selection of fields from terms with a document frequency percentage
 * greater than the given maxPercentDocs
 *
 * @param delegate Analyzer whose TokenStream will be filtered
 * @param indexReader IndexReader to identify the stopwords from
 * @param fields Selection of fields to calculate stopwords for
 * @param maxPercentDocs The maximum percentage (between 0.0 and 1.0) of index documents which
 *                      contain a term, after which the word is considered to be a stop word
 * @throws IOException Can be thrown while reading from the IndexReader
 */
public QueryAutoStopWordAnalyzer(
    Analyzer delegate,
    IndexReader indexReader,
    Collection<String> fields,
    float maxPercentDocs) throws IOException {
  this(delegate, indexReader, fields, (int) (indexReader.numDocs() * maxPercentDocs));
}
 
Developer ID: lamsfoundation, Project: lams, Lines: 20, Source: QueryAutoStopWordAnalyzer.java
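
To round off Example 12: the constructor simply converts the percentage into an absolute document-frequency cutoff by multiplying it with indexReader.numDocs(). A hedged usage sketch, in which the index path and the "body" field name are assumptions:

import java.nio.file.Paths;
import java.util.Arrays;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.FSDirectory;

public class StopWordDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical index path and field name.
        try (IndexReader reader = DirectoryReader.open(FSDirectory.open(Paths.get("/tmp/demo-index")))) {
            // Terms occurring in more than 40% of the documents become stop words,
            // i.e. terms with docFreq > (int) (reader.numDocs() * 0.4f).
            Analyzer analyzer = new QueryAutoStopWordAnalyzer(
                    new StandardAnalyzer(), reader, Arrays.asList("body"), 0.4f);
            // ... build queries for the "body" field with `analyzer` here ...
            analyzer.close();
        }
    }
}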


Note: The org.apache.lucene.index.IndexReader.numDocs method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers, and copyright remains with the original authors. For distribution and use, please refer to the license of the corresponding project; do not reproduce without permission.