

Java IndexReader.numDocs Method Code Examples

This article collects typical usage examples of the Java method org.apache.lucene.index.IndexReader.numDocs. The method returns the number of live (non-deleted) documents in an index, as opposed to maxDoc(), which also counts documents that have been deleted but not yet merged away. If you are unsure what IndexReader.numDocs does, how to call it, or what real-world usage looks like, the curated code examples below should help. You can also explore further usage examples of the containing class, org.apache.lucene.index.IndexReader.


The following presents 12 code examples of the IndexReader.numDocs method, ordered by popularity.
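Before the collected examples, a minimal self-contained sketch may help illustrate what the method reports. This is an illustration rather than code from any of the projects below; it assumes Lucene 8+ on the classpath, and the class, field names, and document contents are made up. It shows that numDocs() counts only live documents, while maxDoc() still includes a deleted document until segments are merged.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class NumDocsDemo {

    /** Builds a 3-doc in-memory index, deletes one doc, returns {numDocs, maxDoc}. */
    public static int[] counts() throws Exception {
        Directory dir = new ByteBuffersDirectory(); // in-memory index
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            for (int i = 0; i < 3; i++) {
                Document doc = new Document();
                doc.add(new StringField("id", Integer.toString(i), Field.Store.YES));
                doc.add(new TextField("body", "document number " + i, Field.Store.NO));
                writer.addDocument(doc);
            }
            // Mark one document as deleted; it remains in the segment until a merge.
            writer.deleteDocuments(new Term("id", "1"));
        } // closing the writer commits the changes

        try (IndexReader reader = DirectoryReader.open(dir)) {
            // numDocs() counts live documents only; maxDoc() includes the deleted one.
            return new int[] { reader.numDocs(), reader.maxDoc() };
        }
    }

    public static void main(String[] args) throws Exception {
        int[] c = counts();
        System.out.println("numDocs=" + c[0] + " maxDoc=" + c[1]);
    }
}
```

Several of the examples below rely on exactly this "live documents" semantic, for instance when using numDocs() as a corpus size or as a default result-count cap.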

Example 1: rewrite

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
@Override
public Query rewrite(IndexReader reader) throws IOException {
    if (getBoost() != 1.0F) {
        return super.rewrite(reader);
    }
    if (reader instanceof DirectoryReader) {
        String joinField = ParentFieldMapper.joinField(parentType);
        IndexSearcher indexSearcher = new IndexSearcher(reader);
        indexSearcher.setQueryCache(null);
        indexSearcher.setSimilarity(similarity);
        IndexParentChildFieldData indexParentChildFieldData = parentChildIndexFieldData.loadGlobal((DirectoryReader) reader);
        MultiDocValues.OrdinalMap ordinalMap = ParentChildIndexFieldData.getOrdinalMap(indexParentChildFieldData, parentType);
        return JoinUtil.createJoinQuery(joinField, innerQuery, toQuery, indexSearcher, scoreMode, ordinalMap, minChildren, maxChildren);
    } else {
        if (reader.leaves().isEmpty() && reader.numDocs() == 0) {
            // asserting reader passes down a MultiReader during rewrite which makes this
            // blow up since for this query to work we have to have a DirectoryReader otherwise
            // we can't load global ordinals - for this to work we simply check if the reader has no leaves
            // and rewrite to match nothing
            return new MatchNoDocsQuery();
        }
        throw new IllegalStateException("can't load global ordinals for reader of type: " + reader.getClass() + " must be a DirectoryReader");
    }
}
 
Developer ID: baidu, Project: Elasticsearch, Lines: 25, Source: HasChildQueryParser.java

Example 2: buildEmptyAggregation

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
@Override
public SignificantStringTerms buildEmptyAggregation() {
    // We need to account for the significance of a miss in our global stats - provide corpus size as context
    ContextIndexSearcher searcher = context.searcher();
    IndexReader topReader = searcher.getIndexReader();
    int supersetSize = topReader.numDocs();
    return new SignificantStringTerms(name, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getMinDocCount(),
            pipelineAggregators(), metaData(), format, 0, supersetSize, significanceHeuristic, emptyList());
}
 
Developer ID: justor, Project: elasticsearch_my, Lines: 10, Source: SignificantStringTermsAggregator.java

Example 3: buildEmptyAggregation

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
@Override
public SignificantLongTerms buildEmptyAggregation() {
    // We need to account for the significance of a miss in our global stats - provide corpus size as context
    ContextIndexSearcher searcher = context.searcher();
    IndexReader topReader = searcher.getIndexReader();
    int supersetSize = topReader.numDocs();
    return new SignificantLongTerms(name, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getMinDocCount(),
            pipelineAggregators(), metaData(), format, 0, supersetSize, significanceHeuristic, emptyList());
}
 
Developer ID: justor, Project: elasticsearch_my, Lines: 10, Source: SignificantLongTermsAggregator.java

Example 4: rewrite

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
@Override
public Query rewrite(IndexReader reader) throws IOException {
    Query rewritten = super.rewrite(reader);
    if (rewritten != this) {
        return rewritten;
    }
    if (reader instanceof DirectoryReader) {
        String joinField = ParentFieldMapper.joinField(parentType);
        IndexSearcher indexSearcher = new IndexSearcher(reader);
        indexSearcher.setQueryCache(null);
        indexSearcher.setSimilarity(similarity);
        IndexParentChildFieldData indexParentChildFieldData = parentChildIndexFieldData.loadGlobal((DirectoryReader) reader);
        MultiDocValues.OrdinalMap ordinalMap = ParentChildIndexFieldData.getOrdinalMap(indexParentChildFieldData, parentType);
        return JoinUtil.createJoinQuery(joinField, innerQuery, toQuery, indexSearcher, scoreMode,
                ordinalMap, minChildren, maxChildren);
    } else {
        if (reader.leaves().isEmpty() && reader.numDocs() == 0) {
            // asserting reader passes down a MultiReader during rewrite which makes this
            // blow up since for this query to work we have to have a DirectoryReader otherwise
            // we can't load global ordinals - for this to work we simply check if the reader has no leaves
            // and rewrite to match nothing
            return new MatchNoDocsQuery();
        }
        throw new IllegalStateException("can't load global ordinals for reader of type: " +
                reader.getClass() + " must be a DirectoryReader");
    }
}
 
Developer ID: justor, Project: elasticsearch_my, Lines: 28, Source: HasChildQueryBuilder.java

Example 5: getNumberOfDocuments

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
public int getNumberOfDocuments() throws IOException
{
    IndexReader reader = getMainIndexReferenceCountingReadOnlyIndexReader();
    try
    {
        return reader.numDocs();
    }
    finally
    {
        reader.close();
    }
}
 
Developer ID: Alfresco, Project: alfresco-repository, Lines: 13, Source: IndexInfo.java

Example 6: getImagesOf

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
private List<Image> getImagesOf(Collection collection) {

    List<Image> results = new ArrayList<>();

    try {
        Path path = indexPath(collection);
        if (!Files.exists(path)) return results;

        IndexReader ir = DirectoryReader.open(FSDirectory.open(path));

        // Note: iterating 0..numDocs()-1 assumes the index contains no deleted
        // documents; otherwise doc IDs run up to maxDoc() with gaps.
        int num = ir.numDocs();
        for (int i = 0; i < num; i++) {
            Document d = ir.document(i);
            String imagePath = d.getField(DocumentBuilder.FIELD_NAME_IDENTIFIER).stringValue();
            String thumbnailPath = collectionUtils.getThumbnailPathFromImagePath(collection, imagePath);
            Image image = new Image(imagePath, thumbnailPath);
            image.setDocId(i);
            results.add(image);
        }
        ir.close();

    } catch (IOException e) {
        throw new LireLabException("Could not read index", e);
    }

    return results;
}
 
Developer ID: AntonioGabrielAndrade, Project: LIRE-Lab, Lines: 29, Source: CollectionAssembler.java

Example 7: buildEmptyAggregation

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
@Override
public SignificantStringTerms buildEmptyAggregation() {
    // We need to account for the significance of a miss in our global stats - provide corpus size as context
    ContextIndexSearcher searcher = context.searchContext().searcher();
    IndexReader topReader = searcher.getIndexReader();
    int supersetSize = topReader.numDocs();
    return new SignificantStringTerms(0, supersetSize, name, bucketCountThresholds.getRequiredSize(),
            bucketCountThresholds.getMinDocCount(), termsAggFactory.getSignificanceHeuristic(),
            Collections.<InternalSignificantTerms.Bucket> emptyList(), pipelineAggregators(), metaData());
}
 
Developer ID: baidu, Project: Elasticsearch, Lines: 11, Source: SignificantStringTermsAggregator.java

Example 8: buildEmptyAggregation

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
@Override
public SignificantLongTerms buildEmptyAggregation() {
    // We need to account for the significance of a miss in our global stats - provide corpus size as context
    ContextIndexSearcher searcher = context.searchContext().searcher();
    IndexReader topReader = searcher.getIndexReader();
    int supersetSize = topReader.numDocs();
    return new SignificantLongTerms(0, supersetSize, name, formatter, bucketCountThresholds.getRequiredSize(),
            bucketCountThresholds.getMinDocCount(), termsAggFactory.getSignificanceHeuristic(),
            Collections.<InternalSignificantTerms.Bucket> emptyList(), pipelineAggregators(), metaData());
}
 
Developer ID: baidu, Project: Elasticsearch, Lines: 11, Source: SignificantLongTermsAggregator.java

Example 9: performSearch

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
private TopDocs performSearch(IndexSearcher searcher, Query query, IndexReader reader, Integer maxResultsCount,
                              Sort sort) throws IOException {
    final TopDocs docs;
    int resultsCount = maxResultsCount == null ? reader.numDocs() : maxResultsCount;
    if (sort == null) {
        docs = searcher.search(query, resultsCount);
    } else {
        docs = searcher.search(query, resultsCount, sort);
    }

    return docs;
}
 
Developer ID: react-dev26, Project: NGB-master, Lines: 13, Source: FeatureIndexDao.java

Example 10: getTopDocs

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
/**
 * Executes the given {@link Query} but returns lucene's {@link TopDocs}.
 * @param query the query to execute
 * @param options the additional options to execute the query.
 * @return {@link TopDocs} or null if an error occurred.
 */
public TopDocs getTopDocs(final Query query, final SearchOptions options) {
	TopDocs topDocs = null;

	final Index index = IndexManager.getInstance().getIndex();
	final IndexReader reader = index.getIndexReader();
	final IndexSearcher searcher = new IndexSearcher(reader);

	// stopwatch to check performance of search
	final StopWatch stopWatch = new StopWatch();

	try {
		int maxResults = options.getMaxResults();
		if(maxResults <= 0) {
			maxResults = reader.numDocs();
		}

		stopWatch.start();
		if(options.getSort() == null) {
			if(options.getAfterScoreDoc() == null) {
				topDocs = searcher.search(query, maxResults);
			}
			else {
				topDocs = searcher.searchAfter(options.getAfterScoreDoc(), query, maxResults);
			}
		}
		else {
			if(options.getAfterScoreDoc() == null) {
				topDocs = searcher.search(query, maxResults, options.getSort());
			}
			else {
				topDocs = searcher.searchAfter(options.getAfterScoreDoc(), query, maxResults, options.getSort());
			}
		}

		stopWatch.stop();
		LOGGER.info("Query execution used {}ms {}.", stopWatch.getTime(), query);
	}
	catch (final IOException e) {
		LOGGER.error("Can't execute search because of an IOException.", e);
	}

	return topDocs;
}
 
Developer ID: XMBomb, Project: InComb, Lines: 50, Source: IndexSearch.java

Example 11: explain

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
public Explanation explain(IndexReader reader, int doc)
  throws IOException {

  ComplexExplanation result = new ComplexExplanation();
  result.setDescription("weight("+getQuery()+" in "+doc+"), product of:");

  Explanation idfExpl =
    new Explanation(idf, "idf(docFreq=" + reader.docFreq(term) +
        ", numDocs=" + reader.numDocs() + ")");

  // explain query weight
  Explanation queryExpl = new Explanation();
  queryExpl.setDescription("queryWeight(" + getQuery() + "), product of:");

  Explanation boostExpl = new Explanation(getBoost(), "boost");
  if (getBoost() != 1.0f)
    queryExpl.addDetail(boostExpl);
  queryExpl.addDetail(idfExpl);

  Explanation queryNormExpl = new Explanation(queryNorm,"queryNorm");
  queryExpl.addDetail(queryNormExpl);

  queryExpl.setValue(boostExpl.getValue() *
                     idfExpl.getValue() *
                     queryNormExpl.getValue());

  result.addDetail(queryExpl);

  // explain field weight
  String field = term.field();
  ComplexExplanation fieldExpl = new ComplexExplanation();
  fieldExpl.setDescription("fieldWeight("+term+" in "+doc+
                           "), product of:");

  Explanation tfExpl = scorer(reader).explain(doc);
  fieldExpl.addDetail(tfExpl);
  fieldExpl.addDetail(idfExpl);

  Explanation fieldNormExpl = new Explanation();
  byte[] fieldNorms = reader.norms(field);
  float fieldNorm =
    fieldNorms!=null ? Similarity.decodeNorm(fieldNorms[doc]) : 0.0f;
  fieldNormExpl.setValue(fieldNorm);
  fieldNormExpl.setDescription("fieldNorm(field="+field+", doc="+doc+")");
  fieldExpl.addDetail(fieldNormExpl);
  
  fieldExpl.setMatch(Boolean.valueOf(tfExpl.isMatch()));
  fieldExpl.setValue(tfExpl.getValue() *
                     idfExpl.getValue() *
                     fieldNormExpl.getValue());

  result.addDetail(fieldExpl);
  result.setMatch(fieldExpl.getMatch());
  
  // combine them
  result.setValue(queryExpl.getValue() * fieldExpl.getValue());

  if (queryExpl.getValue() == 1.0f)
    return fieldExpl;

  return result;
}
 
Developer ID: Alfresco, Project: alfresco-repository, Lines: 63, Source: TermQuery.java

Example 12: QueryAutoStopWordAnalyzer

import org.apache.lucene.index.IndexReader; // import the package/class this method depends on
/**
 * Creates a new QueryAutoStopWordAnalyzer with stopwords calculated for the
 * given selection of fields from terms with a document frequency percentage
 * greater than the given maxPercentDocs
 *
 * @param delegate Analyzer whose TokenStream will be filtered
 * @param indexReader IndexReader to identify the stopwords from
 * @param fields Selection of fields to calculate stopwords for
 * @param maxPercentDocs The maximum percentage (between 0.0 and 1.0) of index documents which
 *                      contain a term, after which the word is considered to be a stop word
 * @throws IOException Can be thrown while reading from the IndexReader
 */
public QueryAutoStopWordAnalyzer(
    Analyzer delegate,
    IndexReader indexReader,
    Collection<String> fields,
    float maxPercentDocs) throws IOException {
  this(delegate, indexReader, fields, (int) (indexReader.numDocs() * maxPercentDocs));
}
 
开发者ID:lamsfoundation,项目名称:lams,代码行数:20,代码来源:QueryAutoStopWordAnalyzer.java


Note: The org.apache.lucene.index.IndexReader.numDocs examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by many developers; copyright of the source code remains with the original authors. Please consult each project's license before distributing or using the code, and do not reproduce this article without permission.