

Java PackedInts.bitsRequired Method Code Examples

This article collects typical usage examples of the Java method org.apache.lucene.util.packed.PackedInts.bitsRequired. If you are looking for concrete answers to questions such as how PackedInts.bitsRequired is used in practice, or what real-world calls to it look like, the curated examples below should help. You can also explore further usage examples of the enclosing class, org.apache.lucene.util.packed.PackedInts.


The sections below present 15 code examples of the PackedInts.bitsRequired method, sorted by popularity by default.
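Before diving into the examples, here is a minimal, self-contained sketch (written for this article, not taken from any of the projects below; the class name and the maxDelta value are made up for illustration) of the method's contract: PackedInts.bitsRequired(maxValue) returns the minimum number of bits per value needed to store non-negative values up to maxValue, and that bit count is then typically passed to PackedInts.getMutable or PackedInts.fastestFormatAndBits to size a packed structure, as the examples below do.

import org.apache.lucene.util.packed.PackedInts;

public class BitsRequiredDemo {
  public static void main(String[] args) {
    // Minimum bits needed to represent a non-negative value
    // (negative input throws IllegalArgumentException).
    System.out.println(PackedInts.bitsRequired(0));              // 1
    System.out.println(PackedInts.bitsRequired(255));            // 8
    System.out.println(PackedInts.bitsRequired(256));            // 9
    System.out.println(PackedInts.bitsRequired(Long.MAX_VALUE)); // 63

    // Typical pattern from the examples below: size a packed array
    // by the largest value (or delta) it has to hold.
    long maxDelta = 1000;                                 // hypothetical maximum value
    int bitsPerValue = PackedInts.bitsRequired(maxDelta); // 10
    PackedInts.Mutable packed =
        PackedInts.getMutable(128, bitsPerValue, PackedInts.DEFAULT);
    packed.set(0, maxDelta);
  }
}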

Example 1: getPageMemoryUsage

import org.apache.lucene.util.packed.PackedInts; // import the package/class this method depends on
private long getPageMemoryUsage(PackedLongValues values, float acceptableOverheadRatio, int pageSize, long pageMinOrdinal, long pageMaxOrdinal) {
    int bitsRequired;
    long pageMemorySize = 0;
    PackedInts.FormatAndBits formatAndBits;
    if (pageMaxOrdinal == Long.MIN_VALUE) {
        // empty page - will use the null reader which just stores size
        pageMemorySize += RamUsageEstimator.alignObjectSize(RamUsageEstimator.NUM_BYTES_OBJECT_HEADER + RamUsageEstimator.NUM_BYTES_INT);

    } else {
        long pageMinValue = values.get(pageMinOrdinal);
        long pageMaxValue = values.get(pageMaxOrdinal);
        long pageDelta = pageMaxValue - pageMinValue;
        if (pageDelta != 0) {
            bitsRequired = pageDelta < 0 ? 64 : PackedInts.bitsRequired(pageDelta);
            formatAndBits = PackedInts.fastestFormatAndBits(pageSize, bitsRequired, acceptableOverheadRatio);
            pageMemorySize += formatAndBits.format.longCount(PackedInts.VERSION_CURRENT, pageSize, formatAndBits.bitsPerValue) * RamUsageEstimator.NUM_BYTES_LONG;
            pageMemorySize += RamUsageEstimator.NUM_BYTES_LONG; // min value per page storage
        } else {
            // empty page
            pageMemorySize += RamUsageEstimator.alignObjectSize(RamUsageEstimator.NUM_BYTES_OBJECT_HEADER + RamUsageEstimator.NUM_BYTES_INT);
        }
    }
    return pageMemorySize;
}
 
Developer ID: baidu, Project: Elasticsearch, Lines: 25, Source: PackedArrayIndexFieldData.java

Example 2: precisionFromThreshold

import org.apache.lucene.util.packed.PackedInts; // import the package/class this method depends on
/**
 * Compute the required precision so that <code>count</code> distinct entries
 * would be counted with linear counting.
 */
public static int precisionFromThreshold(long count) {
    final long hashTableEntries = (long) Math.ceil(count / MAX_LOAD_FACTOR);
    int precision = PackedInts.bitsRequired(hashTableEntries * Integer.BYTES);
    precision = Math.max(precision, MIN_PRECISION);
    precision = Math.min(precision, MAX_PRECISION);
    return precision;
}
 
Developer ID: justor, Project: elasticsearch_my, Lines: 12, Source: HyperLogLogPlusPlus.java

Example 3: significantlySmallerThanSinglePackedOrdinals

import org.apache.lucene.util.packed.PackedInts; // import the package/class this method depends on
/**
 * Return true if this impl is going to be smaller than {@link SinglePackedOrdinals} by at least 20%.
 */
public static boolean significantlySmallerThanSinglePackedOrdinals(int maxDoc, int numDocsWithValue, long numOrds, float acceptableOverheadRatio) {
    int bitsPerOrd = PackedInts.bitsRequired(numOrds);
    bitsPerOrd = PackedInts.fastestFormatAndBits(numDocsWithValue, bitsPerOrd, acceptableOverheadRatio).bitsPerValue;
    // Compute the worst-case number of bits per value for offsets in the worst case, eg. if no docs have a value at the
    // beginning of the block and all docs have one at the end of the block
    final float avgValuesPerDoc = (float) numDocsWithValue / maxDoc;
    final int maxDelta = (int) Math.ceil(OFFSETS_PAGE_SIZE * (1 - avgValuesPerDoc) * avgValuesPerDoc);
    int bitsPerOffset = PackedInts.bitsRequired(maxDelta) + 1; // +1 because of the sign
    bitsPerOffset = PackedInts.fastestFormatAndBits(maxDoc, bitsPerOffset, acceptableOverheadRatio).bitsPerValue;

    final long expectedMultiSizeInBytes = (long) numDocsWithValue * bitsPerOrd + (long) maxDoc * bitsPerOffset;
    final long expectedSingleSizeInBytes = (long) maxDoc * bitsPerOrd;
    return expectedMultiSizeInBytes < 0.8f * expectedSingleSizeInBytes;
}
 
Developer ID: justor, Project: elasticsearch_my, Lines: 18, Source: MultiOrdinals.java

Example 4: reset

import org.apache.lucene.util.packed.PackedInts; // import the package/class this method depends on
void reset(int len) {
  final int bitsPerOffset = PackedInts.bitsRequired(len - LAST_LITERALS);
  final int bitsPerOffsetLog = 32 - Integer.numberOfLeadingZeros(bitsPerOffset - 1);
  hashLog = MEMORY_USAGE + 3 - bitsPerOffsetLog;
  if (hashTable == null || hashTable.size() < 1 << hashLog || hashTable.getBitsPerValue() < bitsPerOffset) {
    hashTable = PackedInts.getMutable(1 << hashLog, bitsPerOffset, PackedInts.DEFAULT);
  } else {
    hashTable.clear();
  }
}
 
Developer ID: lamsfoundation, Project: lams, Lines: 11, Source: LZ4.java

Example 5: BinaryDocValuesFieldUpdates

import org.apache.lucene.util.packed.PackedInts; // import the package/class this method depends on
public BinaryDocValuesFieldUpdates(String field, int maxDoc) {
  super(field, FieldInfo.DocValuesType.BINARY);
  bitsPerValue = PackedInts.bitsRequired(maxDoc - 1);
  docs = new PagedMutable(1, PAGE_SIZE, bitsPerValue, PackedInts.COMPACT);
  offsets = new PagedGrowableWriter(1, PAGE_SIZE, 1, PackedInts.FAST);
  lengths = new PagedGrowableWriter(1, PAGE_SIZE, 1, PackedInts.FAST);
  values = new BytesRefBuilder();
  size = 0;
}
 
Developer ID: lamsfoundation, Project: lams, Lines: 10, Source: BinaryDocValuesFieldUpdates.java

Example 6: NumericDocValuesFieldUpdates

import org.apache.lucene.util.packed.PackedInts; // import the package/class this method depends on
public NumericDocValuesFieldUpdates(String field, int maxDoc) {
  super(field, FieldInfo.DocValuesType.NUMERIC);
  bitsPerValue = PackedInts.bitsRequired(maxDoc - 1);
  docs = new PagedMutable(1, PAGE_SIZE, bitsPerValue, PackedInts.COMPACT);
  values = new PagedGrowableWriter(1, PAGE_SIZE, 1, PackedInts.FAST);
  size = 0;
}
 
Developer ID: lamsfoundation, Project: lams, Lines: 8, Source: NumericDocValuesFieldUpdates.java

Example 7: precisionFromThreshold

import org.apache.lucene.util.packed.PackedInts; // import the package/class this method depends on
/**
 * Compute the required precision so that <code>count</code> distinct entries
 * would be counted with linear counting.
 */
public static int precisionFromThreshold(long count) {
    final long hashTableEntries = (long) Math.ceil(count / MAX_LOAD_FACTOR);
    int precision = PackedInts.bitsRequired(hashTableEntries * RamUsageEstimator.NUM_BYTES_INT);
    precision = Math.max(precision, MIN_PRECISION);
    precision = Math.min(precision, MAX_PRECISION);
    return precision;
}
 
Developer ID: baidu, Project: Elasticsearch, Lines: 12, Source: HyperLogLogPlusPlus.java

Example 8: OrdinalsBuilder

import org.apache.lucene.util.packed.PackedInts; // import the package/class this method depends on
public OrdinalsBuilder(long numTerms, int maxDoc, float acceptableOverheadRatio) throws IOException {
    this.maxDoc = maxDoc;
    int startBitsPerValue = 8;
    if (numTerms >= 0) {
        startBitsPerValue = PackedInts.bitsRequired(numTerms);
    }
    ordinals = new OrdinalsStore(maxDoc, startBitsPerValue, acceptableOverheadRatio);
    spare = new LongsRef();
}
 
Developer ID: baidu, Project: Elasticsearch, Lines: 10, Source: OrdinalsBuilder.java

Example 9: addNumericField

import org.apache.lucene.util.packed.PackedInts; // import the package/class this method depends on
@Override
public void addNumericField(FieldInfo field, Iterable<Number> values) throws IOException {
  // examine the values to determine best type to use
  long minValue = Long.MAX_VALUE;
  long maxValue = Long.MIN_VALUE;
  for (Number n : values) {
    long v = n == null ? 0 : n.longValue();
    minValue = Math.min(minValue, v);
    maxValue = Math.max(maxValue, v);
  }
  
  String fileName = IndexFileNames.segmentFileName(state.segmentInfo.name + "_" + Integer.toString(field.number), segmentSuffix, "dat");
  IndexOutput data = dir.createOutput(fileName, state.context);
  boolean success = false;
  try {
    if (minValue >= Byte.MIN_VALUE && maxValue <= Byte.MAX_VALUE && PackedInts.bitsRequired(maxValue-minValue) > 4) {
      // fits in a byte[], would be more than 4bpv, just write byte[]
      addBytesField(field, data, values);
    } else if (minValue >= Short.MIN_VALUE && maxValue <= Short.MAX_VALUE && PackedInts.bitsRequired(maxValue-minValue) > 8) {
      // fits in a short[], would be more than 8bpv, just write short[]
      addShortsField(field, data, values);
    } else if (minValue >= Integer.MIN_VALUE && maxValue <= Integer.MAX_VALUE && PackedInts.bitsRequired(maxValue-minValue) > 16) {
      // fits in a int[], would be more than 16bpv, just write int[]
      addIntsField(field, data, values);
    } else {
      addVarIntsField(field, data, values, minValue, maxValue);
    }
    success = true;
  } finally {
    if (success) {
      IOUtils.close(data);
    } else {
      IOUtils.closeWhileHandlingException(data);
    }
  }
}
 
Developer ID: europeana, Project: search, Lines: 37, Source: Lucene40DocValuesWriter.java

Example 10: addNumericField

import org.apache.lucene.util.packed.PackedInts; // import the package/class this method depends on
@Override
public void addNumericField(FieldInfo field, Iterable<Number> values) throws IOException {
  // examine the values to determine best type to use
  long minValue = Long.MAX_VALUE;
  long maxValue = Long.MIN_VALUE;
  for (Number n : values) {
    long v = n.longValue();
    minValue = Math.min(minValue, v);
    maxValue = Math.max(maxValue, v);
  }
  
  String fileName = IndexFileNames.segmentFileName(state.segmentInfo.name + "_" + Integer.toString(field.number), segmentSuffix, "dat");
  IndexOutput data = dir.createOutput(fileName, state.context);
  boolean success = false;
  try {
    if (minValue >= Byte.MIN_VALUE && maxValue <= Byte.MAX_VALUE && PackedInts.bitsRequired(maxValue-minValue) > 4) {
      // fits in a byte[], would be more than 4bpv, just write byte[]
      addBytesField(field, data, values);
    } else if (minValue >= Short.MIN_VALUE && maxValue <= Short.MAX_VALUE && PackedInts.bitsRequired(maxValue-minValue) > 8) {
      // fits in a short[], would be more than 8bpv, just write short[]
      addShortsField(field, data, values);
    } else if (minValue >= Integer.MIN_VALUE && maxValue <= Integer.MAX_VALUE && PackedInts.bitsRequired(maxValue-minValue) > 16) {
      // fits in a int[], would be more than 16bpv, just write int[]
      addIntsField(field, data, values);
    } else {
      addVarIntsField(field, data, values, minValue, maxValue);
    }
    success = true;
  } finally {
    if (success) {
      IOUtils.close(data);
    } else {
      IOUtils.closeWhileHandlingException(data);
    }
  }
}
 
Developer ID: pkarmstr, Project: NYBC, Lines: 37, Source: Lucene40DocValuesWriter.java

Example 11: createValue

import org.apache.lucene.util.packed.PackedInts; // import the package/class this method depends on
@Override
protected Accountable createValue(AtomicReader reader, CacheKey key, boolean setDocsWithField /* ignored */)
    throws IOException {

  final int maxDoc = reader.maxDoc();

  Terms terms = reader.terms(key.field);

  final float acceptableOverheadRatio = ((Float) key.custom).floatValue();

  final PagedBytes bytes = new PagedBytes(15);

  int startTermsBPV;

  final int termCountHardLimit;
  if (maxDoc == Integer.MAX_VALUE) {
    termCountHardLimit = Integer.MAX_VALUE;
  } else {
    termCountHardLimit = maxDoc+1;
  }

  // TODO: use Uninvert?
  if (terms != null) {
    // Try for coarse estimate for number of bits; this
    // should be an underestimate most of the time, which
    // is fine -- GrowableWriter will reallocate as needed
    long numUniqueTerms = terms.size();
    if (numUniqueTerms != -1L) {
      if (numUniqueTerms > termCountHardLimit) {
        // app is misusing the API (there is more than
        // one term per doc); in this case we make best
        // effort to load what we can (see LUCENE-2142)
        numUniqueTerms = termCountHardLimit;
      }

      startTermsBPV = PackedInts.bitsRequired(numUniqueTerms);
    } else {
      startTermsBPV = 1;
    }
  } else {
    startTermsBPV = 1;
  }

  PackedLongValues.Builder termOrdToBytesOffset = PackedLongValues.monotonicBuilder(PackedInts.COMPACT);
  final GrowableWriter docToTermOrd = new GrowableWriter(startTermsBPV, maxDoc, acceptableOverheadRatio);

  int termOrd = 0;

  // TODO: use Uninvert?

  if (terms != null) {
    final TermsEnum termsEnum = terms.iterator(null);
    DocsEnum docs = null;

    while(true) {
      final BytesRef term = termsEnum.next();
      if (term == null) {
        break;
      }
      if (termOrd >= termCountHardLimit) {
        break;
      }

      termOrdToBytesOffset.add(bytes.copyUsingLengthPrefix(term));
      docs = termsEnum.docs(null, docs, DocsEnum.FLAG_NONE);
      while (true) {
        final int docID = docs.nextDoc();
        if (docID == DocIdSetIterator.NO_MORE_DOCS) {
          break;
        }
        // Store 1+ ord into packed bits
        docToTermOrd.set(docID, 1+termOrd);
      }
      termOrd++;
    }
  }

  // maybe an int-only impl?
  return new SortedDocValuesImpl(bytes.freeze(true), termOrdToBytesOffset.build(), docToTermOrd.getMutable(), termOrd);
}
 
Developer ID: lamsfoundation, Project: lams, Lines: 81, Source: FieldCacheImpl.java

Example 12: chooseStorageFormat

import org.apache.lucene.util.packed.PackedInts; // import the package/class this method depends on
protected CommonSettings.MemoryStorageFormat chooseStorageFormat(LeafReader reader, PackedLongValues values, Ordinals build, RandomAccessOrds ordinals,
                                                                 long minValue, long maxValue, float acceptableOverheadRatio, int pageSize) {

    CommonSettings.MemoryStorageFormat format;

    // estimate memory usage for a single packed array
    long packedDelta = maxValue - minValue + 1; // allow for a missing value
    // packedDelta can be negative if the difference between max and min values overflows the positive side of longs.
    int bitsRequired = packedDelta < 0 ? 64 : PackedInts.bitsRequired(packedDelta);
    PackedInts.FormatAndBits formatAndBits = PackedInts.fastestFormatAndBits(reader.maxDoc(), bitsRequired, acceptableOverheadRatio);
    final long singleValuesSize = formatAndBits.format.longCount(PackedInts.VERSION_CURRENT, reader.maxDoc(), formatAndBits.bitsPerValue) * 8L;

    // ordinal memory usage
    final long ordinalsSize = build.ramBytesUsed() + values.ramBytesUsed();

    // estimate the memory signature of paged packing
    long pagedSingleValuesSize = (reader.maxDoc() / pageSize + 1) * RamUsageEstimator.NUM_BYTES_OBJECT_REF; // array of pages
    int pageIndex = 0;
    long pageMinOrdinal = Long.MAX_VALUE;
    long pageMaxOrdinal = Long.MIN_VALUE;
    for (int i = 1; i < reader.maxDoc(); ++i, pageIndex = (pageIndex + 1) % pageSize) {
        ordinals.setDocument(i);
        if (ordinals.cardinality() > 0) {
            long ordinal = ordinals.ordAt(0);
            pageMaxOrdinal = Math.max(ordinal, pageMaxOrdinal);
            pageMinOrdinal = Math.min(ordinal, pageMinOrdinal);
        }
        if (pageIndex == pageSize - 1) {
            // end of page, we now know enough to estimate memory usage
            pagedSingleValuesSize += getPageMemoryUsage(values, acceptableOverheadRatio, pageSize, pageMinOrdinal, pageMaxOrdinal);

            pageMinOrdinal = Long.MAX_VALUE;
            pageMaxOrdinal = Long.MIN_VALUE;
        }
    }

    if (pageIndex > 0) {
        // last page estimation
        pageIndex++;
        pagedSingleValuesSize += getPageMemoryUsage(values, acceptableOverheadRatio, pageSize, pageMinOrdinal, pageMaxOrdinal);
    }

    if (ordinalsSize < singleValuesSize) {
        if (ordinalsSize < pagedSingleValuesSize) {
            format = CommonSettings.MemoryStorageFormat.ORDINALS;
        } else {
            format = CommonSettings.MemoryStorageFormat.PAGED;
        }
    } else {
        if (pagedSingleValuesSize < singleValuesSize) {
            format = CommonSettings.MemoryStorageFormat.PAGED;
        } else {
            format = CommonSettings.MemoryStorageFormat.PACKED;
        }
    }
    return format;
}
 
Developer ID: baidu, Project: Elasticsearch, Lines: 58, Source: PackedArrayIndexFieldData.java

Example 13: createValue

import org.apache.lucene.util.packed.PackedInts; // import the package/class this method depends on
@Override
protected Object createValue(AtomicReader reader, CacheKey key, boolean setDocsWithField /* ignored */)
    throws IOException {

  // TODO: would be nice to first check if DocTermsIndex
  // was already cached for this field and then return
  // that instead, to avoid insanity

  final int maxDoc = reader.maxDoc();
  Terms terms = reader.terms(key.field);

  final float acceptableOverheadRatio = ((Float) key.custom).floatValue();

  final int termCountHardLimit = maxDoc;

  // Holds the actual term data, expanded.
  final PagedBytes bytes = new PagedBytes(15);

  int startBPV;

  if (terms != null) {
    // Try for coarse estimate for number of bits; this
    // should be an underestimate most of the time, which
    // is fine -- GrowableWriter will reallocate as needed
    long numUniqueTerms = terms.size();
    if (numUniqueTerms != -1L) {
      if (numUniqueTerms > termCountHardLimit) {
        numUniqueTerms = termCountHardLimit;
      }
      startBPV = PackedInts.bitsRequired(numUniqueTerms*4);
    } else {
      startBPV = 1;
    }
  } else {
    startBPV = 1;
  }

  final GrowableWriter docToOffset = new GrowableWriter(startBPV, maxDoc, acceptableOverheadRatio);
  
  // pointer==0 means not set
  bytes.copyUsingLengthPrefix(new BytesRef());

  if (terms != null) {
    int termCount = 0;
    final TermsEnum termsEnum = terms.iterator(null);
    DocsEnum docs = null;
    while(true) {
      if (termCount++ == termCountHardLimit) {
        // app is misusing the API (there is more than
        // one term per doc); in this case we make best
        // effort to load what we can (see LUCENE-2142)
        break;
      }

      final BytesRef term = termsEnum.next();
      if (term == null) {
        break;
      }
      final long pointer = bytes.copyUsingLengthPrefix(term);
      docs = termsEnum.docs(null, docs, DocsEnum.FLAG_NONE);
      while (true) {
        final int docID = docs.nextDoc();
        if (docID == DocIdSetIterator.NO_MORE_DOCS) {
          break;
        }
        docToOffset.set(docID, pointer);
      }
    }
  }

  // maybe an int-only impl?
  return new BinaryDocValuesImpl(bytes.freeze(true), docToOffset.getMutable());
}
 
Developer ID: pkarmstr, Project: NYBC, Lines: 74, Source: FieldCacheImpl.java

Example 14: createValue

import org.apache.lucene.util.packed.PackedInts; // import the package/class this method depends on
@Override
protected Object createValue(AtomicReader reader, CacheKey key, boolean setDocsWithField /* ignored */)
    throws IOException {

  final int maxDoc = reader.maxDoc();

  Terms terms = reader.terms(key.field);

  final float acceptableOverheadRatio = ((Float) key.custom).floatValue();

  final PagedBytes bytes = new PagedBytes(15);

  int startTermsBPV;

  final int termCountHardLimit;
  if (maxDoc == Integer.MAX_VALUE) {
    termCountHardLimit = Integer.MAX_VALUE;
  } else {
    termCountHardLimit = maxDoc+1;
  }

  // TODO: use Uninvert?
  if (terms != null) {
    // Try for coarse estimate for number of bits; this
    // should be an underestimate most of the time, which
    // is fine -- GrowableWriter will reallocate as needed
    long numUniqueTerms = terms.size();
    if (numUniqueTerms != -1L) {
      if (numUniqueTerms > termCountHardLimit) {
        // app is misusing the API (there is more than
        // one term per doc); in this case we make best
        // effort to load what we can (see LUCENE-2142)
        numUniqueTerms = termCountHardLimit;
      }

      startTermsBPV = PackedInts.bitsRequired(numUniqueTerms);
    } else {
      startTermsBPV = 1;
    }
  } else {
    startTermsBPV = 1;
  }

  MonotonicAppendingLongBuffer termOrdToBytesOffset = new MonotonicAppendingLongBuffer();
  final GrowableWriter docToTermOrd = new GrowableWriter(startTermsBPV, maxDoc, acceptableOverheadRatio);

  int termOrd = 0;

  // TODO: use Uninvert?

  if (terms != null) {
    final TermsEnum termsEnum = terms.iterator(null);
    DocsEnum docs = null;

    while(true) {
      final BytesRef term = termsEnum.next();
      if (term == null) {
        break;
      }
      if (termOrd >= termCountHardLimit) {
        break;
      }

      termOrdToBytesOffset.add(bytes.copyUsingLengthPrefix(term));
      docs = termsEnum.docs(null, docs, DocsEnum.FLAG_NONE);
      while (true) {
        final int docID = docs.nextDoc();
        if (docID == DocIdSetIterator.NO_MORE_DOCS) {
          break;
        }
        // Store 1+ ord into packed bits
        docToTermOrd.set(docID, 1+termOrd);
      }
      termOrd++;
    }
  }
  termOrdToBytesOffset.freeze();

  // maybe an int-only impl?
  return new SortedDocValuesImpl(bytes.freeze(true), termOrdToBytesOffset, docToTermOrd.getMutable(), termOrd);
}
 
Developer ID: yintaoxue, Project: read-open-source-code, Lines: 82, Source: FieldCacheImpl.java

Example 15: PackedNumericFieldUpdates

import org.apache.lucene.util.packed.PackedInts; // import the package/class this method depends on
public PackedNumericFieldUpdates(int maxDoc) {
  docsWithField = new FixedBitSet(64);
  docs = new PagedMutable(1, 1024, PackedInts.bitsRequired(maxDoc - 1), PackedInts.COMPACT);
  values = new PagedGrowableWriter(1, 1024, 1, PackedInts.FAST);
  size = 0;
}
 
Developer ID: yintaoxue, Project: read-open-source-code, Lines: 7, Source: NumericFieldUpdates.java


Note: The org.apache.lucene.util.packed.PackedInts.bitsRequired examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets come from open-source projects contributed by their respective developers; copyright remains with the original authors, and any distribution or use should follow the corresponding project's license. Do not reproduce without permission.