

Java RamUsageEstimator.shallowSizeOfInstance Method Code Examples

This article collects typical usage examples of the Java method org.apache.lucene.util.RamUsageEstimator.shallowSizeOfInstance. If you are wondering what this method does, how to call it, or are looking for concrete examples, the curated code samples below may help. You can also explore further usage examples of the enclosing class, org.apache.lucene.util.RamUsageEstimator.


Below are 6 code examples of the RamUsageEstimator.shallowSizeOfInstance method, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
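Before the examples, a minimal sketch of what shallowSizeOfInstance estimates: the shallow memory footprint of one instance of a class (object header plus its declared instance fields, rounded up to the JVM's object alignment), ignoring any objects the fields reference. The sketch below approximates this with plain JDK reflection. The constants used (12-byte header, 4-byte references, 8-byte alignment) are assumed 64-bit HotSpot defaults with compressed oops; the real RamUsageEstimator probes the running VM for these values instead.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

public class ShallowSizeSketch {
    // Assumed 64-bit HotSpot defaults with compressed oops;
    // Lucene's RamUsageEstimator detects these at runtime.
    static final int HEADER = 12, ALIGN = 8, REF = 4;

    static int primitiveSize(Class<?> t) {
        if (t == long.class || t == double.class) return 8;
        if (t == int.class  || t == float.class)  return 4;
        if (t == short.class || t == char.class)  return 2;
        return 1; // byte, boolean
    }

    /** Rough analogue of RamUsageEstimator.shallowSizeOfInstance:
     *  object header + instance fields of the class and its superclasses, aligned. */
    static long shallowSizeOfInstance(Class<?> clazz) {
        long size = HEADER;
        for (Class<?> c = clazz; c != null; c = c.getSuperclass()) {
            for (Field f : c.getDeclaredFields()) {
                // static fields belong to the class, not the instance
                if (Modifier.isStatic(f.getModifiers())) continue;
                size += f.getType().isPrimitive() ? primitiveSize(f.getType()) : REF;
            }
        }
        // round up to the object alignment boundary
        return (size + ALIGN - 1) / ALIGN * ALIGN;
    }

    static class Example { long a; int b; Object ref; }

    public static void main(String[] args) {
        // 12 (header) + 8 + 4 + 4 (fields) = 28, aligned to 32
        System.out.println(shallowSizeOfInstance(Example.class)); // prints 32
    }
}
```

This per-class estimate is why the examples below can compute it once in the constructor (or multiply it by an element count, as Example 4 does for Term) and cache the result in a ramBytesUsed field: the shallow size depends only on the class layout, not on instance state.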

Example 1: Lucene45DocValuesProducer

import org.apache.lucene.util.RamUsageEstimator; // import the package/class this method depends on
/** expert: instantiates a new reader */
@SuppressWarnings("deprecation")
protected Lucene45DocValuesProducer(SegmentReadState state, String dataCodec, String dataExtension, String metaCodec, String metaExtension) throws IOException {
  Version ver = state.segmentInfo.getVersion();
  lenientFieldInfoCheck = Version.LUCENE_4_9_0.onOrAfter(ver);
  String metaName = IndexFileNames.segmentFileName(state.segmentInfo.name, state.segmentSuffix, metaExtension);
  // read in the entries from the metadata file.
  ChecksumIndexInput in = state.directory.openChecksumInput(metaName, state.context);
  this.maxDoc = state.segmentInfo.getDocCount();
  boolean success = false;
  try {
    version = CodecUtil.checkHeader(in, metaCodec, 
                                    Lucene45DocValuesFormat.VERSION_START,
                                    Lucene45DocValuesFormat.VERSION_CURRENT);
    numerics = new HashMap<>();
    ords = new HashMap<>();
    ordIndexes = new HashMap<>();
    binaries = new HashMap<>();
    sortedSets = new HashMap<>();
    readFields(in, state.fieldInfos);

    if (version >= Lucene45DocValuesFormat.VERSION_CHECKSUM) {
      CodecUtil.checkFooter(in);
    } else {
      CodecUtil.checkEOF(in);
    }

    success = true;
  } finally {
    if (success) {
      IOUtils.close(in);
    } else {
      IOUtils.closeWhileHandlingException(in);
    }
  }

  String dataName = IndexFileNames.segmentFileName(state.segmentInfo.name, state.segmentSuffix, dataExtension);
  this.data = state.directory.openInput(dataName, state.context);
  success = false;
  try {
    final int version2 = CodecUtil.checkHeader(data, dataCodec, 
                                               Lucene45DocValuesFormat.VERSION_START,
                                               Lucene45DocValuesFormat.VERSION_CURRENT);
    if (version != version2) {
      throw new CorruptIndexException("Format versions mismatch");
    }
    
    if (version >= Lucene45DocValuesFormat.VERSION_CHECKSUM) {
      // NOTE: data file is too costly to verify checksum against all the bytes on open,
      // but for now we at least verify proper structure of the checksum footer: which looks
      // for FOOTER_MAGIC + algorithmID. This is cheap and can detect some forms of corruption
      // such as file truncation.
      CodecUtil.retrieveChecksum(data);
    }

    success = true;
  } finally {
    if (!success) {
      IOUtils.closeWhileHandlingException(this.data);
    }
  }
  
  ramBytesUsed = new AtomicLong(RamUsageEstimator.shallowSizeOfInstance(getClass()));
}
 
Developer ID: lamsfoundation; Project: lams; Lines of code: 65; Source file: Lucene45DocValuesProducer.java

Example 2: Lucene49DocValuesProducer

import org.apache.lucene.util.RamUsageEstimator; // import the package/class this method depends on
/** expert: instantiates a new reader */
Lucene49DocValuesProducer(SegmentReadState state, String dataCodec, String dataExtension, String metaCodec, String metaExtension) throws IOException {
  String metaName = IndexFileNames.segmentFileName(state.segmentInfo.name, state.segmentSuffix, metaExtension);
  // read in the entries from the metadata file.
  ChecksumIndexInput in = state.directory.openChecksumInput(metaName, state.context);
  this.maxDoc = state.segmentInfo.getDocCount();
  boolean success = false;
  try {
    version = CodecUtil.checkHeader(in, metaCodec, 
                                    Lucene49DocValuesFormat.VERSION_START,
                                    Lucene49DocValuesFormat.VERSION_CURRENT);
    numerics = new HashMap<>();
    ords = new HashMap<>();
    ordIndexes = new HashMap<>();
    binaries = new HashMap<>();
    sortedSets = new HashMap<>();
    sortedNumerics = new HashMap<>();
    readFields(in, state.fieldInfos);

    CodecUtil.checkFooter(in);
    success = true;
  } finally {
    if (success) {
      IOUtils.close(in);
    } else {
      IOUtils.closeWhileHandlingException(in);
    }
  }

  String dataName = IndexFileNames.segmentFileName(state.segmentInfo.name, state.segmentSuffix, dataExtension);
  this.data = state.directory.openInput(dataName, state.context);
  success = false;
  try {
    final int version2 = CodecUtil.checkHeader(data, dataCodec, 
                                               Lucene49DocValuesFormat.VERSION_START,
                                               Lucene49DocValuesFormat.VERSION_CURRENT);
    if (version != version2) {
      throw new CorruptIndexException("Format versions mismatch");
    }
    
    // NOTE: data file is too costly to verify checksum against all the bytes on open,
    // but for now we at least verify proper structure of the checksum footer: which looks
    // for FOOTER_MAGIC + algorithmID. This is cheap and can detect some forms of corruption
    // such as file truncation.
    CodecUtil.retrieveChecksum(data);

    success = true;
  } finally {
    if (!success) {
      IOUtils.closeWhileHandlingException(this.data);
    }
  }
  
  ramBytesUsed = new AtomicLong(RamUsageEstimator.shallowSizeOfInstance(getClass()));
}
 
Developer ID: lamsfoundation; Project: lams; Lines of code: 56; Source file: Lucene49DocValuesProducer.java

Example 3: Lucene49NormsProducer

import org.apache.lucene.util.RamUsageEstimator; // import the package/class this method depends on
Lucene49NormsProducer(SegmentReadState state, String dataCodec, String dataExtension, String metaCodec, String metaExtension) throws IOException {
  maxDoc = state.segmentInfo.getDocCount();
  String metaName = IndexFileNames.segmentFileName(state.segmentInfo.name, state.segmentSuffix, metaExtension);
  // read in the entries from the metadata file.
  ChecksumIndexInput in = state.directory.openChecksumInput(metaName, state.context);
  boolean success = false;
  ramBytesUsed = new AtomicLong(RamUsageEstimator.shallowSizeOfInstance(getClass()));
  try {
    version = CodecUtil.checkHeader(in, metaCodec, VERSION_START, VERSION_CURRENT);
    readFields(in, state.fieldInfos);
    CodecUtil.checkFooter(in);
    success = true;
  } finally {
    if (success) {
      IOUtils.close(in);
    } else {
      IOUtils.closeWhileHandlingException(in);
    }
  }

  String dataName = IndexFileNames.segmentFileName(state.segmentInfo.name, state.segmentSuffix, dataExtension);
  this.data = state.directory.openInput(dataName, state.context);
  success = false;
  try {
    final int version2 = CodecUtil.checkHeader(data, dataCodec, VERSION_START, VERSION_CURRENT);
    if (version != version2) {
      throw new CorruptIndexException("Format versions mismatch");
    }
    
    // NOTE: data file is too costly to verify checksum against all the bytes on open,
    // but for now we at least verify proper structure of the checksum footer: which looks
    // for FOOTER_MAGIC + algorithmID. This is cheap and can detect some forms of corruption
    // such as file truncation.
    CodecUtil.retrieveChecksum(data);

    success = true;
  } finally {
    if (!success) {
      IOUtils.closeWhileHandlingException(this.data);
    }
  }
}
 
Developer ID: lamsfoundation; Project: lams; Lines of code: 43; Source file: Lucene49NormsProducer.java

Example 4: TermInfosReaderIndex

import org.apache.lucene.util.RamUsageEstimator; // import the package/class this method depends on
/**
 * Loads the segment information at segment load time.
 * 
 * @param indexEnum
 *          the term enum.
 * @param indexDivisor
 *          the index divisor.
 * @param tiiFileLength
 *          the size of the tii file, used to approximate the size of the
 *          buffer.
 * @param totalIndexInterval
 *          the total index interval.
 */
TermInfosReaderIndex(SegmentTermEnum indexEnum, int indexDivisor, long tiiFileLength, int totalIndexInterval) throws IOException {
  this.totalIndexInterval = totalIndexInterval;
  indexSize = 1 + ((int) indexEnum.size - 1) / indexDivisor;
  skipInterval = indexEnum.skipInterval;
  // this is only an initial size, it will be GCed once the build is complete
  long initialSize = (long) (tiiFileLength * 1.5) / indexDivisor;
  PagedBytes dataPagedBytes = new PagedBytes(estimatePageBits(initialSize));
  PagedBytesDataOutput dataOutput = dataPagedBytes.getDataOutput();

  final int bitEstimate = 1+MathUtil.log(tiiFileLength, 2);
  GrowableWriter indexToTerms = new GrowableWriter(bitEstimate, indexSize, PackedInts.DEFAULT);

  String currentField = null;
  List<String> fieldStrs = new ArrayList<>();
  int fieldCounter = -1;
  for (int i = 0; indexEnum.next(); i++) {
    Term term = indexEnum.term();
    if (currentField == null || !currentField.equals(term.field())) {
      currentField = term.field();
      fieldStrs.add(currentField);
      fieldCounter++;
    }
    TermInfo termInfo = indexEnum.termInfo();
    indexToTerms.set(i, dataOutput.getPosition());
    dataOutput.writeVInt(fieldCounter);
    dataOutput.writeString(term.text());
    dataOutput.writeVInt(termInfo.docFreq);
    if (termInfo.docFreq >= skipInterval) {
      dataOutput.writeVInt(termInfo.skipOffset);
    }
    dataOutput.writeVLong(termInfo.freqPointer);
    dataOutput.writeVLong(termInfo.proxPointer);
    dataOutput.writeVLong(indexEnum.indexPointer);
    for (int j = 1; j < indexDivisor; j++) {
      if (!indexEnum.next()) {
        break;
      }
    }
  }

  fields = new Term[fieldStrs.size()];
  for (int i = 0; i < fields.length; i++) {
    fields[i] = new Term(fieldStrs.get(i));
  }
  
  dataPagedBytes.freeze(true);
  dataInput = dataPagedBytes.getDataInput();
  indexToDataOffset = indexToTerms.getMutable();

  long ramBytesUsed = RamUsageEstimator.shallowSizeOf(fields);
  ramBytesUsed += RamUsageEstimator.shallowSizeOf(dataInput);
  ramBytesUsed += fields.length * RamUsageEstimator.shallowSizeOfInstance(Term.class);
  ramBytesUsed += dataPagedBytes.ramBytesUsed();
  ramBytesUsed += indexToDataOffset.ramBytesUsed();
  this.ramBytesUsed = ramBytesUsed;
}
 
Developer ID: lamsfoundation; Project: lams; Lines of code: 70; Source file: TermInfosReaderIndex.java

Example 5: Lucene42DocValuesProducer

import org.apache.lucene.util.RamUsageEstimator; // import the package/class this method depends on
Lucene42DocValuesProducer(SegmentReadState state, String dataCodec, String dataExtension, String metaCodec, String metaExtension) throws IOException {
  maxDoc = state.segmentInfo.getDocCount();
  String metaName = IndexFileNames.segmentFileName(state.segmentInfo.name, state.segmentSuffix, metaExtension);
  // read in the entries from the metadata file.
  ChecksumIndexInput in = state.directory.openChecksumInput(metaName, state.context);
  boolean success = false;
  ramBytesUsed = new AtomicLong(RamUsageEstimator.shallowSizeOfInstance(getClass()));
  try {
    version = CodecUtil.checkHeader(in, metaCodec, 
                                    VERSION_START,
                                    VERSION_CURRENT);
    numerics = new HashMap<>();
    binaries = new HashMap<>();
    fsts = new HashMap<>();
    readFields(in, state.fieldInfos);

    if (version >= VERSION_CHECKSUM) {
      CodecUtil.checkFooter(in);
    } else {
      CodecUtil.checkEOF(in);
    }
    
    success = true;
  } finally {
    if (success) {
      IOUtils.close(in);
    } else {
      IOUtils.closeWhileHandlingException(in);
    }
  }

  String dataName = IndexFileNames.segmentFileName(state.segmentInfo.name, state.segmentSuffix, dataExtension);
  this.data = state.directory.openInput(dataName, state.context);
  success = false;
  try {
    final int version2 = CodecUtil.checkHeader(data, dataCodec, 
                                               VERSION_START,
                                               VERSION_CURRENT);
    if (version != version2) {
      throw new CorruptIndexException("Format versions mismatch");
    }
    
    if (version >= VERSION_CHECKSUM) {
      // NOTE: data file is too costly to verify checksum against all the bytes on open,
      // but for now we at least verify proper structure of the checksum footer: which looks
      // for FOOTER_MAGIC + algorithmID. This is cheap and can detect some forms of corruption
      // such as file truncation.
      CodecUtil.retrieveChecksum(data);
    }

    success = true;
  } finally {
    if (!success) {
      IOUtils.closeWhileHandlingException(this.data);
    }
  }
}
 
Developer ID: lamsfoundation; Project: lams; Lines of code: 58; Source file: Lucene42DocValuesProducer.java

Example 6: Lucene410DocValuesProducer

import org.apache.lucene.util.RamUsageEstimator; // import the package/class this method depends on
/** expert: instantiates a new reader */
Lucene410DocValuesProducer(SegmentReadState state, String dataCodec, String dataExtension, String metaCodec, String metaExtension) throws IOException {
  String metaName = IndexFileNames.segmentFileName(state.segmentInfo.name, state.segmentSuffix, metaExtension);
  // read in the entries from the metadata file.
  ChecksumIndexInput in = state.directory.openChecksumInput(metaName, state.context);
  this.maxDoc = state.segmentInfo.getDocCount();
  boolean success = false;
  try {
    version = CodecUtil.checkHeader(in, metaCodec, 
                                    Lucene410DocValuesFormat.VERSION_START,
                                    Lucene410DocValuesFormat.VERSION_CURRENT);
    numerics = new HashMap<>();
    ords = new HashMap<>();
    ordIndexes = new HashMap<>();
    binaries = new HashMap<>();
    sortedSets = new HashMap<>();
    sortedNumerics = new HashMap<>();
    readFields(in, state.fieldInfos);

    CodecUtil.checkFooter(in);
    success = true;
  } finally {
    if (success) {
      IOUtils.close(in);
    } else {
      IOUtils.closeWhileHandlingException(in);
    }
  }

  String dataName = IndexFileNames.segmentFileName(state.segmentInfo.name, state.segmentSuffix, dataExtension);
  this.data = state.directory.openInput(dataName, state.context);
  success = false;
  try {
    final int version2 = CodecUtil.checkHeader(data, dataCodec, 
                                               Lucene410DocValuesFormat.VERSION_START,
                                               Lucene410DocValuesFormat.VERSION_CURRENT);
    if (version != version2) {
      throw new CorruptIndexException("Format versions mismatch");
    }
    
    // NOTE: data file is too costly to verify checksum against all the bytes on open,
    // but for now we at least verify proper structure of the checksum footer: which looks
    // for FOOTER_MAGIC + algorithmID. This is cheap and can detect some forms of corruption
    // such as file truncation.
    CodecUtil.retrieveChecksum(data);

    success = true;
  } finally {
    if (!success) {
      IOUtils.closeWhileHandlingException(this.data);
    }
  }
  
  ramBytesUsed = new AtomicLong(RamUsageEstimator.shallowSizeOfInstance(getClass()));
}
 
Developer ID: lamsfoundation; Project: lams; Lines of code: 56; Source file: Lucene410DocValuesProducer.java


Note: The org.apache.lucene.util.RamUsageEstimator.shallowSizeOfInstance examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are drawn from open-source projects contributed by their authors, and copyright remains with the original authors; consult each project's license before redistributing or reusing the code. Do not reproduce this article without permission.