

Java Object2IntMap.containsKey Method Code Examples

This article collects typical usage examples of the Java method it.unimi.dsi.fastutil.objects.Object2IntMap.containsKey. If you are wondering how Object2IntMap.containsKey works or how to use it in practice, the curated code examples below may help. You can also explore further usage examples of it.unimi.dsi.fastutil.objects.Object2IntMap.


The following shows 7 code examples of the Object2IntMap.containsKey method, sorted by popularity by default.
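Before the examples, a minimal sketch of why containsKey matters for a primitive-valued fastutil map (assuming fastutil is on the classpath; the class name ContainsKeyDemo is illustrative): getInt on an absent key returns the map's default return value (0 unless changed), so containsKey is the only way to distinguish "key absent" from "key mapped to 0".

```java
import it.unimi.dsi.fastutil.objects.Object2IntMap;
import it.unimi.dsi.fastutil.objects.Object2IntOpenHashMap;

public class ContainsKeyDemo {
    public static void main(String[] args) {
        Object2IntMap<String> counts = new Object2IntOpenHashMap<>();
        counts.put("apple", 0); // a legitimate stored value of 0

        // containsKey distinguishes an absent key from a key mapped to 0.
        System.out.println(counts.containsKey("apple")); // true
        System.out.println(counts.containsKey("pear"));  // false

        // getInt returns the default return value (0) for the absent key,
        // which is indistinguishable from the stored 0 without containsKey.
        System.out.println(counts.getInt("apple"));      // 0
        System.out.println(counts.getInt("pear"));       // 0
    }
}
```

Several of the examples below rely on exactly this distinction, e.g. counting occurrences only for keys that were pre-registered in the map.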

Example 1: calculateTermFreq

import it.unimi.dsi.fastutil.objects.Object2IntMap; // import the package/class the method depends on
/**
 * Calculates a vector of attributes from a list of tokens
 * 
 * @param tokens the input tokens
 * @param prefix the prefix of each vector attribute
 * @param freqWeights whether to use frequency counts (true) or boolean weights (false)
 * @return an Object2IntMap object mapping the attributes to their values
 */		
public static Object2IntMap<String> calculateTermFreq(List<String> tokens, String prefix, boolean freqWeights) {
	Object2IntMap<String> termFreq = new Object2IntOpenHashMap<String>();

	// Traverse the tokens and increment the counter each time a token
	// has already been seen
	for (String token : tokens) {
		// add frequency weights if the flag is set
		if (freqWeights)
			termFreq.put(prefix + token, termFreq.getInt(prefix + token) + 1);
		// otherwise, just consider boolean weights
		// (note: keys are stored with the prefix, so check for prefix + token)
		else {
			if (!termFreq.containsKey(prefix + token))
				termFreq.put(prefix + token, 1);
		}
	}
	}

	return termFreq;
}
 
Author: felipebravom, Project: AffectiveTweets, Lines: 26, Source: Utils.java

Example 2: add

import it.unimi.dsi.fastutil.objects.Object2IntMap; // import the package/class the method depends on
private <T> boolean add(Object2IntMap<T> map, T item) {
  if (!map.containsKey(item)) {
    map.put(item, NOT_SET);
    return true;
  }
  return false;
}
 
Author: inferjay, Project: r8, Lines: 8, Source: FileWriter.java

Example 3: updateFlushThresholdForSegmentMetadata

import it.unimi.dsi.fastutil.objects.Object2IntMap; // import the package/class the method depends on
void updateFlushThresholdForSegmentMetadata(LLCRealtimeSegmentZKMetadata segmentZKMetadata,
    ZNRecord partitionAssignment, int tableFlushSize) {
  // If config does not have a flush threshold, use the default.
  if (tableFlushSize < 1) {
    tableFlushSize = KafkaHighLevelStreamProviderConfig.getDefaultMaxRealtimeRowsCount();
  }

  // Gather list of instances for this partition
  Object2IntMap<String> partitionCountForInstance = new Object2IntLinkedOpenHashMap<>();
  String segmentPartitionId = new LLCSegmentName(segmentZKMetadata.getSegmentName()).getPartitionRange();
  for (String instanceName : partitionAssignment.getListField(segmentPartitionId)) {
    partitionCountForInstance.put(instanceName, 0);
  }

  // Find the maximum number of partitions served for each instance that is serving this segment
  int maxPartitionCountPerInstance = 1;
  for (Map.Entry<String, List<String>> partitionAndInstanceList : partitionAssignment.getListFields().entrySet()) {
    for (String instance : partitionAndInstanceList.getValue()) {
      if (partitionCountForInstance.containsKey(instance)) {
        int partitionCountForThisInstance = partitionCountForInstance.getInt(instance);
        partitionCountForThisInstance++;
        partitionCountForInstance.put(instance, partitionCountForThisInstance);

        if (maxPartitionCountPerInstance < partitionCountForThisInstance) {
          maxPartitionCountPerInstance = partitionCountForThisInstance;
        }
      }
    }
  }

  // Configure the segment size flush limit based on the maximum number of partitions allocated to a replica
  int segmentFlushSize = (int) (((float) tableFlushSize) / maxPartitionCountPerInstance);
  segmentZKMetadata.setSizeThresholdToFlushSegment(segmentFlushSize);
}
 
Author: linkedin, Project: pinot, Lines: 35, Source: PinotLLCRealtimeSegmentManager.java

Example 4: removeSampleReads

import it.unimi.dsi.fastutil.objects.Object2IntMap; // import the package/class the method depends on
private void removeSampleReads(final int sampleIndex, final Collection<GATKSAMRecord> readsToRemove, final int alleleCount) {
    final GATKSAMRecord[] sampleReads = readsBySampleIndex[sampleIndex];
    final int sampleReadCount = sampleReads.length;

    final Object2IntMap<GATKSAMRecord> indexByRead = readIndexBySampleIndex(sampleIndex);
    // Count how many we are going to remove, which ones (indexes) and remove entry from the read-index map.
    final boolean[] removeIndex = new boolean[sampleReadCount];
    int removeCount = 0; // captures the number of deletions.
    int firstDeleted = sampleReadCount;    // captures the first position that was deleted.

    final Iterator<GATKSAMRecord> readsToRemoveIterator = readsToRemove.iterator();
    while (readsToRemoveIterator.hasNext()) {
        final GATKSAMRecord read = readsToRemoveIterator.next();
        if (indexByRead.containsKey(read)) {
            final int index = indexByRead.getInt(read);
            if (firstDeleted > index)
                firstDeleted = index;
            removeCount++;
            removeIndex[index] = true;
            readsToRemoveIterator.remove();
            indexByRead.remove(read);
        }
    }

    // Nothing to remove we just finish here.
    if (removeCount == 0)
        return;

    final int newSampleReadCount = sampleReadCount - removeCount;

    // Now we skim out the removed reads from the read array.
    final GATKSAMRecord[] oldSampleReads = readsBySampleIndex[sampleIndex];
    final GATKSAMRecord[] newSampleReads = new GATKSAMRecord[newSampleReadCount];

    System.arraycopy(oldSampleReads, 0, newSampleReads, 0, firstDeleted);
    Utils.skimArray(oldSampleReads, firstDeleted, newSampleReads, firstDeleted, removeIndex, firstDeleted);

    // Update the indices for the extant reads from the first deletion onwards.
    for (int r = firstDeleted; r < newSampleReadCount; r++) {
        indexByRead.put(newSampleReads[r], r);
    }

    // Then we skim out the likelihoods of the removed reads.
    final double[][] oldSampleValues = valuesBySampleIndex[sampleIndex];
    final double[][] newSampleValues = new double[alleleCount][newSampleReadCount];
    for (int a = 0; a < alleleCount; a++) {
        System.arraycopy(oldSampleValues[a], 0, newSampleValues[a], 0, firstDeleted);
        Utils.skimArray(oldSampleValues[a], firstDeleted, newSampleValues[a], firstDeleted, removeIndex, firstDeleted);
    }
    valuesBySampleIndex[sampleIndex] = newSampleValues;
    readsBySampleIndex[sampleIndex] = newSampleReads;
    readListBySampleIndex[sampleIndex] = null; // reset the unmodifiable list.
}
 
Author: PAA-NCIC, Project: SparkSeq, Lines: 54, Source: ReadLikelihoods.java

Example 5: create

import it.unimi.dsi.fastutil.objects.Object2IntMap; // import the package/class the method depends on
@Override
public FreqKList create(ITable data) {
    data.getColumns(this.schema);
    Hash.Strategy<BaseRowSnapshot> hs = new Hash.Strategy<BaseRowSnapshot>() {
        @Override
        public int hashCode(BaseRowSnapshot brs) {
            if (brs instanceof VirtualRowSnapshot) {
                return brs.hashCode();
            } else if (brs instanceof RowSnapshot) {
                return brs.computeHashCode(ExactFreqSketch.this.schema);
            } else throw new RuntimeException("Unknown type encountered");
        }

        @Override
        public boolean equals(BaseRowSnapshot brs1, @Nullable BaseRowSnapshot brs2) {
            // brs2 is null because the hashmap explicitly calls with null
            // even if null cannot be a key.
            if (brs2 == null)
                return brs1 == null;
            return brs1.compareForEquality(brs2, ExactFreqSketch.this.schema);
        }
    };

    Object2IntMap<BaseRowSnapshot> hMap = new
            Object2IntOpenCustomHashMap<BaseRowSnapshot>(hs);
    this.rssList.forEach(rss -> hMap.put(rss, 0));
    IRowIterator rowIt = data.getRowIterator();
    int i = rowIt.getNextRow();
    VirtualRowSnapshot vrs = new VirtualRowSnapshot(data, this.schema);
    while (i != -1) {
        vrs.setRow(i);
        if (hMap.containsKey(vrs)) {
            int count = hMap.getInt(vrs);
            hMap.put(vrs, count + 1);
        }
        i = rowIt.getNextRow();
    }
    Object2IntOpenHashMap<RowSnapshot> hm = new Object2IntOpenHashMap<RowSnapshot>(this.rssList.size());
    this.rssList.forEach(rss -> hm.put(rss, hMap.getInt(rss)));
    return new FreqKList(data.getNumOfRows(), this.epsilon, hm);
}
 
Author: vmware, Project: hillview, Lines: 42, Source: ExactFreqSketch.java

Example 6: VRSTest2

import it.unimi.dsi.fastutil.objects.Object2IntMap; // import the package/class the method depends on
@Test
public void VRSTest2() {
    ITable data = TestTables.testRepTable();
    Schema schema = data.getSchema();
    Hash.Strategy<BaseRowSnapshot> hs = new Hash.Strategy<BaseRowSnapshot>() {
        @Override
        public int hashCode(BaseRowSnapshot brs) {
            if (brs instanceof VirtualRowSnapshot) {
                return brs.hashCode();
            } else if (brs instanceof RowSnapshot) {
                return brs.computeHashCode(schema);
            } else
                throw new RuntimeException("Unknown type encountered");
        }

        @Override
        public boolean equals(BaseRowSnapshot brs1, @Nullable BaseRowSnapshot brs2) {
            // brs2 is null because the hashmap explicitly calls with null
            // even if null cannot be a key.
            if (brs2 == null)
                return brs1 == null;
            return brs1.compareForEquality(brs2, schema);
        }
    };
    Object2IntMap<BaseRowSnapshot> hMap = new
            Object2IntOpenCustomHashMap<BaseRowSnapshot>(hs);
    for (int i = 0; i < 2; i++ ) {
        BaseRowSnapshot rs = new RowSnapshot(data, i);
        hMap.put(rs, 0);
    }
    VirtualRowSnapshot vrs = new VirtualRowSnapshot(data);
    IRowIterator rowIt = data.getRowIterator();
    vrs.setRow(0);
    if (hMap.containsKey(vrs)) {
        System.out.println("A hit!\n");
        int count = hMap.getInt(vrs);
        hMap.put(vrs, count + 1);
    } else {
        throw new RuntimeException("Not found");
    }
}
 
Author: vmware, Project: hillview, Lines: 42, Source: VirtualRowSnapshotTest.java

Example 7: removeSampleReads

import it.unimi.dsi.fastutil.objects.Object2IntMap; // import the package/class the method depends on
public void removeSampleReads(final int sampleIndex, final Collection<GATKRead> readsToRemove, final int alleleCount) {
    final GATKRead[] sampleReads = readsBySampleIndex[sampleIndex];
    final int sampleReadCount = sampleReads.length;

    final Object2IntMap<GATKRead> indexByRead = readIndexBySampleIndex(sampleIndex);
    // Count how many we are going to remove, which ones (indexes) and remove entry from the read-index map.
    final boolean[] removeIndex = new boolean[sampleReadCount];
    int removeCount = 0; // captures the number of deletions.
    int firstDeleted = sampleReadCount;    // captures the first position that was deleted.

    final Iterator<GATKRead> readsToRemoveIterator = readsToRemove.iterator();
    while (readsToRemoveIterator.hasNext()) {
        final GATKRead read = readsToRemoveIterator.next();
        if (indexByRead.containsKey(read)) {
            final int index = indexByRead.getInt(read);
            if (firstDeleted > index) {
                firstDeleted = index;
            }
            removeCount++;
            removeIndex[index] = true;
            readsToRemoveIterator.remove();
            indexByRead.remove(read);
        }
    }

    // Nothing to remove we just finish here.
    if (removeCount == 0) {
        return;
    }

    final int newSampleReadCount = sampleReadCount - removeCount;

    // Now we skim out the removed reads from the read array.
    final GATKRead[] oldSampleReads = readsBySampleIndex[sampleIndex];
    final GATKRead[] newSampleReads = new GATKRead[newSampleReadCount];

    System.arraycopy(oldSampleReads, 0, newSampleReads, 0, firstDeleted);
    Utils.skimArray(oldSampleReads, firstDeleted, newSampleReads, firstDeleted, removeIndex, firstDeleted);

    // Update the indices for the extant reads from the first deletion onwards.
    for (int r = firstDeleted; r < newSampleReadCount; r++) {
        indexByRead.put(newSampleReads[r], r);
    }

    // Then we skim out the likelihoods of the removed reads.
    final double[][] oldSampleValues = valuesBySampleIndex[sampleIndex];
    final double[][] newSampleValues = new double[alleleCount][newSampleReadCount];
    for (int a = 0; a < alleleCount; a++) {
        System.arraycopy(oldSampleValues[a], 0, newSampleValues[a], 0, firstDeleted);
        Utils.skimArray(oldSampleValues[a], firstDeleted, newSampleValues[a], firstDeleted, removeIndex, firstDeleted);
    }
    valuesBySampleIndex[sampleIndex] = newSampleValues;
    readsBySampleIndex[sampleIndex] = newSampleReads;
    readListBySampleIndex[sampleIndex] = null; // reset the unmodifiable list.
}
 
Author: broadinstitute, Project: gatk, Lines: 56, Source: ReadLikelihoods.java


Note: the it.unimi.dsi.fastutil.objects.Object2IntMap.containsKey examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their respective developers, and copyright remains with the original authors. Consult each project's license before distributing or using the code; do not reproduce without permission.