

Java Object2IntMap.getInt Method Code Examples

This article collects typical usage examples of the Java method it.unimi.dsi.fastutil.objects.Object2IntMap.getInt. If you are wondering what Object2IntMap.getInt does, how to call it, or where to find working examples, the curated snippets below may help. You can also browse further usage examples of the enclosing class, it.unimi.dsi.fastutil.objects.Object2IntMap.


The sections below present 11 code examples of Object2IntMap.getInt, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
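Before the examples, a minimal self-contained sketch of the method's behavior (the class and method names here are illustrative, not from any of the projects below): getInt returns the mapped value as a primitive int, avoiding boxing, and yields the map's default return value (0 unless changed) for a missing key.

```java
import it.unimi.dsi.fastutil.objects.Object2IntMap;
import it.unimi.dsi.fastutil.objects.Object2IntOpenHashMap;

public class GetIntDemo {
    // Returns the count stored for key, or the map's default return
    // value (0 unless changed) when the key is absent.
    public static int countFor(String key) {
        Object2IntMap<String> counts = new Object2IntOpenHashMap<>();
        counts.put("apple", 3);
        // getInt returns a primitive int, avoiding the boxing that
        // Map.get(Object) would incur.
        return counts.getInt(key);
    }

    public static void main(String[] args) {
        System.out.println(countFor("apple")); // prints 3
        System.out.println(countFor("pear"));  // prints 0 (default return value)
    }
}
```

Several examples below (1, 8, 9, 10, 11) rely on exactly this missing-key behavior to accumulate frequencies without a containsKey check.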

Example 1: addDoc

import it.unimi.dsi.fastutil.objects.Object2IntMap; // import the package/class this method depends on
public void addDoc(Object2IntMap<String> docVector){
	this.numDoc++;
	for(String vecWord:docVector.keySet()){
		int vecWordFreq=docVector.getInt(vecWord);
		// if the word was seen before we add the current frequency
		this.wordSpace.put(vecWord,vecWordFreq+this.wordSpace.getInt(vecWord));
	}	

}
 
Developer: felipebravom, Project: AffectiveTweets, Lines: 10, Source: TweetCentroid.java

Example 2: updateFlushThresholdForSegmentMetadata

import it.unimi.dsi.fastutil.objects.Object2IntMap; // import the package/class this method depends on
void updateFlushThresholdForSegmentMetadata(LLCRealtimeSegmentZKMetadata segmentZKMetadata,
    ZNRecord partitionAssignment, int tableFlushSize) {
  // If config does not have a flush threshold, use the default.
  if (tableFlushSize < 1) {
    tableFlushSize = KafkaHighLevelStreamProviderConfig.getDefaultMaxRealtimeRowsCount();
  }

  // Gather list of instances for this partition
  Object2IntMap<String> partitionCountForInstance = new Object2IntLinkedOpenHashMap<>();
  String segmentPartitionId = new LLCSegmentName(segmentZKMetadata.getSegmentName()).getPartitionRange();
  for (String instanceName : partitionAssignment.getListField(segmentPartitionId)) {
    partitionCountForInstance.put(instanceName, 0);
  }

  // Find the maximum number of partitions served for each instance that is serving this segment
  int maxPartitionCountPerInstance = 1;
  for (Map.Entry<String, List<String>> partitionAndInstanceList : partitionAssignment.getListFields().entrySet()) {
    for (String instance : partitionAndInstanceList.getValue()) {
      if (partitionCountForInstance.containsKey(instance)) {
        int partitionCountForThisInstance = partitionCountForInstance.getInt(instance);
        partitionCountForThisInstance++;
        partitionCountForInstance.put(instance, partitionCountForThisInstance);

        if (maxPartitionCountPerInstance < partitionCountForThisInstance) {
          maxPartitionCountPerInstance = partitionCountForThisInstance;
        }
      }
    }
  }

  // Configure the segment size flush limit based on the maximum number of partitions allocated to a replica
  int segmentFlushSize = (int) (((float) tableFlushSize) / maxPartitionCountPerInstance);
  segmentZKMetadata.setSizeThresholdToFlushSegment(segmentFlushSize);
}
 
Developer: linkedin, Project: pinot, Lines: 35, Source: PinotLLCRealtimeSegmentManager.java

Example 3: getKeyForValue

import it.unimi.dsi.fastutil.objects.Object2IntMap; // import the package/class this method depends on
@SuppressWarnings("unchecked")
private int getKeyForValue(String value) {
  Object2IntMap<String> map = (Object2IntMap<String>) _groupKeyMap;
  int groupId = map.getInt(value);
  if (groupId == INVALID_ID) {
    groupId = _numGroupKeys;
    map.put(value, _numGroupKeys++);
  }
  return groupId;
}
 
Developer: linkedin, Project: pinot, Lines: 11, Source: NoDictionarySingleColumnGroupKeyGenerator.java
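The sentinel comparison in Example 3 (`groupId == INVALID_ID`) only works if the map's default return value was configured elsewhere in the class; fastutil maps otherwise return 0 for missing keys. A minimal sketch of that setup (the INVALID_ID value and field layout here are assumptions, not copied from the project):

```java
import it.unimi.dsi.fastutil.objects.Object2IntMap;
import it.unimi.dsi.fastutil.objects.Object2IntOpenHashMap;

public class GroupKeySketch {
    private static final int INVALID_ID = -1;
    private final Object2IntMap<String> map = new Object2IntOpenHashMap<>();
    private int numGroupKeys = 0;

    public GroupKeySketch() {
        // Without this call, getInt on a missing key returns 0, which
        // would collide with the first real group id.
        map.defaultReturnValue(INVALID_ID);
    }

    // Returns the existing group id for value, assigning a new one on first sight.
    public int getKeyForValue(String value) {
        int groupId = map.getInt(value);
        if (groupId == INVALID_ID) {
            groupId = numGroupKeys;
            map.put(value, numGroupKeys++);
        }
        return groupId;
    }
}
```

The single getInt call replaces the containsKey-then-get pair, so each lookup hashes the key only once.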

Example 4: removeSampleReads

import it.unimi.dsi.fastutil.objects.Object2IntMap; // import the package/class this method depends on
private void removeSampleReads(final int sampleIndex, final Collection<GATKSAMRecord> readsToRemove, final int alleleCount) {
    final GATKSAMRecord[] sampleReads = readsBySampleIndex[sampleIndex];
    final int sampleReadCount = sampleReads.length;

    final Object2IntMap<GATKSAMRecord> indexByRead = readIndexBySampleIndex(sampleIndex);
    // Count how many reads we will remove, record which indexes, and drop their entries from the read-index map.
    final boolean[] removeIndex = new boolean[sampleReadCount];
    int removeCount = 0; // captures the number of deletions.
    int firstDeleted = sampleReadCount;    // captures the first position that was deleted.

    final Iterator<GATKSAMRecord> readsToRemoveIterator = readsToRemove.iterator();
    while (readsToRemoveIterator.hasNext()) {
        final GATKSAMRecord read = readsToRemoveIterator.next();
        if (indexByRead.containsKey(read)) {
            final int index = indexByRead.getInt(read);
            if (firstDeleted > index)
                firstDeleted = index;
            removeCount++;
            removeIndex[index] = true;
            readsToRemoveIterator.remove();
            indexByRead.remove(read);
        }
    }

    // Nothing to remove; we just finish here.
    if (removeCount == 0)
        return;

    final int newSampleReadCount = sampleReadCount - removeCount;

    // Now we skim out the removed reads from the read array.
    final GATKSAMRecord[] oldSampleReads = readsBySampleIndex[sampleIndex];
    final GATKSAMRecord[] newSampleReads = new GATKSAMRecord[newSampleReadCount];

    System.arraycopy(oldSampleReads, 0, newSampleReads, 0, firstDeleted);
    Utils.skimArray(oldSampleReads, firstDeleted, newSampleReads, firstDeleted, removeIndex, firstDeleted);

    // Update the indices for the extant reads from the first deletion onwards.
    for (int r = firstDeleted; r < newSampleReadCount; r++) {
        indexByRead.put(newSampleReads[r], r);
    }

    // Then we skim out the likelihoods of the removed reads.
    final double[][] oldSampleValues = valuesBySampleIndex[sampleIndex];
    final double[][] newSampleValues = new double[alleleCount][newSampleReadCount];
    for (int a = 0; a < alleleCount; a++) {
        System.arraycopy(oldSampleValues[a], 0, newSampleValues[a], 0, firstDeleted);
        Utils.skimArray(oldSampleValues[a], firstDeleted, newSampleValues[a], firstDeleted, removeIndex, firstDeleted);
    }
    valuesBySampleIndex[sampleIndex] = newSampleValues;
    readsBySampleIndex[sampleIndex] = newSampleReads;
    readListBySampleIndex[sampleIndex] = null; // reset the unmodifiable list.
}
 
Developer: PAA-NCIC, Project: SparkSeq, Lines: 54, Source: ReadLikelihoods.java

Example 5: create

import it.unimi.dsi.fastutil.objects.Object2IntMap; // import the package/class this method depends on
@Override
public FreqKList create(ITable data) {
    data.getColumns(this.schema);
    Hash.Strategy<BaseRowSnapshot> hs = new Hash.Strategy<BaseRowSnapshot>() {
        @Override
        public int hashCode(BaseRowSnapshot brs) {
            if (brs instanceof VirtualRowSnapshot) {
                return brs.hashCode();
            } else if (brs instanceof RowSnapshot) {
                return brs.computeHashCode(ExactFreqSketch.this.schema);
            } else throw new RuntimeException("Unknown type encountered");
        }

        @Override
        public boolean equals(BaseRowSnapshot brs1, @Nullable BaseRowSnapshot brs2) {
            // brs2 may be null: the hash map probes with null
            // even though null cannot be a key.
            if (brs2 == null)
                return brs1 == null;
            return brs1.compareForEquality(brs2, ExactFreqSketch.this.schema);
        }
    };

    Object2IntMap<BaseRowSnapshot> hMap = new
            Object2IntOpenCustomHashMap<BaseRowSnapshot>(hs);
    this.rssList.forEach(rss -> hMap.put(rss, 0));
    IRowIterator rowIt = data.getRowIterator();
    int i = rowIt.getNextRow();
    VirtualRowSnapshot vrs = new VirtualRowSnapshot(data, this.schema);
    while (i != -1) {
        vrs.setRow(i);
        if (hMap.containsKey(vrs)) {
            int count = hMap.getInt(vrs);
            hMap.put(vrs, count + 1);
        }
        i = rowIt.getNextRow();
    }
    Object2IntOpenHashMap<RowSnapshot> hm = new Object2IntOpenHashMap<RowSnapshot>(this.rssList.size());
    this.rssList.forEach(rss -> hm.put(rss, hMap.getInt(rss)));
    return new FreqKList(data.getNumOfRows(), this.epsilon, hm);
}
 
Developer: vmware, Project: hillview, Lines: 42, Source: ExactFreqSketch.java

Example 6: VRSTest2

import it.unimi.dsi.fastutil.objects.Object2IntMap; // import the package/class this method depends on
@Test
public void VRSTest2() {
    ITable data = TestTables.testRepTable();
    Schema schema = data.getSchema();
    Hash.Strategy<BaseRowSnapshot> hs = new Hash.Strategy<BaseRowSnapshot>() {
        @Override
        public int hashCode(BaseRowSnapshot brs) {
            if (brs instanceof VirtualRowSnapshot) {
                return brs.hashCode();
            } else if (brs instanceof RowSnapshot) {
                return brs.computeHashCode(schema);
            } else
                throw new RuntimeException("Unknown type encountered");
        }

        @Override
        public boolean equals(BaseRowSnapshot brs1, @Nullable BaseRowSnapshot brs2) {
            // brs2 may be null: the hash map probes with null
            // even though null cannot be a key.
            if (brs2 == null)
                return brs1 == null;
            return brs1.compareForEquality(brs2, schema);
        }
    };
    Object2IntMap<BaseRowSnapshot> hMap = new
            Object2IntOpenCustomHashMap<BaseRowSnapshot>(hs);
    for (int i = 0; i < 2; i++ ) {
        BaseRowSnapshot rs = new RowSnapshot(data, i);
        hMap.put(rs, 0);
    }
    VirtualRowSnapshot vrs = new VirtualRowSnapshot(data);
    IRowIterator rowIt = data.getRowIterator();
    vrs.setRow(0);
    if (hMap.containsKey(vrs)) {
        System.out.println("A hit!\n");
        int count = hMap.getInt(vrs);
        hMap.put(vrs, count + 1);
    } else {
        throw new RuntimeException("Not found");
    }
}
 
Developer: vmware, Project: hillview, Lines: 42, Source: VirtualRowSnapshotTest.java
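Examples 5 and 6 combine getInt with Object2IntOpenCustomHashMap, whose Hash.Strategy lets differently-represented keys hash and compare as equal. A minimal self-contained sketch using a case-insensitive strategy (the strategy itself is illustrative, not taken from the examples):

```java
import it.unimi.dsi.fastutil.Hash;
import it.unimi.dsi.fastutil.objects.Object2IntMap;
import it.unimi.dsi.fastutil.objects.Object2IntOpenCustomHashMap;

public class CustomStrategyDemo {
    // Case-insensitive strategy: "Foo" and "foo" hash and compare as equal.
    static final Hash.Strategy<String> CASE_INSENSITIVE = new Hash.Strategy<String>() {
        @Override
        public int hashCode(String s) {
            return s == null ? 0 : s.toLowerCase().hashCode();
        }

        @Override
        public boolean equals(String a, String b) {
            // The strategy must tolerate nulls; the hash map probes with null.
            if (a == null || b == null) return a == b;
            return a.equalsIgnoreCase(b);
        }
    };

    public static int lookup() {
        Object2IntMap<String> m = new Object2IntOpenCustomHashMap<>(CASE_INSENSITIVE);
        m.put("Hello", 7);
        // Found via the custom strategy, even though the case differs.
        return m.getInt("HELLO");
    }
}
```

This is the same mechanism the hillview examples use to let a VirtualRowSnapshot probe a map keyed by RowSnapshot objects without materializing the row.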

Example 7: removeSampleReads

import it.unimi.dsi.fastutil.objects.Object2IntMap; // import the package/class this method depends on
public void removeSampleReads(final int sampleIndex, final Collection<GATKRead> readsToRemove, final int alleleCount) {
    final GATKRead[] sampleReads = readsBySampleIndex[sampleIndex];
    final int sampleReadCount = sampleReads.length;

    final Object2IntMap<GATKRead> indexByRead = readIndexBySampleIndex(sampleIndex);
    // Count how many reads we will remove, record which indexes, and drop their entries from the read-index map.
    final boolean[] removeIndex = new boolean[sampleReadCount];
    int removeCount = 0; // captures the number of deletions.
    int firstDeleted = sampleReadCount;    // captures the first position that was deleted.

    final Iterator<GATKRead> readsToRemoveIterator = readsToRemove.iterator();
    while (readsToRemoveIterator.hasNext()) {
        final GATKRead read = readsToRemoveIterator.next();
        if (indexByRead.containsKey(read)) {
            final int index = indexByRead.getInt(read);
            if (firstDeleted > index) {
                firstDeleted = index;
            }
            removeCount++;
            removeIndex[index] = true;
            readsToRemoveIterator.remove();
            indexByRead.remove(read);
        }
    }

    // Nothing to remove; we just finish here.
    if (removeCount == 0) {
        return;
    }

    final int newSampleReadCount = sampleReadCount - removeCount;

    // Now we skim out the removed reads from the read array.
    final GATKRead[] oldSampleReads = readsBySampleIndex[sampleIndex];
    final GATKRead[] newSampleReads = new GATKRead[newSampleReadCount];

    System.arraycopy(oldSampleReads, 0, newSampleReads, 0, firstDeleted);
    Utils.skimArray(oldSampleReads, firstDeleted, newSampleReads, firstDeleted, removeIndex, firstDeleted);

    // Update the indices for the extant reads from the first deletion onwards.
    for (int r = firstDeleted; r < newSampleReadCount; r++) {
        indexByRead.put(newSampleReads[r], r);
    }

    // Then we skim out the likelihoods of the removed reads.
    final double[][] oldSampleValues = valuesBySampleIndex[sampleIndex];
    final double[][] newSampleValues = new double[alleleCount][newSampleReadCount];
    for (int a = 0; a < alleleCount; a++) {
        System.arraycopy(oldSampleValues[a], 0, newSampleValues[a], 0, firstDeleted);
        Utils.skimArray(oldSampleValues[a], firstDeleted, newSampleValues[a], firstDeleted, removeIndex, firstDeleted);
    }
    valuesBySampleIndex[sampleIndex] = newSampleValues;
    readsBySampleIndex[sampleIndex] = newSampleReads;
    readListBySampleIndex[sampleIndex] = null; // reset the unmodifiable list.
}
 
Developer: broadinstitute, Project: gatk, Lines: 56, Source: ReadLikelihoods.java

Example 8: mapTargetInstance

import it.unimi.dsi.fastutil.objects.Object2IntMap; // import the package/class this method depends on
public Instances mapTargetInstance(Instances inp){
	// Creates instances with the same format
	Instances result=getOutputFormat();
	Attribute contentAtt=inp.attribute(this.m_textIndex.getIndex());

	for(Instance inst:inp){
		String content=inst.stringValue(contentAtt);

		// tokenises the content 
		List<String> tokens = affective.core.Utils.tokenize(content, this.toLowerCase, this.standarizeUrlsUsers, this.reduceRepeatedLetters, this.m_tokenizer,this.m_stemmer,this.m_stopwordsHandler);

		// Identifies the distinct terms
		AbstractObjectSet<String> terms=new ObjectOpenHashSet<String>();
		terms.addAll(tokens);


		Object2IntMap<String> docVec=this.calculateDocVec(tokens);

		double[] values = new double[result.numAttributes()];


		values[result.classIndex()]= inst.classValue();

		for(String att:docVec.keySet()){

			if(this.m_Dictionary.containsKey(att)){
				int attIndex=this.m_Dictionary.getInt(att);
				// use the raw term frequency from the document vector
				values[attIndex]=docVec.getInt(att);
			}


		}


		Instance outInst=new SparseInstance(1, values);

		inst.setDataset(result);

		result.add(outInst);

	}

	return result;

}
 
Developer: felipebravom, Project: AffectiveTweets, Lines: 47, Source: PTCM.java

Example 9: process

import it.unimi.dsi.fastutil.objects.Object2IntMap; // import the package/class this method depends on
@Override
protected Instances process(Instances instances) throws Exception {



	Instances result;


	// The first batch creates the labelled data
	if(!this.isFirstBatchDone()){
		result = getOutputFormat();

		for(String word:this.wordInfo.keySet()){
			// get the word vector
			WordRep wordRep=this.wordInfo.get(word);

			// We just consider valid words
			if(wordRep.numDoc>=this.minInstDocs){

				// a list of lists of tweet vectors
				ObjectList<ObjectList<Object2IntMap<String>>> partitions=wordRep.partitionate(this.getPartNumber());

				// traverse the partitions
				for(ObjectList<Object2IntMap<String>> tweetPartition:partitions){
					// create one instance per partition	
					double[] values = new double[result.numAttributes()];

					// average the vectors of the tweets in the partition
					// traverse each feature space in the partition
					for(Object2IntMap<String> wordSpace:tweetPartition){

						for(String innerWord:wordSpace.keySet()){
							// only include valid words
							if(this.m_Dictionary.containsKey(innerWord)){
								int attIndex=this.m_Dictionary.getInt(innerWord);
								// we normalise the value by the number of documents
								values[attIndex]+=((double)wordSpace.getInt(innerWord))/tweetPartition.size();					
							}
						}
					}



					String wordPol=this.lex.getNomDict().get(word).get(this.polarityAttName);
					if(wordPol.equals(this.polarityAttNegValName))
						values[result.numAttributes()-1]=0;
					else if(wordPol.equals(this.polarityAttPosValName))
						values[result.numAttributes()-1]=1;
					else
						values[result.numAttributes()-1]= Utils.missingValue();					



					Instance inst=new SparseInstance(1, values);


					inst.setDataset(result);

					result.add(inst);




				}
			}
		}
	}

	// Second batch maps tweets into the corresponding feature space
	else{
		result=this.mapTargetInstance(instances);

	}

	return result;

}
 
Developer: felipebravom, Project: AffectiveTweets, Lines: 78, Source: PTCM.java

Example 10: mapTargetInstance

import it.unimi.dsi.fastutil.objects.Object2IntMap; // import the package/class this method depends on
public Instances mapTargetInstance(Instances inp){

		// Creates instances with the same format
		Instances result=getOutputFormat();


		Attribute contentAtt=inp.attribute(this.m_textIndex.getIndex());


		for(Instance inst:inp){
			String content=inst.stringValue(contentAtt);



			// tokenises the content 
			List<String> tokens = affective.core.Utils.tokenize(content, this.toLowerCase, this.standarizeUrlsUsers, this.reduceRepeatedLetters, this.m_tokenizer,this.m_stemmer,this.m_stopwordsHandler);

			// Identifies the distinct terms
			AbstractObjectSet<String> terms=new ObjectOpenHashSet<String>();
			terms.addAll(tokens);


			Object2IntMap<String> docVec=this.calculateDocVec(tokens);

			double[] values = new double[result.numAttributes()];


			values[result.classIndex()]= inst.classValue();

			for(String att:docVec.keySet()){

				if(this.m_Dictionary.containsKey(att)){
					int attIndex=this.m_Dictionary.getInt(att);
					// use the raw term frequency from the document vector
					values[attIndex]=docVec.getInt(att);
				}


			}


			Instance outInst=new SparseInstance(1, values);

			inst.setDataset(result);

			result.add(outInst);

		}

		return result;

	}
 
Developer: felipebravom, Project: AffectiveTweets, Lines: 53, Source: ASA.java

Example 11: process

import it.unimi.dsi.fastutil.objects.Object2IntMap; // import the package/class this method depends on
@Override
protected Instances process(Instances instances) throws Exception {



	Instances result = getOutputFormat();

	// if we are in the testing data we calculate the word vectors again
	if (this.isFirstBatchDone()) {
		this.tweetsToVectors(instances);
	}


	int i = 0;
	for (Object2IntMap<String> vec : this.procTweets) {
		double[] values = new double[result.numAttributes()];

		// copy previous attributes values
		for (int n = 0; n < instances.numAttributes(); n++)
			values[n] = instances.instance(i).value(n);

		// add words using the frequency as attribute value
		for (String innerAtt : vec.keySet()) {
			// we only add the value of valid attributes
			if (result.attribute(innerAtt) != null){
				int attIndex=result.attribute(innerAtt).index();					
				values[attIndex]=(double)vec.getInt(innerAtt);

			}


		}


		Instance inst=new SparseInstance(1, values);


		inst.setDataset(result);
		// copy possible strings, relational values...
		copyValues(inst, false, instances, result);

		result.add(inst);
		i++;

	}

	return result;
}
 
Developer: felipebravom, Project: AffectiveTweets, Lines: 49, Source: TweetToSparseFeatureVector.java


Note: the it.unimi.dsi.fastutil.objects.Object2IntMap.getInt examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by many programmers; copyright remains with the original authors. Please consult the corresponding project's License before redistributing or using the code, and do not reproduce this article without permission.