

Java IntIntOpenHashMap.putOrAdd Method Code Examples

This article collects typical usage examples of the Java method com.carrotsearch.hppc.IntIntOpenHashMap.putOrAdd, gathered from open-source code. If you are unsure how IntIntOpenHashMap.putOrAdd is used in practice, the hand-picked examples below should help. You can also browse further usage examples of the enclosing class, com.carrotsearch.hppc.IntIntOpenHashMap.


The following sections show 8 code examples of IntIntOpenHashMap.putOrAdd, ordered by popularity by default.
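Before the examples, it helps to know what putOrAdd does: putOrAdd(key, putValue, incrementValue) stores putValue if the key is absent, otherwise adds incrementValue to the stored value, and returns the value now mapped. The stand-in below is a hypothetical sketch of those semantics using a boxed java.util.HashMap (the class and method names here are illustrative, not part of HPPC):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for HPPC's putOrAdd(key, putValue, incrementValue):
// if the key is absent, store putValue; otherwise add incrementValue to the
// stored value. Returns the value now mapped to the key.
public class PutOrAddSketch {

    static int putOrAdd(Map<Integer, Integer> map, int key, int putValue, int incrementValue) {
        int next = map.containsKey(key) ? map.get(key) + incrementValue : putValue;
        map.put(key, next);
        return next;
    }

    public static void main(String[] args) {
        Map<Integer, Integer> counts = new HashMap<>();
        // Counting occurrences: putOrAdd(key, 1, 1) initialises to 1, then increments.
        for (int feature : new int[] { 7, 7, 3, 7 }) {
            putOrAdd(counts, feature, 1, 1);
        }
        System.out.println(counts.get(7)); // 3
        System.out.println(counts.get(3)); // 1
    }
}
```

This counting idiom, putOrAdd(key, 1, 1), is exactly the pattern most of the examples below use to build frequency maps.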

Example 1: pipe

import com.carrotsearch.hppc.IntIntOpenHashMap; // import the class the method depends on
public Instance pipe(Instance instance) {
	
	IntIntOpenHashMap localCounter = new IntIntOpenHashMap();

	if (instance.getData() instanceof FeatureSequence) {
			
		FeatureSequence features = (FeatureSequence) instance.getData();

		for (int position = 0; position < features.size(); position++) {
			localCounter.putOrAdd(features.getIndexAtPosition(position), 1, 1);
		}

	}
	else {
		throw new IllegalArgumentException("Looking for a FeatureSequence, found a " + 
										   instance.getData().getClass());
	}

	for (int feature: localCounter.keys().toArray()) {
		counter.increment(feature);
	}

	numInstances++;

	return instance;
}
 
Developer: cmoen, Project: mallet, Lines: 27, Source file: FeatureDocFreqPipe.java

Example 2: apply

import com.carrotsearch.hppc.IntIntOpenHashMap; // import the class the method depends on
@Override
public IntDistribution apply(ColouredGraph graph) {
    IntIntOpenHashMap counts = new IntIntOpenHashMap();
    Grph g = graph.getGraph();
    IntArrayList outDegrees = g.getAllOutEdgeDegrees();
    for (int i = 0; i < outDegrees.elementsCount; ++i) {
        counts.putOrAdd(outDegrees.buffer[i], 1, 1);
    }
    return IntDistribution.fromMap(counts);
}
 
Developer: dice-group, Project: Lemming, Lines: 11, Source file: OutDegreeDistributionMetric.java

Example 3: apply

import com.carrotsearch.hppc.IntIntOpenHashMap; // import the class the method depends on
@Override
public IntDistribution apply(ColouredGraph graph) {
    IntIntOpenHashMap counts = new IntIntOpenHashMap();
    Grph g = graph.getGraph();
    IntArrayList inDegrees = g.getAllInEdgeDegrees();
    for (int i = 0; i < inDegrees.elementsCount; ++i) {
        counts.putOrAdd(inDegrees.buffer[i], 1, 1);
    }
    return IntDistribution.fromMap(counts);
}
 
Developer: dice-group, Project: Lemming, Lines: 11, Source file: InDegreeDistributionMetric.java

Example 4: sampleTopicsForOneTestDocAll

import com.carrotsearch.hppc.IntIntOpenHashMap; // import the class the method depends on
private void sampleTopicsForOneTestDocAll(FeatureSequence tokenSequence,
		LabelSequence topicSequence) {
	// TODO Auto-generated method stub
	int[] oneDocTopics = topicSequence.getFeatures();

	IntIntOpenHashMap currentTypeTopicCounts;
	int type, oldTopic, newTopic;
	double tw;
	double[] topicWeights = new double[numTopics];
	double topicWeightsSum;
	int docLength = tokenSequence.getLength();

	//		populate topic counts
	int[] localTopicCounts = new int[numTopics];
	for (int ti = 0; ti < numTopics; ti++){
		localTopicCounts[ti] = 0;
	}
	for (int position = 0; position < docLength; position++) {
		localTopicCounts[oneDocTopics[position]] ++;
	}

	// Iterate over the positions (words) in the document
	for (int si = 0; si < docLength; si++) {
		type = tokenSequence.getIndexAtPosition(si);
		oldTopic = oneDocTopics[si];

		// Remove this token from all counts
		localTopicCounts[oldTopic] --;

		currentTypeTopicCounts = typeTopicCounts[type];
		assert(currentTypeTopicCounts.get(oldTopic) >= 0);

		if (currentTypeTopicCounts.get(oldTopic) == 1) {
			currentTypeTopicCounts.remove(oldTopic);
		}
		else {
			currentTypeTopicCounts.addTo(oldTopic, -1);
		}
		tokensPerTopic[oldTopic]--;

		// Build a distribution over topics for this token
		Arrays.fill (topicWeights, 0.0);
		topicWeightsSum = 0;

		for (int ti = 0; ti < numTopics; ti++) {
			tw = ((currentTypeTopicCounts.get(ti) + beta) / (tokensPerTopic[ti] + betaSum))
			      * ((localTopicCounts[ti] + alpha[ti])); // (/docLen-1+tAlpha); is constant across all topics
			topicWeightsSum += tw;
			topicWeights[ti] = tw;
		}
		// Sample a topic assignment from this distribution
		newTopic = random.nextDiscrete (topicWeights, topicWeightsSum);

		// Put that new topic into the counts
		oneDocTopics[si] = newTopic;
		currentTypeTopicCounts.putOrAdd(newTopic, 1, 1);
		localTopicCounts[newTopic] ++;
		tokensPerTopic[newTopic]++;
	}
}
 
Developer: cmoen, Project: mallet, Lines: 61, Source file: LDAStream.java
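The sampling step above draws a new topic index in proportion to the unnormalised weights via random.nextDiscrete(topicWeights, topicWeightsSum). The exact behaviour of MALLET's sampler is an assumption here, but a sampler of that shape can be sketched as follows (class and method names are illustrative):

```java
import java.util.Random;

// Sketch of drawing an index i with probability weights[i] / sum from an
// unnormalised weight vector, as the LDA sampling step above is assumed to do.
public class DiscreteSampler {

    static int nextDiscrete(double[] weights, double sum, Random rnd) {
        double u = rnd.nextDouble() * sum; // uniform point in [0, sum)
        for (int i = 0; i < weights.length; i++) {
            u -= weights[i];
            if (u < 0) {
                return i;
            }
        }
        return weights.length - 1; // guard against floating-point round-off
    }

    public static void main(String[] args) {
        double[] weights = { 0.5, 3.0, 0.5 };
        Random rnd = new Random(42);
        int[] hits = new int[weights.length];
        for (int i = 0; i < 100_000; i++) {
            hits[nextDiscrete(weights, 4.0, rnd)]++;
        }
        // Index 1 carries 75% of the mass, so it should collect most draws.
        System.out.println(hits[1] > hits[0] + hits[2]);
    }
}
```

Passing the precomputed sum avoids normalising the weight vector on every draw, which matters in the inner loop of a Gibbs sampler.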

Example 5: sampleTopicsForOneTestDoc

import com.carrotsearch.hppc.IntIntOpenHashMap; // import the class the method depends on
private void sampleTopicsForOneTestDoc(FeatureSequence tokenSequence,
		LabelSequence topicSequence) {
	// TODO Auto-generated method stub
	int[] oneDocTopics = topicSequence.getFeatures();

	IntIntOpenHashMap currentTypeTopicCounts;
	int type, oldTopic, newTopic;
	double tw;
	double[] topicWeights = new double[numTopics];
	double topicWeightsSum;
	int docLength = tokenSequence.getLength();

	//		populate topic counts
	int[] localTopicCounts = new int[numTopics];
	for (int ti = 0; ti < numTopics; ti++){
		localTopicCounts[ti] = 0;
	}
	for (int position = 0; position < docLength; position++) {
		if(oneDocTopics[position] != -1) {
			localTopicCounts[oneDocTopics[position]] ++;
		}
	}

	// Iterate over the positions (words) in the document
	for (int si = 0; si < docLength; si++) {
		type = tokenSequence.getIndexAtPosition(si);
		oldTopic = oneDocTopics[si];
		if(oldTopic == -1) {
			continue;
		}

		// Remove this token from all counts
		localTopicCounts[oldTopic] --;
		currentTypeTopicCounts = typeTopicCounts[type];
		assert(currentTypeTopicCounts.get(oldTopic) >= 0);

		if (currentTypeTopicCounts.get(oldTopic) == 1) {
			currentTypeTopicCounts.remove(oldTopic);
		}
		else {
			currentTypeTopicCounts.addTo(oldTopic, -1);
		}
		tokensPerTopic[oldTopic]--;

		// Build a distribution over topics for this token
		Arrays.fill (topicWeights, 0.0);
		topicWeightsSum = 0;

		for (int ti = 0; ti < numTopics; ti++) {
			tw = ((currentTypeTopicCounts.get(ti) + beta) / (tokensPerTopic[ti] + betaSum))
			      * ((localTopicCounts[ti] + alpha[ti])); // (/docLen-1+tAlpha); is constant across all topics
			topicWeightsSum += tw;
			topicWeights[ti] = tw;
		}
		// Sample a topic assignment from this distribution
		newTopic = random.nextDiscrete (topicWeights, topicWeightsSum);

		// Put that new topic into the counts
		oneDocTopics[si] = newTopic;
		currentTypeTopicCounts.putOrAdd(newTopic, 1, 1);
		localTopicCounts[newTopic] ++;
		tokensPerTopic[newTopic]++;
	}
}
 
Developer: cmoen, Project: mallet, Lines: 65, Source file: LDAStream.java

Example 6: sampleTopicsForOneDocWithTheta

import com.carrotsearch.hppc.IntIntOpenHashMap; // import the class the method depends on
private void sampleTopicsForOneDocWithTheta(FeatureSequence tokenSequence,
		LabelSequence topicSequence, double[] topicDistribution) {
	// TODO Auto-generated method stub
	int[] oneDocTopics = topicSequence.getFeatures();

	IntIntOpenHashMap currentTypeTopicCounts;
	int type, oldTopic, newTopic;
	double tw;
	double[] topicWeights = new double[numTopics];
	double topicWeightsSum;
	int docLength = tokenSequence.getLength();
	
	// Iterate over the positions (words) in the document
	for (int si = 0; si < docLength; si++) {
		type = tokenSequence.getIndexAtPosition(si);
		oldTopic = oneDocTopics[si];
		if(oldTopic == -1) {
			continue;
		}

		currentTypeTopicCounts = typeTopicCounts[type];
		assert(currentTypeTopicCounts.get(oldTopic) >= 0);

		if (currentTypeTopicCounts.get(oldTopic) == 1) {
			currentTypeTopicCounts.remove(oldTopic);
		}
		else {
			currentTypeTopicCounts.addTo(oldTopic, -1);
		}
		tokensPerTopic[oldTopic]--;

		// Build a distribution over topics for this token
		Arrays.fill (topicWeights, 0.0);
		topicWeightsSum = 0;

		for (int ti = 0; ti < numTopics; ti++) {
			tw = ((currentTypeTopicCounts.get(ti) + beta) / (tokensPerTopic[ti] + betaSum))
			      * topicDistribution[ti]; // (/docLen-1+tAlpha); is constant across all topics
			topicWeightsSum += tw;
			topicWeights[ti] = tw;
		}
		// Sample a topic assignment from this distribution
		newTopic = random.nextDiscrete (topicWeights, topicWeightsSum);

		// Put that new topic into the counts
		oneDocTopics[si] = newTopic;
		currentTypeTopicCounts.putOrAdd(newTopic, 1, 1);
		tokensPerTopic[newTopic]++;
	}
}
 
Developer: cmoen, Project: mallet, Lines: 51, Source file: LDAStream.java

Example 7: detectSeparator

import com.carrotsearch.hppc.IntIntOpenHashMap; // import the class the method depends on
/**
 * Tries to detect the separator used and returns it. If no separator can be detected, ';' is returned.
 *
 * @param file the file
 * @return the char
 * @throws IOException Signals that an I/O exception has occurred.
 */
private char detectSeparator(File file) throws IOException {

    char[] seps = { ';', ',', '|', '\t' };
    int maxLines = 100;

    final BufferedReader r = new BufferedReader(new FileReader(file));
    final IntIntOpenHashMap map = new IntIntOpenHashMap();
    final CharIntOpenHashMap separators = new CharIntOpenHashMap();
    for (int i = 0; i < seps.length; i++) {
        separators.put(seps[i], i);
    }
    int count = 0;

    /* Iterate over data */
    String line = r.readLine();
    while ((count < maxLines) && (line != null)) {

        /* Iterate over line character by character */
        final char[] a = line.toCharArray();
        for (final char c : a) {
            if (separators.containsKey(c)) {
                map.putOrAdd(separators.get(c), 0, 1);
            }
        }
        line = r.readLine();
        count++;
    }
    r.close();

    if (map.isEmpty()) {
        return seps[0];
    }

    /* Check which separator was used the most */
    int selection = 0;
    int max = Integer.MIN_VALUE;
    final int[] keys = map.keys;
    final int[] values = map.values;
    final boolean[] allocated = map.allocated;
    for (int i = 0; i < allocated.length; i++) {
        if (allocated[i] && (values[i] > max)) {
            max = values[i];
            selection = keys[i];
        }
    }

    return seps[selection];
}
 
Developer: arx-deidentifier, Project: arx-cli, Lines: 56, Source file: CommandLineInterface.java
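The detection loop above counts candidate separator characters over a bounded number of lines and then picks the most frequent one. The same idea can be sketched using only java.util; the candidate list and line limit mirror the example, while everything else (names, the List-based input) is a simplification for illustration:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of separator detection: count each candidate character over at most
// maxLines lines, then return the candidate seen most often (falling back to
// the first candidate when none was seen at all).
public class SeparatorSketch {

    static char detectSeparator(List<String> lines, char[] candidates, int maxLines) {
        Map<Character, Integer> counts = new HashMap<>();
        int lineCount = 0;
        for (String line : lines) {
            if (lineCount++ >= maxLines) break;
            for (char c : line.toCharArray()) {
                for (char candidate : candidates) {
                    if (c == candidate) {
                        counts.merge(c, 1, Integer::sum); // count this occurrence
                    }
                }
            }
        }
        char best = candidates[0]; // fall back to the first candidate
        int max = Integer.MIN_VALUE;
        for (Map.Entry<Character, Integer> e : counts.entrySet()) {
            if (e.getValue() > max) {
                max = e.getValue();
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        char[] candidates = { ';', ',', '|', '\t' };
        List<String> csv = List.of("a,b;c", "d,e,f", "g,h,i");
        System.out.println(detectSeparator(csv, candidates, 100)); // ','
    }
}
```

Note that the original code maps each candidate to its array index and counts indices; the sketch counts the characters directly, which is equivalent for picking the winner.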

Example 8: detectDelimiter

import com.carrotsearch.hppc.IntIntOpenHashMap; // import the class the method depends on
/**
 * Tries to detect the separator used within this file
 *
 * This goes through up to {@link ImportWizardModel#PREVIEW_MAX_LINES} lines
 * and tries to detect the used separator by counting how often each of
 * the available {@link #delimiters} is used.
 *
 * @throws IOException In case the file couldn't be accessed
 */
private void detectDelimiter() throws IOException {
    Charset charset = getCharset();

    final BufferedReader r = new BufferedReader(new InputStreamReader(new FileInputStream(comboLocation.getText()), charset));
    final IntIntOpenHashMap map = new IntIntOpenHashMap();
    final CharIntOpenHashMap delimitors = new CharIntOpenHashMap();
    for (int i=0; i<this.delimiters.length; i++) {
        delimitors.put(this.delimiters[i], i);
    }
    int countLines = 0;
    int countChars = 0;

    /* Iterate over data */
    String line = r.readLine();
    outer: while ((countLines < ImportWizardModel.PREVIEW_MAX_LINES) && (line != null)) {

        /* Iterate over line character by character */
        final char[] a = line.toCharArray();
        for (final char c : a) {
            if (delimitors.containsKey(c)) {
                map.putOrAdd(delimitors.get(c), 0, 1);
            }
            countChars++;
            if (countChars > ImportWizardModel.DETECT_MAX_CHARS) {
                break outer;
            }
        }
        line = r.readLine();
        countLines++;
    }
    r.close();

    if (map.isEmpty()) {
        selectedDelimiter = 0;
        return;
    }

    /* Check which separator was used the most */
    int max = Integer.MIN_VALUE;
    final int [] keys = map.keys;
    final int [] values = map.values;
    final boolean [] allocated = map.allocated;
    for (int i = 0; i < allocated.length; i++) {
        if (allocated[i] && values[i] > max) {
            max = values[i];
            selectedDelimiter = keys[i];
        }
    }
}
 
Developer: arx-deidentifier, Project: arx, Lines: 59, Source file: ImportWizardPageCSV.java


Note: The com.carrotsearch.hppc.IntIntOpenHashMap.putOrAdd examples in this article were collected from open-source projects hosted on GitHub and similar platforms. The snippets remain the copyright of their original authors; consult each project's license before reusing the code, and do not redistribute without permission.