

Java HashMultiset.add Method Code Examples

This article collects and summarizes typical usage examples of the Java method com.google.common.collect.HashMultiset.add. If you are wondering what exactly HashMultiset.add does, how to use it, or where to find examples of it in use, the curated code samples below may help. You can also explore the broader usage of com.google.common.collect.HashMultiset, the class this method belongs to.


Below are 14 code examples of the HashMultiset.add method, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
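As a quick orientation before the examples, here is a minimal sketch of the method itself (element values are illustrative): Multiset.add(E) records one occurrence of an element, while the add(E, int) overload records several occurrences at once and returns the element's previous count.

HashMultiset<String> words = HashMultiset.create();
words.add("foo");            // one occurrence; Multiset.add(E) always returns true
words.add("foo", 3);         // three more occurrences; returns the previous count (1)
int n = words.count("foo");  // n == 4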

Example 1: update

import com.google.common.collect.HashMultiset; // import the package/class the method depends on
private void update() {
    ArrayList<UUID> onlinePlayers = new ArrayList<UUID>();
    for (Object obj : FMLCommonHandler.instance().getMinecraftServerInstance().getPlayerList().getPlayerList()) {
        EntityPlayerMP player = (EntityPlayerMP) obj;
        UUID uuid = player.getUniqueID();

        onlinePlayers.add(uuid);
        timeOnCount.add(uuid);

        // Kick players who have been on too long (break times are stored in ticks; ×50 converts them to ms)
        if ((maxTimeOn.containsKey(uuid) && timeOnCount.count(uuid) > maxTimeOn.get(uuid)) || (maxTimeOnGlobal != 0 && timeOnCount.count(uuid) > maxTimeOnGlobal)) {
            rejoinTime.put(uuid, System.currentTimeMillis() + (breakTime.containsKey(uuid) ? breakTime.get(uuid) * 50 : breakTimeGlobal * 50));
            kickPlayerForTime(player);
            timeOnCount.remove(uuid, timeOnCount.count(uuid));
        }
    }

    // Decrease the timeOnCount tally for players that aren't online
    HashMultiset<UUID> uuids = HashMultiset.create();
    for (UUID entry : timeOnCount.elementSet()) {
        if (!onlinePlayers.contains(entry)) {
            uuids.add(entry);
        }
    }
    Multisets.removeOccurrences(timeOnCount, uuids);
}
 
Developer ID: kihira, Project: BeProductive, Lines: 27, Source: BeProductive.java
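A note on the last line of the method above: Multisets.removeOccurrences(multisetToModify, occurrencesToRemove) removes one occurrence from the first multiset for each occurrence in the second, which is why the example first collects the offline UUIDs into a temporary multiset. A minimal sketch of that behavior (values are illustrative):

HashMultiset<String> counts = HashMultiset.create();
counts.add("a", 3);                        // "a" now has count 3
HashMultiset<String> toRemove = HashMultiset.create();
toRemove.add("a");                         // one occurrence of "a" to remove
Multisets.removeOccurrences(counts, toRemove);
// counts.count("a") == 2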

Example 2: prepareNGramDictionary

import com.google.common.collect.HashMultiset; // import the package/class the method depends on
private String[] prepareNGramDictionary(QGram qgram) throws IOException {
    final HashMultiset<String> set = HashMultiset.create();
    try (BufferedReader reader = new BufferedReader(new FileReader(
            inputFilePath))) {

        String line;
        while ((line = reader.readLine()) != null) {
            if (line.isEmpty()) {
                continue;
            }

            String[] split = SPLIT_PATTERN.split(line);
            String tkn = cleanToken(split[0]);
            Map<String, Integer> profile = qgram.getProfile(tkn);
            for (Map.Entry<String, Integer> entry : profile.entrySet()) {
                //noinspection ResultOfMethodCallIgnored
                set.add(entry.getKey(), entry.getValue());
            }
        }
    }

    // naive frequency cut-off: keep only n-grams occurring more than MIN_CHAR_NGRAM_OCCURRENCE times
    return set.entrySet()
            .stream()
            .filter(e -> e.getCount() > MIN_CHAR_NGRAM_OCCURRENCE)
            .map(Multiset.Entry::getElement)
            .sorted()
            .toArray(String[]::new);
}
 
Developer ID: thomasjungblut, Project: ner-sequencelearning, Lines: 30, Source: VectorizerMain.java

Example 3: getSyntaxElements

import com.google.common.collect.HashMultiset; // import the package/class the method depends on
private Multiset<String> getSyntaxElements() {
    HashMultiset<String> result = HashMultiset.create();
    for (Method method : ClassesThat.class.getMethods()) {
        result.add(method.getName());
    }
    return result;
}
 
Developer ID: TNG, Project: ArchUnit, Lines: 8, Source: ClassesThatTestsExistTest.java

Example 4: multiNodeCluster2

import com.google.common.collect.HashMultiset; // import the package/class the method depends on
@Test
public void multiNodeCluster2() throws Exception {
  final Wrapper wrapper = newWrapper(200, 1, 20,
      ImmutableList.of(
          new EndpointAffinity(N1_EP2, 0.15, true, 50),
          new EndpointAffinity(N2_EP2, 0.15, true, 50),
          new EndpointAffinity(N3_EP1, 0.10, true, 50),
          new EndpointAffinity(N4_EP2, 0.20, true, 50),
          new EndpointAffinity(N1_EP1, 0.20, true, 50)
      ));
  INSTANCE.parallelizeFragment(wrapper, newParameters(1, 5, 20), null);

  // Expect the fragment parallelization to be 20 because:
  // 1. the cost (200) is above the threshold (SLICE_TARGET_DEFAULT), giving a width of 200/1 = 200, and
  // 2. the number of mandatory node assignments is 5 (the current width of 200 satisfies this), and
  // 3. the max fragment width is 20, which caps the width.
  assertEquals(20, wrapper.getWidth());

  final List<NodeEndpoint> assignedEps = wrapper.getAssignedEndpoints();
  assertEquals(20, assignedEps.size());
  final HashMultiset<NodeEndpoint> counts = HashMultiset.create();
  for(final NodeEndpoint ep : assignedEps) {
    counts.add(ep);
  }
  // Each node gets at max 5.
  assertTrue(counts.count(N1_EP2) <= 5);
  assertTrue(counts.count(N2_EP2) <= 5);
  assertTrue(counts.count(N3_EP1) <= 5);
  assertTrue(counts.count(N4_EP2) <= 5);
  assertTrue(counts.count(N1_EP1) <= 5);
}
 
Developer ID: dremio, Project: dremio-oss, Lines: 32, Source: TestHardAffinityFragmentParallelizer.java

Example 5: produceBagOfWords_Token

import com.google.common.collect.HashMultiset; // import the package/class the method depends on
/**
 * Loads a document from file and transforms it into a token multiset using the Stanford PTBTokenizer.
 * @param documentPath path to the document file
 * @return the multiset of tokens
 * @throws IOException
 */
public HashMultiset<String> produceBagOfWords_Token(String documentPath) throws IOException {
	HashMultiset<String> tokenMultiset = HashMultiset.create();
	PTBTokenizer<CoreLabel> ptbt = new PTBTokenizer<>(new FileReader(documentPath),
			new CoreLabelTokenFactory(), "");
	while (ptbt.hasNext()) {
		CoreLabel label = ptbt.next();
		tokenMultiset.add(label.toString());
	}
	return tokenMultiset;
}
 
Developer ID: JULIELab, Project: JEmAS, Lines: 20, Source: File2BagOfWords_Processor.java

Example 6: produceBagOfWords_Lemma

import com.google.common.collect.HashMultiset; // import the package/class the method depends on
/**
 * Loads the file at the given path into a string, processes it with the Stanford lemmatizer,
 * and returns the lemmas as a multiset.
 * @param documentPath path to the document file
 * @return the multiset of lemmas
 * @throws IOException
 */
public HashMultiset<String> produceBagOfWords_Lemma(String documentPath) throws IOException {
	HashMultiset<String> lemmaMultiset = HashMultiset.create();
	String doc = Util.readfile2String(documentPath);
	List<String> lemmas = this.lemmatizer.lemmatize(doc);
	for (String lemma : lemmas) {
		lemmaMultiset.add(lemma);
	}
	return lemmaMultiset;
}
 
Developer ID: JULIELab, Project: JEmAS, Lines: 19, Source: File2BagOfWords_Processor.java

Example 7: logRead

import com.google.common.collect.HashMultiset; // import the package/class the method depends on
/**
 * Read a token produced by the given producer
 * 
 * @param buffer
 *            the buffer where the token has been read
 * @param producer
 *            the token producer
 */
public void logRead(Buffer buffer, ProfiledStep producer) {
	// get the multiset of tokens already produced by the given producer
	HashMultiset<Buffer> producedTokens = tokensProducers.get(producer);
	if (producedTokens == null) {
		producedTokens = HashMultiset.create();
		tokensProducers.put(producer, producedTokens);
	}
	producedTokens.add(buffer);

	// log the read
	consumedTokens.add(buffer);
}
 
Developer ID: turnus, Project: turnus, Lines: 21, Source: StepDataBox.java

Example 8: multiNodeCluster2

import com.google.common.collect.HashMultiset; // import the package/class the method depends on
@Test
public void multiNodeCluster2() throws Exception {
  final Wrapper wrapper = newWrapper(200, 1, 20,
      ImmutableList.of(
          new EndpointAffinity(N1_EP2, 0.15, true, MAX_VALUE),
          new EndpointAffinity(N2_EP2, 0.15, true, MAX_VALUE),
          new EndpointAffinity(N3_EP1, 0.10, true, MAX_VALUE),
          new EndpointAffinity(N4_EP2, 0.20, true, MAX_VALUE),
          new EndpointAffinity(N1_EP1, 0.20, true, MAX_VALUE)
      ));
  INSTANCE.parallelizeFragment(wrapper, newParameters(1, 5, 20), null);

  // Expect the fragment parallelization to be 20 because:
  // 1. the cost (200) is above the threshold (SLICE_TARGET_DEFAULT), giving a width of 200/1 = 200, and
  // 2. the number of mandatory node assignments is 5 (the current width of 200 satisfies this), and
  // 3. the max fragment width is 20, which caps the width.
  assertEquals(20, wrapper.getWidth());

  final List<DrillbitEndpoint> assignedEps = wrapper.getAssignedEndpoints();
  assertEquals(20, assignedEps.size());
  final HashMultiset<DrillbitEndpoint> counts = HashMultiset.create();
  for(final DrillbitEndpoint ep : assignedEps) {
    counts.add(ep);
  }
  // Each node gets at max 5.
  assertTrue(counts.count(N1_EP2) <= 5);
  assertTrue(counts.count(N2_EP2) <= 5);
  assertTrue(counts.count(N3_EP1) <= 5);
  assertTrue(counts.count(N4_EP2) <= 5);
  assertTrue(counts.count(N1_EP1) <= 5);
}
 
Developer ID: axbaretto, Project: drill, Lines: 32, Source: TestHardAffinityFragmentParallelizer.java

Example 9: setUp

import com.google.common.collect.HashMultiset; // import the package/class the method depends on
@BeforeExperiment
void setUp() {
  Random random = new Random();
  multisets.clear();
  for (int i = 0; i < ARRAY_SIZE; i++) {
    HashMultiset<Integer> multiset = HashMultiset.<Integer>create();
    multisets.add(multiset);
    queries[i] = random.nextInt();
    multiset.add(queries[i]);
  }
}
 
Developer ID: sander120786, Project: guava-libraries, Lines: 12, Source: HashMultisetAddPresentBenchmark.java

Example 10: getMostUsedArticleCasing

import com.google.common.collect.HashMultiset; // import the package/class the method depends on
public String getMostUsedArticleCasing() {
	HashMultiset<String> articleNames = HashMultiset.create();
	String result;

	for (Writable writable: super.get()) {
		LinkWritable link = (LinkWritable)writable;
		articleNames.add(link.getArticle().toString());
	}

	ImmutableMultiset<String> sorted = Multisets.copyHighestCountFirst(articleNames);
	result = (String)sorted.elementSet().toArray()[0];
	
	return result;
}
 
Developer ID: rossf7, Project: wikireverse, Lines: 15, Source: LinkArrayWritable.java
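A design note on Example 10: Multisets.copyHighestCountFirst returns an ImmutableMultiset whose iteration order runs from highest to lowest count, so the most frequent element can also be fetched without the array round-trip, e.g. (a one-liner sketch, assuming the multiset is non-empty):

String top = Multisets.copyHighestCountFirst(articleNames).elementSet().iterator().next();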

Example 11: generateHashMultiset

import com.google.common.collect.HashMultiset; // import the package/class the method depends on
@Generates private static <E> HashMultiset<E> generateHashMultiset(E freshElement) {
  HashMultiset<E> multiset = HashMultiset.create();
  multiset.add(freshElement);
  return multiset;
}
 
Developer ID: zugzug90, Project: guava-mock, Lines: 6, Source: FreshValueGenerator.java

Example 12: testNodeInputSplit

import com.google.common.collect.HashMultiset; // import the package/class the method depends on
@Test
public void testNodeInputSplit() throws IOException, InterruptedException {
  // Regression test for MAPREDUCE-4892. There are 2 nodes with all blocks on 
  // both nodes. The grouping ensures that both nodes get splits instead of 
  // just the first node
  DummyInputFormat inFormat = new DummyInputFormat();
  int numBlocks = 12;
  long totLength = 0;
  long blockSize = 100;
  long maxSize = 200;
  long minSizeNode = 50;
  long minSizeRack = 50;
  String[] locations = { "h1", "h2" };
  String[] racks = new String[0];
  Path path = new Path("hdfs://file");
  
  OneBlockInfo[] blocks = new OneBlockInfo[numBlocks];
  for(int i=0; i<numBlocks; ++i) {
    blocks[i] = new OneBlockInfo(path, i*blockSize, blockSize, locations, racks);
    totLength += blockSize;
  }
  
  List<InputSplit> splits = new ArrayList<InputSplit>();
  HashMap<String, Set<String>> rackToNodes = 
                            new HashMap<String, Set<String>>();
  HashMap<String, List<OneBlockInfo>> rackToBlocks = 
                            new HashMap<String, List<OneBlockInfo>>();
  HashMap<OneBlockInfo, String[]> blockToNodes = 
                            new HashMap<OneBlockInfo, String[]>();
  HashMap<String, Set<OneBlockInfo>> nodeToBlocks = 
                            new HashMap<String, Set<OneBlockInfo>>();
  
  OneFileInfo.populateBlockInfo(blocks, rackToBlocks, blockToNodes, 
                           nodeToBlocks, rackToNodes);
  
  inFormat.createSplits(nodeToBlocks, blockToNodes, rackToBlocks, totLength,  
                        maxSize, minSizeNode, minSizeRack, splits);
  
  int expectedSplitCount = (int)(totLength/maxSize);
  assertEquals(expectedSplitCount, splits.size());
  HashMultiset<String> nodeSplits = HashMultiset.create();
  for(int i=0; i<expectedSplitCount; ++i) {
    InputSplit inSplit = splits.get(i);
    assertEquals(maxSize, inSplit.getLength());
    assertEquals(1, inSplit.getLocations().length);
    nodeSplits.add(inSplit.getLocations()[0]);
  }
  assertEquals(3, nodeSplits.count(locations[0]));
  assertEquals(3, nodeSplits.count(locations[1]));
}
 
Developer ID: naver, Project: hadoop, Lines: 51, Source: TestCombineFileInputFormat.java

Example 13: run

import com.google.common.collect.HashMultiset; // import the package/class the method depends on
public void run() throws Exception {
    HashMultimap<String, String> typemap = HashMultimap.create();
    BufferedReader in = new BufferedReader(new FileReader(counts_file));
    String line = null;
    while ((line = in.readLine()) != null) {
        String[] typedata = line.split(" ");
        typemap.put(typedata[1], typedata[0]);
    }
    in.close();

    in = new BufferedReader(new FileReader(qrels_file));
    QueryParser qps = new QueryParser(FreebaseTools.FIELD_NAME_SUBJECT, tools.getIndexAnalyzer());
    IndexSearcher searcher = tools.getIndexSearcher();
    IndexReader reader = tools.getIndexReader();
    Joiner.MapJoiner joiner = Joiner.on(", ").withKeyValueSeparator(" = ");

    int count = 0;
    int correct = 0;
    while ((line = in.readLine()) != null) {
        count++;
        String[] fields = line.split("\t");
        System.out.println("# [Query: " + fields[0] + "] [KBid: " + fields[1] + "] [type: " + fields[2] + "]");
        String lookup = "f_" + fields[1];
        String actual_type = fields[2];

        // look up the Lucene document id for the entity by its subject key
        int docid = tools.getSubjectDocID(lookup);
        if (docid == -1) {
            System.out.println("# kbid not found: " + lookup);
            continue;
        }
        Document d = tools.getDocumentInMode(docid);
        String[] types = d.getValues("r_type");
        HashMultiset<String> typecount = HashMultiset.create(4);
        for (String t : types) {
            if (typemap.containsKey(t))
                for (String tt : typemap.get(t))
                    typecount.add(tt);
        }
        if (typecount.size() > 0) {
            String guess_type = Multisets.copyHighestCountFirst(typecount).entrySet().asList().get(0).getElement();
            System.out.print(actual_type + ", guessing " + guess_type + " [");
            for (Multiset.Entry<String> me : typecount.entrySet()) {
                System.out.print(me.getElement() + " = " + me.getCount() + " ");
            }
            System.out.println("]");

            if (actual_type.equals(guess_type))
                correct++;
        }
    }

    System.out.println(correct + " correct out of " + count + " = " + (float)correct/count);
}
 
Developer ID: isoboroff, Project: basekb-search, Lines: 55, Source: PredictType.java

Example 14: testNodeInputSplit

import com.google.common.collect.HashMultiset; // import the package/class the method depends on
public void testNodeInputSplit() throws IOException, InterruptedException {
  // Regression test for MAPREDUCE-4892. There are 2 nodes with all blocks on 
  // both nodes. The grouping ensures that both nodes get splits instead of 
  // just the first node
  DummyInputFormat inFormat = new DummyInputFormat();
  int numBlocks = 12;
  long totLength = 0;
  long blockSize = 100;
  long maxSize = 200;
  long minSizeNode = 50;
  long minSizeRack = 50;
  String[] locations = { "h1", "h2" };
  String[] racks = new String[0];
  Path path = new Path("hdfs://file");
  
  OneBlockInfo[] blocks = new OneBlockInfo[numBlocks];
  for(int i=0; i<numBlocks; ++i) {
    blocks[i] = new OneBlockInfo(path, i*blockSize, blockSize, locations, racks);
    totLength += blockSize;
  }
  
  List<InputSplit> splits = new ArrayList<InputSplit>();
  HashMap<String, Set<String>> rackToNodes = 
                            new HashMap<String, Set<String>>();
  HashMap<String, List<OneBlockInfo>> rackToBlocks = 
                            new HashMap<String, List<OneBlockInfo>>();
  HashMap<OneBlockInfo, String[]> blockToNodes = 
                            new HashMap<OneBlockInfo, String[]>();
  HashMap<String, Set<OneBlockInfo>> nodeToBlocks = 
                            new HashMap<String, Set<OneBlockInfo>>();
  
  OneFileInfo.populateBlockInfo(blocks, rackToBlocks, blockToNodes, 
                           nodeToBlocks, rackToNodes);
  
  inFormat.createSplits(nodeToBlocks, blockToNodes, rackToBlocks, totLength,  
                        maxSize, minSizeNode, minSizeRack, splits);
  
  int expectedSplitCount = (int)(totLength/maxSize);
  Assert.assertEquals(expectedSplitCount, splits.size());
  HashMultiset<String> nodeSplits = HashMultiset.create();
  for(int i=0; i<expectedSplitCount; ++i) {
    InputSplit inSplit = splits.get(i);
    Assert.assertEquals(maxSize, inSplit.getLength());
    Assert.assertEquals(1, inSplit.getLocations().length);
    nodeSplits.add(inSplit.getLocations()[0]);
  }
  Assert.assertEquals(3, nodeSplits.count(locations[0]));
  Assert.assertEquals(3, nodeSplits.count(locations[1]));
}
 
Developer ID: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 50, Source: TestCombineFileInputFormat.java


Note: the com.google.common.collect.HashMultiset.add method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are drawn from open-source projects contributed by their original authors, and copyright remains with those authors; consult each project's License before distributing or using the code. Do not reproduce without permission.