

Java StringUtils.join Method Code Examples

This article collects typical code examples of the edu.stanford.nlp.util.StringUtils.join method in Java. If you are unsure what StringUtils.join does, how to call it, or what its usage looks like in practice, the curated examples below should help. You can also browse further usage examples for the enclosing class, edu.stanford.nlp.util.StringUtils.


Fifteen code examples of the StringUtils.join method are shown below, sorted by popularity by default.
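
Before the examples, here is a minimal, self-contained sketch of calling the method directly. The class name StringUtilsJoinDemo is illustrative only, and the no-separator overload is assumed to join with a single space, consistent with how the examples below use it.

import edu.stanford.nlp.util.StringUtils;

import java.util.Arrays;
import java.util.List;

public class StringUtilsJoinDemo {
    public static void main(String[] args) {
        List<String> tokens = Arrays.asList("Stanford", "NLP", "StringUtils");

        // join(Iterable, String): concatenate the elements with an explicit separator
        System.out.println(StringUtils.join(tokens, ", "));  // Stanford, NLP, StringUtils

        // join(Iterable): assumed to default to a single-space separator,
        // as relied on by several of the examples below
        System.out.println(StringUtils.join(tokens));        // Stanford NLP StringUtils
    }
}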

Example 1: computeTopicSimilarity

import edu.stanford.nlp.util.StringUtils; // import the package/class that this method depends on
private List<Pair<String, Double>> computeTopicSimilarity(Concept c, int topic) {

		if (simMeasures == null) {
			simMeasures = new HashMap<String, ConceptSimilarityMeasure>();
			simMeasures.put("topic_jaccard", new JaccardDistance());
			simMeasures.put("topic_wn", new WordBasedMeasure(WNSimMeasure.RES));
			simMeasures.put("topic_w2v", new WordEmbeddingDistance(EmbeddingType.WORD2VEC, 300, false));
		}

		String[] topicDesc = this.topicDescriptions.get(topic);
		Concept dummy = new Concept(StringUtils.join(topicDesc));
		dummy = NonUIMAPreprocessor.getInstance().preprocess(dummy);

		List<Pair<String, Double>> scores = new ArrayList<Pair<String, Double>>();
		for (String sim : simMeasures.keySet()) {
			double score = Muter.callMuted(simMeasures.get(sim)::computeSimilarity, c, dummy);
			scores.add(new Pair<String, Double>(sim, score));
		}
		return scores;
	}
 
Developer: UKPLab, Project: ijcnlp2017-cmaps, Lines of code: 21, Source file: FeatureExtractor.java

Example 2: compile

import edu.stanford.nlp.util.StringUtils; // import the package/class that this method depends on
/**
 * Compiles a sequence of regular expressions into a TokenSequencePattern using the specified environment.
 * @param env Environment to use
 * @param strings List of regular expressions to be compiled
 * @return Compiled TokenSequencePattern
 */
public static TokenSequencePattern compile(Env env, String... strings)
{
  try {
    List<SequencePattern.PatternExpr> patterns = new ArrayList<SequencePattern.PatternExpr>();
    for (String string:strings) {
      // TODO: Check token sequence parser?
      SequencePattern.PatternExpr pattern = env.parser.parseSequence(env, string);
      patterns.add(pattern);
    }
    SequencePattern.PatternExpr nodeSequencePattern = new SequencePattern.SequencePatternExpr(patterns);
    return new TokenSequencePattern(StringUtils.join(strings), nodeSequencePattern);
  } catch (Exception ex) {
    throw new RuntimeException(ex);
  }
}
 
Developer: benblamey, Project: stanford-nlp, Lines of code: 22, Source file: TokenSequencePattern.java
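
Not part of the project above, but as a hedged usage sketch of the compiled pattern, assuming the standard CoreNLP TokensRegex API: the method and helper names used here exist in TokensRegex, while the pattern strings and variable names are purely illustrative.

import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.ling.tokensregex.Env;
import edu.stanford.nlp.ling.tokensregex.TokenSequenceMatcher;
import edu.stanford.nlp.ling.tokensregex.TokenSequencePattern;
import java.util.List;

// Illustrative helper: match a compiled pattern against an already-tokenized
// sentence and print each match.
public static void printMatches(List<CoreLabel> tokens) {
  Env env = TokenSequencePattern.getNewEnv();
  TokenSequencePattern pattern = TokenSequencePattern.compile(env, "[{word:/cats?/}]", "[{tag:VBD}]");
  TokenSequenceMatcher matcher = pattern.getMatcher(tokens);
  while (matcher.find()) {
    System.out.println(matcher.group());  // text of the matched token span
  }
}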

Example 3: main

import edu.stanford.nlp.util.StringUtils; // import the package/class that this method depends on
public static void main(String[] args) throws IOException {
	int topics_num=200;
	String line;
	String []pairs;
	String [] words;
	//String indx;
	String txt;
	for (int i=0;i<topics_num;i++){
		
		Scanner segments_file = new Scanner(new File("C:/Users/skohail/Desktop/All Experiments files/LDA original data results/segments/"+i+".txtseggmented.txt"));
		FileWriter only_sentences = new FileWriter("C:/Users/skohail/Desktop/All Experiments files/LDA original data results/segments/only_sentences_less_80_words/only_sen"+i+".txt");
		FileWriter only_indx = new FileWriter("C:/Users/skohail/Desktop/All Experiments files/LDA original data results/segments/only_sentences_less_80_words/only_indx"+i+".txt");
		segments_file.useDelimiter("\n");
		while (segments_file.hasNext()){
			line = segments_file.next();
			pairs = line.split("\t");
			if (pairs.length != 2) {
				continue; // skip malformed lines before pairs[1] is accessed
			}
			words = pairs[1].split("\\s+");
			txt = pairs[1].replaceAll("[\\s+\\{Punct}\\.\\_\\:]", "").trim();
			String joinedString = StringUtils.join(words, " ");
			if (words.length > 80 && !txt.isEmpty()) {
				only_sentences.write(joinedString.trim().subSequence(0, 80) + "\n");
				only_indx.write(pairs[0] + "\n");
			} else if (words.length > 2 && words.length <= 80 && !txt.isEmpty()) {
				only_sentences.write(joinedString.trim() + "\n");
				only_indx.write(pairs[0] + "\n");
			}
		}
		segments_file.close();	
		only_sentences.close();
		only_indx.close();
	}
	
}
 
Developer: tudarmstadt-lt, Project: sentiment, Lines of code: 39, Source file: CheckLength.java

Example 4: preorder

import edu.stanford.nlp.util.StringUtils; // import the package/class that this method depends on
private static String preorder(Tree tree) {
  
  List<Tree> queue = new LinkedList<>();
  queue.add(tree);
  
  
  while ( ! queue.isEmpty()) {
    Tree currentNode = queue.remove(0);
    
    if (currentNode.isLeaf())
      continue;
    
    Tree children[] = currentNode.children();
    int childCount = children.length;
    IndexedWord hw = (IndexedWord) currentNode.label();
    List<FeatureNode> featureNodes = new ArrayList<>(childCount);
    for (int i = 0; i < childCount; i++) {
      featureNodes.add(new FeatureNode(children[i], hw));
      queue.add(children[i]);
    }
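    // Only attempt the permutation search for nodes with few children;
    // the search space grows factorially with the fan-out.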
    if (childCount < 8) {
      Pair<Double, List<Integer>> result = search(featureNodes, new LinkedList<Integer>(), Double.NEGATIVE_INFINITY);
      if (result != null) {
        List<Integer> permutation = result.second;
        List<Tree> newChildren = new ArrayList<>(Arrays.asList(children));
        for (int i = 0; i < childCount; i++) {
          int idx = permutation.get(i);
          newChildren.set(idx, children[i]);
        }
        currentNode.setChildren(newChildren);
      } else {
        System.err.println("Warning: No path found.");
      }
    }
  }
  
  return StringUtils.join(tree.yieldWords());
}
 
Developer: stanfordnlp, Project: phrasal, Lines of code: 39, Source file: DependencyBnBPreorderer.java

Example 5: getPattern

import edu.stanford.nlp.util.StringUtils; // import the package/class that this method depends on
public String getPattern(List<CoreLabel> pTokens){

    ArrayList<String> phrase_string = new ArrayList<String>();
    String ne = "";
    for(CoreLabel token : pTokens){
      if(token.index() == headWord.index()){
        phrase_string.add(token.lemma());
        ne = "";

      } else if( (token.lemma().equals("and") || StringUtils.isPunct(token.lemma()))
          && pTokens.size() > pTokens.indexOf(token)+1
          && pTokens.indexOf(token) > 0
          && pTokens.get(pTokens.indexOf(token)+1).ner().equals(pTokens.get(pTokens.indexOf(token)-1).ner())){
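        // intentionally empty: an "and" or punctuation token that sits between
        // two tokens with the same NER tag is skipped (not added to phrase_string)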

      } else if(token.index() == headWord.index()-1
          && token.ner().equals(nerString)){
        phrase_string.add(token.lemma());
        ne = "";

      } else if(!token.ner().equals("O")){
        if(!token.ner().equals(ne)){
          ne = token.ner();
          phrase_string.add("<"+ne+">");
        }

      } else {
        phrase_string.add(token.lemma());
        ne = "";
      }
    }
    return StringUtils.join(phrase_string);
  }
 
Developer: benblamey, Project: stanford-nlp, Lines of code: 33, Source file: Mention.java

Example 6: getMissingRequirement

import edu.stanford.nlp.util.StringUtils; // import the package/class that this method depends on
public String getMissingRequirement(Set<String> alreadyAdded) {
  for (List<String> requirement : requirements) {
    boolean found = false;
    for (String annotator : requirement) {
      if (alreadyAdded.contains(annotator)) {
        found = true;
        break;
      }
    }
    if (!found) {
      return StringUtils.join(requirement, "|");
    }
  }
  return null;
}
 
Developer: benblamey, Project: stanford-nlp, Lines of code: 16, Source file: Requirement.java

Example 7: toFormattedString

import edu.stanford.nlp.util.StringUtils; // import the package/class that this method depends on
public String toFormattedString(int flags) {
    if (getTimeLabel() != null) {
        return getTimeLabel();
    }
    if ((flags & SUTime.FORMAT_ISO) != 0) {
        // TODO: is there iso standard?
        return null;
    }
    if ((flags & SUTime.FORMAT_TIMEX3_VALUE) != 0) {
        // TODO: is there timex3 standard?
        return null;
    }
    return "{" + StringUtils.join(temporals, ", ") + "}";
}
 
Developer: benblamey, Project: stanford-nlp, Lines of code: 15, Source file: ExplicitTemporalSet.java

Example 8: toEnUncollapsedSentenceString

import edu.stanford.nlp.util.StringUtils; // import the package/class that this method depends on
/**
 * Similar to <code>toRecoveredString</code>, but will fill in words that were
 * collapsed into relations (e.g. prep_for --> 'for'). Mostly to deal with
 * collapsed dependency trees.
 *
 * TODO: consider merging with toRecoveredString().
 * NOTE: assumptions currently are for English.
 * NOTE: currently takes immediate successors to the current word and expands
 * them; this assumption may not be valid for other conditions or languages.
 */
public String toEnUncollapsedSentenceString() {
  List<IndexedWord> uncompressedList = Generics.newLinkedList(vertexSet());
  List<Pair<String, IndexedWord>> specifics = Generics.newArrayList();

  // Collect the specific relations and the governed nodes, and then process
  // them one by one, to avoid concurrent modification exceptions.
  for (IndexedWord word : vertexSet()) {
    for (SemanticGraphEdge edge : getIncomingEdgesSorted(word)) {
      GrammaticalRelation relation = edge.getRelation();
      // Extract the specific: need to account for the possibility that the relation
      // can be a String or GrammaticalRelation (how did it happen this way?)
      String specific = relation.getSpecific();

      if (specific == null) {
        if (edge.getRelation().equals(EnglishGrammaticalRelations.AGENT)) {
          specific = "by";
        }
      }

      // Insert the specific at the leftmost token that is not governed by
      // this node.
      if (specific != null) {
        Pair<String, IndexedWord> specPair = new Pair<String, IndexedWord>(specific, word);
        specifics.add(specPair);
      }
    }
  }

  for (Pair<String, IndexedWord> tuple : specifics) {
    insertSpecificIntoList(tuple.first(), tuple.second(), uncompressedList);
  }

  return StringUtils.join(uncompressedList, " ");
}
 
Developer: benblamey, Project: stanford-nlp, Lines of code: 47, Source file: SemanticGraph.java

Example 9: getAllPredicatesString

import edu.stanford.nlp.util.StringUtils; // import the package/class that this method depends on
public String getAllPredicatesString() {
	return StringUtils.join(predicates_, ", ");
}
 
Developer: uwnlp, Project: recipe-interpretation, Lines of code: 4, Source file: EventType.java

Example 10: toString

import edu.stanford.nlp.util.StringUtils; // import the package/class that this method depends on
public String toString() {
	return StringUtils.join(tokens_);
}
 
Developer: uwnlp, Project: recipe-interpretation, Lines of code: 4, Source file: StringValue.java

Example 11: getSubjectText

import edu.stanford.nlp.util.StringUtils; // import the package/class that this method depends on
public String getSubjectText() {
    return StringUtils.join(sentence.originalTexts().subList(subjectSpan.start(), subjectSpan.end()).stream(), " ");
}
 
Developer: intel-analytics, Project: InformationExtraction, Lines of code: 4, Source file: IntelKBPRelationExtractor.java

Example 12: getObjectText

import edu.stanford.nlp.util.StringUtils; // import the package/class that this method depends on
public String getObjectText() {
    return StringUtils.join(sentence.originalTexts().subList(objectSpan.start(), objectSpan.end()).stream(), " ");
}
 
Developer: intel-analytics, Project: InformationExtraction, Lines of code: 4, Source file: IntelKBPRelationExtractor.java

Example 13: getSubjectText

import edu.stanford.nlp.util.StringUtils; // import the package/class that this method depends on
public String getSubjectText() {
  return StringUtils.join(sentence.originalTexts().subList(subjectSpan.start(), subjectSpan.end()).stream(), " ");
}
 
Developer: intel-analytics, Project: InformationExtraction, Lines of code: 4, Source file: KBPRelationExtractor.java

Example 14: getObjectText

import edu.stanford.nlp.util.StringUtils; // import the package/class that this method depends on
public String getObjectText() {
  return StringUtils.join(sentence.originalTexts().subList(objectSpan.start(), objectSpan.end()).stream(), " ");
}
 
Developer: intel-analytics, Project: InformationExtraction, Lines of code: 4, Source file: KBPRelationExtractor.java

Example 15: sp

import edu.stanford.nlp.util.StringUtils; // import the package/class that this method depends on
public static String sp(double[] x) {
	ArrayList<String> parts = new ArrayList<String>();
	for (int i=0; i < x.length; i++)
		parts.add(String.format("%.2g", x[i]));
	return "[" + StringUtils.join(parts) + "]";
}
 
Developer: UKPLab, Project: tac2015-event-detection, Lines of code: 7, Source file: U.java


Note: The edu.stanford.nlp.util.StringUtils.join examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by various developers, and copyright of the source code remains with the original authors; consult the corresponding project's license before distributing or using the code. Do not reproduce this article without permission.