

Java SentimentCoreAnnotations Class Code Examples

This article collects typical usages of the Java class edu.stanford.nlp.sentiment.SentimentCoreAnnotations. If you are wondering what SentimentCoreAnnotations is for, how to use it, or where to find working examples, the curated class code examples below may help.


SentimentCoreAnnotations belongs to the edu.stanford.nlp.sentiment package. Twelve code examples of the class are shown below, ordered by popularity. Note that the examples come from projects built against different CoreNLP releases, so some use the older annotation key names (SentimentCoreAnnotations.AnnotatedTree, SentimentCoreAnnotations.ClassName) while others use the newer ones (SentimentCoreAnnotations.SentimentAnnotatedTree, SentimentCoreAnnotations.SentimentClass).
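Before working through the examples, here is a minimal, self-contained sketch of typical usage, assuming a recent CoreNLP release (3.6.0 or newer) with the newer annotation key names; the class name SentimentQuickStart and the sample text are illustrative and not taken from any of the projects below. The sentiment annotator builds on the constituency parser, so tokenize, ssplit and parse must appear before sentiment in the annotators property.

import java.util.Properties;

import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.neural.rnn.RNNCoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.sentiment.SentimentCoreAnnotations;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.util.CoreMap;

public class SentimentQuickStart {
    public static void main(String[] args) {
        // The sentiment annotator needs tokenization, sentence splitting and parsing.
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit, parse, sentiment");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        Annotation annotation = pipeline.process("This library is wonderful. The weather is terrible.");
        for (CoreMap sentence : annotation.get(CoreAnnotations.SentencesAnnotation.class)) {
            // Human-readable label such as "Positive" or "Negative".
            String label = sentence.get(SentimentCoreAnnotations.SentimentClass.class);
            // Numeric class 0..4 (very negative .. very positive), read off the annotated tree.
            Tree tree = sentence.get(SentimentCoreAnnotations.SentimentAnnotatedTree.class);
            int score = RNNCoreAnnotations.getPredictedClass(tree);
            System.out.println(score + "\t" + label + "\t" + sentence);
        }
    }
}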

Example 1: getStanfordSentimentRate

import edu.stanford.nlp.sentiment.SentimentCoreAnnotations; // import the required package/class
public int getStanfordSentimentRate(String sentimentText) {
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize, ssplit, parse, sentiment");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    // Score each rough sentence and sum the scores, centered at 0 (each class is 0..4, so subtracting 2 gives -2..+2).
    int totalRate = 0;
    String[] linesArr = sentimentText.split("\\.");
    for (int i = 0; i < linesArr.length; i++) {
        if (!linesArr[i].trim().isEmpty()) {
            Annotation annotation = pipeline.process(linesArr[i]);
            for (CoreMap sentence : annotation.get(CoreAnnotations.SentencesAnnotation.class)) {
                Tree tree = sentence.get(SentimentCoreAnnotations.SentimentAnnotatedTree.class);
                int score = RNNCoreAnnotations.getPredictedClass(tree);
                totalRate = totalRate + (score - 2);
            }
        }
    }
    return totalRate;
}
 
Developer ID: wso2-incubator, Project: twitter-sentiment-analysis, Lines of code: 20, Source file: StanfordNLP.java

Example 2: findSentiment

import edu.stanford.nlp.sentiment.SentimentCoreAnnotations; // import the required package/class
public static int findSentiment(String tweet) {

    int mainSentiment = 0;
    if (tweet != null && tweet.length() > 0) {
        int longest = 0;
        Annotation annotation = pipeline.process(tweet);
        for (CoreMap sentence : annotation.get(CoreAnnotations.SentencesAnnotation.class)) {
            Tree tree = sentence.get(SentimentCoreAnnotations.SentimentAnnotatedTree.class);
            int sentiment = RNNCoreAnnotations.getPredictedClass(tree);
            String partText = sentence.toString();
            // The sentiment of the longest sentence is used as the overall sentiment.
            if (partText.length() > longest) {
                mainSentiment = sentiment;
                longest = partText.length();
            }
        }
    }
    return mainSentiment;
}
 
Developer ID: Activiti, Project: activiti-cloud-examples, Lines of code: 22, Source file: NLP.java

Example 3: findSentiment

import edu.stanford.nlp.sentiment.SentimentCoreAnnotations; // import the required package/class
public static int findSentiment(String text) {

    int mainSentiment = 0;
    if (text != null && text.length() > 0) {
        int longest = 0;
        Annotation annotation = pipeline.process(text);
        for (CoreMap sentence : annotation.get(CoreAnnotations.SentencesAnnotation.class)) {
            Tree tree = sentence.get(SentimentCoreAnnotations.AnnotatedTree.class);
            int sentiment = RNNCoreAnnotations.getPredictedClass(tree);
            String partText = sentence.toString();
            if (partText.length() > longest) {
                mainSentiment = sentiment;
                longest = partText.length();
            }
        }
    }
    return mainSentiment;
}
 
Developer ID: dflick-pivotal, Project: sentimentr-release, Lines of code: 22, Source file: NLP.java

Example 4: main

import edu.stanford.nlp.sentiment.SentimentCoreAnnotations; // import the required package/class
public static void main(String[] s) {
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

    // read some text in the text variable
    String text = "\"But I do not want to go among mad people,\" Alice remarked.\n" +
            "\"Oh, you can not help that,\" said the Cat: \"we are all mad here. I am mad. You are mad.\"\n" +
            "\"How do you know I am mad?\" said Alice.\n" +
            "\"You must be,\" said the Cat, \"or you would not have come here.\" This is awful, bad, disgusting";

    // create an empty Annotation just with the given text
    Annotation document = new Annotation(text);

    // run all Annotators on this text
    pipeline.annotate(document);

    List<CoreMap> sentences = document.get(CoreAnnotations.SentencesAnnotation.class);
    for (CoreMap sentence : sentences) {
        String sentiment = sentence.get(SentimentCoreAnnotations.SentimentClass.class);
        System.out.println(sentiment + "\t" + sentence);
    }
}
 
Developer ID: Vedenin, Project: java_in_examples, Lines of code: 24, Source file: StanfordCoreNLPTest.java

Example 5: run

import edu.stanford.nlp.sentiment.SentimentCoreAnnotations; // import the required package/class
public List<Pattern> run(List<Pattern> patterns) {

    Properties props = new Properties();
    props.setProperty("annotators", "tokenize, ssplit, pos, lemma, parse, sentiment");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

    for (Pattern pattern : patterns) {
        Annotation annotation = pipeline.process(pattern.toSentences());
        for (CoreMap sentence : annotation.get(CoreAnnotations.SentencesAnnotation.class)) {
            Tree tree = sentence.get(SentimentCoreAnnotations.AnnotatedTree.class);
            int sentiment = RNNCoreAnnotations.getPredictedClass(tree);
            for (CoreLabel token : sentence.get(CoreAnnotations.TokensAnnotation.class)) {
                String lemma = token.get(CoreAnnotations.LemmaAnnotation.class);
            }
        }
    }
    return null;
}
 
Developer ID: vladsandulescu, Project: phrases, Lines of code: 20, Source file: Postprocess.java

Example 6: getSentiment

import edu.stanford.nlp.sentiment.SentimentCoreAnnotations; // import the required package/class
/**
 * Synchronized method to obtain the sentiment of a set of documents.
 * Synchronization is fine because the method is invoked via a scheduled job
 * and only one execution at a time is permitted.
 * That also allows the loading of the model to be optimized.
 * @param documents the documents to analyze
 * @param meta timeline metadata that receives the average sentiment
 * @return the overall sentiment (positive, negative or neutral) of the documents
 */
public synchronized SentimentResult getSentiment(Set<String> documents, TimelineMusic meta) {

    double sentimentSum = 0;
    for (String document: documents) {
        int mainSentiment = 0;
        if (document != null && document.length() > 0) {
            int longest = 0;
            try {
                Annotation annotation = pipeline.process(document);
                // mainSentiment is the sentiment of the whole document. We find
                // the whole document by comparing the length of individual
                // annotated "fragments"
                for (CoreMap sentence : annotation.get(CoreAnnotations.SentencesAnnotation.class)) {
                    Tree tree = sentence.get(SentimentCoreAnnotations.AnnotatedTree.class);
                    int sentiment = RNNCoreAnnotations.getPredictedClass(tree);
                    String partText = sentence.toString();
                    if (partText.length() > longest) {
                        mainSentiment = sentiment;
                        longest = partText.length();
                    }
                }
            } catch (Exception ex) {
                logger.error("Problem analyzing document sentiment. " + document, ex);
                continue;
            }
        }
        sentimentSum += mainSentiment;
    }

    double average = sentimentSum / documents.size();
    meta.setAverageSentiment(average);

    if (average >= 2.25) {
        return SentimentResult.POSITIVE;
    } else if (average <= 1.75) {
        return SentimentResult.NEGATIVE;
    }
    return SentimentResult.NEUTRAL;
}
 
Developer ID: Glamdring, Project: computoser, Lines of code: 48, Source file: SentimentAnalyzer.java

Example 7: constructAnalyzedSentence

import edu.stanford.nlp.sentiment.SentimentCoreAnnotations; // import the required package/class
private AnalyzedSentence constructAnalyzedSentence(CoreMap sentence) {
    AnalyzedSentence analyzedSentence = new AnalyzedSentence(sentence.toString());

    // a CoreLabel is a CoreMap with additional token-specific methods
    for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
        // this is the NER label of the token
        String ne = token.get(NamedEntityTagAnnotation.class);
        if (!StringUtils.equals("O", ne)) {
            String word = token.get(TextAnnotation.class);
            NamedEntity entity = new NamedEntity(word);
            entity.setType(ne);
            entity.setOffsetBegin(token.get(CharacterOffsetBeginAnnotation.class));
            entity.setOffsetEnd(token.get(CharacterOffsetEndAnnotation.class));
            analyzedSentence.addEntity(entity);
        }

    }

    // this is the parse tree of the current sentence
    Tree sentimentTree = sentence.get(SentimentCoreAnnotations.SentimentAnnotatedTree.class);
    int sentiment = RNNCoreAnnotations.getPredictedClass(sentimentTree);
    analyzedSentence.setSentiment(sentiment);
    return analyzedSentence;
}
 
Developer ID: davidbogue, Project: Textalytics, Lines of code: 25, Source file: TextAnalyzer.java

Example 8: analyze

import edu.stanford.nlp.sentiment.SentimentCoreAnnotations; // import the required package/class
/**
 * Analyse tweet text, returning the sentiment extracted from the longest
 * sentence (by character count).
 * @param text the tweet text.
 * @return a {@link Sentiment} object containing the sentiment value and
 * its label.
 */
public Sentiment analyze(String text) {
	Sentiment mainSentiment = null;

	if (text != null && text.length() > 0) {
		String psText = preprocessText(text);

		int longest = 0;
		Annotation annotation = pipeline.process(psText);
		for (CoreMap sentence : annotation.get(CoreAnnotations.SentencesAnnotation.class)) {
			String partText = sentence.toString();
			if (partText.length() > longest) {
				Tree tree = sentence.get(SentimentCoreAnnotations.AnnotatedTree.class);
				int sentiment = RNNCoreAnnotations.getPredictedClass(tree);
				mainSentiment = new Sentiment(sentence.get(SentimentCoreAnnotations.ClassName.class), sentiment);
				longest = partText.length();
			}
		}

		LOGGER.trace("Got '{}' sentiment from '{}'", mainSentiment.getSentimentClass(), psText);
	}

	return mainSentiment;
}
 
Developer ID: flaxsearch, Project: hackday, Lines of code: 31, Source file: SentimentAnalysisService.java

Example 9: getSentiment

import edu.stanford.nlp.sentiment.SentimentCoreAnnotations; // import the required package/class
private static List<Map.Entry<String, String>> getSentiment(StanfordCoreNLP pipeline, String document)
{
    List<Map.Entry<String, String>> ret = new ArrayList<Map.Entry<String, String>>();


    Annotation annotation = pipeline.process(document);
    /*
     * We're going to iterate over all of the sentences and extract the sentiment.  We'll adopt a majority rule policy
     */
    for( CoreMap sentence : annotation.get(CoreAnnotations.SentencesAnnotation.class))
    {
        //for each sentence, we get the sentiment annotation
        //this comes in the form of a tree of annotations
        Tree sentimentTree = sentence.get(SentimentCoreAnnotations.AnnotatedTree.class);
        //Letting CoreNLP roll up the sentiment for us
        int sentimentClassIdx = RNNCoreAnnotations.getPredictedClass(sentimentTree);
        //now we add to our list of sentences and sentiments
        SentimentClass sentimentClass = SentimentClass.getGeneral(sentimentClassIdx);
        List<Double> probs = new ArrayList<Double>();
        {
            SimpleMatrix mat = RNNCoreAnnotations.getPredictions(sentimentTree);
            for(int i = 0;i < SentimentClass.values().length;++i)
            {
                probs.add(mat.get(i));
            }
        }
        String sentenceStr = AnnotationUtils.sentenceToString(sentence).replace("\n", "");
        ret.add(new AbstractMap.SimpleEntry<String, String>(sentenceStr, sentimentClass + "," + Joiner.on(';').join(probs)));
    }
    return ret;
}
 
Developer ID: OSBI, Project: pdi-sentiment, Lines of code: 32, Source file: GenerateTrainingData.java

Example 10: getSentiment

import edu.stanford.nlp.sentiment.SentimentCoreAnnotations; // import the required package/class
private static int getSentiment(CoreMap sentence) {
    Tree tree = sentence.get(SentimentCoreAnnotations.AnnotatedTree.class);
    return RNNCoreAnnotations.getPredictedClass(tree);
}
 
Developer ID: 2bhaskar, Project: SentencewiseSentimentAnalysis, Lines of code: 5, Source file: TestStanfordSentiment.java

Example 11: main

import edu.stanford.nlp.sentiment.SentimentCoreAnnotations; // import the required package/class
public static void main(String[] args) {
    Properties props = new Properties();
    //props.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
    props.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref, sentiment");
    
    /*
    boolean caseless = true;
    if (caseless) {
            props.put("","");
            props.put("pos.model","edu/stanford/nlp/models/pos-tagger/english-caseless-left3words-distsim.tagger");
            props.put("parse.model","edu/stanford/nlp/models/lexparser/englishPCFG.caseless.ser.gz ");
            props.put("ner.model","edu/stanford/nlp/models/ner/english.all.3class.caseless.distsim.crf.ser.gz edu/stanford/nlp/models/ner/english.muc.7class.caseless.distsim.crf.ser.gz edu/stanford/nlp/models/ner/english.conll.4class.caseless.distsim.crf.ser.gz ");
    }
            */

    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

    Annotation annotation;
    if (args.length > 0) {
        annotation = new Annotation(IOUtils.slurpFileNoExceptions(args[0]));
    } else {
        annotation = new Annotation("This is good.  I am parsing natural language now and can help people.");
    }

    pipeline.annotate(annotation);
    
    /*pipeline.prettyPrint(annotation, out);
    if (xmlOut != null) {
        pipeline.xmlPrint(annotation, xmlOut);
    }

    out.println(annotation.toShorterString());*/
    
    
    List<CoreMap> sentences = annotation.get(CoreAnnotations.SentencesAnnotation.class);
    if (sentences == null) return;
    
    for (CoreMap sentence : sentences) {
        // traversing the words in the current sentence
        // a CoreLabel is a CoreMap with additional token-specific methods
        for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
            // this is the text of the token
            String word = token.get(TextAnnotation.class);
            // this is the POS tag of the token
            String pos = token.get(PartOfSpeechAnnotation.class);
            // this is the NER label of the token
            String ne = token.get(NamedEntityTagAnnotation.class);
            System.out.println(word + " " + pos + " " + ne + " " + token);
        }

        System.out.println("sentiment: " + sentence.get(SentimentCoreAnnotations.AnnotatedTree.class));
        System.out.println("sentiment: " + sentence.get(SentimentCoreAnnotations.ClassName.class));
        
        Tree tree = sentence.get(TreeCoreAnnotations.TreeAnnotation.class);
        tree.pennPrint(out);
        System.out.println(sentence.get(SemanticGraphCoreAnnotations.BasicDependenciesAnnotation.class).toString("plain"));

        SemanticGraph graph = sentence.get(SemanticGraphCoreAnnotations.CollapsedCCProcessedDependenciesAnnotation.class);
        System.out.println(graph.toString("plain"));

    }

    
}
 
Developer ID: automenta, Project: netentionj-desktop, Lines of code: 67, Source file: CoreNLPDemo.java

Example 12: getSentiment

import edu.stanford.nlp.sentiment.SentimentCoreAnnotations; // import the required package/class
public static String getSentiment(CoreMap sentence) {       
    return sentence.get(SentimentCoreAnnotations.ClassName.class);
}
 
Developer ID: automenta, Project: netentionj-desktop, Lines of code: 4, Source file: TextParse.java


Note: The edu.stanford.nlp.sentiment.SentimentCoreAnnotations class examples in this article were compiled by 纯净天空 from GitHub, MSDocs and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by their respective authors, and copyright of the source code remains with those authors; consult each project's license before redistributing or using the code. Do not reproduce without permission.