This article collects typical usage examples of the Java class edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation. If you have been wondering what TreeAnnotation is for and how to use it, the curated class code examples below may help.
The TreeAnnotation class belongs to the edu.stanford.nlp.trees.TreeCoreAnnotations package. Fifteen code examples of the TreeAnnotation class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
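Before the examples, here is a minimal, self-contained sketch of how a TreeAnnotation is typically produced and retrieved with the Stanford CoreNLP pipeline. It is not taken from any of the examples below; the class name, annotator list, and sample sentence are illustrative assumptions. The key point is that the "parse" annotator stores a constituency tree per sentence under TreeCoreAnnotations.TreeAnnotation.class.

import java.util.Properties;

import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation;
import edu.stanford.nlp.util.CoreMap;

public class TreeAnnotationSketch {
    public static void main(String[] args) {
        // The "parse" annotator fills in TreeAnnotation for each sentence.
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit,pos,parse");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        // Annotate an (illustrative) piece of text.
        Annotation document = new Annotation("Stanford CoreNLP builds constituency parse trees.");
        pipeline.annotate(document);

        // Each sentence CoreMap carries its own parse tree.
        for (CoreMap sentence : document.get(SentencesAnnotation.class)) {
            Tree tree = sentence.get(TreeAnnotation.class);
            tree.pennPrint(); // print the tree in Penn Treebank bracket format
        }
    }
}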
Example 1: parsingTest

import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation; // import the required package/class
private static void parsingTest()
{
    // String exampleText = "The software developer who inserted a major security flaw into OpenSSL 1.2.4.8, using the file foo/bar/blah.php has said the error was \"quite trivial\" despite the severity of its impact, according to a new report. The Sydney Morning Herald published an interview today with Robin Seggelmann, who added the flawed code to OpenSSL, the world's most popular library for implementing HTTPS encryption in websites, e-mail servers, and applications. The flaw can expose user passwords and potentially the private key used in a website's cryptographic certificate (whether private keys are at risk is still being determined). This is a new paragraph about Apache Tomcat's latest update 7.0.1.";
    String exampleText = "Microsoft Windows 7 before SP1 has Sun Java cross-site scripting vulnerability Java SE in file.php (refer to CVE-2014-1234).";
    // String exampleText = "Oracle DBRM has vulnerability in ABCD plug-in via abcd.1234 (found on abcd.com).";
    EntityLabeler labeler = new EntityLabeler();
    Annotation doc = labeler.getAnnotatedDoc("My Doc", exampleText);
    List<CoreMap> sentences = doc.get(SentencesAnnotation.class);
    for (CoreMap sentence : sentences)
    {
        for (CoreLabel token : sentence.get(TokensAnnotation.class))
        {
            System.out.println(token.get(TextAnnotation.class) + "\t" + token.get(CyberAnnotation.class));
        }
        System.out.println("Entities:\n" + sentence.get(CyberEntityMentionsAnnotation.class));
        System.out.println("Parse Tree:\n" + sentence.get(TreeAnnotation.class));
    }
}
Example 2: main

import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation; // import the required package/class
public static void main(String[] args) {
    // String exampleText = "The software developer who inserted a major security flaw into OpenSSL 1.2.4.8, using the file foo/bar/blah.php has said the error was \"quite trivial\" despite the severity of its impact, according to a new report. The Sydney Morning Herald published an interview today with Robin Seggelmann, who added the flawed code to OpenSSL, the world's most popular library for implementing HTTPS encryption in websites, e-mail servers, and applications. The flaw can expose user passwords and potentially the private key used in a website's cryptographic certificate (whether private keys are at risk is still being determined). This is a new paragraph about Apache Tomcat's latest update 7.0.1.";
    String exampleText = "Microsoft Windows 7 before SP1 has Sun Java cross-site scripting vulnerability Java SE in file.php (refer to CVE-2014-1234).";
    // String exampleText = "Oracle DBRM has vulnerability in ABCD plug-in via abcd.1234 (found on abcd.com).";
    EntityLabeler labeler = new EntityLabeler();
    Annotation doc = labeler.getAnnotatedDoc("My Doc", exampleText);
    List<CoreMap> sentences = doc.get(SentencesAnnotation.class);
    for (CoreMap sentence : sentences) {
        for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
            System.out.println(token.get(TextAnnotation.class) + "\t" + token.get(CyberAnnotation.class));
        }
        System.out.println("Entities:\n" + sentence.get(CyberEntityMentionsAnnotation.class));
        System.out.println("Parse Tree:\n" + sentence.get(TreeAnnotation.class));
    }
}
Example 3: parse

import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation; // import the required package/class
public static List<CoreMap> parse(String text) {
    // create an empty Annotation just with the given text
    Annotation document = new Annotation(text);
    // run all Annotators on this text
    pipeline.annotate(document);
    // these are all the sentences in this document
    // a CoreMap is essentially a Map that uses class objects as keys and has values with custom types
    List<CoreMap> sentences = document.get(SentencesAnnotation.class);
    List<Tree> trees = new ArrayList<>();
    List<Tree> dependencies = new ArrayList<>();
    for (CoreMap sentence : sentences) {
        // this is the parse tree of the current sentence
        Tree t = sentence.get(TreeAnnotation.class);
        SemanticGraph graph = sentence.get(CollapsedCCProcessedDependenciesAnnotation.class);
        trees.add(t);
    }
    return sentences;
}
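A note on Example 3: the trees list, dependencies list, and graph are collected but never returned; the method hands back the sentence CoreMaps themselves, and each of those still carries its parse tree. A minimal, hypothetical usage sketch of a caller (the sample sentence is an assumption) might look like this:

// Hypothetical call site: iterate the returned sentences and read each parse tree back.
for (CoreMap sentence : parse("The quick brown fox jumps over the lazy dog.")) {
    Tree tree = sentence.get(TreeAnnotation.class);
    if (tree != null) {
        tree.pennPrint(); // print the tree in Penn Treebank bracket format
    }
}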
Example 4: getParseTree

import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation; // import the required package/class
public static ArrayList<Tree> getParseTree(Annotation document) {
    ArrayList<Tree> forest = new ArrayList<Tree>();
    List<CoreMap> sentences = document.get(SentencesAnnotation.class);
    for (CoreMap sentence : sentences) {
        // traversing the words in the current sentence
        // a CoreLabel is a CoreMap with additional token-specific methods
        // for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
        //     // this is the text of the token
        //     String word = token.get(TextAnnotation.class);
        //     // this is the POS tag of the token
        //     String pos = token.get(PartOfSpeechAnnotation.class);
        //     // this is the NER label of the token
        //     String ne = token.get(NamedEntityTagAnnotation.class);
        // }
        // this is the parse tree of the current sentence
        Tree tree = sentence.get(TreeAnnotation.class);
        // Alternatively, this is the Stanford dependency graph of the current sentence, but without punctuation
        // SemanticGraph dependencies = sentence.get(BasicDependenciesAnnotation.class);
        forest.add(tree);
    }
    return forest;
}
Example 5: needsReannotation

import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation; // import the required package/class
/**
 * Checks for the presence of some critical annotations. If any of the annotations requested
 * through the reader's parameters is missing, the text needs to be re-annotated.
 *
 * @param a
 *            annotation
 * @param r
 *            reader with the desired annotations
 * @return true if the annotation is missing any of the requested information and must be recomputed
 */
private boolean needsReannotation(Annotation a, KpeReader r) {
    List<CoreMap> sentences = a.get(SentencesAnnotation.class);
    List<CoreLabel> tokens = a.get(TokensAnnotation.class);
    if (tokens == null || sentences == null || tokens.size() == 0 || sentences.size() == 0) {
        return true;
    }
    Set<Class<?>> sentenceAnnotations = sentences.get(0).keySet();
    Set<Class<?>> tokenAnnotations = tokens.get(0).keySet();
    if ((r.getIsMweOn() && !tokenAnnotations.contains(MWEAnnotation.class))
            || (r.getIsNeOn() && !tokenAnnotations.contains(NamedEntityTagAnnotation.class))) {
        return true;
    }
    if (r.getIsSyntaxOn() && !sentenceAnnotations.contains(TreeAnnotation.class)) {
        return true;
    }
    return false;
}
Example 6: parseAndRemovePeriods

import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation; // import the required package/class
/**
 * Return the CoreNLP annotated and parsed result of the entryString as an
 * array of {@link Tree} objects, formatted to remove trailing periods.
 * @param entryString String - The string to be processed
 * @return Array - Array containing {@link Tree} objects
 *         representing each string
 **/
public Tree[] parseAndRemovePeriods(String entryString) {
    List<CoreMap> sentences = annotate(entryString);
    ArrayList<Tree> ret = new ArrayList<Tree>();
    for (CoreMap sentence : sentences) {
        Tree tree = sentence.get(TreeAnnotation.class);
        int numKids = tree.lastChild().numChildren();
        String last = tree.lastChild().lastChild().toString();
        // if the final child is a period node, e.g. "(. .)", drop it
        if (last.indexOf("(. ") == 0) {
            tree.lastChild().removeChild(numKids - 1);
        }
        ret.add(tree);
    }
    Tree[] arr = ret.toArray(new Tree[ret.size()]);
    return arr;
}
Example 7: PreNERCoreMapWrapper

import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation; // import the required package/class
/**
 * Wraps the given CoreMap, caching its parse tree and its basic, collapsed, and
 * collapsed-CC-processed dependency graphs when they are present.
 */
public PreNERCoreMapWrapper(final CoreMap cm, final HeadFinder hf, final AnalyticUUIDGenerator gen) {
    this.wrapper = new CoreMapWrapper(cm, gen);
    this.hf = hf;
    this.tree = Optional.ofNullable(cm.get(TreeAnnotation.class));
    this.basicDeps = Optional.ofNullable(cm.get(BasicDependenciesAnnotation.class));
    this.colDeps = Optional.ofNullable(cm.get(CollapsedDependenciesAnnotation.class));
    this.colCCDeps = Optional.ofNullable(cm.get(CollapsedCCProcessedDependenciesAnnotation.class));
    this.gen = gen;
}
Example 8: initialize

import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation; // import the required package/class
@Override
public void initialize(int sourceInputId, Sequence<IString> source) {
    Tree parseTree = CoreNLPCache.get(sourceInputId).get(TreeAnnotation.class);
    this.posTags = parseTree.preTerminalYield();
}
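For context on Example 8: Tree.preTerminalYield() returns the labels of the pre-terminal nodes, i.e. the part-of-speech tags in left-to-right order, so posTags ends up holding one POS label per source token. A small illustrative sketch (the hard-coded tree stands in for the cached parse and is an assumption, not part of the example):

import java.util.List;
import edu.stanford.nlp.ling.Label;
import edu.stanford.nlp.trees.Tree;

// Build a tree from Penn Treebank notation instead of reading it from CoreNLPCache.
Tree parseTree = Tree.valueOf("(ROOT (S (NP (DT The) (NN cat)) (VP (VBD sat)) (. .)))");
List<Label> posTags = parseTree.preTerminalYield();
System.out.println(posTags); // prints [DT, NN, VBD, .]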
Example 9: toPhrases

import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation; // import the required package/class
/**
 * Transform a CoreMap instance into a list of Phrase instances.
 *
 * @param sentence
 * @param text
 * @return the phrases extracted from the sentence's parse tree
 */
public static List<Phrase> toPhrases(CoreMap sentence, String text) {
    Tree root = sentence.get(TreeAnnotation.class);
    SemanticGraph graph = sentence.get(CollapsedCCProcessedDependenciesAnnotation.class);
    ArrayList<Phrase> phrases = new ArrayList<Phrase>();
    for (Tree node : root.children())
        if (node.isPrePreTerminal() || node.isPreTerminal())
            phrases.add(toPhrase(node, graph, text));
        else if (node.isPhrasal())
            for (Phrase p : toPhrases(node, graph, text))
                phrases.add(p);
    return phrases;
}
Example 10: _trees

import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation; // import the required package/class
private static List<Tree> _trees(Phrase p) {
    // create an empty Annotation just with the given text
    Annotation document = p.memo(Phrase.coreNLP);
    try {
        // Run the full parse on this text
        constituency_parse_pipeline.annotate(document);
    } catch (IllegalArgumentException | NullPointerException ex) {
        /*
         * On extremely rare occasions (< 0.00000593% of passages)
         * it will throw an error like the following:
         *
         * Exception in thread "main" java.lang.IllegalArgumentException:
         * No head rule defined for SYM using class edu.stanford.nlp.trees.SemanticHeadFinder in SYM-10
         *
         * On more frequent occasions, you get the following:
         * Exception in thread "main" java.lang.NullPointerException
         * at edu.stanford.nlp.dcoref.RuleBasedCorefMentionFinder.findHead(RuleBasedCorefMentionFinder.java:276)
         *
         * Both of these are fatal for the passage.
         * Neither is a big deal for the index. Forget them.
         */
    }
    return p.memo(Phrase.sentences)
            .stream()
            .map(s -> s.get(TreeAnnotation.class))
            .filter(Objects::nonNull)
            .collect(toList());
}
Example 11: syntactTree

import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation; // import the required package/class
private List<Integer> syntactTree(CoreMap sentence, NGram orthographicForm) {
    List<Integer> toReturn = new LinkedList<Integer>();
    Tree tree = sentence.get(TreeAnnotation.class);
    if (tree == null)
        return toReturn;
    List<Tree> leaves = tree.getLeaves();
    List<CoreLabel> tokens = sentence.get(TokensAnnotation.class);
    NGram ngram = new NGram();
    for (int l = 0; l < leaves.size(); ++l) {
        CoreLabel token = tokens.get(l);
        // keep a sliding window of the same length as the target n-gram
        if (ngram.size() == orthographicForm.size()) {
            ngram.remove(0);
        }
        ngram.add(token);
        if (!ngram.equals(orthographicForm))
            continue;
        int[] heights = new int[ngram.size()];
        for (int t = 0; t < ngram.size(); ++t) {
            int subTreeHeight = tree.depth(leaves.get(l - t));
            heights[t] = subTreeHeight;
            for (int h = 2; h < subTreeHeight; ++h) {
                Tree ancestor = leaves.get(l - t).ancestor(h, tree);
                if (ancestor.value().matches("NP.{0,2}")) {
                    heights[t] = h - 1;
                    break;
                }
            }
        }
        int combinedHeight = 0;
        for (int h = 0; h < heights.length; ++h) {
            combinedHeight = Math.max(combinedHeight, heights[h]);
        }
        toReturn.add(combinedHeight);
    }
    return toReturn;
}
Example 12: getGeneratedStructures

import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation; // import the required package/class
private List<String> getGeneratedStructures(Tree parse, Tree leaf, String op) {
    Tree anc = leaf.ancestor(3, parse);
    if (anc.label().toString().equals("ROOT")) {
        return Arrays.asList(op.split(" and "));
    }
    Tree[] children = leaf.ancestor(2, parse).children();
    if (children.length == 1) {
        return Arrays.asList(op.split(" and "));
    } else if (children.length != 3) {
        List<List<String>> leafStrings = new ArrayList<List<String>>(2);
        leafStrings.add(new LinkedList<String>());
        for (Tree child : children) {
            if (child.label().toString().equals("CC") && child.getLeaves().get(0).label().toString().equals("and")) {
                leafStrings.add(new LinkedList<String>());
                continue;
            }
            leafStrings.get(leafStrings.size() - 1).addAll(getPosLeaves(child, ".*"));
        }
        if (leafStrings.size() > 2 || leafStrings.get(0).size() > 0
                || (leafStrings.size() > 1 && leafStrings.get(1).size() > 0)) {
            return Arrays.asList(op.split(" and "));
        }
        String[] leftAndRight = { conCat(leafStrings.get(0)), conCat(leafStrings.get(1)) };
        Tree[] trees = new Tree[2];
        int i = 0;
        for (String side : leftAndRight) {
            Annotation ann = new Annotation(side);
            sentenceAnalyzer.annotate(ann);
            trees[i++] = ann.get(SentencesAnnotation.class).get(0).get(TreeAnnotation.class);
        }
        return produceNewExpressions(parse, trees[0], trees[1], op);
    }
    return produceNewExpressions(parse, children[0], children[2], op);
}
Example 13: parse

import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation; // import the required package/class
public static Tree parse(String sentence)
{
    log.trace("parsing sentence: '" + sentence + "' as tree");
    Annotation document = new Annotation(sentence);
    treeParser.annotate(document);
    List<CoreMap> sentences = document.get(SentencesAnnotation.class);
    return sentences.get(0).get(TreeAnnotation.class);
    // return new ParseResult(sentences.get(0).get(TreeAnnotation.class), document.get(PartOfSpeechAnnotation.class));
}
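A possible call site for Example 13's helper, sketched under the assumption that treeParser is a pipeline that includes the parse annotator (the sentence below is illustrative):

// Hypothetical usage: parse one sentence and inspect the resulting tree.
Tree tree = parse("The parser returns a constituency tree.");
tree.pennPrint();                      // print in Penn Treebank bracket format
System.out.println(tree.firstChild()); // typically the S node directly under ROOT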
Example 14: fillInParseAnnotations

import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation; // import the required package/class
public static void fillInParseAnnotations(boolean verbose, boolean buildGraphs, CoreMap sentence, Tree tree) {
    // make sure all tree nodes are CoreLabels
    // TODO: why isn't this always true? something fishy is going on
    ParserAnnotatorUtils.convertToCoreLabels(tree);
    // index nodes, i.e., add start and end token positions to all nodes
    // this is needed by other annotators downstream, e.g., the NFLAnnotator
    tree.indexSpans(0);
    sentence.set(TreeAnnotation.class, tree);
    if (verbose) {
        System.err.println("Tree is:");
        tree.pennPrint(System.err);
    }
    if (buildGraphs) {
        // generate the dependency graphs
        SemanticGraph deps = generateCollapsedDependencies(tree);
        SemanticGraph uncollapsedDeps = generateUncollapsedDependencies(tree);
        SemanticGraph ccDeps = generateCCProcessedDependencies(tree);
        if (verbose) {
            System.err.println("SDs:");
            System.err.println(deps.toString("plain"));
        }
        sentence.set(SemanticGraphCoreAnnotations.CollapsedDependenciesAnnotation.class, deps);
        sentence.set(SemanticGraphCoreAnnotations.BasicDependenciesAnnotation.class, uncollapsedDeps);
        sentence.set(SemanticGraphCoreAnnotations.CollapsedCCProcessedDependenciesAnnotation.class, ccDeps);
    }
    setMissingTags(sentence, tree);
}
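Once Example 14 has run, downstream components read those annotations straight off the sentence CoreMap. A minimal sketch of the read side (the variable names here are assumptions, and only the keys set above are used):

// Read back what fillInParseAnnotations stored on the sentence.
Tree storedTree = sentence.get(TreeAnnotation.class);
SemanticGraph basic = sentence.get(SemanticGraphCoreAnnotations.BasicDependenciesAnnotation.class);
SemanticGraph collapsed = sentence.get(SemanticGraphCoreAnnotations.CollapsedDependenciesAnnotation.class);
SemanticGraph ccProcessed = sentence.get(SemanticGraphCoreAnnotations.CollapsedCCProcessedDependenciesAnnotation.class);
System.out.println(collapsed.toString("plain")); // same "plain" format used in the verbose output above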
Example 15: getTextAnnotatedTree

import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation; // import the required package/class
public List<Tree> getTextAnnotatedTree(String text) {
    ArrayList<Tree> results = new ArrayList<Tree>();
    List<CoreMap> annotationResults = annotate(text);
    for (CoreMap sentence : annotationResults) {
        results.add(sentence.get(TreeAnnotation.class));
    }
    return results;
}