

Java TokenizerModel Class Code Examples

This article collects typical usage examples of the Java class opennlp.tools.tokenize.TokenizerModel. If you are wondering what the TokenizerModel class is for, how to use it, or where to find examples of it in practice, the curated class code examples below may help.


The TokenizerModel class belongs to the opennlp.tools.tokenize package. The sections below show 15 code examples of the class, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
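Before the individual examples, here is a minimal, self-contained sketch of the pattern they all share: open an `InputStream` on a pre-trained model file, construct a `TokenizerModel` from it, wrap it in a `TokenizerME`, and tokenize text. The model path `en-token.bin` is an assumption for illustration; substitute the path to whichever OpenNLP tokenizer model you have downloaded.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

import opennlp.tools.tokenize.Tokenizer;
import opennlp.tools.tokenize.TokenizerME;
import opennlp.tools.tokenize.TokenizerModel;

public class TokenizerModelSketch {
    public static void main(String[] args) throws IOException {
        // "en-token.bin" is an assumed path to a pre-trained English
        // tokenizer model; download the model for your language first.
        try (InputStream modelIn = new FileInputStream("en-token.bin")) {
            TokenizerModel model = new TokenizerModel(modelIn);
            Tokenizer tokenizer = new TokenizerME(model);
            String[] tokens = tokenizer.tokenize("Hello, world! This is OpenNLP.");
            for (String token : tokens) {
                System.out.println(token);
            }
        }
    }
}
```

Note the try-with-resources block: it guarantees the model stream is closed, which several of the examples below omit.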

Example 1: tokenDetect

import opennlp.tools.tokenize.TokenizerModel; // import the required package/class
public String[] tokenDetect(String sentence) {
	File modelIn = null;
	String[] tokens = null;
	try {
		File userDir = new File(System.getProperty("user.dir"));
		if (this.turNLPInstance.getLanguage().equals("en_US")) {
			modelIn = new File(userDir.getAbsolutePath().concat("/models/opennlp/en/en-token.bin"));
		} else if (this.turNLPInstance.getLanguage().equals("pt_BR")) {
			modelIn = new File(userDir.getAbsolutePath().concat("/models/opennlp/pt/pt-token.bin"));
		}
		TokenizerModel model = new TokenizerModel(modelIn);
		Tokenizer tokenizer = new TokenizerME(model);
		tokens = tokenizer.tokenize(sentence);
	} catch (IOException e) {
		e.printStackTrace();
	}
	return tokens;
}
 
Developer ID: openviglet, Project: turing, Lines of code: 19, Source: TurOpenNLPConnector.java

Example 2: initialize

import opennlp.tools.tokenize.TokenizerModel; // import the required package/class
/**
 * Initializes the current instance with the given context.
 * 
 * Note: Do all initialization in this method, do not use the constructor.
 */
public void initialize(UimaContext context) throws ResourceInitializationException {

    super.initialize(context);

    TokenizerModel model;

    try {
        TokenizerModelResource modelResource =
                        (TokenizerModelResource) context.getResourceObject(UimaUtil.MODEL_PARAMETER);

        model = modelResource.getModel();
    } catch (ResourceAccessException e) {
        throw new ResourceInitializationException(e);
    }

    tokenizer = new TokenizerME(model);
}
 
Developer ID: deeplearning4j, Project: DataVec, Lines of code: 23, Source: ConcurrentTokenizer.java

Example 3: init

import opennlp.tools.tokenize.TokenizerModel; // import the required package/class
/**
 * Initialization method. Creates the OpenNLP sentence detector and tokenizer,
 * and loads the stopword lists.
 * @param sent sentence-detector model stream
 * @param token tokenizer model stream
 * @param stop stopword list stream
 * @param exstop extended stopword list stream
 */
private void init(InputStream sent, InputStream token, InputStream stop, InputStream exstop) throws IOException {
    // creates a new SentenceDetector, POSTagger, and Tokenizer
    SentenceModel sentModel = new SentenceModel(sent);
    sent.close();
    sdetector = new SentenceDetectorME(sentModel);
    TokenizerModel tokenModel = new TokenizerModel(token);
    token.close();
    tokenizer = new TokenizerME(tokenModel);
    BufferedReader br = new BufferedReader(new InputStreamReader(stop));
    String line;
    while ((line = br.readLine()) != null) {
        stopwords.add(line);
    }
    br.close();
    br = new BufferedReader(new InputStreamReader(exstop));
    while ((line = br.readLine()) != null) {
        extendedStopwords.add(line);
    }
    br.close();
}
 
Developer ID: J0Nreynolds, Project: Articleate, Lines of code: 26, Source: TextRank.java

Example 4: doInitialize

import opennlp.tools.tokenize.TokenizerModel; // import the required package/class
@Override
public void doInitialize(UimaContext aContext) throws ResourceInitializationException {
	try {
		tokensModel.loadModel(TokenizerModel.class, getClass().getResourceAsStream("en_token.bin"));
		sentencesModel.loadModel(SentenceModel.class, getClass().getResourceAsStream("en_sent.bin"));
		posModel.loadModel(POSModel.class, getClass().getResourceAsStream("en_pos_maxent.bin"));
		chunkModel.loadModel(ChunkerModel.class, getClass().getResourceAsStream("en_chunker.bin"));
	} catch (BaleenException be) {
		getMonitor().error("Unable to load OpenNLP Language Models", be);
		throw new ResourceInitializationException(be);
	}

	try {
		sentenceDetector = new SentenceDetectorME((SentenceModel) sentencesModel.getModel());
		wordTokenizer = new TokenizerME((TokenizerModel) tokensModel.getModel());
		posTagger = new POSTaggerME((POSModel) posModel.getModel());
		phraseChunker = new ChunkerME((ChunkerModel) chunkModel.getModel());
	} catch (Exception e) {
		getMonitor().error("Unable to create OpenNLP taggers", e);
		throw new ResourceInitializationException(e);
	}
}
 
Developer ID: dstl, Project: baleen, Lines of code: 23, Source: OpenNLP.java

Example 5: testLoad

import opennlp.tools.tokenize.TokenizerModel; // import the required package/class
@Test
public void testLoad() throws Exception{
	SharedOpenNLPModel m = new SharedOpenNLPModel();
	
	m.loadModel(TokenizerModel.class, OpenNLP.class.getResourceAsStream("en_token.bin"));
	
	BaseModel bm = m.getModel();
	assertNotNull(bm);
	assertTrue(bm instanceof TokenizerModel);
	assertEquals("en", bm.getLanguage());
	
	//Trying to load a different model shouldn't change the resource
	m.loadModel(SentenceModel.class, OpenNLP.class.getResourceAsStream("en_sent.bin"));
	assertEquals(bm, m.getModel());
	
	m.doDestroy();
}
 
Developer ID: dstl, Project: baleen, Lines of code: 18, Source: SharedOpenNLPModelTest.java

Example 6: scoreStructure

import opennlp.tools.tokenize.TokenizerModel; // import the required package/class
public double scoreStructure(String ca, String q, String passage, boolean verbose) throws InvalidFormatException, IOException{
	POSTaggerME parserModel = new POSTaggerME(new POSModel(new FileInputStream(new File("en-pos-model.bin"))));
	Tokenizer tokenizer = new TokenizerME(new TokenizerModel(new FileInputStream(new File("en-token.bin"))));
	Parser parser = ParserFactory.create(new ParserModel(new FileInputStream(new File("en-parser.bin"))));
	double score = 0;
	
	Parse[] questionParse = ParserTool.parseLine(q, parser, 1);
	Parse[] passageParse = ParserTool.parseLine(passage, parser, 1);
	
	if (passage.contains(ca)) {
		for (int i = 0; i < questionParse.length; i++) {
			score += matchChildren(questionParse[i],passageParse[i]);
		}
	}
	
	return score;
}
 
Developer ID: SeanTater, Project: uncc2014watsonsim, Lines of code: 18, Source: JM_Scorer.java

Example 7: startStage

import opennlp.tools.tokenize.TokenizerModel; // import the required package/class
@Override
public void startStage(StageConfiguration config) {

  // parse the config to map the params properly
  textField = config.getProperty("textField", textField);
  peopleField = config.getProperty("peopleField", peopleField);
  posTextField = config.getProperty("posTextField", posTextField);

  try {
    // Sentence finder
    SentenceModel sentModel = new SentenceModel(new FileInputStream(sentenceModelFile));
    sentenceDetector = new SentenceDetectorME(sentModel);
    // tokenizer
    TokenizerModel tokenModel = new TokenizerModel(new FileInputStream(tokenModelFile));
    tokenizer = new TokenizerME(tokenModel);
    // person name finder
    TokenNameFinderModel nameModel = new TokenNameFinderModel(new FileInputStream(personModelFile));
    nameFinder = new NameFinderME(nameModel);
    // load the part of speech tagger.
    posTagger = new POSTaggerME(new POSModel(new FileInputStream(posModelFile)));
  } catch (IOException e) {
    log.info("Error loading up OpenNLP Models. {}", e.getLocalizedMessage());
    e.printStackTrace();
  }
}
 
Developer ID: MyRobotLab, Project: myrobotlab, Lines of code: 26, Source: NounPhraseExtractor.java

Example 8: exec

import opennlp.tools.tokenize.TokenizerModel; // import the required package/class
public DataBag exec(Tuple input) throws IOException
{
    if(input.size() != 1) {
        throw new IOException();
    }

    String inputString = input.get(0).toString();
    if(inputString == null || inputString.isEmpty()) {
        return null;
    }
    DataBag outBag = bf.newDefaultBag();
    if(this.tokenizer == null) {
        String loadFile = CachedFile.getFileName(MODEL_FILE, this.modelPath);
        InputStream file = new FileInputStream(loadFile);
        InputStream buffer = new BufferedInputStream(file);
        TokenizerModel model = new TokenizerModel(buffer);
        this.tokenizer = new TokenizerME(model);
    }
    String[] tokens = this.tokenizer.tokenize(inputString);
    for(String token : tokens) {
        Tuple outTuple = tf.newTuple(token);
        outBag.add(outTuple);
    }
    return outBag;
}
 
Developer ID: apache, Project: incubator-datafu, Lines of code: 26, Source: TokenizeME.java

Example 9: KeyPhraseChunkExtractor

import opennlp.tools.tokenize.TokenizerModel; // import the required package/class
public KeyPhraseChunkExtractor() throws IOException {

		InputStream modelIn = getClass().getResourceAsStream(
				"/nlptools/data/en-pos-maxent.bin");
		posModel = new POSModel(modelIn);
		tagger = new POSTaggerME(posModel);

		modelIn = getClass().getResourceAsStream(
				"/nlptools/data/en-chunker.bin");
		chunkModel = new ChunkerModel(modelIn);
		chunker = new ChunkerME(chunkModel);

		modelIn = getClass().getResourceAsStream("/nlptools/data/en-token.bin");
		nlTokenizerModel = new TokenizerModel(modelIn);
		nlTokenizer = new TokenizerME(nlTokenizerModel);
	}
 
Developer ID: mast-group, Project: nlptools, Lines of code: 17, Source: KeyPhraseChunkExtractor.java

Example 10: inform

import opennlp.tools.tokenize.TokenizerModel; // import the required package/class
@Override
public void inform(ResourceLoader loader) throws IOException {
    if(sentenceModelFile!=null) {
        sentenceOp = new SentenceDetectorME(new SentenceModel(
                loader.openResource(sentenceModelFile)));
    }

    if(tokenizerModelFile==null)
        throw new IOException("Parameter 'tokenizerModel' is required, but is invalid: " + tokenizerModelFile);
    tokenizerOp = new TokenizerME(new TokenizerModel(
            loader.openResource(tokenizerModelFile)
    ));

    if(parChunkingClass!=null) {
        try {
            Class<?> c = Class.forName(parChunkingClass);
            Object o = c.getDeclaredConstructor().newInstance();
            paragraphChunker = (ParagraphChunker) o;
        }catch (Exception e){
            throw new IOException(e);
        }
    }

}
 
Developer ID: ziqizhang, Project: jate, Lines of code: 25, Source: OpenNLPTokenizerFactory.java

Example 11: initialize

import opennlp.tools.tokenize.TokenizerModel; // import the required package/class
public static void initialize() throws IOException {
	
	/* normal model */
	/*
	model = new POSModelLoader().load(new File(RESOURCES + "pt.postagger.model"));
       tModel = new TokenizerModel(new FileInputStream(RESOURCES + "pt.tokenizer.model")); 
       sModel = new SentenceModel(new FileInputStream(RESOURCES + "pt.sentdetect.model"));
       */
	
       /* with VPP tag */
       model = new POSModelLoader().load(new File(RESOURCES + "pt.postaggerVerbPP.model"));
       tModel = new TokenizerModel(new FileInputStream(RESOURCES + "pt.tokenizerVerbPP.model")); 
       sModel = new SentenceModel(new FileInputStream(RESOURCES + "pt.sentDetectVerbPP.model"));
               
       tagger = new POSTaggerME(model); 
       token = new TokenizerME(tModel);
       sent = new SentenceDetectorME(sModel);
}
 
Developer ID: davidsbatista, Project: MuSICo, Lines of code: 19, Source: PortuguesePOSTagger.java

Example 12: segmentWords

import opennlp.tools.tokenize.TokenizerModel; // import the required package/class
public List<String> segmentWords(String text) {
	
	List<String> wordsList = new ArrayList<String>();
    
    try {
    	InputStream modelIn = getClass().getResourceAsStream(wordBin);
		TokenizerModel model = new TokenizerModel(modelIn);
		TokenizerME tokenizer = new TokenizerME(model);
		String[] words = tokenizer.tokenize(text);
		for(String word : words)
			if (!punctuation.contains(word))
				wordsList.add(word);
		
		modelIn.close();
	} catch (IOException e) {
		e.printStackTrace();
	}
    
    return wordsList;
}
 
Developer ID: kariminf, Project: langpi, Lines of code: 21, Source: OpennlpSegmenter.java

Example 13: initialize

import opennlp.tools.tokenize.TokenizerModel; // import the required package/class
/**
 * Initializes the current instance with the given context.
 * 
 * Note: Do all initialization in this method, do not use the constructor.
 */
public void initialize(UimaContext context)
    throws ResourceInitializationException {

  super.initialize(context);

  TokenizerModel model;

  try {
    TokenizerModelResource modelResource = (TokenizerModelResource) context
        .getResourceObject(UimaUtil.MODEL_PARAMETER);

    model = modelResource.getModel();
  } catch (ResourceAccessException e) {
    throw new ResourceInitializationException(e);
  }

  tokenizer = new TokenizerME(model);
}
 
Developer ID: jpatanooga, Project: Canova, Lines of code: 24, Source: ConcurrentTokenizer.java

Example 14: initialize

import opennlp.tools.tokenize.TokenizerModel; // import the required package/class
/**
 * Initializes the current instance with the given context.
 *
 * Note: Do all initialization in this method, do not use the constructor.
 */
public void initialize(UimaContext context) throws ResourceInitializationException {

    super.initialize(context);

    TokenizerModel model;

    try {
        TokenizerModelResource modelResource =
                        (TokenizerModelResource) context.getResourceObject(UimaUtil.MODEL_PARAMETER);

        model = modelResource.getModel();
    } catch (ResourceAccessException e) {
        throw new ResourceInitializationException(e);
    }

    tokenizer = new TokenizerME(model);
}
 
Developer ID: deeplearning4j, Project: deeplearning4j, Lines of code: 23, Source: ConcurrentTokenizer.java

Example 15: AbstractTokenizeDataset

import opennlp.tools.tokenize.TokenizerModel; // import the required package/class
public AbstractTokenizeDataset(
        String name,
        String folder) {
    super(name);
    this.folder = folder;
    this.formatFilename = name; // by default
    try {
        // initiate tokenizer
        InputStream tokenizeIn = new FileInputStream(GlobalConstants.TokenizerFilePath);
        TokenizerModel tokenizeModel = new TokenizerModel(tokenizeIn);
        this.tokenizer = new TokenizerME(tokenizeModel);
    } catch (Exception e) {
        e.printStackTrace();
        System.exit(1);
    }
}
 
Developer ID: vietansegan, Project: segan, Lines of code: 17, Source: AbstractTokenizeDataset.java


Note: The opennlp.tools.tokenize.TokenizerModel class examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. Refer to each project's license before distributing or using the code; do not reproduce without permission.