

Python Parser.tokenize Method Code Examples

This article collects typical usage examples of the Python Parser.Parser.tokenize method. If you are wondering how Parser.tokenize is used in practice and what its calls look like, the selected code examples below may help. You can also explore further usage examples of the enclosing Parser.Parser class.


A total of 3 code examples of the Parser.tokenize method are shown below, ordered by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
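Before the examples, here is a minimal orientation sketch of the calling pattern; the input string is a placeholder, and note that the projects below define tokenize differently (Example 1 passes a corpus file path, while Examples 2 and 3 pass a text string and receive a token list).

# Minimal sketch, not taken from any of the projects below; the input text is a
# placeholder and the exact Parser/tokenize signature varies per project.
from Parser import Parser

parser = Parser()
tokens = parser.tokenize("an example sentence to split into tokens")
print(tokens)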

Example 1: main

# Required import: from Parser import Parser [as alias]
# Or alternatively: from Parser.Parser import tokenize [as alias]
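# Note: the Train, Tune, and Test classes used below are assumed to come from
# companion modules of the same project; their imports are not shown in this excerpt.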
def main():
    parser = Parser(True)

    # Tokenize the data
    parser.tokenize("src/europarl-v7.es-en.es")
    parser.tokenize("src/europarl-v7.es-en.en")
    parser.tokenize("src/europarl-v7.fr-en.en")
    parser.tokenize("src/europarl-v7.fr-en.fr")

    # Normalize the data
    parser.cleanse("data/europarl-v7.es-en.es.tok", "data/europarl-v7.es-en.en.tok")
    parser.cleanse("data/europarl-v7.fr-en.en.tok", "data/europarl-v7.fr-en.fr.tok")

    # Split data into train, tune, test sets
    parser.split_train_tune_test("data/europarl-v7.es-en.es.tok.cleansed", "data/europarl-v7.es-en.en.tok.cleansed",
        "data/europarl-v7.fr-en.en.tok.cleansed", "data/europarl-v7.fr-en.fr.tok.cleansed", .6, .2)

    parser.match("data/test/europarl-v7.es-en.es.tok.cleansed.test", "data/test/europarl-v7.es-en.en.tok.cleansed.test",
        "data/test/europarl-v7.fr-en.en.tok.cleansed.test", "data/test/europarl-v7.fr-en.fr.tok.cleansed.test")

    trainer = Train(True)
    # Build target language models
    trainer.build_language_models("data/train/europarl-v7.es-en.en.tok.cleansed.train")
    trainer.build_language_models("data/train/europarl-v7.fr-en.fr.tok.cleansed.train")

    # Train each leg of the translation system
    trainer.train("data/train/europarl-v7.es-en.es.tok.cleansed.train",
        "data/train/europarl-v7.es-en.en.tok.cleansed.train", "es-en.working")
    trainer.train("data/train/europarl-v7.fr-en.en.tok.cleansed.train",
        "data/train/europarl-v7.fr-en.fr.tok.cleansed.train", "en-fr.working")

    # Tune the system on held out data
    tuner = Tune(True)
    tuner.tune("data/tune/europarl-v7.es-en.es.tok.cleansed.tune",
        "data/tune/europarl-v7.es-en.en.tok.cleansed.tune", "es-en.working")
    tuner.tune("data/tune/europarl-v7.fr-en.en.tok.cleansed.tune",
        "data/tune/europarl-v7.fr-en.fr.tok.cleansed.tune", "en-fr.working")

    test = Test(True)
    # Run interactive translator server
    test.test_translator_interactive("es-en.working")
    test.test_translator_interactive("en-fr.working")

    # Score translation quality between pivot translations using held out test data
    test.test_translation_quality("data/test/europarl-v7.es-en.es.tok.cleansed.test",
        "data/test/europarl-v7.es-en.en.tok.cleansed.test", "es-en.working")
    test.test_translation_quality("data/test/europarl-v7.fr-en.en.tok.cleansed.test",
        "data/test/europarl-v7.fr-en.fr.tok.cleansed.test", "en-fr.working")
    # Run interactive translator on pivoting system
    test.test_pivoting_interactive("es-en.working", "en-fr.working")

    # Score translation quality on entire translation using matched test data
    test.test_pivoting_quality("data/test/europarl-v7.es-en.es.tok.cleansed.test.matched",
        "es-en.working", "data/test/europarl-v7.fr-en.fr.tok.cleansed.test.matched", "en-fr.working")
Author: urielmandujano, Project: Neural-Network-Machine-Translation, Lines: 56, Source: decode.py

Example 2: wordProcess

# Required import: from Parser import Parser [as alias]
# Or alternatively: from Parser.Parser import tokenize [as alias]
def wordProcess( lst ) :
	parser = Parser()
	termString = parser.clean( lst )
	termLst = parser.tokenize( termString )
	termLst = parser.removeStopWords( termLst ) 
	termLst = util.removeDuplicates( termLst ) 
	return termLst
Author: JoliLin, Project: block_structure_format, Lines: 9, Source: bsFormat.py
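For context, a hypothetical call might look like the following; the input strings are made up for illustration, and the exact cleaning and stop-word behaviour depends on the project's Parser and util modules.

# Hypothetical usage of wordProcess; the input strings are placeholders.
blocks = ["This is the first text block.", "And this is the second text block."]
terms = wordProcess(blocks)  # cleaned, tokenized, stop-word-filtered, de-duplicated terms
print(terms)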

Example 3: SearchEngine

# Required import: from Parser import Parser [as alias]
# Or alternatively: from Parser.Parser import tokenize [as alias]

#......... part of the code is omitted here .........

                    self.documents[docId] = Document(docId, year, title, authors, norm)
            # if there's an empty line, the documents section has ended
            else:
                break

        # regex for parsing the inverted index entries into self.invertedIndex
        indexRegex = re.compile(r"(?P<word>.+);(?P<idf>.+);(?P<lst>.+)")
        # parsing the inverted index data
        for line in fin:
            line = line.strip()
            if not line.startswith("#"):
                match = indexRegex.match(line)
                word = match.group("word")
                idf = float(match.group("idf"))
                lst = ast.literal_eval(match.group("lst"))
                pair = (idf, lst)
                self.invertedIndex[word] = pair
        fin.close()

    def processQuery(self, query, K=10, evaluate=False):
        """
        Given a util.Query object, returns the top K documents most similar
        to it according to the vector model, plus evaluation results if param
        evaluate is True.

        param query: util.Query object.
        param K: number of most similar documents to retrieve.
        param evaluate: whether or not to evaluate the results of the query.
        return: a pair (results, evalResults), where results is a list of
        tuples (similarity, util.Document) ordered by decreasing similarity,
        and evalResults is a dict with data on the evaluation.
        """
        words = self.parser.tokenize(query.queryString)
        qCounter = Counter(words)

        accumulators = {}

        for word in qCounter.iterkeys():
            # if a word in the query doesn't exist in the inverted index,
            # that word is ignored
            try:
                idf, lst  = self.invertedIndex[word]
            except KeyError:
                print("[*] The word '{}' doesn't exist in the inverted index and will be ignored.".format(word))
                continue
            qCounter[word] = qCounter[word] * idf
            for pair in lst:
                docId, weight = pair
                partialAcc = accumulators.get(docId, 0)
                partialAcc += weight * qCounter[word]
                accumulators[docId] = partialAcc

        # more efficient way of getting the top K similarities without having
        # to sort all the results
        heap = [] # min heap to keep the top K similarities
        for docId, acc in accumulators.iteritems():
            doc = self.documents[docId]
            # normalize the accumulator by the document norm; at this point
            # acc holds the final similarity between the document and the query
            acc = acc / doc.norm
            # if the heap is not full, add the similarity regardless
            if len(heap) < K:
                heapq.heappush(heap, (acc, doc))
            # the heap is full, but the current similarity is greater than the
            # smallest similarity in the heap, so we pop the min heap to remove
Author: jpaulofb, Project: cfc_search_engine_tri, Lines: 70, Source: SearchEngine.py
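Since the excerpt above only shows the retrieval logic, a hypothetical usage sketch follows; the SearchEngine constructor, the util.Query fields, and the printed attributes are assumptions inferred from the docstring and are not verified against the project.

# Hypothetical usage sketch; constructor arguments and util.Query fields are assumed.
import util
from SearchEngine import SearchEngine

engine = SearchEngine()  # assumed to load its index file internally
query = util.Query(queryString="heart disease in children")
results, evalResults = engine.processQuery(query, K=5, evaluate=False)
for similarity, doc in results:
    print("{:.4f}  {}".format(similarity, doc.title))  # doc is assumed to expose a title attribute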


Note: the Parser.Parser.tokenize method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by many developers, and copyright of the source code remains with the original authors. Please consult the corresponding project's License before distributing or reusing the code, and do not reproduce this article without permission.