

Python TextLexer.get_tokens method code examples

This article collects typical usage examples of the Python method pygments.lexers.TextLexer.get_tokens. If you are wondering what TextLexer.get_tokens does, how to call it, or what real-world usage looks like, the selected code examples below may help. You can also explore further usage examples of pygments.lexers.TextLexer itself.


The following shows 1 code example of the TextLexer.get_tokens method; examples are sorted by popularity by default. You can upvote the examples you like or find useful, and your ratings help the system recommend better Python code examples.
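Before looking at the full example, here is a minimal sketch of what TextLexer.get_tokens returns: an iterator of (token_type, text) pairs. This sketch is for illustration only and is not taken from the project below.

from pygments.lexers import TextLexer

lexer = TextLexer()
for token_type, value in lexer.get_tokens("hello world\n"):
    print(token_type, repr(value))
# TextLexer is the "null" lexer and does no real analysis, so the whole
# input typically comes back as a single Token.Text token.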

Example 1: get_line_types

# Required import: from pygments.lexers import TextLexer [as alias]
# Or: from pygments.lexers.TextLexer import get_tokens [as alias]
# Note: the example also uses get_lexer_for_filename, guess_lexer and NemerleLexer
# from pygments.lexers, and ClassNotFound from pygments.util.
def get_line_types(repo, repo_uri, rev, path):
    """Returns an array, where each item means a line of code.
       Each item is labled 'code', 'comment' or 'empty'"""

    #profiler_start("Processing LineTypes for revision %s:%s", (self.rev, self.file_path))
    uri = os.path.join(repo_uri, path) # concat repo_uri and file_path for full path
    file_content = _get_file_content(repo, uri, rev)  # get file_content

    if file_content is None or file_content == '':
        printerr("[get_line_types] Error: No file content for " + str(rev) + ":" + str(path) + " found! Skipping.")
        line_types = None
    else:
        try:
            lexer = get_lexer_for_filename(path)
        except ClassNotFound:
            try:
                printdbg("[get_line_types] Guessing lexer for" + str(rev) + ":" + str(path) + ".")
                lexer = guess_lexer(file_content)
            except ClassNotFound:
                printdbg("[get_line_types] No guess or lexer found for " + str(rev) + ":" + str(path) + ". Using TextLexer instead.")
                lexer = TextLexer()

        if isinstance(lexer, NemerleLexer):
            # this lexer is broken and hangs in an infinite loop
            # see https://bitbucket.org/birkenfeld/pygments-main/issue/706/nemerle-lexer-ends-in-an-infinite-loop
            lexer = TextLexer()

        # Not sure whether this should be skipped when the language uses off-side rules
        # (e.g. Python; see http://en.wikipedia.org/wiki/Off-side_rule for a list)
        stripped_code = _strip_lines(file_content)
        lexer_output = _iterate_lexer_output(lexer.get_tokens(stripped_code))
        line_types_str = _comment_empty_or_code(lexer_output)
        line_types = line_types_str.split("\n")

    return line_types
Developer: ProjectHistory, Project: MininGit, Lines of code: 37, Source file: line_types.py
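To show how get_tokens feeds the classification step in the example above, here is a hypothetical, simplified classifier. The function name classify_lines and its logic are illustrative assumptions; the real _iterate_lexer_output and _comment_empty_or_code helpers belong to MininGit and are not shown on this page. It buckets each line as 'code', 'comment' or 'empty' based on the token types produced by the lexer.

from pygments.lexers import TextLexer
from pygments.token import Comment

def classify_lines(source, lexer=None):
    # Fall back to the "null" TextLexer, as the example above does.
    lexer = lexer or TextLexer()
    line_types = []
    for line in source.splitlines():
        if not line.strip():
            line_types.append('empty')
            continue
        # Keep only tokens that carry visible text on this line.
        tokens = [(t, v) for t, v in lexer.get_tokens(line) if v.strip()]
        if tokens and all(t in Comment for t, _ in tokens):
            line_types.append('comment')
        else:
            line_types.append('code')
    return line_types

print(classify_lines("x = 1\n\n# a comment\n"))
# TextLexer never emits Comment tokens, so this prints ['code', 'empty', 'code'];
# with a language-aware lexer such as PythonLexer the third line would be 'comment'.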


Note: the pygments.lexers.TextLexer.get_tokens examples in this article were compiled by 纯净天空 from GitHub, MSDocs and other open-source code and documentation platforms. The code snippets are selected from open-source projects contributed by their authors, and copyright remains with the original authors; please refer to the corresponding project's license before distributing or using the code. Do not reproduce without permission.