

Python RegexLexer.get_tokens_unprocessed Method Code Examples

This article collects typical usage examples of the pygments.lexer.RegexLexer.get_tokens_unprocessed method in Python. If you are unsure what RegexLexer.get_tokens_unprocessed does, how to call it, or what it looks like in practice, the curated code examples below should help. You can also explore other usages of the containing class, pygments.lexer.RegexLexer.


Four code examples of the RegexLexer.get_tokens_unprocessed method are shown below, ordered by popularity.
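Before the examples, here is a minimal sketch (not taken from the projects below, using pygments' stock PythonLexer for illustration) of calling the unmodified method. Unlike get_tokens, get_tokens_unprocessed yields (index, token, value) triples, where index is the character offset of the token in the input text; all four examples override this method in a RegexLexer subclass and post-process its output.

from pygments.lexers import PythonLexer

lexer = PythonLexer()
for index, token, value in lexer.get_tokens_unprocessed('x = 1\n'):
    print(index, token, repr(value))
# Yields triples such as:
#   0 Token.Name 'x'
#   2 Token.Operator '='
#   4 Token.Literal.Number.Integer '1'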

Example 1: get_tokens_unprocessed

# Required import: from pygments.lexer import RegexLexer [as an alias]
# Or: from pygments.lexer.RegexLexer import get_tokens_unprocessed [as an alias]
from pygments.lexer import RegexLexer
from pygments.token import Keyword, Name, Operator

def get_tokens_unprocessed(self, text):
    # Post-process the base lexer's stream: reclassify plain Name
    # tokens whose lowercased value appears in one of the word sets.
    for index, token, value in RegexLexer.get_tokens_unprocessed(self, text):
        if token is Name:
            lowercase_value = value.lower()
            if lowercase_value in self.builtins:
                yield index, Name.Builtin, value
                continue
            if lowercase_value in self.keywords:
                yield index, Keyword, value
                continue
            if lowercase_value in self.functions:
                yield index, Name.Builtin, value
                continue
            if lowercase_value in self.operators:
                yield index, Operator, value
                continue
        yield index, token, value
Developer: joxeankoret, Project: pigaios, Lines: 19, Source: dylan.py
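Example 1 illustrates a common Pygments pattern: let the regex rules emit generic tokens, then reassign token types by table lookup. A minimal self-contained sketch of the same pattern (TinyLexer and its keyword set are hypothetical, not from dylan.py):

from pygments.lexer import RegexLexer
from pygments.token import Keyword, Name, Whitespace

class TinyLexer(RegexLexer):
    # Hypothetical lexer: every word is tokenized as Name first,
    # then reclassified by the override below.
    keywords = {'if', 'then', 'else'}
    tokens = {
        'root': [
            (r'\s+', Whitespace),
            (r'\w+', Name),
        ],
    }

    def get_tokens_unprocessed(self, text):
        for index, token, value in RegexLexer.get_tokens_unprocessed(self, text):
            if token is Name and value.lower() in self.keywords:
                yield index, Keyword, value
            else:
                yield index, token, value

for index, token, value in TinyLexer().get_tokens_unprocessed('if x then y'):
    print(index, token, value)
# 'if' and 'then' come out as Token.Keyword; 'x' and 'y' stay Token.Name.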

Example 2: get_tokens_unprocessed

# Required import: from pygments.lexer import RegexLexer [as an alias]
# Or: from pygments.lexer.RegexLexer import get_tokens_unprocessed [as an alias]
from pygments.lexer import RegexLexer
from pygments.token import Keyword, Name, Text

def get_tokens_unprocessed(self, text):
    # TODO: builtins are only subsequent tokens on lines
    #       and 'keywords' only happen at the beginning except
    #       for :au ones
    for index, token, value in RegexLexer.get_tokens_unprocessed(self, text):
        if token is Name.Other:
            # Reclassify bare words as commands, options/autocommands,
            # or plain text, using the lexer's word-list helper.
            if self.is_in(value, self._cmd):
                yield index, Keyword, value
            elif self.is_in(value, self._opt) or self.is_in(value, self._aut):
                yield index, Name.Builtin, value
            else:
                yield index, Text, value
        else:
            yield index, token, value
Developer: joxeankoret, Project: pigaios, Lines: 18, Source: textedit.py

Example 3: get_tokens_unprocessed

# Required import: from pygments.lexer import RegexLexer [as an alias]
# Or: from pygments.lexer.RegexLexer import get_tokens_unprocessed [as an alias]
from pygments.lexer import RegexLexer
from pygments.token import Keyword, Name

def get_tokens_unprocessed(self, text):
    stack = ['root']
    # Reclassify variable names that match Emacs Lisp's known
    # built-in functions, special forms, error symbols, and macros.
    for index, token, value in RegexLexer.get_tokens_unprocessed(self, text, stack):
        if token is Name.Variable:
            if value in EmacsLispLexer.builtin_function:
                yield index, Name.Function, value
                continue
            if value in EmacsLispLexer.special_forms:
                yield index, Keyword, value
                continue
            if value in EmacsLispLexer.error_keywords:
                yield index, Name.Exception, value
                continue
            if value in EmacsLispLexer.builtin_function_highlighted:
                yield index, Name.Builtin, value
                continue
            if value in EmacsLispLexer.macros:
                yield index, Name.Builtin, value
                continue
            if value in EmacsLispLexer.lambda_list_keywords:
                yield index, Keyword.Pseudo, value
                continue
        yield index, token, value
Developer: joxeankoret, Project: pigaios, Lines: 25, Source: lisp.py
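Note the explicit stack argument in example 3: the base method is defined as get_tokens_unprocessed(self, text, stack=('root',)), so the caller can choose the initial state stack. A minimal sketch of starting a lexer inside a non-root state (StatefulLexer is hypothetical, not from lisp.py):

from pygments.lexer import RegexLexer
from pygments.token import Comment, Text

class StatefulLexer(RegexLexer):
    # Hypothetical two-state lexer: '(*' ... '*)' delimits comments.
    tokens = {
        'root': [
            (r'\(\*', Comment, 'comment'),
            (r'[^(]+', Text),
            (r'\(', Text),
        ],
        'comment': [
            (r'\*\)', Comment, '#pop'),
            (r'[^*]+', Comment),
            (r'\*', Comment),
        ],
    }

# Passing ['root', 'comment'] starts lexing as if already inside a comment:
for index, token, value in StatefulLexer().get_tokens_unprocessed(
        'still a comment *) code', stack=['root', 'comment']):
    print(index, token, value)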

Example 4: content_callback

# Required import: from pygments.lexer import RegexLexer [as an alias]
# Or: from pygments.lexer.RegexLexer import get_tokens_unprocessed [as an alias]
import re

from pygments.token import Text
from pygments.util import ClassNotFound

def content_callback(self, match):
    # Lex an HTTP message body with a lexer chosen from its
    # Content-Type, falling back to plain text.
    content_type = getattr(self, 'content_type', None)
    content = match.group()
    offset = match.start()
    if content_type:
        from pygments.lexers import get_lexer_for_mimetype
        possible_lexer_mimetypes = [content_type]
        if '+' in content_type:
            # application/calendar+xml can be treated as application/xml
            # if there's not a better match.
            general_type = re.sub(r'^(.*)/.*\+(.*)$', r'\1/\2',
                                  content_type)
            possible_lexer_mimetypes.append(general_type)

        for i in possible_lexer_mimetypes:
            try:
                lexer = get_lexer_for_mimetype(i)
            except ClassNotFound:
                pass
            else:
                # Shift the sub-lexer's indices so they point into the
                # enclosing text rather than just the message body.
                for idx, token, value in lexer.get_tokens_unprocessed(content):
                    yield offset + idx, token, value
                return
    yield offset, Text, content
Developer: luckystarufo, Project: pySINDy, Lines: 26, Source: textfmts.py
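The re.sub call in example 4 converts a structured-syntax mimetype into its generic base type so that a second, more general lexer lookup can be attempted. A quick check of that rewrite with the same pattern:

import re

content_type = 'application/calendar+xml'
general_type = re.sub(r'^(.*)/.*\+(.*)$', r'\1/\2', content_type)
print(general_type)  # application/xml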


Note: The pygments.lexer.RegexLexer.get_tokens_unprocessed examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors, and distribution or use should follow each project's License. Do not reproduce without permission.