

Python tokenize.ERRORTOKEN Attribute Code Examples

This article collects typical usage examples of the tokenize.ERRORTOKEN attribute in Python. If you are wondering what tokenize.ERRORTOKEN is for, how to use it, or what real-world code that uses it looks like, the curated examples below may help. You can also explore other usage examples from the tokenize module, where this attribute is defined.


The following presents 4 code examples of the tokenize.ERRORTOKEN attribute, sorted by popularity by default.
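Before the project examples, a minimal hedged sketch of what tokenize.ERRORTOKEN means: when the tokenizer meets a character it cannot classify, it can emit a token of type ERRORTOKEN instead of failing outright. The snippet below is illustrative only; on Python 3.12 and later some invalid input raises an exception instead of yielding ERRORTOKEN.

import io
import tokenize

source = "total = 3?4\n"   # '?' is not a valid Python token
try:
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.ERRORTOKEN:
            print("ERRORTOKEN", repr(tok.string), "at", tok.start)
except (tokenize.TokenError, SyntaxError) as exc:
    # Newer interpreters may stop with an exception instead of yielding ERRORTOKEN.
    print("tokenize gave up:", exc)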

Example 1: _get_all_tokens

# Required import: import tokenize [as alias]
# Or: from tokenize import ERRORTOKEN [as alias]
def _get_all_tokens(line, lines):
    '''Starting from *line*, generate the necessary tokens which represent the
    shortest tokenization possible. This is done by catching
    :exc:`tokenize.TokenError` when a multi-line string or statement is
    encountered.
    :returns: tokens, lines
    '''
    buffer = line
    used_lines = [line]
    while True:
        try:
            tokens = _generate(buffer)
        except tokenize.TokenError:
            # A multi-line string or statement has been encountered:
            # start adding lines and stop when tokenize stops complaining
            pass
        else:
            if not any(t[0] == tokenize.ERRORTOKEN for t in tokens):
                return tokens, used_lines

        # Add another line
        next_line = next(lines)
        buffer = buffer + '\n' + next_line
        used_lines.append(next_line) 
Developer ID: AtomLinter, Project: linter-pylama, Lines of code: 26, Source: raw.py
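The `_generate` helper used above is defined elsewhere in raw.py and is not part of this snippet. A minimal sketch, assuming it simply wraps tokenize.generate_tokens and materializes the result (so that tokenize.TokenError surfaces inside the try block rather than during later iteration):

import io
import tokenize

def _generate(code):
    # Assumed helper: tokenize *code* and return the tokens as a list.
    return list(tokenize.generate_tokens(io.StringIO(code).readline))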

Example 2: __get_tokens

# Required import: import tokenize [as alias]
# Or: from tokenize import ERRORTOKEN [as alias]
# This snippet also uses `from typing import List`; `tokenizer` refers to a module inside the python_autocomplete project, not the standard library.
def __get_tokens(it):
    tokens: List[tokenize.TokenInfo] = []

    try:
        for t in it:
            if t.type in tokenizer.SKIP_TOKENS:
                continue
            if t.type == tokenize.NEWLINE and t.string == '':
                continue
            if t.type == tokenize.DEDENT:
                continue
            if t.type == tokenize.ERRORTOKEN:
                continue
            tokens.append(t)
    except tokenize.TokenError as e:
        if not e.args[0].startswith('EOF in'):
            print(e)
    except IndentationError as e:
        print(e)

    return tokens
Developer ID: vpj, Project: python_autocomplete, Lines of code: 23, Source: evaluate.py
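The `except tokenize.TokenError` branch above only silences errors whose message starts with 'EOF in'. A standalone sketch of the situation it tolerates, namely source text that ends in the middle of a multi-line construct:

import io
import tokenize

incomplete = "def f(x,\n"   # the parenthesis is never closed
try:
    list(tokenize.generate_tokens(io.StringIO(incomplete).readline))
except tokenize.TokenError as e:
    print(e.args[0])   # typically "EOF in multi-line statement"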

Example 3: python_tokenize

# Required import: import tokenize [as alias]
# Or: from tokenize import ERRORTOKEN [as alias]
# StringIO (e.g. io.StringIO) and patsy's Origin and PatsyError helpers are imported elsewhere in tokens.py.
def python_tokenize(code):
    # Since formulas can only contain Python expressions, and Python
    # expressions cannot meaningfully contain newlines, we'll just remove all
    # the newlines up front to avoid any complications:
    code = code.replace("\n", " ").strip()
    it = tokenize.generate_tokens(StringIO(code).readline)
    try:
        for (pytype, string, (_, start), (_, end), code) in it:
            if pytype == tokenize.ENDMARKER:
                break
            origin = Origin(code, start, end)
            assert pytype not in (tokenize.NL, tokenize.NEWLINE)
            if pytype == tokenize.ERRORTOKEN:
                raise PatsyError("error tokenizing input "
                                 "(maybe an unclosed string?)",
                                 origin)
            if pytype == tokenize.COMMENT:
                raise PatsyError("comments are not allowed", origin)
            yield (pytype, string, origin)
        else: # pragma: no cover
            raise ValueError("stream ended without ENDMARKER?!?")
    except tokenize.TokenError as e:
        # TokenError is raised iff the tokenizer thinks that there is
        # some sort of multi-line construct in progress (e.g., an
        # unclosed parentheses, which in Python lets a virtual line
        # continue past the end of the physical line), and it hits the
        # end of the source text. We have our own error handling for
        # such cases, so just treat this as an end-of-stream.
        # 
        # Just in case someone adds some other error case:
        assert e.args[0].startswith("EOF in multi-line")
        return 
Developer ID: birforce, Project: vnpy_crypto, Lines of code: 34, Source: tokens.py
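The ERRORTOKEN branch above converts the bad token into a PatsyError hinting at an unclosed string. A hedged, standalone illustration of why that hint is plausible: on Python 3.11 and earlier an unterminated string literal shows up as an ERRORTOKEN, while Python 3.12 and later may raise an exception for the same input.

import io
import tokenize

formula = "np.log(x) + 'oops"   # the string literal is never closed
try:
    for pytype, string, start, end, line in tokenize.generate_tokens(
            io.StringIO(formula).readline):
        if pytype == tokenize.ERRORTOKEN:
            print("unclosed string suspected at", start)
            break
except (tokenize.TokenError, SyntaxError) as e:
    print("tokenizer stopped early:", e)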

Example 4: _advance_one_token

# Required import: import tokenize [as alias]
# Or: from tokenize import ERRORTOKEN [as alias]
# This is a method of the ConfigParser class in gin-config; ConfigParser.Token and self._token_generator are defined elsewhere in config_parser.py.
def _advance_one_token(self):
    self._current_token = ConfigParser.Token(*next(self._token_generator))
    # Certain symbols (e.g., "$") cause ERRORTOKENs on all preceding space
    # characters. Find the first non-space or non-ERRORTOKEN token.
    while (self._current_token.kind == tokenize.ERRORTOKEN and
           self._current_token.value in ' \t'):
      self._current_token = ConfigParser.Token(*next(self._token_generator)) 
Developer ID: google, Project: gin-config, Lines of code: 9, Source: config_parser.py
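A hedged, standalone check of the behavior the comment above describes (observed with the pure-Python tokenizer on CPython 3.11 and earlier; newer versions may raise instead): the spaces immediately before an invalid character such as '$' are themselves reported as ERRORTOKENs, which is why the loop keeps skipping whitespace-valued ERRORTOKENs.

import io
import tokenize

try:
    for tok in tokenize.generate_tokens(io.StringIO("value =   $macro\n").readline):
        if tok.type == tokenize.ERRORTOKEN:
            print(repr(tok.string), tok.start)
except (tokenize.TokenError, SyntaxError) as exc:
    print("tokenize gave up:", exc)
# Expected on <= 3.11: three ' ' ERRORTOKENs followed by '$'.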


Note: The tokenize.ERRORTOKEN attribute examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by their respective developers; copyright of the source code belongs to the original authors, so please consult each project's license before distributing or using the code. Do not reproduce this article without permission.