

Python token.type Method Code Examples

This article collects typical usage examples of token.type in Python: how exactly token.type is used and what it looks like in practice. The curated code examples below may help; you can also explore further usage examples from the token module it belongs to. Note that in these examples, type is in practice an attribute read off token objects such as tokenize.TokenInfo (or off AST nodes), while the token module itself supplies the numeric token-type constants and the tok_name lookup table.


The following shows 8 code examples of token.type, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Python code examples.
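
As a quick orientation, here is a minimal, self-contained sketch (not taken from any of the projects below; assumes Python 3) that prints the .type of every token in a small source string:

import io
import token
import tokenize

source = "x = 1 + 2\n"
# tokenize.tokenize() yields TokenInfo tuples; .type is the numeric token
# type, and token.tok_name maps it back to a readable name such as NAME or OP.
for tok in tokenize.tokenize(io.BytesIO(source.encode("utf-8")).readline):
    print(tok.type, token.tok_name[tok.type], repr(tok.string))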

Example 1: visit_ExceptHandler

# Required imports: import token [as alias]
# Or: from token import type [as alias]
def visit_ExceptHandler(self, node):
    # Emit an `except <type> as <name>:` clause followed by its indented body.
    self.token('except')
    if node.type:
      self.visit(node.type)
    if node.type and node.name:
      self.attr(node, 'as', [self.ws, self.one_of_symbols("as", ","), self.ws],
                default=' as ')
    if node.name:
      # On Python 2 the bound name is an AST node; on Python 3 it is a string.
      if isinstance(node.name, ast.AST):
        self.visit(node.name)
      else:
        self.token(node.name)
    self.attr(node, 'open_block', [self.ws, ':', self.ws_oneline],
              default=':\n')
    for stmt in self.indented(node, 'body'):
      self.visit(stmt)
Developer: google, Project: pasta, Lines: 18, Source: annotate.py
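
For readers unfamiliar with the AST fields this visitor reads: node.type is the exception class expression of an ast.ExceptHandler, and node.name is the name it is bound to (a plain string on Python 3, an AST node on Python 2, hence the isinstance check above). A small standard-library sketch illustrating both fields:

import ast

tree = ast.parse("try:\n    pass\nexcept ValueError as err:\n    pass\n")
handler = tree.body[0].handlers[0]   # the ast.ExceptHandler node
print(ast.dump(handler.type))        # e.g. Name(id='ValueError', ctx=Load())
print(handler.name)                  # 'err' -- a plain string on Python 3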

Example 2: __repr__

# Required imports: import token [as alias]
# Or: from token import type [as alias]
def __repr__(self):
        annotated_type = '%d (%s)' % (self.type, tok_name[self.type])
        return ('TokenInfo(type=%s, string=%r, start=%r, end=%r, line=%r)' %
                self._replace(type=annotated_type)) 
Developer: war-and-code, Project: jawfish, Lines: 6, Source: tokenize.py
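
To see what this __repr__ produces, pull a TokenInfo out of the tokenize stream; a minimal sketch (assumes Python 3):

import io
import tokenize

toks = list(tokenize.tokenize(io.BytesIO(b"pass\n").readline))
print(toks[1])   # toks[0] is the ENCODING token, toks[1] the NAME token
# TokenInfo(type=1 (NAME), string='pass', start=(1, 0), end=(1, 4), line='pass\n')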

Example 3: exact_type

# Required imports: import token [as alias]
# Or: from token import type [as alias]
@property
def exact_type(self):
        if self.type == OP and self.string in EXACT_TOKEN_TYPES:
            return EXACT_TOKEN_TYPES[self.string]
        else:
            return self.type 
Developer: war-and-code, Project: jawfish, Lines: 7, Source: tokenize.py
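
The point of exact_type is that the tokenizer reports every operator and delimiter with the generic OP type, and the specific operator (PLUS, LPAR, ...) is recovered from the token string. A usage sketch (standard library only, Python 3.3+):

import io
import tokenize
from token import tok_name

for tok in tokenize.tokenize(io.BytesIO(b"f(a + b)\n").readline):
    if tok.type == tokenize.OP:
        # e.g. OP -> LPAR for '(' and OP -> PLUS for '+'
        print(tok.string, tok_name[tok.type], "->", tok_name[tok.exact_type])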

Example 4: tokenize

# Required imports: import token [as alias]
# Or: from token import type [as alias]
def tokenize(readline):
    """
    The tokenize() generator requires one argument, readline, which
    must be a callable object which provides the same interface as the
    readline() method of built-in file objects.  Each call to the function
    should return one line of input as bytes.  Alternatively, readline
    can be a callable function terminating with StopIteration:
        readline = open(myfile, 'rb').__next__  # Example of alternate readline

    The generator produces 5-tuples with these members: the token type; the
    token string; a 2-tuple (srow, scol) of ints specifying the row and
    column where the token begins in the source; a 2-tuple (erow, ecol) of
    ints specifying the row and column where the token ends in the source;
    and the line on which the token was found.  The line passed is the
    logical line; continuation lines are included.

    The first token sequence will always be an ENCODING token
    which tells you which encoding was used to decode the bytes stream.
    """
    # This import is here to avoid problems when the itertools module is not
    # built yet and tokenize is imported.
    from itertools import chain, repeat
    encoding, consumed = detect_encoding(readline)
    rl_gen = iter(readline, b"")
    empty = repeat(b"")
    return _tokenize(chain(consumed, rl_gen, empty).__next__, encoding) 
Developer: war-and-code, Project: jawfish, Lines: 28, Source: tokenize.py
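
A short demonstration of the 5-tuples described in the docstring; each TokenInfo unpacks into type, string, start, end, and line (sketch, assumes Python 3):

import io
import tokenize

src = b"def f():\n    return 42\n"
for tok_type, tok_string, start, end, line in tokenize.tokenize(io.BytesIO(src).readline):
    srow, scol = start   # row and column where the token begins
    print(tokenize.tok_name[tok_type], repr(tok_string), "at", srow, scol)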

Example 5: tokenize

# Required imports: import token [as alias]
# Or: from token import type [as alias]
def tokenize(readline):
    """
    The tokenize() generator requires one argument, readline, which
    must be a callable object which provides the same interface as the
    readline() method of built-in file objects.  Each call to the function
    should return one line of input as bytes.  Alternatively, readline
    can be a callable function terminating with StopIteration:
        readline = open(myfile, 'rb').__next__  # Example of alternate readline

    The generator produces 5-tuples with these members: the token type; the
    token string; a 2-tuple (srow, scol) of ints specifying the row and
    column where the token begins in the source; a 2-tuple (erow, ecol) of
    ints specifying the row and column where the token ends in the source;
    and the line on which the token was found.  The line passed is the
    logical line; continuation lines are included.

    The first token sequence will always be an ENCODING token
    which tells you which encoding was used to decode the bytes stream.
    """
    # This import is here to avoid problems when the itertools module is not
    # built yet and tokenize is imported.
    from itertools import chain, repeat
    encoding, consumed = detect_encoding(readline)
    rl_gen = iter(readline, b"")
    empty = repeat(b"")
    return _tokenize(chain(consumed, rl_gen, empty).__next__, encoding) 
Developer: Xython, Project: YAPyPy, Lines: 28, Source: yapypy_tokenize36.py

Example 6: token_repr

# Required imports: import token [as alias]
# Or: from token import type [as alias]
def token_repr(tok_type, string):
  """Returns a human-friendly representation of a token with the given type and string."""
  # repr() prefixes unicode with 'u' on Python2 but not Python3; strip it out for consistency.
  return '%s:%s' % (token.tok_name[tok_type], repr(string).lstrip('u')) 
Developer: gristlabs, Project: asttokens, Lines: 6, Source: util.py
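
A quick usage note, with the token_repr above in scope:

import token

print(token_repr(token.NAME, 'foo'))   # NAME:'foo'
print(token_repr(token.OP, '+'))       # OP:'+'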

Example 7: __str__

# Required imports: import token [as alias]
# Or: from token import type [as alias]
def __str__(self):
    return token_repr(self.type, self.string) 
Developer: gristlabs, Project: asttokens, Lines: 4, Source: util.py

Example 8: match_token

# Required imports: import token [as alias]
# Or: from token import type [as alias]
def match_token(token, tok_type, tok_str=None):
  """Returns true if token is of the given type and, if a string is given, has that string."""
  return token.type == tok_type and (tok_str is None or token.string == tok_str) 
Developer: gristlabs, Project: asttokens, Lines: 5, Source: util.py
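
A usage sketch, with the match_token above in scope (assumes Python 3):

import io
import tokenize

toks = list(tokenize.tokenize(io.BytesIO(b"x = 1\n").readline))
# Find the '=' operator: the type must be OP and the string must be '='.
assign = next(t for t in toks if match_token(t, tokenize.OP, '='))
print(assign.start)   # (1, 2)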


Note: The token.type examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors. For distribution and use, please refer to the corresponding project's License; do not repost without permission.