This article collects typical usage examples of the token.type method in Python. If you have been wondering what exactly Python's token.type does and how to use it, the hand-picked code examples below may help. You can also read further about the token class this method belongs to.
The following shows 8 code examples of the token.type method, sorted by popularity by default.
Example 1: visit_ExceptHandler
# Required import: import token [as alias]
# Or: from token import type [as alias]
def visit_ExceptHandler(self, node):
    self.token('except')
    if node.type:
        self.visit(node.type)
    if node.type and node.name:
        self.attr(node, 'as', [self.ws, self.one_of_symbols("as", ","), self.ws],
                  default=' as ')
    if node.name:
        if isinstance(node.name, ast.AST):
            self.visit(node.name)
        else:
            self.token(node.name)
    self.attr(node, 'open_block', [self.ws, ':', self.ws_oneline],
              default=':\n')
    for stmt in self.indented(node, 'body'):
        self.visit(stmt)
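The visitor above reads two fields of an ast.ExceptHandler node: node.type (the exception class expression, or None for a bare except) and node.name (the "as" target, a plain string on Python 3). A minimal sketch of what those fields contain:

```python
import ast

# Parse a try/except and inspect the handler fields the visitor uses.
tree = ast.parse("try:\n    pass\nexcept ValueError as err:\n    pass\n")
handler = tree.body[0].handlers[0]
print(type(handler).__name__)  # ExceptHandler
print(handler.type.id)         # ValueError
print(handler.name)            # err
```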
Example 2: __repr__
# Required import: import token [as alias]
# Or: from token import type [as alias]
def __repr__(self):
    annotated_type = '%d (%s)' % (self.type, tok_name[self.type])
    return ('TokenInfo(type=%s, string=%r, start=%r, end=%r, line=%r)' %
            self._replace(type=annotated_type))
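This is the __repr__ that the standard library's tokenize.TokenInfo uses: the numeric token type is annotated with its symbolic name. A quick way to see it in action:

```python
import io
import tokenize

# Tokenize one line; index 1 skips the leading ENCODING token.
tok = list(tokenize.tokenize(io.BytesIO(b"x\n").readline))[1]
print(repr(tok))  # TokenInfo(type=1 (NAME), string='x', ...)
```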
Example 3: exact_type
# Required import: import token [as alias]
# Or: from token import type [as alias]
def exact_type(self):
    if self.type == OP and self.string in EXACT_TOKEN_TYPES:
        return EXACT_TOKEN_TYPES[self.string]
    else:
        return self.type
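tokenize() labels most operators with the generic type OP; the exact_type property above recovers the specific operator type (PLUS, STAR, ...) from the token string. A short demonstration using the standard tokenize module:

```python
import io
import token
import tokenize

# Collect the OP tokens of "1 + 2" and show generic vs exact type.
toks = list(tokenize.tokenize(io.BytesIO(b"1 + 2\n").readline))
ops = [t for t in toks if t.type == token.OP]
for t in ops:
    print(token.tok_name[t.type], '->', token.tok_name[t.exact_type])  # OP -> PLUS
```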
Example 4: tokenize
# Required import: import token [as alias]
# Or: from token import type [as alias]
def tokenize(readline):
    """
    The tokenize() generator requires one argument, readline, which
    must be a callable object which provides the same interface as the
    readline() method of built-in file objects. Each call to the function
    should return one line of input as bytes. Alternatively, readline
    can be a callable function terminating with StopIteration:
        readline = open(myfile, 'rb').__next__  # Example of alternate readline

    The generator produces 5-tuples with these members: the token type; the
    token string; a 2-tuple (srow, scol) of ints specifying the row and
    column where the token begins in the source; a 2-tuple (erow, ecol) of
    ints specifying the row and column where the token ends in the source;
    and the line on which the token was found. The line passed is the
    logical line; continuation lines are included.

    The first token sequence will always be an ENCODING token
    which tells you which encoding was used to decode the bytes stream.
    """
    # This import is here to avoid problems when the itertools module is not
    # built yet and tokenize is imported.
    from itertools import chain, repeat
    encoding, consumed = detect_encoding(readline)
    rl_gen = iter(readline, b"")
    empty = repeat(b"")
    return _tokenize(chain(consumed, rl_gen, empty).__next__, encoding)
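A minimal way to drive the generator described in the docstring: wrap the source bytes in a BytesIO object and pass its readline method. As documented, the first token is always ENCODING:

```python
import io
import tokenize

toks = list(tokenize.tokenize(io.BytesIO(b"x = 1\n").readline))
print(toks[0].string)  # utf-8  (the ENCODING token comes first)
for t in toks:
    print(tokenize.tok_name[t.type], repr(t.string), t.start, t.end)
```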
Example 5: tokenize
# Required import: import token [as alias]
# Or: from token import type [as alias]
def tokenize(readline):
    """
    The tokenize() generator requires one argument, readline, which
    must be a callable object which provides the same interface as the
    readline() method of built-in file objects. Each call to the function
    should return one line of input as bytes. Alternatively, readline
    can be a callable function terminating with StopIteration:
        readline = open(myfile, 'rb').__next__  # Example of alternate readline

    The generator produces 5-tuples with these members: the token type; the
    token string; a 2-tuple (srow, scol) of ints specifying the row and
    column where the token begins in the source; a 2-tuple (erow, ecol) of
    ints specifying the row and column where the token ends in the source;
    and the line on which the token was found. The line passed is the
    logical line; continuation lines are included.

    The first token sequence will always be an ENCODING token
    which tells you which encoding was used to decode the bytes stream.
    """
    # This import is here to avoid problems when the itertools module is not
    # built yet and tokenize is imported.
    from itertools import chain, repeat
    encoding, consumed = detect_encoding(readline)
    rl_gen = iter(readline, b"")
    empty = repeat(b"")
    return _tokenize(chain(consumed, rl_gen, empty).__next__, encoding)
Example 6: token_repr
# Required import: import token [as alias]
# Or: from token import type [as alias]
def token_repr(tok_type, string):
    """Returns a human-friendly representation of a token with the given type and string."""
    # repr() prefixes unicode with 'u' on Python 2 but not Python 3; strip it out for consistency.
    return '%s:%s' % (token.tok_name[tok_type], repr(string).lstrip('u'))
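The helper maps a numeric token type back to its symbolic name via token.tok_name. Repeated here so the snippet runs standalone:

```python
import token

def token_repr(tok_type, string):
    # Same helper as above: symbolic name, colon, repr of the string.
    return '%s:%s' % (token.tok_name[tok_type], repr(string).lstrip('u'))

print(token_repr(token.NAME, 'x'))  # NAME:'x'
print(token_repr(token.OP, '='))    # OP:'='
```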
Example 7: __str__
# Required import: import token [as alias]
# Or: from token import type [as alias]
def __str__(self):
    return token_repr(self.type, self.string)
Example 8: match_token
# Required import: import token [as alias]
# Or: from token import type [as alias]
def match_token(token, tok_type, tok_str=None):
    """Returns true if token is of the given type and, if a string is given, has that string."""
    return token.type == tok_type and (tok_str is None or token.string == tok_str)
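This predicate works on tokenize's TokenInfo tuples, whose .type and .string attributes it compares. A small usage sketch (the parameter is renamed tok here to avoid shadowing the token module):

```python
import io
import token
import tokenize

def match_token(tok, tok_type, tok_str=None):
    # Same predicate as above.
    return tok.type == tok_type and (tok_str is None or tok.string == tok_str)

# Find the '=' operator in a simple assignment.
toks = list(tokenize.tokenize(io.BytesIO(b"x = 1\n").readline))
assignments = [t for t in toks if match_token(t, token.OP, '=')]
print(len(assignments))  # 1
```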