

Python token.start method code examples

This article collects typical usage examples of token.start in Python. If you have been wondering what token.start does, how to call it, or what real-world uses look like, the curated examples below should help. Strictly speaking, start is not a function in the token module but the (row, col) start-position field of the tokenize.TokenInfo named tuples that the tokenizer yields; the examples below all use it in that sense. You can also explore the surrounding token and tokenize APIs for further usage examples.


The following presents 7 code examples of token.start, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
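As a quick orientation before the examples, here is a minimal, hypothetical sketch of reading the start position of each token in a snippet of source code (the source string and variable names are illustrative, not taken from any of the projects below):

import io
import tokenize

source = "x = 1\ny = 2\n"
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    # tok.start is a (row, col) tuple: rows are 1-based, columns 0-based
    print(tok.start, tokenize.tok_name[tok.type], repr(tok.string))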

Example 1: untokenize

# Required module: import token [as alias]
# or: from token import start [as alias]
def untokenize(self, iterable):
        # Method of tokenize.Untokenizer; ENCODING, NEWLINE and NL are
        # token type constants imported at the top of tokenize.py.
        for t in iterable:
            if len(t) == 2:
                # Bare (type, string) pairs: fall back to the lossy path
                self.compat(t, iterable)
                break
            tok_type, token, start, end, line = t
            if tok_type == ENCODING:
                self.encoding = token
                continue
            self.add_whitespace(start)
            self.tokens.append(token)
            self.prev_row, self.prev_col = end
            if tok_type in (NEWLINE, NL):
                self.prev_row += 1
                self.prev_col = 0
        return "".join(self.tokens) 
Developer ID: war-and-code, Project: jawfish, Lines of code: 18, Source: tokenize.py
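For context, the standard library exposes this behaviour through tokenize.untokenize; a small round-trip check (assumed usage, not from the jawfish sources) might look like this:

import io
import tokenize

src = "a = 1 + 2\n"
tokens = list(tokenize.generate_tokens(io.StringIO(src).readline))
# With full 5-tuples, untokenize reproduces the source exactly
assert tokenize.untokenize(tokens) == src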

Example 2: replace

# Required module: import token [as alias]
# or: from token import start [as alias]
def replace(text, replacements):
  """
  Replaces multiple slices of text with new values. This is a convenience method for making code
  modifications of ranges e.g. as identified by ``ASTTokens.get_text_range(node)``. Replacements is
  an iterable of ``(start, end, new_text)`` tuples.

  For example, ``replace("this is a test", [(0, 4, "X"), (8, 9, "THE")])`` produces
  ``"X is THE test"``.
  """
  p = 0
  parts = []
  for (start, end, new_text) in sorted(replacements):
    parts.append(text[p:start])
    parts.append(new_text)
    p = end
  parts.append(text[p:])
  return ''.join(parts) 
Developer ID: gristlabs, Project: asttokens, Lines of code: 19, Source: util.py
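The docstring's example can be run directly; a quick sanity check (the second assertion is an added illustration):

assert replace("this is a test", [(0, 4, "X"), (8, 9, "THE")]) == "X is THE test"
# Slices must not overlap; sorted() lets the caller pass them in any order
assert replace("abcdef", [(3, 4, "_"), (0, 1, "-")]) == "-bc_ef"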

Example 3: __repr__

# Required module: import token [as alias]
# or: from token import start [as alias]
def __repr__(self):
        # Method of tokenize.TokenInfo (a namedtuple); tok_name maps token
        # type numbers to their names and comes from the token module.
        annotated_type = '%d (%s)' % (self.type, tok_name[self.type])
        return ('TokenInfo(type=%s, string=%r, start=%r, end=%r, line=%r)' %
                self._replace(type=annotated_type)) 
Developer ID: war-and-code, Project: jawfish, Lines of code: 6, Source: tokenize.py
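Current CPython ships the same __repr__ on tokenize.TokenInfo, so its output format can be observed directly; a hypothetical check (the comment shows the expected output on CPython 3.x, where NAME is token type 1):

import io
import tokenize

tok = next(tokenize.generate_tokens(io.StringIO("x = 1\n").readline))
print(tok)
# TokenInfo(type=1 (NAME), string='x', start=(1, 0), end=(1, 1), line='x = 1\n')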

Example 4: add_whitespace

# Required module: import token [as alias]
# or: from token import start [as alias]
def add_whitespace(self, start):
        # Method of tokenize.Untokenizer (older stdlib version). Since rows
        # only ever advance, the assert effectively requires the token to
        # start on the current row, so backslash-continued lines trip it;
        # Example 5 shows the later, more tolerant version.
        row, col = start
        assert row <= self.prev_row
        col_offset = col - self.prev_col
        if col_offset:
            self.tokens.append(" " * col_offset) 
Developer ID: war-and-code, Project: jawfish, Lines of code: 8, Source: tokenize.py

Example 5: add_whitespace

# Required module: import token [as alias]
# or: from token import start [as alias]
def add_whitespace(self, start):
        # Method of tokenize.Untokenizer (Python 3.6 version): emit the
        # whitespace needed to move from the previous token's end to `start`.
        row, col = start
        if row < self.prev_row or row == self.prev_row and col < self.prev_col:
            raise ValueError("start ({},{}) precedes previous end ({},{})"
                             .format(row, col, self.prev_row, self.prev_col))
        row_offset = row - self.prev_row
        if row_offset:
            # Skipped rows are rendered as explicit backslash continuations
            self.tokens.append("\\\n" * row_offset)
            self.prev_col = 0
        col_offset = col - self.prev_col
        if col_offset:
            self.tokens.append(" " * col_offset) 
Developer ID: Xython, Project: YAPyPy, Lines of code: 14, Source: yapypy_tokenize36.py
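To make the whitespace arithmetic concrete, here is a hypothetical standalone rendition of the same gap-filling logic (gap_text is an illustrative name, not part of any project above):

def gap_text(prev, start):
    """Whitespace needed to move from position prev to position start."""
    prev_row, prev_col = prev
    row, col = start
    out = ""
    if row > prev_row:
        # Bridge skipped rows with explicit backslash continuations
        out += "\\\n" * (row - prev_row)
        prev_col = 0
    # Pad with spaces up to the start column
    out += " " * (col - prev_col)
    return out

assert gap_text((1, 5), (1, 8)) == "   "
assert gap_text((1, 5), (2, 4)) == "\\\n    "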

Example 6: untokenize

# Required module: import token [as alias]
# or: from token import start [as alias]
def untokenize(self, iterable):
        # Method of tokenize.Untokenizer (Python 3.6 version). Tracks the
        # indentation stack so INDENT/DEDENT tokens round-trip correctly.
        it = iter(iterable)
        indents = []
        startline = False
        for t in it:
            if len(t) == 2:
                self.compat(t, it)
                break
            tok_type, token, start, end, line = t
            if tok_type == ENCODING:
                self.encoding = token
                continue
            if tok_type == ENDMARKER:
                break
            if tok_type == INDENT:
                indents.append(token)
                continue
            elif tok_type == DEDENT:
                indents.pop()
                self.prev_row, self.prev_col = end
                continue
            elif tok_type in (NEWLINE, NL):
                startline = True
            elif startline and indents:
                # First token on a new line: re-emit the current indentation
                indent = indents[-1]
                if start[1] >= len(indent):
                    self.tokens.append(indent)
                    self.prev_col = len(indent)
                startline = False
            self.add_whitespace(start)
            self.tokens.append(token)
            self.prev_row, self.prev_col = end
            if tok_type in (NEWLINE, NL):
                self.prev_row += 1
                self.prev_col = 0
        return "".join(self.tokens) 
Developer ID: Xython, Project: YAPyPy, Lines of code: 38, Source: yapypy_tokenize36.py
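The INDENT/DEDENT bookkeeping above is what lets indented source survive a round trip; an assumed quick check against the standard library's equivalent untokenize:

import io
import tokenize

src = "def f():\n    if True:\n        return 1\n"
tokens = list(tokenize.generate_tokens(io.StringIO(src).readline))
assert tokenize.untokenize(tokens) == src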

Example 7: expect_token

# Required module: import token [as alias]
# or: from token import start [as alias]
def expect_token(token, tok_type, tok_str=None):
  """
  Verifies that the given token is of the expected type. If tok_str is given, the token string
  is verified too. If the token doesn't match, raises an informative ValueError.
  """
  # match_token and token_repr are sibling helpers defined in the same module.
  if not match_token(token, tok_type, tok_str):
    raise ValueError("Expected token %s, got %s on line %s col %s" % (
      token_repr(tok_type, tok_str), str(token),
      token.start[0], token.start[1] + 1))

# These were previously defined in tokenize.py and distinguishable by being greater than
# token.N_TOKENS. As of Python 3.7, they are in token.py, and we check for them explicitly. 
Developer ID: gristlabs, Project: asttokens, Lines of code: 14, Source: util.py
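A hypothetical call pattern, assuming asttokens is installed and its util helpers behave as in the snippet above:

import io
import token
import tokenize
from asttokens.util import expect_token

tok = next(tokenize.generate_tokens(io.StringIO("x = 1\n").readline))
expect_token(tok, token.NAME, "x")   # matches: returns silently
try:
    expect_token(tok, token.OP)      # mismatch: informative ValueError
except ValueError as err:
    print(err)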


Note: The token.start examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by many developers; copyright in the source code remains with the original authors. Consult each project's license before redistributing or reusing the code, and do not republish without permission.