

Python token.start Code Examples

This article collects typical usage examples of token.start in Python. Wondering what token.start is, how it is used, or what working code looks like? The curated examples below may help. Note that start is not a function defined by the token module itself: in these examples it is the start attribute of individual tokens (such as tokenize.TokenInfo tuples), holding the token's (row, col) start position. You can also explore the token module further for related usage.


Seven code examples involving token.start are shown below, sorted by popularity by default. Upvote the examples you like or find useful; your votes help the system surface better Python code examples.
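Before diving in, here is the attribute itself in the standard library: start is the third field of tokenize.TokenInfo, a (row, col) pair with 1-based rows and 0-based columns. A minimal sketch:

import io
import tokenize

source = "x = 1\n"
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    # tok.start and tok.end are (row, col) pairs: rows 1-based, columns 0-based
    print(tok.start, tok.end, tokenize.tok_name[tok.type], repr(tok.string))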

Example 1: untokenize

# Required import: import token [as alias]
# Or: from token import start [as alias]
def untokenize(self, iterable):
    for t in iterable:
        if len(t) == 2:
            # Position-less (type, string) pairs: hand the rest of the
            # stream to compat mode, which guesses the spacing
            self.compat(t, iterable)
            break
        tok_type, token, start, end, line = t
        if tok_type == ENCODING:
            self.encoding = token
            continue
        # Pad with spaces up to the token's recorded start column
        self.add_whitespace(start)
        self.tokens.append(token)
        self.prev_row, self.prev_col = end
        if tok_type in (NEWLINE, NL):
            self.prev_row += 1
            self.prev_col = 0
    return "".join(self.tokens)
Developer: war-and-code, Project: jawfish, Lines: 18, Source: tokenize.py
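A quick round-trip check of the logic above, via the module-level tokenize.untokenize wrapper that drives this method (a sketch; with full 5-tuples the recorded positions let add_whitespace restore the original spacing):

import io
import tokenize

source = "a = (1 +\n     2)\n"
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
assert tokenize.untokenize(tokens) == source  # exact round-trip for this input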

Example 2: replace

# Required import: import token [as alias]
# Or: from token import start [as alias]
def replace(text, replacements):
  """
  Replaces multiple slices of text with new values. This is a convenience method for making code
  modifications of ranges e.g. as identified by ``ASTTokens.get_text_range(node)``. Replacements is
  an iterable of ``(start, end, new_text)`` tuples.

  For example, ``replace("this is a test", [(0, 4, "X"), (8, 9, "THE")])`` produces
  ``"X is THE test"``.
  """
  p = 0
  parts = []
  for (start, end, new_text) in sorted(replacements):
    parts.append(text[p:start])
    parts.append(new_text)
    p = end
  parts.append(text[p:])
  return ''.join(parts) 
Developer: gristlabs, Project: asttokens, Lines: 19, Source: util.py
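The docstring's example runs as-is, and pairing replace with ASTTokens.get_text_range gives simple source rewriting. A sketch, assuming the replace function above is in scope (the foo-to-bar renaming is made up for illustration):

import ast
import asttokens

print(replace("this is a test", [(0, 4, "X"), (8, 9, "THE")]))  # X is THE test

source = "x = foo(1)\ny = foo(2)\n"
atok = asttokens.ASTTokens(source, parse=True)
repls = [atok.get_text_range(node.func) + ("bar",)
         for node in ast.walk(atok.tree) if isinstance(node, ast.Call)]
print(replace(source, repls))  # x = bar(1) / y = bar(2)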

Example 3: __repr__

# Required import: import token [as alias]
# Or: from token import start [as alias]
def __repr__(self):
    annotated_type = '%d (%s)' % (self.type, tok_name[self.type])
    return ('TokenInfo(type=%s, string=%r, start=%r, end=%r, line=%r)' %
            self._replace(type=annotated_type))
Developer: war-and-code, Project: jawfish, Lines: 6, Source: tokenize.py
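What the annotated repr prints in practice (numeric token codes vary slightly across Python versions):

import io
import tokenize

first = next(tokenize.generate_tokens(io.StringIO("x = 1\n").readline))
print(first)
# TokenInfo(type=1 (NAME), string='x', start=(1, 0), end=(1, 1), line='x = 1\n')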

Example 4: add_whitespace

# Required import: import token [as alias]
# Or: from token import start [as alias]
def add_whitespace(self, start):
    row, col = start
    # Only column gaps are filled; a token that starts on a later row
    # (e.g. after a backslash continuation) trips this assert
    assert row <= self.prev_row
    col_offset = col - self.prev_col
    if col_offset:
        self.tokens.append(" " * col_offset)
Developer: war-and-code, Project: jawfish, Lines: 8, Source: tokenize.py
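The assert is this version's known weak spot: a backslash line continuation puts the next token on a later row, which the column-only logic cannot express. A sketch of input that would trip it:

import io
import tokenize

source = "x = 1 + \\\n    2\n"
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
# The NUMBER token "2" starts on row 2 while prev_row is still 1, so feeding
# these 5-tuples to this Untokenizer would fail `assert row <= self.prev_row`.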

Example 5: add_whitespace

# Required import: import token [as alias]
# Or: from token import start [as alias]
def add_whitespace(self, start):
    row, col = start
    if row < self.prev_row or row == self.prev_row and col < self.prev_col:
        raise ValueError("start ({},{}) precedes previous end ({},{})"
                         .format(row, col, self.prev_row, self.prev_col))
    row_offset = row - self.prev_row
    if row_offset:
        # Bridge skipped rows with escaped newlines instead of asserting
        self.tokens.append("\\\n" * row_offset)
        self.prev_col = 0
    col_offset = col - self.prev_col
    if col_offset:
        self.tokens.append(" " * col_offset)
Developer: Xython, Project: YAPyPy, Lines: 14, Source: yapypy_tokenize36.py
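This mirrors the fix that later landed in CPython itself, so the continued-line input from Example 4 now survives untokenization. A sketch via the standard library's equivalent code path (the output is token-equivalent, not byte-identical, since the space before the backslash is lost):

import io
import tokenize

source = "x = 1 + \\\n    2\n"
tokens = tokenize.generate_tokens(io.StringIO(source).readline)
print(tokenize.untokenize(tokens))  # roughly "x = 1 +\\\n    2\n"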

Example 6: untokenize

# Required import: import token [as alias]
# Or: from token import start [as alias]
def untokenize(self, iterable):
    it = iter(iterable)
    indents = []
    startline = False
    for t in it:
        if len(t) == 2:
            self.compat(t, it)
            break
        tok_type, token, start, end, line = t
        if tok_type == ENCODING:
            self.encoding = token
            continue
        if tok_type == ENDMARKER:
            break
        if tok_type == INDENT:
            indents.append(token)
            continue
        elif tok_type == DEDENT:
            indents.pop()
            self.prev_row, self.prev_col = end
            continue
        elif tok_type in (NEWLINE, NL):
            startline = True
        elif startline and indents:
            # First token on a fresh line: re-emit the current indent
            # if the token starts at or beyond it
            indent = indents[-1]
            if start[1] >= len(indent):
                self.tokens.append(indent)
                self.prev_col = len(indent)
            startline = False
        self.add_whitespace(start)
        self.tokens.append(token)
        self.prev_row, self.prev_col = end
        if tok_type in (NEWLINE, NL):
            self.prev_row += 1
            self.prev_col = 0
    return "".join(self.tokens)
Developer: Xython, Project: YAPyPy, Lines: 38, Source: yapypy_tokenize36.py
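The INDENT/DEDENT bookkeeping pays off once position-less 2-tuples appear and compat mode takes over: full 5-tuples round-trip indented code exactly, while compat mode only reconstructs token-equivalent text. A sketch via the standard library's equivalent entry point:

import io
import tokenize

source = "if x:\n    y = 1\n"
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
assert tokenize.untokenize(tokens) == source          # 5-tuples: exact
print(tokenize.untokenize((t.type, t.string) for t in tokens))
# 2-tuples: roughly "if x :\n    y =1 \n" (spacing approximated)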

Example 7: expect_token

# Required import: import token [as alias]
# Or: from token import start [as alias]
def expect_token(token, tok_type, tok_str=None):
  """
  Verifies that the given token is of the expected type. If tok_str is given, the token string
  is verified too. If the token doesn't match, raises an informative ValueError.
  """
  if not match_token(token, tok_type, tok_str):
    raise ValueError("Expected token %s, got %s on line %s col %s" % (
      token_repr(tok_type, tok_str), str(token),
      token.start[0], token.start[1] + 1))

# These were previously defined in tokenize.py and distinguishable by being greater than
# token.N_TOKENS. As of Python 3.7, they are in token.py, and we check for them explicitly.
Developer: gristlabs, Project: asttokens, Lines: 14, Source: util.py
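The match_token and token_repr helpers used above live next to expect_token in asttokens.util, and the tokens attached to an ASTTokens instance carry the same .start attribute the error message reads. A usage sketch:

import token
import asttokens
from asttokens.util import expect_token

atok = asttokens.ASTTokens("x = [1]", parse=True)
first = atok.tokens[0]
expect_token(first, token.NAME)     # passes: the first token is the NAME "x"
expect_token(first, token.OP, "(")  # mismatch: raises an informative ValueError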


Note: The token.start examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets are taken from open-source projects contributed by various developers; copyright in the source code belongs to the original authors, and any redistribution or use should follow the corresponding project's license. Please do not reproduce this article without permission.