

Python lex.token Method Code Examples

This article collects typical usage examples of the Python method ply.lex.token. If you are unsure what lex.token does, how to call it, or where to find working examples, the curated snippets below should help. You can also explore the broader ply.lex module for related usage.


The following presents 7 code examples of the lex.token method, sorted by popularity.
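
Before the project examples, here is a minimal, self-contained sketch of the usual way lex.token() is driven. The token names and rules are illustrative only and do not come from any of the projects below:

# Minimal PLY lexer: declare token rules at module level, build the
# lexer with lex.lex(), feed it text, and drain tokens with token().
from ply import lex

tokens = ('NUMBER', 'PLUS', 'ID')

t_PLUS = r'\+'
t_ID = r'[A-Za-z_][A-Za-z0-9_]*'
t_ignore = ' \t'

def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

def t_error(t):
    print("Illegal character %r" % t.value[0])
    t.lexer.skip(1)

lexer = lex.lex()
lexer.input("x + 42")
while True:
    tok = lexer.token()   # returns None once the input is exhausted
    if not tok:
        break
    print(tok.type, tok.value)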

Example 1: trigraph

# Required import: from ply import lex [as alias]
# Or: from ply.lex import token [as alias]
def trigraph(input):
    # Rewrite C trigraph sequences (e.g. '??=' -> '#') in the input text.
    return _trigraph_pat.sub(lambda g: _trigraph_rep[g.group()[-1]], input)

# ------------------------------------------------------------------
# Macro object
#
# This object holds information about preprocessor macros
#
#    .name      - Macro name (string)
#    .value     - Macro value (a list of tokens)
#    .arglist   - List of argument names
#    .variadic  - Boolean indicating whether the macro is variadic
#    .vararg    - Name of the variadic parameter
#
# When a macro is created, the macro replacement token sequence is
# pre-scanned and used to create patch lists that are later used
# during macro expansion
# ------------------------------------------------------------------ 
Developer ID: nojanath, Project: SublimeKSP, Lines of code: 20, Source file: cpp.py
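
The snippet references _trigraph_pat and _trigraph_rep, which are defined elsewhere in cpp.py and not shown here. A sketch consistent with the nine trigraph sequences of the C standard would be:

import re

# '??' followed by one of these characters is a C trigraph; each maps
# to the replacement character on the right.
_trigraph_pat = re.compile(r'''\?\?[=/\'\(\)\!<>\-]''')
_trigraph_rep = {
    '=': '#', '/': '\\', "'": '^',
    '(': '[', ')': ']', '!': '|',
    '<': '{', '>': '}', '-': '~',
}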

Example 2: p_pla_declaration

# Required import: from ply import lex [as alias]
# Or: from ply.lex import token [as alias]
def p_pla_declaration(p):
  """pla_declaration : I NUMBER NEWLINE
                     | O NUMBER NEWLINE
                     | P NUMBER NEWLINE
                     | MV number_list NEWLINE
                     | ILB symbol_list NEWLINE
                     | OB symbol_list NEWLINE
                     | L NUMBER symbol_list NEWLINE
                     | TYPE SYMBOL NEWLINE
  """
  token = p[1].lower()
  if token == ".i":
    pla.ni = int(p[2])
  elif token == ".o":
    pla.no = int(p[2])
  elif token == ".mv":
    pla.mv = [int(v) for v in p[2]]
  elif token == ".ilb":
    pla.ilb = p[2]
  elif token == ".ob":
    pla.ob = p[2]
  elif token == ".l":
    pla.label = p[2]
  elif token == ".type":
    pla.set_type = p[2] 
Developer ID: google, Project: qkeras, Lines of code: 27, Source file: parser.py
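
For context, this rule parses header declarations of the Berkeley PLA file format: .i/.o give the input/output counts, .ilb/.ob the signal names, and so on. A small, hypothetical header the rule would accept:

.i 4
.o 1
.ilb a b c d
.ob f

Note that ".p NUMBER" is accepted by the grammar but not stored by this action.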

Example 3: tokenize

# Required import: from ply import lex [as alias]
# Or: from ply.lex import token [as alias]
def tokenize(self,text):
        # Collect every token produced for `text` into a list.
        tokens = []
        self.lexer.input(text)
        while True:
            tok = self.lexer.token()
            if not tok: break
            tokens.append(tok)
        return tokens

    # ---------------------------------------------------------------------
    # error()
    #
    # Report a preprocessor error/warning of some kind
    # ---------------------------------------------------------------------- 
Developer ID: nojanath, Project: SublimeKSP, Lines of code: 16, Source file: cpp.py

Example 4: error

# Required import: from ply import lex [as alias]
# Or: from ply.lex import token [as alias]
def error(self,file,line,msg):
        print("%s:%d %s" % (file,line,msg))

    # ----------------------------------------------------------------------
    # lexprobe()
    #
    # This method probes the preprocessor lexer object to discover
    # the token types of symbols that are important to the preprocessor.
    # If this works right, the preprocessor will simply "work"
    # with any suitable lexer regardless of how tokens have been named.
    # ---------------------------------------------------------------------- 
Developer ID: nojanath, Project: SublimeKSP, Lines of code: 13, Source file: cpp.py

Example 5: group_lines

# Required import: from ply import lex [as alias]
# Or: from ply.lex import token [as alias]
def group_lines(self,input):
        lex = self.lexer.clone()
        lines = [x.rstrip() for x in input.splitlines()]
        # Splice any line ending in a backslash together with the line(s)
        # that follow, so continued lines lex as one logical line.
        for i in range(len(lines)):
            j = i+1
            while lines[i].endswith('\\') and (j < len(lines)):
                lines[i] = lines[i][:-1]+lines[j]
                lines[j] = ""
                j += 1

        input = "\n".join(lines)
        lex.input(input)
        lex.lineno = 1

        current_line = []
        while True:
            tok = lex.token()
            if not tok:
                break
            current_line.append(tok)
            if tok.type in self.t_WS and '\n' in tok.value:
                yield current_line
                current_line = []

        if current_line:
            yield current_line

    # ----------------------------------------------------------------------
    # tokenstrip()
    # 
    # Remove leading/trailing whitespace tokens from a token list
    # ---------------------------------------------------------------------- 
Developer ID: nojanath, Project: SublimeKSP, Lines of code: 34, Source file: cpp.py
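
A hypothetical illustration of the splicing behaviour, assuming pp is an instance of the preprocessor class above:

src = "#define MAX(a,b) \\\n    ((a) > (b) ? (a) : (b))\nint y;\n"
for line_tokens in pp.group_lines(src):
    print([t.value for t in line_tokens])
# The backslash continuation is removed before lexing, so the whole
# #define arrives as a single token group; "int y;" forms the next group.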

Example 6: parse

# Required import: from ply import lex [as alias]
# Or: from ply.lex import token [as alias]
def parse(self,input,source=None,ignore={}):
        self.ignore = ignore
        self.parser = self.parsegen(input,source)
        
    # ----------------------------------------------------------------------
    # token()
    #
    # Method to return individual tokens
    # ---------------------------------------------------------------------- 
Developer ID: nojanath, Project: SublimeKSP, Lines of code: 11, Source file: cpp.py

Example 7: token

# Required import: from ply import lex [as alias]
# Or: from ply.lex import token [as alias]
def token(self):
        # Pull the next token from the parsing generator, skipping any
        # token types listed in self.ignore; return None at end of input.
        try:
            while True:
                tok = next(self.parser)
                if tok.type not in self.ignore: return tok
        except StopIteration:
            self.parser = None
            return None 
Developer ID: nojanath, Project: SublimeKSP, Lines of code: 10, Source file: cpp.py
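
Examples 6 and 7 work as a pair: parse() primes the parsegen generator, and repeated token() calls drain it while skipping ignored token types. A hypothetical driver loop, where pp, source_text, and the CPP_WS type name are illustrative:

pp.parse(source_text, source="input.c", ignore={'CPP_WS'})
while True:
    tok = pp.token()
    if tok is None:
        break
    print(tok.type, tok.value)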



Note: The ply.lex.token method examples in this article were compiled by 純淨天空 from open-source code and documentation hosted on platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their respective developers; copyright in the source code remains with the original authors. Refer to each project's license before redistributing or using the code, and do not republish this compilation without permission.