

Python lex.token Method Code Examples

This article collects typical usage examples of the ply.lex.token method in Python. If you are wondering what lex.token does and how to use it in practice, the curated examples below should help; you can also explore further usage examples from the ply.lex module.


Seven code examples of the lex.token method are shown below, sorted by popularity by default.
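
Before the examples, here is a minimal, self-contained sketch of how lex.token() is normally driven. The token names (NUMBER, PLUS) are our own illustration and do not come from the projects below:

from ply import lex

tokens = ('NUMBER', 'PLUS')

t_PLUS = r'\+'
t_ignore = ' \t'

def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

def t_error(t):
    t.lexer.skip(1)      # skip characters the lexer cannot match

lexer = lex.lex()
lexer.input("1 + 22")
while True:
    tok = lexer.token()  # returns None when the input is exhausted
    if not tok:
        break
    print(tok.type, tok.value)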

Example 1: trigraph

# Required import: from ply import lex [as alias]
# Alternatively: from ply.lex import token [as alias]
def trigraph(input):
    return _trigraph_pat.sub(lambda g: _trigraph_rep[g.group()[-1]],input)
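
The function above depends on two module-level names the excerpt omits. In PLY's cpp.py they are defined essentially as follows (reproduced from the PLY sources; verify against the version you use):

import re

# C trigraphs: each ??X sequence maps to a single replacement character
_trigraph_pat = re.compile(r'''\?\?[=/\'\(\)\!<>\-]''')
_trigraph_rep = {
    '=': '#', '/': '\\', "'": '^', '(': '[', ')': ']',
    '!': '|', '<': '{', '>': '}', '-': '~',
}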

# ------------------------------------------------------------------
# Macro object
#
# This object holds information about preprocessor macros
#
#    .name      - Macro name (string)
#    .value     - Macro value (a list of tokens)
#    .arglist   - List of argument names
#    .variadic  - Boolean indicating whether or not variadic macro
#    .vararg    - Name of the variadic parameter
#
# When a macro is created, the macro replacement token sequence is
# pre-scanned and used to create patch lists that are later used
# during macro expansion
# ------------------------------------------------------------------ 
Author: nojanath, Project: SublimeKSP, Lines: 20, Source: cpp.py
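
A minimal Macro class consistent with the attribute list above might look like the sketch below; this is an illustration, not necessarily the project's exact definition:

class Macro(object):
    def __init__(self, name, value, arglist=None, variadic=False):
        self.name = name            # macro name (string)
        self.value = value          # replacement token sequence (list of tokens)
        self.arglist = arglist      # argument names, or None for object-like macros
        self.variadic = variadic    # True if the macro is variadic
        if variadic:
            self.vararg = arglist[-1]   # name of the variadic parameter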

Example 2: p_pla_declaration

# Required import: from ply import lex [as alias]
# Alternatively: from ply.lex import token [as alias]
def p_pla_declaration(p):
  """pla_declaration : I NUMBER NEWLINE
                     | O NUMBER NEWLINE
                     | P NUMBER NEWLINE
                     | MV number_list NEWLINE
                     | ILB symbol_list NEWLINE
                     | OB symbol_list NEWLINE
                     | L NUMBER symbol_list NEWLINE
                     | TYPE SYMBOL NEWLINE
  """
  token = p[1].lower()
  if token == ".i":
    pla.ni = int(p[2])
  elif token == ".o":
    pla.no = int(p[2])
  elif token == ".mv":
    pla.mv = [int(v) for v in p[2]]
  elif token == ".ilb":
    pla.ilb = p[2]
  elif token == ".ob":
    pla.ob = p[2]
  elif token == ".l":
    pla.label = p[2]
  elif token == ".type":
    pla.set_type = p[2] 
Author: google, Project: qkeras, Lines: 27, Source: parser.py
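
For context, this rule handles the header directives of a Berkeley PLA file. A hypothetical input covered by the alternatives above:

.i 4
.o 1
.ilb a b c d
.ob f
.type fr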

Example 3: tokenize

# Required import: from ply import lex [as alias]
# Alternatively: from ply.lex import token [as alias]
def tokenize(self,text):
        tokens = []
        self.lexer.input(text)
        while True:
            tok = self.lexer.token()
            if not tok: break
            tokens.append(tok)
        return tokens

    # ---------------------------------------------------------------------
    # error()
    #
    # Report a preprocessor error/warning of some kind
    # ---------------------------------------------------------------------- 
Author: nojanath, Project: SublimeKSP, Lines: 16, Source: cpp.py
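
A hedged usage sketch: assuming the containing class is a preprocessor constructed with a PLY lexer (the Preprocessor name and constructor here are assumptions), tokenize returns the full token list for a piece of text:

# Hypothetical usage; Preprocessor and its constructor are assumptions.
pp = Preprocessor(lexer)
for tok in pp.tokenize("#define X 1\nX + 2"):
    print(tok.type, tok.value)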

Example 4: error

# Required import: from ply import lex [as alias]
# Alternatively: from ply.lex import token [as alias]
def error(self,file,line,msg):
        print("%s:%d %s" % (file,line,msg))

    # ----------------------------------------------------------------------
    # lexprobe()
    #
    # This method probes the preprocessor lexer object to discover
    # the token types of symbols that are important to the preprocessor.
    # If this works right, the preprocessor will simply "work"
    # with any suitable lexer regardless of how tokens have been named.
    # ---------------------------------------------------------------------- 
Author: nojanath, Project: SublimeKSP, Lines: 13, Source: cpp.py

Example 5: group_lines

# Required import: from ply import lex [as alias]
# Alternatively: from ply.lex import token [as alias]
def group_lines(self,input):
        lex = self.lexer.clone()
        lines = [x.rstrip() for x in input.splitlines()]
        for i in range(len(lines)):  # xrange in the original Python 2 code
            j = i+1
            while lines[i].endswith('\\') and (j < len(lines)):
                lines[i] = lines[i][:-1]+lines[j]
                lines[j] = ""
                j += 1

        input = "\n".join(lines)
        lex.input(input)
        lex.lineno = 1

        current_line = []
        while True:
            tok = lex.token()
            if not tok:
                break
            current_line.append(tok)
            if tok.type in self.t_WS and '\n' in tok.value:
                yield current_line
                current_line = []

        if current_line:
            yield current_line

    # ----------------------------------------------------------------------
    # tokenstrip()
    # 
    # Remove leading/trailing whitespace tokens from a token list
    # ---------------------------------------------------------------------- 
Author: nojanath, Project: SublimeKSP, Lines: 34, Source: cpp.py
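
To see what group_lines yields, feed it text containing a backslash line continuation; each yielded item is the token list for one logical (continuation-joined) line. The preprocessor instance pp is assumed:

# Hypothetical usage; assumes a preprocessor instance `pp` with a suitable lexer.
src = "#define ADD(a, b) \\\n    ((a) + (b))\nint x;\n"
for line_tokens in pp.group_lines(src):
    print([t.value for t in line_tokens])   # one list per logical line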

Example 6: parse

# Required import: from ply import lex [as alias]
# Alternatively: from ply.lex import token [as alias]
def parse(self,input,source=None,ignore={}):
        self.ignore = ignore
        self.parser = self.parsegen(input,source)
        
    # ----------------------------------------------------------------------
    # token()
    #
    # Method to return individual tokens
    # ---------------------------------------------------------------------- 
Author: nojanath, Project: SublimeKSP, Lines: 11, Source: cpp.py

Example 7: token

# Required import: from ply import lex [as alias]
# Alternatively: from ply.lex import token [as alias]
def token(self):
        try:
            while True:
                tok = next(self.parser)
                if tok.type not in self.ignore: return tok
        except StopIteration:
            self.parser = None
            return None 
Author: nojanath, Project: SublimeKSP, Lines: 10, Source: cpp.py
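
Examples 6 and 7 work as a pair: parse() arms the token generator and token() drains it, skipping ignored token types. A usage sketch, with the construction of the preprocessor instance assumed:

# Hypothetical usage; construction of pp is an assumption.
pp.parse("#define PI 3.14\nPI * 2\n", source="demo.c")
while True:
    tok = pp.token()     # returns None once the generator is exhausted
    if not tok:
        break
    print(tok.value)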


Note: The ply.lex.token examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects; copyright remains with the original authors, and distribution or use should follow each project's License. Please do not reproduce without permission.