

Python Tokenizer.consume_token Method Code Examples

This article collects typical usage examples of the Python method tokenizer.Tokenizer.consume_token. If you have been wondering what exactly Tokenizer.consume_token does and how to use it, the curated code example below may help. You can also explore further usage examples of its containing class, tokenizer.Tokenizer.


The following presents 1 code example of the Tokenizer.consume_token method.
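Before the example itself, it helps to pin down what consume_token is expected to do. Judging only from the call sites in the example below, consume_token(value) verifies that the current token carries the expected value, then advances and returns the token that follows. The sketch below is a hypothetical reconstruction under that assumption; the Token shape, the ParsingError signature, and all method bodies are inferred, not the project's actual code:

from collections import namedtuple

# Minimal token shape mirroring the attributes the example relies on:
# .type, .value, and .begin == (line, column).
Token = namedtuple('Token', ['type', 'value', 'begin'])

class ParsingError(Exception):
    """Raised when the parser meets an unexpected token."""
    def __init__(self, message, column, line):
        super().__init__("%s (line %s, col %s)" % (message, line, column))

class Tokenizer:
    """Hypothetical sketch; the real class is tokenizer.Tokenizer."""
    def __init__(self, tokens):
        self._tokens = tokens
        self._index = 0

    def current_token(self):
        return self._tokens[self._index]

    def next(self):
        self._index += 1
        return self._tokens[self._index]

    def consume_token(self, expected_value):
        # check that the current token carries the expected value,
        # then step past it and return the token that follows
        token = self.current_token()
        if token.value != expected_value:
            raise ParsingError("Expected %s but got %s"
                               % (expected_value, token.value),
                               token.begin[1], token.begin[0])
        return self.next()

With a stream such as file = "out.dat", stepping past file with next() and then calling consume_token('=') returns the STRING token "out.dat", which is exactly the pattern used in the example.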

Example 1: Compiler

# Required import: from tokenizer import Tokenizer [as alias]
# Or: from tokenizer.Tokenizer import consume_token [as alias]

#......... part of the code omitted here .........
            if token.value in ('to', 'from'):
                break
                
        return statement
    
    def _read_destination_statement(self):
        """ Read a destination statement: where to store the data.
        
            Args:
               None
               
            Returns:
               DestinationStatement: the parsed destination statement
        
            Raises:
               ParsingError: if an unexpected token is encountered
        """
        statement = DestinationStatement()
        token = self._tokenizer.current_token()
        
        # after the destination keyword, file and format parameters may follow
        while True:
            
            if token.type == 'ENDMARKER' or token.value == 'from':
                # leave loop 
                break
            # for the moment look for file or format
            elif token.value == 'file':
                
                statement.add_type(token.value)
                
                # next token and look for =
                self._tokenizer.next()
                token = self._tokenizer.consume_token('=')
                
                if token.type == 'STRING':
                    statement.add_value(token.value)
                else:
                    raise ParsingError("Expected a STRING type but instead got %s with type %s"%(token.value,token.type),token.begin[1],token.begin[0])
            elif token.value == 'format':
                # next token and look for =
                self._tokenizer.next()
                token = self._tokenizer.consume_token('=')
                
                # it should be a NAME token
                if token.type == 'NAME':
                    statement.add_format(token.value)
                else:
                    raise ParsingError("Expected a NAME type but instead got %s with type %s"%(token.value,token.type),token.begin[1],token.begin[0])
            elif token.value != ',':
                raise ParsingError("Expected a file or format parameter but instead got %s with type %s"%(token.value,token.type),token.begin[1],token.begin[0])
            
            # in case of a ',' separator, do nothing and just consume it
            
            # get the next token
            token = self._tokenizer.next()
        
        return statement
    
    def _read_origin_statement(self):
        """ origin statement. This is where to read the data.
        
            Args:
               None
               
            Returns:
Developer ID: gaubert, Project: java-balivernes, Lines of code: 70, Source file: compiler.py
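To make the example easier to follow, here is a minimal sketch of the DestinationStatement container that _read_destination_statement fills in. It is hypothetical, reconstructed solely from the three mutators the example calls (add_type, add_value, add_format); the real class lives elsewhere in the same project:

# Hypothetical sketch of the statement container used above; only the
# three mutators called by _read_destination_statement are assumed.
class DestinationStatement:
    def __init__(self):
        self.type = None     # destination kind, e.g. 'file'
        self.value = None    # destination value, e.g. a quoted path
        self.format = None   # output format name, e.g. 'CSV'

    def add_type(self, dest_type):
        self.type = dest_type

    def add_value(self, value):
        self.value = value

    def add_format(self, fmt):
        self.format = fmt

With these pieces in place, a destination clause such as file = "output.dat", format = CSV is consumed token by token: 'file' triggers consume_token('='), the returned STRING becomes the statement's value, a ',' separator is silently skipped, 'format' is handled the same way, and the loop terminates at 'from' or the end marker.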

