

Python Lexer.get_tokens method: code examples

This article collects typical usage examples of the Python method lexer.Lexer.get_tokens. If you are wondering what Lexer.get_tokens does or how to use it, the curated examples below may help. You can also explore further usage examples of the containing class, lexer.Lexer.


The following presents 2 code examples of Lexer.get_tokens, sorted by popularity by default.
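The article never shows lexer.py itself, so before the examples it is worth noting what shape the class must have. The sketch below is a hypothetical minimal Lexer, written only to make the two calling conventions used later concrete: Example 1 passes the text to `get_tokens` directly, while Example 2 passes it to the constructor and calls `get_tokens()` with no arguments. The real project's lexer is certainly more elaborate.

```python
# Hypothetical minimal Lexer -- an illustration, not the project's lexer.py.
class Lexer:
    """Splits an arithmetic input string into single-character tokens,
    dropping whitespace. Supports both call styles seen in the examples."""

    def __init__(self, text=''):
        self.text = text

    def get_tokens(self, text=None):
        # Use the argument if given (Example 1 style),
        # otherwise the text stored at construction (Example 2 style).
        source = text if text is not None else self.text
        return [ch for ch in source if not ch.isspace()]

print(Lexer().get_tokens('1+2\n'))   # ['1', '+', '2']
print(Lexer('1+2').get_tokens())     # ['1', '+', '2']
```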

Example 1: Lexer

# Required import: from lexer import Lexer [as alias]
# Or: from lexer.Lexer import get_tokens [as alias]
from lexer import Lexer

#################################### Lexer #####################################
#          The lexer class converts input text to tokens. See lexer.py         #
################################################################################

lexer = Lexer()

#################################### Stacks ####################################
#                These stacks will be modified by the main loop                #
################################################################################

input_stack = lexer.get_tokens('1+2\n1+2\n1+2\n')
parser_stack = [lexer.F] # F is the start rule
rules = []

#################################### Rules #####################################
#   * rules_table defines the relations between terminals and non-terminals    #
#          for example, if the next token in the input_stack is a zero,        #
#           and the current non-terminal in the parser_stack is a D,           #
#                          the rule 3 must be applied                          #
#                                                                              #
#    * rules_tokens contains what is expected when following a given rule      #
#            for example, after applying rule 3 a zero is expected             #
################################################################################

rules_table = [
#   |F|---|L|---|D|--|
    [0,    1,    3   ], #TOKEN_0
    [0,    1,    4   ], #TOKEN_1
    [0,    1,    5   ], #TOKEN_2
Author: Ghuizing | Project: Additions | Lines: 33 | Source file: parser.py
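Example 1 cuts off before the main loop, but the comments describe a classic table-driven LL(1) parse: pop a symbol from `parser_stack`, and if it is a non-terminal, look up the rule selected by the next input token. The sketch below reconstructs that loop under stated assumptions: the grammar, rule numbering, and `RULES_TABLE` here are a simplified illustration (digits only, no `+`), not the Additions project's actual tables.

```python
# Hypothetical reconstruction of the table-driven LL(1) loop.
# Grammar (illustrative): F -> L ; L -> D L | '\n' ; D -> '0' | '1' | '2'
TERMINALS = {'0', '1', '2', '\n'}

RULES = {
    0: ['L'],        # F -> L
    1: ['D', 'L'],   # L -> D L
    2: ['\n'],       # L -> '\n'  (end of line)
    3: ['0'],        # D -> '0'
    4: ['1'],        # D -> '1'
    5: ['2'],        # D -> '2'
}

# Rule to apply, indexed by (lookahead token, non-terminal) --
# e.g. lookahead '0' with non-terminal 'D' selects rule 3.
RULES_TABLE = {
    '0':  {'F': 0, 'L': 1, 'D': 3},
    '1':  {'F': 0, 'L': 1, 'D': 4},
    '2':  {'F': 0, 'L': 1, 'D': 5},
    '\n': {'F': 0, 'L': 2},
}

def parse(tokens):
    """Return the sequence of rule numbers applied; raise on a bad token."""
    input_stack = list(reversed(tokens))  # next token on top
    parser_stack = ['F']                  # F is the start rule
    applied = []
    while parser_stack:
        top = parser_stack.pop()
        if top in TERMINALS:
            if not input_stack or input_stack.pop() != top:
                raise SyntaxError('unexpected token, wanted %r' % top)
            continue
        lookahead = input_stack[-1] if input_stack else '\n'
        rule = RULES_TABLE[lookahead][top]
        applied.append(rule)
        # Push the rule's right-hand side, leftmost symbol on top.
        parser_stack.extend(reversed(RULES[rule]))
    return applied

print(parse(list('12\n')))  # [0, 1, 4, 1, 5, 2]
```

Note how the rule quoted in the banner comment holds here too: `RULES_TABLE['0']['D']` is 3.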

Example 2: Converter

# Required import: from lexer import Lexer [as alias]
# Or: from lexer.Lexer import get_tokens [as alias]
class Converter(object):
	"""Takes a string that represents a raw
	Lithp program, and converts it to C++.
	All of this is done at construction,
	and the C++ code can be accessed with
	the .get_cpp() method.
	"""

	TYPES_DICT = { #only types that change names when converted to C++
		'long'    : 'long long',
		'ints'    : 'vector<int>',
		'longs'   : 'vector<long long>',
		'floats'  : 'vector<float>',
		'doubles' : 'vector<double>',
		'chars'   : 'vector<char>'
	}

	FUNCS_DICT = { #only functions that change names when converted to C++
		#These three can't be used as macros
		'or'  : 'or_',
		'and' : 'and_',
		'not' : 'not_',

		'do'  : '' #this way do(a,b,c) becomes (a,b,c) which uses
		           #the comma operator to execute a, b, and c
	}

	def __init__(self, program):
		self.program = program
		self.lexer = Lexer(self.program)
		self.tokens = self.lexer.get_tokens()


		#Initializations (some are pointless (self.main, self.converted))
		self.func_count = 0
		self.func_dict = {} #{func_name: func_header_and_body, ...}
		self.cpp_declarations = {} #{func_name : cpp_func_decl, ...}
		self.func_bodies = {} #{func_name: func_body, ...}
		self.cpp_func_bodies = {} #{func_name: cpp_func_body, ...}
		self.main = ''
		self.converted = ''
		self.convert() #sets self.converted

	def convert(self):
		"""Converts the program into C++ code
		Code must be compiled with lithp.hpp
		"""
		self.make_func_dict() #sets self.func_dict
		self.make_main_function() #sets self.main
		self.remove_lambda_nesting()
		self.replace_self_with_func_names()
		self.make_func_declarations() #sets self.cpp_declarations
		self.make_func_bodies() #sets self.cpp_func_bodies		
		self.make_cpp_func_bodies()
		lines = []
		lines.append('#include "lithp.hpp"')
		for name, signature in self.cpp_declarations.items():
			lines.append(signature + ';')

		for name, signature in self.cpp_declarations.items():
			if name == 'main': continue
			lines.append(signature + '{')
			lines.append('    return ' + self.cpp_func_bodies[name] + ';\n}')
		lines.append(
"""
int main(){
    %s;
    return 0;
}
""" % self.cpp_func_bodies['main'])
		self.converted = '\n'.join(lines)		
		return self.converted

	def make_func_dict(self):
		"""Looks at tokens and forms dictionary
		mapping generated function names to function
		bodies
		"""
		index = 0
		while index < len(self.tokens):
			if self.tokens[index] == '\\': #Lambda
				#Every lambda looks like this:
				#(\ (param1:type1, ...) : return_type
				#  expression)

				#That expression can then be used as a function
				#i.e. (  (\(...):type (...))  param1 param2 ...)
				#     calls the lambda

				#Parentheses around entire function
				i = self.tokens.match_paren(index - 1)

				#Create unique function name
				func_name = 'f%d' % self.func_count

				#                           function body
				self.func_dict[func_name] = self.tokens[index-1:i+1].get_joined()
				self.func_count += 1

			index += 1
#......... (remaining code omitted) .........
Author: plusgood | Project: lithp | Lines: 103 | Source file: converter.py
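The two dictionaries at the top of Converter carry most of the renaming logic: `TYPES_DICT` rewrites Lithp type names to C++ spellings, and `FUNCS_DICT` maps `do` to the empty string so that `do(a,b,c)` becomes `(a,b,c)`, exploiting C++'s comma operator. The helper below is a hypothetical illustration of that lookup step; `translate_name` is not a method of the real converter.py, which performs the substitution during token processing.

```python
# Hypothetical name-translation helper mirroring Converter's dictionaries.
TYPES_DICT = {  # only types that change names when converted to C++
    'long':    'long long',
    'ints':    'vector<int>',
    'longs':   'vector<long long>',
    'floats':  'vector<float>',
    'doubles': 'vector<double>',
    'chars':   'vector<char>',
}

FUNCS_DICT = {  # only functions that change names when converted to C++
    'or':  'or_',
    'and': 'and_',
    'not': 'not_',
    'do':  '',   # do(a,b,c) -> (a,b,c), the comma-operator trick
}

def translate_name(name):
    """Map a Lithp identifier to its C++ spelling; unknown names pass through."""
    if name in TYPES_DICT:
        return TYPES_DICT[name]
    if name in FUNCS_DICT:
        return FUNCS_DICT[name]
    return name

print(translate_name('ints'))               # vector<int>
print(translate_name('do') + '(a,b,c)')     # (a,b,c)
```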


Note: The lexer.Lexer.get_tokens examples in this article were collected by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs, and the snippets were selected from open-source projects contributed by their authors. Copyright in the source code remains with the original authors; consult each project's license before distributing or using the code, and do not reproduce this article without permission.