This article collects typical usage examples of the Java class com.romanenco.cfrm.Lexer. If you are wondering what the Lexer class does, how to use it, or what real code that uses it looks like, the hand-picked examples below may help.
The Lexer class belongs to the com.romanenco.cfrm package. Five code examples of the class are shown below, sorted by popularity by default.
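All five examples follow the same pattern: a concrete Grammar drives a RegLexer (the Lexer implementation used throughout), and the resulting token list is handed to a parser. Below is a minimal sketch of that pattern, composed only from calls that appear in the examples; the sub-packages of every class other than com.romanenco.cfrm.Lexer are not shown in the source, so their imports are omitted here.
import java.util.List;
import com.romanenco.cfrm.Lexer; // the only import shown in the examples below

// Minimal sketch of the shared pattern: grammar -> lexer -> token list -> parser.
// Grammar, RegLexer, Token, Parser, LLParser and ParsingTreeNode come from the same
// library; their exact sub-packages are not shown in the source.
public static ParsingTreeNode tokenizeAndParse(Grammar grammar, String input) {
    final Lexer lexer = new RegLexer(grammar);        // the Lexer implementation used in all examples
    final List<Token> tokens = lexer.tokenize(input); // may throw LexerError on unrecognized input
    final Parser parser = new LLParser(grammar);
    return parser.parse(tokens);
}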
Example 1: test
import com.romanenco.cfrm.Lexer; // import the required package/class
@Test
public void test() {
    final ABGrammar grammar = new ABGrammar();
    final Lexer lexer = new RegLexer(grammar);
    final Parser parser = new LLParser(grammar);
    final ParsingTreeNode parsingTree = parser.parse(lexer.tokenize("1+2"));
    Assert.assertNotNull(parsingTree);
    // Attach a syntax-directed-translation handler to the SUM production and run it.
    final ASTBuilder builder = new ASTBuilder();
    builder.addSDTHandler(grammar.getProduction("SUM -> int + int"), new SumVisitor());
    builder.build(parsingTree);
    final Object attribute = parsingTree.getAttribute(SUM_ATTR);
    Assert.assertNotNull(attribute);
    Assert.assertEquals(3, attribute); // the value computed by SumVisitor for "1+2"
}
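The grammar (ABGrammar), the SDT visitor (SumVisitor) and the attribute key (SUM_ATTR) are defined elsewhere in the test project and are not shown in the source. Assuming those same definitions, the test's pipeline can be collapsed into a small helper; the method below is only a sketch of how the pieces compose, not part of the original example.
// Sketch only: ABGrammar, SumVisitor and SUM_ATTR are the (unshown) classes/constant
// from Example 1, not part of this snippet.
private static Object evaluateSum(String input) {
    final ABGrammar grammar = new ABGrammar();
    final Lexer lexer = new RegLexer(grammar);
    final Parser parser = new LLParser(grammar);
    final ParsingTreeNode tree = parser.parse(lexer.tokenize(input));

    final ASTBuilder builder = new ASTBuilder();
    builder.addSDTHandler(grammar.getProduction("SUM -> int + int"), new SumVisitor());
    builder.build(tree);                // run the syntax-directed translation

    return tree.getAttribute(SUM_ATTR); // value computed by SumVisitor, e.g. 3 for "1+2"
}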
Example 2: goodTest
import com.romanenco.cfrm.Lexer; // import the required package/class
@Test
public void goodTest() {
    final Lexer lexer = getLexer(getGrammar());
    String input;
    List<Token> tokens;

    input = "44 + 123 * -89";
    tokens = lexer.tokenize(input);
    Assert.assertEquals(6, tokens.size());

    input = "44*123*89*5+90";
    tokens = lexer.tokenize(input);
    Assert.assertEquals(10, tokens.size());

    input = "44+4--56";
    tokens = lexer.tokenize(input);
    Assert.assertEquals(6, tokens.size());

    // These are still valid inputs for the lexer.
    input = "****";
    tokens = lexer.tokenize(input);
    Assert.assertEquals(5, tokens.size());

    input = "*+ 56 ++ 67";
    tokens = lexer.tokenize(input);
    Assert.assertEquals(7, tokens.size());
}
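The expected sizes are one larger than the number of visible lexemes in each input (the clearest case is "****": four characters, five tokens), which suggests that tokenize() appends a trailing end-of-input token. A further case in the same style, written under that assumption only:
@Test
public void endMarkerSketch() {
    // Assumption (not stated in the source): tokenize() appends one end-of-input token
    // after the visible lexemes, which would explain why "****" yields 5 tokens, not 4.
    final Lexer lexer = getLexer(getGrammar());
    final List<Token> tokens = lexer.tokenize("7 * 8");
    Assert.assertEquals(4, tokens.size()); // 7, *, 8 plus the assumed end-of-input token
}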
Example 3: getLexer
import com.romanenco.cfrm.Lexer; // import the required package/class
private Lexer getLexer(Grammar grammar) {
    return new RegLexer(grammar);
}
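getLexer() pairs with a getGrammar() factory that the source does not show, keeping the tests above independent of the concrete Lexer and Grammar implementations. A hypothetical counterpart, purely for illustration:
// Hypothetical counterpart to getLexer(); the concrete Grammar returned by the real
// getGrammar() is not shown in the source, so ABGrammar is only a stand-in here.
private Grammar getGrammar() {
    return new ABGrammar();
}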
Example 4: badTest
import com.romanenco.cfrm.Lexer; // import the required package/class
@Test(expected = LexerError.class)
public void badTest() {
    final Lexer lexer = getLexer(getGrammar());
    final String input = "44+X*89";
    lexer.tokenize(input);
}
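The 'X' presumably matches no terminal of the grammar, so tokenize() is expected to throw a LexerError. The same check, written as a sketch with an explicit try/catch instead of JUnit's expected-exception annotation:
// The same expectation written with an explicit try/catch instead of @Test(expected = ...).
final Lexer lexer = getLexer(getGrammar());
try {
    lexer.tokenize("44+X*89");
    Assert.fail("expected a LexerError for the unrecognized 'X'");
} catch (final LexerError e) {
    // expected: the lexer rejects input it cannot match against the grammar's terminals
}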
Example 5: parseSource
import com.romanenco.cfrm.Lexer; // import the required package/class
public static ParsingTreeNode parseSource(String input) {
    final Lexer lexer = new RegLexer(GRAMMAR);
    final Parser parser = new LLParser(GRAMMAR);
    return parser.parse(lexer.tokenize(input));
}
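parseSource() bundles lexing and parsing behind a single static call over a shared GRAMMAR constant that the source does not show. A usage sketch; the sample input is purely illustrative, since what parses successfully depends on GRAMMAR:
// Illustrative call only; "1+2" stands in for whatever input GRAMMAR actually accepts.
final ParsingTreeNode tree = parseSource("1+2");
Assert.assertNotNull(tree);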