

Java ATNDeserializer Class Code Examples

This article collects typical usage examples of the Java class org.antlr.v4.runtime.atn.ATNDeserializer. If you are wondering what ATNDeserializer is for, or how to use it in your own code, the curated examples below may help.

The ATNDeserializer class belongs to the org.antlr.v4.runtime.atn package. A total of 9 code examples of the class are shown below, sorted by popularity by default.
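
Before the project-specific examples, here is a minimal, self-contained sketch of the pattern they all share: turning a serialized ATN string back into an ATN object. It assumes an ANTLR 4 runtime (pre-4.10) in which deserialize accepts a char[], matching the examples below; the serializedAtn argument stands for something like the _serializedATN constant that ANTLR writes into generated recognizers.

import org.antlr.v4.runtime.atn.ATN;
import org.antlr.v4.runtime.atn.ATNDeserializationOptions;
import org.antlr.v4.runtime.atn.ATNDeserializer;

public class ATNDeserializerSketch {
	// Plain deserialization, as used in most of the examples below.
	static ATN deserialize(String serializedAtn) {
		return new ATNDeserializer().deserialize(serializedAtn.toCharArray());
	}

	// Deserialization with rule-bypass transitions enabled, as in Example 1.
	static ATN deserializeWithBypassAlts(String serializedAtn) {
		ATNDeserializationOptions options = new ATNDeserializationOptions();
		options.setGenerateRuleBypassTransitions(true);
		return new ATNDeserializer(options).deserialize(serializedAtn.toCharArray());
	}
}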

Example 1: getATNWithBypassAlts

import org.antlr.v4.runtime.atn.ATNDeserializer; // import the required package/class
/**
 * The ATN with bypass alternatives is expensive to create so we create it
 * lazily.
 *
 * @throws UnsupportedOperationException if the current parser does not
 * implement the {@link #getSerializedATN()} method.
 */
@NotNull
public ATN getATNWithBypassAlts() {
	String serializedAtn = getSerializedATN();
	if (serializedAtn == null) {
		throw new UnsupportedOperationException("The current parser does not support an ATN with bypass alternatives.");
	}

	synchronized (bypassAltsAtnCache) {
		ATN result = bypassAltsAtnCache.get(serializedAtn);
		if (result == null) {
			ATNDeserializationOptions deserializationOptions = new ATNDeserializationOptions();
			deserializationOptions.setGenerateRuleBypassTransitions(true);
			result = new ATNDeserializer(deserializationOptions).deserialize(serializedAtn.toCharArray());
			bypassAltsAtnCache.put(serializedAtn, result);
		}

		return result;
	}
}
 
Developer: MegaApuTurkUltra, Project: Scratch-ApuC, Lines: 27, Source: Parser.java
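
In the ANTLR 4 runtime, the main consumer of this bypass-alternatives ATN is parse-tree pattern matching via Parser.compileParseTreePattern, which is why the cache above is worth the extra bookkeeping. Below is a hedged sketch of that use; the parser, tree, pattern string, and rule index are all caller-supplied placeholders.

import org.antlr.v4.runtime.Parser;
import org.antlr.v4.runtime.tree.ParseTree;
import org.antlr.v4.runtime.tree.pattern.ParseTreeMatch;
import org.antlr.v4.runtime.tree.pattern.ParseTreePattern;

public class BypassAltsUsageSketch {
	// Returns true if the given subtree matches a pattern such as "<expr> + <expr>".
	static boolean matchesPattern(Parser parser, ParseTree tree, String pattern, int ruleIndex) {
		// compileParseTreePattern builds an interpreter over the bypass ATN
		// returned by getATNWithBypassAlts(), shown in Example 1.
		ParseTreePattern compiled = parser.compileParseTreePattern(pattern, ruleIndex);
		ParseTreeMatch match = compiled.match(tree);
		return match.succeeded();
	}
}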

Example 2: createLexerInterpreter

import org.antlr.v4.runtime.atn.ATNDeserializer; // import the required package/class
public LexerInterpreter createLexerInterpreter(CharStream input) {
	if (this.isParser()) {
		throw new IllegalStateException("A lexer interpreter can only be created for a lexer or combined grammar.");
	}

	if (this.isCombined()) {
		return implicitLexer.createLexerInterpreter(input);
	}

	char[] serializedAtn = ATNSerializer.getSerializedAsChars(atn);
	ATN deserialized = new ATNDeserializer().deserialize(serializedAtn);
	return new LexerInterpreter(fileName, getVocabulary(), Arrays.asList(getRuleNames()), ((LexerGrammar)this).modes.keySet(), deserialized, input);
}
 
Developer: antlr, Project: codebuff, Lines: 14, Source: Grammar.java

Example 3: createGrammarParserInterpreter

import org.antlr.v4.runtime.atn.ATNDeserializer; // import the required package/class
/** @since 4.5.1 */
public GrammarParserInterpreter createGrammarParserInterpreter(TokenStream tokenStream) {
	if (this.isLexer()) {
		throw new IllegalStateException("A parser interpreter can only be created for a parser or combined grammar.");
	}
	char[] serializedAtn = ATNSerializer.getSerializedAsChars(atn);
	ATN deserialized = new ATNDeserializer().deserialize(serializedAtn);
	return new GrammarParserInterpreter(this, deserialized, tokenStream);
}
 
Developer: antlr, Project: codebuff, Lines: 10, Source: Grammar.java

Example 4: createParserInterpreter

import org.antlr.v4.runtime.atn.ATNDeserializer; // import the required package/class
public ParserInterpreter createParserInterpreter(TokenStream tokenStream) {
	if (this.isLexer()) {
		throw new IllegalStateException("A parser interpreter can only be created for a parser or combined grammar.");
	}

	char[] serializedAtn = ATNSerializer.getSerializedAsChars(atn);
	ATN deserialized = new ATNDeserializer().deserialize(serializedAtn);
	return new ParserInterpreter(fileName, getVocabulary(), Arrays.asList(getRuleNames()), deserialized, tokenStream);
}
 
Developer: antlr, Project: codebuff, Lines: 10, Source: Grammar.java
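
Examples 2-4 are the interpreter factory methods of the ANTLR tool's Grammar class. The following is a minimal sketch of how they might be combined to parse input without generating any code. It assumes the full ANTLR 4 tool (org.antlr.v4.tool.Grammar) is on the classpath along with a runtime that provides CharStreams (4.7+); the inline L/P grammars and the start rule "prog" are purely illustrative.

import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.LexerInterpreter;
import org.antlr.v4.runtime.ParserInterpreter;
import org.antlr.v4.runtime.tree.ParseTree;
import org.antlr.v4.tool.Grammar;
import org.antlr.v4.tool.LexerGrammar;

public class GrammarInterpreterSketch {
	public static void main(String[] args) throws Exception {
		// An illustrative lexer grammar plus a parser grammar that shares its tokens.
		LexerGrammar lg = new LexerGrammar(
			"lexer grammar L;\n" +
			"INT : [0-9]+ ;\n" +
			"PLUS : '+' ;\n" +
			"WS : [ \\t\\r\\n]+ -> skip ;\n");
		Grammar g = new Grammar(
			"parser grammar P;\n" +
			"prog : INT (PLUS INT)* EOF ;\n", lg);

		// The factories from Examples 2 and 4 deserialize the grammar's ATN
		// and wrap it in runtime interpreters.
		LexerInterpreter lexer = lg.createLexerInterpreter(CharStreams.fromString("1+2+3"));
		CommonTokenStream tokens = new CommonTokenStream(lexer);
		ParserInterpreter parser = g.createParserInterpreter(tokens);
		ParseTree tree = parser.parse(g.getRule("prog").index);
		System.out.println(tree.toStringTree(parser));
	}
}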

Example 5: loadTokens

import org.antlr.v4.runtime.atn.ATNDeserializer; // import the required package/class
private void loadTokens(final Document document, LexerInterpreterData interpreterData, LexerTraceAnalyzer analyzer) {
    try {
        TracingCharStream charStream = new TracingCharStream(analyzer, document.getText(0, document.getLength()));
        TracingLexer lexer = new TracingLexer(interpreterData, analyzer, charStream);
        ATN atn = new ATNDeserializer().deserialize(interpreterData.serializedAtn.toCharArray());
        TracingLexerATNSimulator atnSimulator = new TracingLexerATNSimulator(analyzer, lexer, atn);
        lexer.setInterpreter(atnSimulator);
        CommonTokenStream commonTokenStream = new CommonTokenStream(lexer);
        commonTokenStream.fill();
    } catch (BadLocationException ex) {
        Exceptions.printStackTrace(ex);
    }
}
 
Developer: tunnelvisionlabs, Project: goworks, Lines: 14, Source: AbstractGrammarDebuggerEditorKit.java

Example 6: createLexer

import org.antlr.v4.runtime.atn.ATNDeserializer; // import the required package/class
@Override
protected TokenSourceWithStateV4<SimpleLexerState> createLexer(CharStream input, SimpleLexerState startState) {
    ATN atn = new ATNDeserializer().deserialize(lexerInterpreterData.serializedAtn.toCharArray());
    Vocabulary vocabulary = lexerInterpreterData.vocabulary;
    String grammarFileName = lexerInterpreterData.grammarFileName;
    List<String> ruleNames = lexerInterpreterData.ruleNames;
    List<String> modeNames = lexerInterpreterData.modeNames;
    ParserDebuggerLexerWrapper lexer = new ParserDebuggerLexerWrapper(grammarFileName, vocabulary, ruleNames, modeNames, atn, input);
    startState.apply(lexer);
    return lexer;
}
 
Developer: tunnelvisionlabs, Project: goworks, Lines: 12, Source: ParserDebuggerTokensTaskTaggerSnapshot.java

Example 7: getEffectiveTokenSource

import org.antlr.v4.runtime.atn.ATNDeserializer; // import the required package/class
@Override
protected TokenSource getEffectiveTokenSource(TokenSourceWithStateV4<SimpleLexerState> lexer) {
    ATN atn = new ATNDeserializer().deserialize(lexerInterpreterData.serializedAtn.toCharArray());
    Vocabulary vocabulary = lexerInterpreterData.vocabulary;
    String grammarFileName = lexerInterpreterData.grammarFileName;
    List<String> ruleNames = lexerInterpreterData.ruleNames;
    List<String> modeNames = lexerInterpreterData.modeNames;
    return new ParserDebuggerLexerWrapper(grammarFileName, vocabulary, ruleNames, modeNames, atn, lexer.getInputStream());
}
 
Developer: tunnelvisionlabs, Project: goworks, Lines: 10, Source: ParserDebuggerTokensTaskTaggerSnapshot.java

Example 8: parse

import org.antlr.v4.runtime.atn.ATNDeserializer; // import the required package/class
@Override
public void parse(ParserTaskManager taskManager, ParseContext context, DocumentSnapshot snapshot, Collection<? extends ParserDataDefinition<?>> requestedData, ParserResultHandler results)
    throws InterruptedException, ExecutionException {

    //ParserDebuggerEditorKit.LEX
    synchronized (lock) {
        ParserData<FileParseResult> fileParseResultData = taskManager.getData(snapshot, ParserDebuggerParserDataDefinitions.FILE_PARSE_RESULT, EnumSet.of(ParserDataOptions.NO_UPDATE)).get();
        ParserData<ParserRuleContext> parseTreeResult = taskManager.getData(snapshot, ParserDebuggerParserDataDefinitions.REFERENCE_PARSE_TREE, EnumSet.of(ParserDataOptions.NO_UPDATE)).get();
        if (fileParseResultData == null || parseTreeResult == null) {
            Future<ParserData<Tagger<TokenTag<Token>>>> futureTokensData = taskManager.getData(snapshot, ParserDebuggerParserDataDefinitions.LEXER_TOKENS);
            Tagger<TokenTag<Token>> tagger = futureTokensData.get().getData();
            TaggerTokenSource tokenSource = new TaggerTokenSource(tagger, snapshot);
            InterruptableTokenStream tokenStream = new InterruptableTokenStream(tokenSource);
            ParserRuleContext parseResult;

            ParserInterpreterData parserInterpreterData = (ParserInterpreterData)snapshot.getVersionedDocument().getDocument().getProperty(ParserDebuggerEditorKit.PROP_PARSER_INTERP_DATA);
            String grammarFileName = parserInterpreterData.grammarFileName;
            Vocabulary vocabulary = parserInterpreterData.vocabulary;
            List<String> ruleNames = parserInterpreterData.ruleNames;
            ATN atn = new ATNDeserializer().deserialize(parserInterpreterData.serializedAtn.toCharArray());
            TracingParserInterpreter parser = new TracingParserInterpreter(grammarFileName, vocabulary, ruleNames, atn, tokenStream);

            long startTime = System.nanoTime();
            parser.setInterpreter(new StatisticsParserATNSimulator(parser, atn));
            parser.getInterpreter().optimize_ll1 = false;
            parser.getInterpreter().reportAmbiguities = true;
            parser.getInterpreter().setPredictionMode(PredictionMode.LL_EXACT_AMBIG_DETECTION);
            parser.removeErrorListeners();
            parser.addErrorListener(DescriptiveErrorListener.INSTANCE);
            parser.addErrorListener(new StatisticsParserErrorListener());
            SyntaxErrorListener syntaxErrorListener = new SyntaxErrorListener(snapshot);
            parser.addErrorListener(syntaxErrorListener);
            parser.setBuildParseTree(true);
            parser.setErrorHandler(new DefaultErrorStrategy());
            parseResult = parser.parse(parserInterpreterData.startRuleIndex);

            String sourceName = (String)snapshot.getVersionedDocument().getDocument().getProperty(Document.TitleProperty);
            FileParseResult fileParseResult = new FileParseResult(sourceName, 0, parseResult, syntaxErrorListener.getSyntaxErrors(), tokenStream.size(), startTime, null, parser);
            fileParseResultData = new BaseParserData<>(context, ParserDebuggerParserDataDefinitions.FILE_PARSE_RESULT, snapshot, fileParseResult);
            parseTreeResult = new BaseParserData<>(context, ParserDebuggerParserDataDefinitions.REFERENCE_PARSE_TREE, snapshot, parseResult);
        }

        results.addResult(fileParseResultData);
        results.addResult(parseTreeResult);
    }
}
 
Developer: tunnelvisionlabs, Project: goworks, Lines: 47, Source: ParserDebuggerReferenceAnchorsParserTask.java

Example 9: PreviewParser

import org.antlr.v4.runtime.atn.ATNDeserializer; // import the required package/class
public PreviewParser(Grammar g, TokenStream input) {
	this(g, new ATNDeserializer().deserialize(ATNSerializer.getSerializedAsChars(g.getATN())), input);
}
 
Developer: antlr, Project: intellij-plugin-v4, Lines: 4, Source: PreviewParser.java


Note: The org.antlr.v4.runtime.atn.ATNDeserializer examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by various developers, and copyright of the source code remains with the original authors; consult the license of the corresponding project before distributing or reusing the code. Please do not reproduce this article without permission.