

Python pygments.token.Punctuation Code Examples

This article collects typical usage examples of Punctuation from the pygments.token module in Python (strictly speaking, Punctuation is a token type rather than a method). If you are unsure what pygments.token.Punctuation is for or how to use it, the curated examples below should help. You can also explore further usage examples of pygments.token.


The sections below present 8 code examples that use token.Punctuation, sorted by popularity by default. Upvoting the examples you like or find useful helps the system recommend better Python code examples.
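As a quick illustration before the test cases (not taken from them), a lexer's output can be filtered by comparing each token type against Punctuation. The fragment below is a minimal sketch, assuming a standard pygments installation:

```python
from pygments.lexers import PythonLexer
from pygments.token import Punctuation

# Lex a small fragment and keep only the values tagged as Punctuation
# (membership via `in` also matches Punctuation subtypes).
code = "f(x, y)"
puncts = [value for token_type, value in PythonLexer().get_tokens(code)
          if token_type in Punctuation]
print(puncts)  # the parentheses and the comma
```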

Example 1: test_can_cope_with_destructuring

# Required imports (the token names below are used unqualified):
from pygments.token import Keyword, Name, Punctuation, Text
def test_can_cope_with_destructuring(lexer):
    fragment = u'val (a, b) = '
    tokens = [
        (Keyword, u'val'),
        (Text, u' '),
        (Punctuation, u'('),
        (Name.Property, u'a'),
        (Punctuation, u','),
        (Text, u' '),
        (Name.Property, u'b'),
        (Punctuation, u')'),
        (Text, u' '),
        (Punctuation, u'='),
        (Text, u' '),
        (Text, u'\n')
    ]
    assert list(lexer.get_tokens(fragment)) == tokens 
Contributor: pygments | Project: pygments | Lines: 19 | Source: test_kotlin.py
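To try this outside the test suite, the same fragment can be lexed directly. This is a minimal sketch, assuming KotlinLexer is the lexer fixture used by test_kotlin.py:

```python
from pygments.lexers import KotlinLexer
from pygments.token import Punctuation

# Lex the destructuring declaration and collect only the punctuation values.
tokens = list(KotlinLexer().get_tokens('val (a, b) = '))
puncts = [value for token_type, value in tokens if token_type in Punctuation]
print(puncts)
```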

Example 2: test_can_cope_generics_in_destructuring

# Required imports (the token names below are used unqualified):
from pygments.token import Keyword, Name, Punctuation, Text
def test_can_cope_generics_in_destructuring(lexer):
    fragment = u'val (a: List<Something>, b: Set<Wobble>) ='
    tokens = [
        (Keyword, u'val'),
        (Text, u' '),
        (Punctuation, u'('),
        (Name.Property, u'a'),
        (Punctuation, u':'),
        (Text, u' '),
        (Name.Property, u'List'),
        (Punctuation, u'<'),
        (Name, u'Something'),
        (Punctuation, u'>'),
        (Punctuation, u','),
        (Text, u' '),
        (Name.Property, u'b'),
        (Punctuation, u':'),
        (Text, u' '),
        (Name.Property, u'Set'),
        (Punctuation, u'<'),
        (Name, u'Wobble'),
        (Punctuation, u'>'),
        (Punctuation, u')'),
        (Text, u' '),
        (Punctuation, u'='),
        (Text, u'\n')
    ]
    assert list(lexer.get_tokens(fragment)) == tokens 
Contributor: pygments | Project: pygments | Lines: 30 | Source: test_kotlin.py

Example 3: test_can_cope_with_generics

# Required imports (the token names below are used unqualified):
from pygments.token import Keyword, Name, Punctuation, Text
def test_can_cope_with_generics(lexer):
    fragment = u'inline fun <reified T : ContractState> VaultService.queryBy(): Vault.Page<T> {'
    tokens = [
        (Keyword, u'inline fun'),
        (Text, u' '),
        (Punctuation, u'<'),
        (Keyword, u'reified'),
        (Text, u' '),
        (Name, u'T'),
        (Text, u' '),
        (Punctuation, u':'),
        (Text, u' '),
        (Name, u'ContractState'),
        (Punctuation, u'>'),
        (Text, u' '),
        (Name.Class, u'VaultService'),
        (Punctuation, u'.'),
        (Name.Function, u'queryBy'),
        (Punctuation, u'('),
        (Punctuation, u')'),
        (Punctuation, u':'),
        (Text, u' '),
        (Name, u'Vault'),
        (Punctuation, u'.'),
        (Name, u'Page'),
        (Punctuation, u'<'),
        (Name, u'T'),
        (Punctuation, u'>'),
        (Text, u' '),
        (Punctuation, u'{'),
        (Text, u'\n')
    ]
    assert list(lexer.get_tokens(fragment)) == tokens 
Contributor: pygments | Project: pygments | Lines: 35 | Source: test_kotlin.py

Example 4: test_can_reject_almost_float

# Required imports:
from pygments.token import Name, Punctuation
def test_can_reject_almost_float(lexer):
    _assert_tokens_match(lexer, '.e1', ((Punctuation, '.'), (Name, 'e1'))) 
Contributor: pygments | Project: pygments | Lines: 4 | Source: test_sql.py

Example 5: test_can_reject_almost_float

# Required imports:
from pygments.token import Name, Punctuation
def test_can_reject_almost_float(lexer):
    assert_tokens_match(lexer, '.e1', ((Punctuation, '.'), (Name, 'e1'))) 
Contributor: pygments | Project: pygments | Lines: 4 | Source: test_basic.py
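Examples 4 and 5 call an assertion helper defined elsewhere in the pygments test suite. A minimal re-implementation might look like this (a sketch; the real helpers may differ in detail):

```python
from pygments.lexers import PythonLexer
from pygments.token import Name, Punctuation

def assert_tokens_match(lexer, fragment, expected_tokens):
    """Check that lexing `fragment` starts with the expected (type, value) pairs.

    `get_tokens` appends a trailing newline token, so only the leading
    tokens are compared against `expected_tokens`.
    """
    tokens = tuple(lexer.get_tokens(fragment))[:len(expected_tokens)]
    assert tokens == tuple(expected_tokens), \
        'expected %r, got %r' % (expected_tokens, tokens)

# Example: 'f(' should open with a Name token followed by Punctuation.
assert_tokens_match(PythonLexer(), 'f(', ((Name, 'f'), (Punctuation, '(')))
```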

Example 6: test_call

# Required imports:
from pygments.token import Name, Punctuation, Token
def test_call(lexer):
    fragment = u'f(1, a)\n'
    tokens = [
        (Name.Function, u'f'),
        (Punctuation, u'('),
        (Token.Literal.Number, u'1'),
        (Punctuation, u','),
        (Token.Text, u' '),
        (Token.Name, u'a'),
        (Punctuation, u')'),
        (Token.Text, u'\n'),
    ]
    assert list(lexer.get_tokens(fragment)) == tokens 
Contributor: pygments | Project: pygments | Lines: 15 | Source: test_r.py

Example 7: test_indexing

# Required import:
from pygments.token import Token
def test_indexing(lexer):
    fragment = u'a[1]'
    tokens = [
        (Token.Name, u'a'),
        (Token.Punctuation, u'['),
        (Token.Literal.Number, u'1'),
        (Token.Punctuation, u']'),
        (Token.Text, u'\n'),
    ]
    assert list(lexer.get_tokens(fragment)) == tokens 
Contributor: pygments | Project: pygments | Lines: 12 | Source: test_r.py

Example 8: test_dot_indexing

# Required import:
from pygments.token import Token
def test_dot_indexing(lexer):
    fragment = u'.[1]'
    tokens = [
        (Token.Name, u'.'),
        (Token.Punctuation, u'['),
        (Token.Literal.Number, u'1'),
        (Token.Punctuation, u']'),
        (Token.Text, u'\n'),
    ]
    assert list(lexer.get_tokens(fragment)) == tokens 
Contributor: pygments | Project: pygments | Lines: 12 | Source: test_r.py
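The R tests above can likewise be reproduced directly; pygments exposes its R-language lexer under the name SLexer. A minimal sketch, assuming that is the fixture used by test_r.py:

```python
from pygments.lexers import SLexer  # pygments' R-language lexer
from pygments.token import Token

# Lex the indexing expression; the brackets around the index
# should come out tagged as Punctuation.
tokens = list(SLexer().get_tokens('a[1]'))
brackets = [value for token_type, value in tokens
            if token_type in Token.Punctuation]
print(brackets)
```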


Note: the pygments.token.Punctuation examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs; the snippets are drawn from open-source projects contributed by their respective authors, and copyright remains with them. Refer to each project's License before distributing or using the code; do not reproduce without permission.