This article collects typical usage examples of the Lexer.get_tokens method from Python's pygments.lexer module. If you are wondering how exactly Python's Lexer.get_tokens is used, or would like to see working examples of it, the selected method examples here may help. You can also read further about its containing class, pygments.lexer.Lexer.
Below, 2 code examples of Lexer.get_tokens are shown, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Python code examples.
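Before the two class-specific examples, here is a minimal sketch of the basic Lexer.get_tokens contract: it takes source text and yields (token_type, value) pairs. PythonLexer is used here purely as an illustrative concrete subclass; it is not part of the examples below.

# Minimal usage sketch: get_tokens on a concrete Lexer subclass
from pygments.lexers import PythonLexer

lexer = PythonLexer()
# get_tokens yields (token_type, value) pairs covering the whole input
for token_type, value in lexer.get_tokens('print("hello")\n'):
    print(token_type, repr(value))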
Example 1: get_tokens
# Required module import: from pygments.lexer import Lexer [as alias]
# Or: from pygments.lexer.Lexer import get_tokens [as alias]
# The snippet below also relies on BytesIO (from io import BytesIO) and, in this
# Python 2/3-compat variant, on text_type (from pygments.util in older Pygments releases).
def get_tokens(self, text):
    if isinstance(text, text_type):
        # raw token stream never has any non-ASCII characters
        text = text.encode('ascii')
    if self.compress == 'gz':
        import gzip
        gzipfile = gzip.GzipFile('', 'rb', 9, BytesIO(text))
        text = gzipfile.read()
    elif self.compress == 'bz2':
        import bz2
        text = bz2.decompress(text)
    # do not call Lexer.get_tokens() because we do not want Unicode
    # decoding to occur, and stripping is not optional.
    text = text.strip(b'\n') + b'\n'
    for i, t, v in self.get_tokens_unprocessed(text):
        yield t, v
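The override above matches the one found on Pygments' RawTokenLexer (pygments.lexers.special), which re-lexes the raw byte dump produced by RawTokenFormatter rather than ordinary source text; that is why it avoids Unicode decoding and works on bytes throughout. Assuming that class, a round-trip usage sketch:

# Usage sketch (assumes the method above belongs to pygments.lexers.special.RawTokenLexer)
from pygments import highlight
from pygments.formatters import RawTokenFormatter
from pygments.lexers import PythonLexer
from pygments.lexers.special import RawTokenLexer

# RawTokenFormatter dumps the token stream as bytes, one "TokenType<TAB>repr(value)" per line
raw = highlight('x = 1\n', PythonLexer(), RawTokenFormatter())

# get_tokens above re-lexes that dump and replays the original tokens
for token_type, value in RawTokenLexer().get_tokens(raw):
    print(token_type, repr(value))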
Example 2: get_tokens
# Required module import: from pygments.lexer import Lexer [as alias]
# Or: from pygments.lexer.Lexer import get_tokens [as alias]
# The snippet below also relies on BytesIO (from io import BytesIO).
def get_tokens(self, text):
    if isinstance(text, str):
        # raw token stream never has any non-ASCII characters
        text = text.encode('ascii')
    if self.compress == 'gz':
        import gzip
        gzipfile = gzip.GzipFile('', 'rb', 9, BytesIO(text))
        text = gzipfile.read()
    elif self.compress == 'bz2':
        import bz2
        text = bz2.decompress(text)
    # do not call Lexer.get_tokens() because we do not want Unicode
    # decoding to occur, and stripping is not optional.
    text = text.strip(b'\n') + b'\n'
    for i, t, v in self.get_tokens_unprocessed(text):
        yield t, v
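Example 2 is the same override written for Python 3 only: the isinstance check is against str instead of the compatibility alias text_type; everything else is identical. The compress branches can be exercised with a compressed dump; a sketch of the 'gz' path, again assuming the RawTokenLexer/RawTokenFormatter pairing:

# Sketch of the compress='gz' branch (same RawTokenLexer assumption as above)
from pygments import highlight
from pygments.formatters import RawTokenFormatter
from pygments.lexers import PythonLexer
from pygments.lexers.special import RawTokenLexer

# Produce a gzip-compressed raw token dump ...
raw_gz = highlight('x = 1\n', PythonLexer(), RawTokenFormatter(compress='gz'))

# ... which the 'gz' branch of get_tokens decompresses before re-lexing
for token_type, value in RawTokenLexer(compress='gz').get_tokens(raw_gz):
    print(token_type, repr(value))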