This article collects typical usage examples of the `tokenization._is_control` method in Python. If you are wondering how `tokenization._is_control` works or how to call it, the hand-picked code examples below may help. You can also explore the other methods of the `tokenization` module.
Several code examples of `tokenization._is_control` are shown below, ordered by popularity.
Example 1: test_is_control
# Required import: import tokenization [as alias]
# Or: from tokenization import _is_control [as alias]
def test_is_control(self):
    self.assertTrue(tokenization._is_control(u"\u0005"))
    self.assertFalse(tokenization._is_control(u"A"))
    self.assertFalse(tokenization._is_control(u" "))
    self.assertFalse(tokenization._is_control(u"\t"))
    self.assertFalse(tokenization._is_control(u"\r"))
Example 2: test_is_control
# Required import: import tokenization [as alias]
# Or: from tokenization import _is_control [as alias]
def test_is_control(self):
    self.assertTrue(tokenization._is_control(u"\u0005"))
    self.assertFalse(tokenization._is_control(u"A"))
    self.assertFalse(tokenization._is_control(u" "))
    self.assertFalse(tokenization._is_control(u"\t"))
    self.assertFalse(tokenization._is_control(u"\r"))
    self.assertFalse(tokenization._is_control(u"\U0001F4A9"))
Author: Nagakiran1, Project: Extending-Google-BERT-as-Question-and-Answering-model-and-Chatbot, Lines: 10, Source file: tokenization_test.py
Example 3: customize_tokenizer
# Required import: import tokenization [as alias]
# Or: from tokenization import _is_control [as alias]
def customize_tokenizer(text, do_lower_case=False):
    tokenizer = tokenization.BasicTokenizer(do_lower_case=do_lower_case)
    temp_x = ""
    text = tokenization.convert_to_unicode(text)
    for c in text:
        if (tokenizer._is_chinese_char(ord(c))
                or tokenization._is_punctuation(c)
                or tokenization._is_whitespace(c)
                or tokenization._is_control(c)):
            temp_x += " " + c + " "
        else:
            temp_x += c
    if do_lower_case:
        temp_x = temp_x.lower()
    return temp_x.split()
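The helper in Example 3 depends on the BERT `tokenization` module. The same idea can be sketched self-contained with only the standard library (a hypothetical reimplementation using `unicodedata`, not the exact BERT predicates): pad CJK characters, punctuation, whitespace and control characters with spaces, then split, so each such character becomes its own token while runs of Latin letters stay together.

```python
import unicodedata

def split_mixed_text(text, do_lower_case=False):
    """Split text so CJK characters, punctuation, whitespace and
    control characters each become separate tokens."""
    out = []
    for c in text:
        cp = ord(c)
        # Only the basic CJK Unified Ideographs block is checked here;
        # BERT's _is_chinese_char covers several more ranges.
        is_cjk = 0x4E00 <= cp <= 0x9FFF
        is_punct = unicodedata.category(c).startswith("P")
        is_space_or_ctrl = c.isspace() or unicodedata.category(c) in ("Cc", "Cf")
        if is_cjk or is_punct or is_space_or_ctrl:
            out.append(" " + c + " ")
        else:
            out.append(c)
    joined = "".join(out)
    if do_lower_case:
        joined = joined.lower()
    return joined.split()

print(split_mixed_text("Hello, 世界!"))  # ['Hello', ',', '世', '界', '!']
```

Because `split()` collapses runs of whitespace, the extra padding spaces never produce empty tokens.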