

Python tokenization.BasicTokenizer Method Code Examples

This article collects typical usage examples of the tokenization.BasicTokenizer method in Python. If you are wondering what tokenization.BasicTokenizer does, or how to use it in practice, the curated code examples below may help. You can also explore further usage examples from the tokenization module it belongs to.


Below are 7 code examples of the tokenization.BasicTokenizer method, sorted by popularity by default.
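
Before diving into the examples, here is a minimal usage sketch, assuming tokenization.py from Google's BERT reference implementation is on your import path (the input string is taken from the tests below):

import tokenization

# BasicTokenizer performs whitespace cleanup, optional lowercasing with
# accent stripping, and splitting on punctuation and CJK characters.
tokenizer = tokenization.BasicTokenizer(do_lower_case=True)
print(tokenizer.tokenize(u" \tHeLLo!how  \n Are yoU?  "))
# ['hello', '!', 'how', 'are', 'you', '?']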

Example 1: test_basic_tokenizer_lower

# Required import: import tokenization [as alias]
# Or alternatively: from tokenization import BasicTokenizer [as alias]
def test_basic_tokenizer_lower(self):
    tokenizer = tokenization.BasicTokenizer(do_lower_case=True)

    self.assertAllEqual(
        tokenizer.tokenize(u" \tHeLLo!how  \n Are yoU?  "),
        ["hello", "!", "how", "are", "you", "?"])
    self.assertAllEqual(tokenizer.tokenize(u"H\u00E9llo"), ["hello"])
Developer: Socialbird-AILab, Project: BERT-Classification-Tutorial, Lines of code: 9, Source file: tokenization_test.py

Example 2: test_basic_tokenizer_no_lower

# Required import: import tokenization [as alias]
# Or alternatively: from tokenization import BasicTokenizer [as alias]
def test_basic_tokenizer_no_lower(self):
    tokenizer = tokenization.BasicTokenizer(do_lower_case=False)

    self.assertAllEqual(
        tokenizer.tokenize(u" \tHeLLo!how  \n Are yoU?  "),
        ["HeLLo", "!", "how", "Are", "yoU", "?"])
Developer: Socialbird-AILab, Project: BERT-Classification-Tutorial, Lines of code: 8, Source file: tokenization_test.py

Example 3: test_chinese

# Required import: import tokenization [as alias]
# Or alternatively: from tokenization import BasicTokenizer [as alias]
def test_chinese(self):
    tokenizer = tokenization.BasicTokenizer()

    self.assertAllEqual(
        tokenizer.tokenize(u"ah\u535A\u63A8zz"),
        [u"ah", u"\u535A", u"\u63A8", u"zz"]) 
Developer: Nagakiran1, Project: Extending-Google-BERT-as-Question-and-Answering-model-and-Chatbot, Lines of code: 8, Source file: tokenization_test.py

Example 4: test_basic_tokenizer_lower

# Required import: import tokenization [as alias]
# Or alternatively: from tokenization import BasicTokenizer [as alias]
def test_basic_tokenizer_lower(self):
    tokenizer = tokenization.BasicTokenizer(do_lower_case=True)

    self.assertAllEqual(
        tokenizer.tokenize(u" \tHeLLo!how  \n Are yoU?  "),
        ["hello", "!", "how", "are", "you", "?"])
    self.assertAllEqual(tokenizer.tokenize(u"H\u00E9llo"), ["hello"]) 
Developer: Nagakiran1, Project: Extending-Google-BERT-as-Question-and-Answering-model-and-Chatbot, Lines of code: 9, Source file: tokenization_test.py

Example 5: test_basic_tokenizer_no_lower

# Required import: import tokenization [as alias]
# Or alternatively: from tokenization import BasicTokenizer [as alias]
def test_basic_tokenizer_no_lower(self):
    tokenizer = tokenization.BasicTokenizer(do_lower_case=False)

    self.assertAllEqual(
        tokenizer.tokenize(u" \tHeLLo!how  \n Are yoU?  "),
        ["HeLLo", "!", "how", "Are", "yoU", "?"]) 
Developer: Nagakiran1, Project: Extending-Google-BERT-as-Question-and-Answering-model-and-Chatbot, Lines of code: 8, Source file: tokenization_test.py

Example 6: customize_tokenizer

# Required import: import tokenization [as alias]
# Or alternatively: from tokenization import BasicTokenizer [as alias]
def customize_tokenizer(text, do_lower_case=False):
  """Surround every CJK character, punctuation mark, whitespace, and control
  character with spaces, then split: the Chinese parts come out character by
  character while runs of Latin letters stay intact."""
  tokenizer = tokenization.BasicTokenizer(do_lower_case=do_lower_case)
  temp_x = ""
  text = tokenization.convert_to_unicode(text)
  for c in text:
    if (tokenizer._is_chinese_char(ord(c)) or tokenization._is_punctuation(c)
        or tokenization._is_whitespace(c) or tokenization._is_control(c)):
      temp_x += " " + c + " "
    else:
      temp_x += c
  if do_lower_case:
    temp_x = temp_x.lower()
  return temp_x.split()

Developer: ymcui, Project: Cross-Lingual-MRC, Lines of code: 16, Source file: run_clmrc.py
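
A hypothetical usage sketch for customize_tokenizer (the mixed Chinese/English input is illustrative, not from the original project): each CJK character and punctuation mark becomes its own token, while Latin letter runs are kept together.

# Assumes tokenization.py is importable, as above; demo input is made up.
print(customize_tokenizer(u"ah\u535A\u63A8zz, OK?"))
# [u'ah', u'\u535A', u'\u63A8', u'zz', u',', u'OK', u'?']

Unlike BasicTokenizer.tokenize, this helper does not strip accents or drop control characters; it only inserts spaces around the selected characters and splits on whitespace.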

Example 7: test_chinese

# Required import: import tokenization [as alias]
# Or alternatively: from tokenization import BasicTokenizer [as alias]
def test_chinese(self):
    tokenizer = tokenization.BasicTokenizer()

    self.assertAllEqual(
        tokenizer.tokenize(u"ah\u535A\u63A8zz"),
        [u"ah", u"\u535A", u"\u63A8", u"zz"])
Developer: guoyaohua, Project: BERT-Chinese-Annotation, Lines of code: 8, Source file: tokenization_test.py


Note: the tokenization.BasicTokenizer method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their developers; copyright of the source code belongs to the original authors, and distribution and use should follow the corresponding project's license. Do not reproduce without permission.