

Python tokenization.BasicTokenizer Method Code Examples

This article collects typical usage examples of the Python method tokenization.BasicTokenizer. If you are unsure how to call tokenization.BasicTokenizer in practice, the curated code examples below should help; you can also explore other usage examples from the tokenization module.


The following presents 7 code examples of the tokenization.BasicTokenizer method, sorted by popularity by default.
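Before the examples, here is a minimal usage sketch. It assumes the tokenization.py module from Google's BERT repository is importable (as it is in all of the projects below); the constructor argument and the expected output match the test cases that follow.

# Minimal sketch: whitespace/punctuation tokenization with BERT's BasicTokenizer.
# Assumes tokenization.py from the BERT repository is on the PYTHONPATH.
import tokenization

tokenizer = tokenization.BasicTokenizer(do_lower_case=True)

# Lowercases, strips accents, and splits on whitespace and punctuation.
print(tokenizer.tokenize(u" \tHeLLo!how  \n Are yoU?  "))
# Expected output: ['hello', '!', 'how', 'are', 'you', '?']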

Example 1: test_basic_tokenizer_lower

# Required import: import tokenization [as alias]
# Or: from tokenization import BasicTokenizer [as alias]
def test_basic_tokenizer_lower(self):
    tokenizer = tokenization.BasicTokenizer(do_lower_case=True)

    self.assertAllEqual(
        tokenizer.tokenize(u" \tHeLLo!how  \n Are yoU?  "),
        ["hello", "!", "how", "are", "you", "?"])
    self.assertAllEqual(tokenizer.tokenize(u"H\u00E9llo"), ["hello"])
Developer ID: Socialbird-AILab, Project: BERT-Classification-Tutorial, Lines of code: 9, Source file: tokenization_test.py

Example 2: test_basic_tokenizer_no_lower

# Required import: import tokenization [as alias]
# Or: from tokenization import BasicTokenizer [as alias]
def test_basic_tokenizer_no_lower(self):
    tokenizer = tokenization.BasicTokenizer(do_lower_case=False)

    self.assertAllEqual(
        tokenizer.tokenize(u" \tHeLLo!how  \n Are yoU?  "),
        ["HeLLo", "!", "how", "Are", "yoU", "?"])
Developer ID: Socialbird-AILab, Project: BERT-Classification-Tutorial, Lines of code: 8, Source file: tokenization_test.py

Example 3: test_chinese

# Required import: import tokenization [as alias]
# Or: from tokenization import BasicTokenizer [as alias]
def test_chinese(self):
    tokenizer = tokenization.BasicTokenizer()

    self.assertAllEqual(
        tokenizer.tokenize(u"ah\u535A\u63A8zz"),
        [u"ah", u"\u535A", u"\u63A8", u"zz"]) 
Developer ID: Nagakiran1, Project: Extending-Google-BERT-as-Question-and-Answering-model-and-Chatbot, Lines of code: 8, Source file: tokenization_test.py
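This behavior comes from BasicTokenizer's CJK handling: before the whitespace split, every character in the CJK Unicode ranges is surrounded with spaces, so each Chinese character becomes its own token while Latin-script segments such as "ah" and "zz" stay intact.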

Example 4: test_basic_tokenizer_lower

# Required import: import tokenization [as alias]
# Or: from tokenization import BasicTokenizer [as alias]
def test_basic_tokenizer_lower(self):
    tokenizer = tokenization.BasicTokenizer(do_lower_case=True)

    self.assertAllEqual(
        tokenizer.tokenize(u" \tHeLLo!how  \n Are yoU?  "),
        ["hello", "!", "how", "are", "you", "?"])
    self.assertAllEqual(tokenizer.tokenize(u"H\u00E9llo"), ["hello"]) 
Developer ID: Nagakiran1, Project: Extending-Google-BERT-as-Question-and-Answering-model-and-Chatbot, Lines of code: 9, Source file: tokenization_test.py

Example 5: test_basic_tokenizer_no_lower

# Required import: import tokenization [as alias]
# Or: from tokenization import BasicTokenizer [as alias]
def test_basic_tokenizer_no_lower(self):
    tokenizer = tokenization.BasicTokenizer(do_lower_case=False)

    self.assertAllEqual(
        tokenizer.tokenize(u" \tHeLLo!how  \n Are yoU?  "),
        ["HeLLo", "!", "how", "Are", "yoU", "?"]) 
Developer ID: Nagakiran1, Project: Extending-Google-BERT-as-Question-and-Answering-model-and-Chatbot, Lines of code: 8, Source file: tokenization_test.py

Example 6: customize_tokenizer

# Required import: import tokenization [as alias]
# Or: from tokenization import BasicTokenizer [as alias]
def customize_tokenizer(text, do_lower_case=False):
  # Surround every CJK character, punctuation mark, whitespace, and control
  # character with spaces, then split the result on whitespace.
  tokenizer = tokenization.BasicTokenizer(do_lower_case=do_lower_case)
  temp_x = ""
  text = tokenization.convert_to_unicode(text)
  for c in text:
    if (tokenizer._is_chinese_char(ord(c)) or
        tokenization._is_punctuation(c) or
        tokenization._is_whitespace(c) or
        tokenization._is_control(c)):
      temp_x += " " + c + " "
    else:
      temp_x += c
  if do_lower_case:
    temp_x = temp_x.lower()
  return temp_x.split()

Developer ID: ymcui, Project: Cross-Lingual-MRC, Lines of code: 16, Source file: run_clmrc.py
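As a quick illustration of the function above (hypothetical input string, not taken from the original project), mixed Chinese/English text comes back as a list in which every CJK character and punctuation mark is an isolated token:

# Hypothetical usage of customize_tokenizer as defined above.
print(customize_tokenizer(u"ah博推zz, ok", do_lower_case=True))
# Expected output: ['ah', '博', '推', 'zz', ',', 'ok']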

Example 7: test_chinese

# Required import: import tokenization [as alias]
# Or: from tokenization import BasicTokenizer [as alias]
def test_chinese(self):
    tokenizer = tokenization.BasicTokenizer()

    self.assertAllEqual(
        tokenizer.tokenize(u"ah\u535A\u63A8zz"),
        [u"ah", u"\u535A", u"\u63A8", u"zz"])
Developer ID: guoyaohua, Project: BERT-Chinese-Annotation, Lines of code: 8, Source file: tokenization_test.py


Note: The tokenization.BasicTokenizer examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective authors, and copyright in the source code remains with those authors. Consult each project's license before distributing or reusing the code, and do not republish this material without permission.