

Python tokenization.WordpieceTokenizer Method Code Examples

This article collects typical usage examples of the tokenization.WordpieceTokenizer method in Python. If you are unsure how tokenization.WordpieceTokenizer is used in practice, the curated code examples below may help. You can also explore further usage examples from the tokenization module.


Three code examples of the tokenization.WordpieceTokenizer method are shown below, ordered by popularity.

Example 1: test_wordpiece_tokenizer

# Required import: import tokenization [as alias]
# Or: from tokenization import WordpieceTokenizer [as alias]
def test_wordpiece_tokenizer(self):
        vocab_tokens = [
            "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn",
            "##ing"
        ]

        vocab = {}
        for (i, token) in enumerate(vocab_tokens):
            vocab[token] = i
        tokenizer = tokenization.WordpieceTokenizer(vocab=vocab)

        self.assertAllEqual(tokenizer.tokenize(""), [])

        self.assertAllEqual(
            tokenizer.tokenize("unwanted running"),
            ["un", "##want", "##ed", "runn", "##ing"])

        self.assertAllEqual(
            tokenizer.tokenize("unwantedX running"), ["[UNK]", "runn", "##ing"]) 
Developer: Socialbird-AILab, Project: BERT-Classification-Tutorial, Lines: 21, Source: tokenization_test.py

Example 2: test_wordpiece_tokenizer

# Required import: import tokenization [as alias]
# Or: from tokenization import WordpieceTokenizer [as alias]
def test_wordpiece_tokenizer(self):
    vocab_tokens = [
        "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn",
        "##ing"
    ]

    vocab = {}
    for (i, token) in enumerate(vocab_tokens):
      vocab[token] = i
    tokenizer = tokenization.WordpieceTokenizer(vocab=vocab)

    self.assertAllEqual(tokenizer.tokenize(""), [])

    self.assertAllEqual(
        tokenizer.tokenize("unwanted running"),
        ["un", "##want", "##ed", "runn", "##ing"])

    self.assertAllEqual(
        tokenizer.tokenize("unwantedX running"), ["[UNK]", "runn", "##ing"]) 
Developer: Nagakiran1, Project: Extending-Google-BERT-as-Question-and-Answering-model-and-Chatbot, Lines: 21, Source: tokenization_test.py

Example 3: test_wordpiece_tokenizer

# Required import: import tokenization [as alias]
# Or: from tokenization import WordpieceTokenizer [as alias]
def test_wordpiece_tokenizer(self):
        vocab_tokens = [
            "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un",
            "runn", "##ing"
        ]

        vocab = {}
        for (i, token) in enumerate(vocab_tokens):
            vocab[token] = i
        tokenizer = tokenization.WordpieceTokenizer(vocab=vocab)

        self.assertAllEqual(tokenizer.tokenize(""), [])

        self.assertAllEqual(
            tokenizer.tokenize("unwanted running"),
            ["un", "##want", "##ed", "runn", "##ing"])

        self.assertAllEqual(
            tokenizer.tokenize("unwantedX running"),
            ["[UNK]", "runn", "##ing"]) 
Developer: guoyaohua, Project: BERT-Chinese-Annotation, Lines: 22, Source: tokenization_test.py
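All three tests exercise the same behavior: greedy longest-match-first segmentation against a fixed vocabulary, where continuation pieces carry a "##" prefix and a word with no valid segmentation collapses to "[UNK]". For readers without BERT's tokenization module on hand, that behavior can be sketched as follows. This is a simplified, assumed implementation that mirrors the class and parameter names used in the tests above, not the original BERT source:

```python
# Minimal WordPiece tokenizer sketch (assumed implementation): for each
# whitespace-split word, repeatedly take the longest vocabulary match,
# prefixing "##" for pieces that do not start the word.
class WordpieceTokenizer:
    def __init__(self, vocab, unk_token="[UNK]", max_chars_per_word=100):
        self.vocab = vocab                # token -> id mapping; only keys are used
        self.unk_token = unk_token
        self.max_chars_per_word = max_chars_per_word

    def tokenize(self, text):
        output = []
        for word in text.split():
            if len(word) > self.max_chars_per_word:
                output.append(self.unk_token)
                continue
            start, pieces, is_bad = 0, [], False
            while start < len(word):
                # Shrink the window from the right until a vocab entry matches.
                end, cur_piece = len(word), None
                while start < end:
                    piece = word[start:end]
                    if start > 0:
                        piece = "##" + piece  # continuation pieces are ##-prefixed
                    if piece in self.vocab:
                        cur_piece = piece
                        break
                    end -= 1
                if cur_piece is None:
                    is_bad = True  # no segmentation exists for this word
                    break
                pieces.append(cur_piece)
                start = end
            output.extend([self.unk_token] if is_bad else pieces)
        return output
```

With the vocabulary from the tests, this sketch reproduces their expected outputs: "unwanted running" segments to ["un", "##want", "##ed", "runn", "##ing"], while "unwantedX" has no valid segmentation (no "##X"-style piece exists) and becomes "[UNK]".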


Note: The tokenization.WordpieceTokenizer examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are drawn from open-source projects, and copyright remains with the original authors; consult the corresponding project's license before redistributing or using them. Do not republish without permission.