Python tokenization.printable_text Method Code Examples

This article collects and summarizes typical usage examples of the tokenization.printable_text method in Python. If you are wondering how to call tokenization.printable_text in practice, or what real-world uses of it look like, the curated code examples below should help. You can also browse further usage examples from the tokenization module.


The following presents 8 code examples of the tokenization.printable_text method, sorted by popularity by default.
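For context, printable_text in the BERT repository's tokenization module returns text in a form that is safe to print or log. The sketch below approximates its Python 3 behavior and is provided for orientation only; refer to tokenization.py in the BERT repository for the authoritative implementation.

def printable_text(text):
    """Return `text` as a plain str suitable for printing or logging.

    Minimal sketch of the Python 3 branch of the BERT helper:
    str values pass through unchanged, bytes are decoded as UTF-8.
    """
    if isinstance(text, str):
        return text
    if isinstance(text, bytes):
        return text.decode("utf-8", "ignore")
    raise ValueError("Unsupported string type: %s" % type(text))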

Example 1: __repr__

# Required module: import tokenization [as alias]
# Or: from tokenization import printable_text [as alias]
def __repr__(self):
        s = ""
        s += "qas_id: %s" % (tokenization.printable_text(self.qas_id))
        s += ", question_text: %s" % (
            tokenization.printable_text(self.question_text))
        s += ", doc_tokens: [%s]" % (" ".join(self.doc_tokens))
        if self.start_position:
            s += ", start_position: %d" % (self.start_position)
        if self.start_position:
            s += ", end_position: %d" % (self.end_position)
        return s 
Developer: eva-n27, Project: BERT-for-Chinese-Question-Answering, Lines of code: 13, Source file: run_squad.py
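A rough usage sketch of why this __repr__ routes qas_id and question_text through printable_text: the values below are made up, and tokenization.py from the BERT repository is assumed to be importable. (The repeated `if self.start_position:` guard above appears the same way in the upstream BERT run_squad.py.)

import tokenization

# A qas_id read from a SQuAD JSON file may arrive as bytes rather than str.
qas_id = b"56be4db0acb8001400a502ec"
question_text = "Who wrote the article?"

# printable_text normalizes both to plain str before "%s" interpolation,
# so the repr never prints a b'...' byte literal.
print("qas_id: %s" % tokenization.printable_text(qas_id))
print("question_text: %s" % tokenization.printable_text(question_text))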

Example 2: __repr__

# Required module: import tokenization [as alias]
# Or: from tokenization import printable_text [as alias]
def __repr__(self):
    s = ""
    s += "qas_id: %s" % (tokenization.printable_text(self.qas_id))
    s += ", question_text: %s" % (
        tokenization.printable_text(self.question_text))
    s += ", doc_tokens: [%s]" % (" ".join(self.doc_tokens))
    if self.start_position:
      s += ", start_position: %d" % (self.start_position)
    if self.start_position:
      s += ", end_position: %d" % (self.end_position)
    if self.start_position:
      s += ", is_impossible: %r" % (self.is_impossible)
    return s 
Developer: Nagakiran1, Project: Extending-Google-BERT-as-Question-and-Answering-model-and-Chatbot, Lines of code: 15, Source file: run_squad.py

Example 3: __str__

# Required module: import tokenization [as alias]
# Or: from tokenization import printable_text [as alias]
def __str__(self):
    s = ""
    s += "tokens: %s\n" % (" ".join(
        [tokenization.printable_text(x) for x in self.tokens]))
    s += "segment_ids: %s\n" % (" ".join([str(x) for x in self.segment_ids]))
    s += "is_random_next: %s\n" % self.is_random_next
    s += "masked_lm_positions: %s\n" % (" ".join(
        [str(x) for x in self.masked_lm_positions]))
    s += "masked_lm_labels: %s\n" % (" ".join(
        [tokenization.printable_text(x) for x in self.masked_lm_labels]))
    s += "\n"
    return s 
Developer: Nagakiran1, Project: Extending-Google-BERT-as-Question-and-Answering-model-and-Chatbot, Lines of code: 14, Source file: create_pretraining_data.py
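To see the kind of multi-line summary this __str__ produces, here is a minimal stand-in. The TrainingInstance class itself is not reproduced on this page, the field values below are made up, and tokenization.py from the BERT repository is assumed to be importable.

import tokenization

class TrainingInstance:
    """Hypothetical minimal container with the fields used by __str__ above."""

    def __init__(self, tokens, segment_ids, is_random_next,
                 masked_lm_positions, masked_lm_labels):
        self.tokens = tokens
        self.segment_ids = segment_ids
        self.is_random_next = is_random_next
        self.masked_lm_positions = masked_lm_positions
        self.masked_lm_labels = masked_lm_labels

    def __str__(self):
        s = "tokens: %s\n" % " ".join(
            tokenization.printable_text(x) for x in self.tokens)
        s += "segment_ids: %s\n" % " ".join(str(x) for x in self.segment_ids)
        s += "is_random_next: %s\n" % self.is_random_next
        s += "masked_lm_positions: %s\n" % " ".join(
            str(x) for x in self.masked_lm_positions)
        s += "masked_lm_labels: %s\n" % " ".join(
            tokenization.printable_text(x) for x in self.masked_lm_labels)
        return s + "\n"

# Made-up values, just to show the output format.
instance = TrainingInstance(
    tokens=["[CLS]", "the", "[MASK]", "sat", "[SEP]"],
    segment_ids=[0, 0, 0, 0, 0],
    is_random_next=False,
    masked_lm_positions=[2],
    masked_lm_labels=["cat"])
print(instance)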

Example 4: __repr__

# Required module: import tokenization [as alias]
# Or: from tokenization import printable_text [as alias]
def __repr__(self):
    s = ""
    s += "qas_id: %s" % (tokenization.printable_text(self.qas_id))
    s += ", question_text: %s" % (
        tokenization.printable_text(self.question_text))
    s += ", doc_tokens: [%s]" % (" ".join(self.doc_tokens))
    if self.start_position:
      s += ", start_position: %d" % (self.start_position)
    if self.start_position:
      s += ", end_position: %d" % (self.end_position)
    return s 
Developer: ymcui, Project: Cross-Lingual-MRC, Lines of code: 13, Source file: run_clmrc.py

Example 5: __str__

# Required module: import tokenization [as alias]
# Or: from tokenization import printable_text [as alias]
def __str__(self):
		s = ""
		s += "tokens: %s\n" % (" ".join(
				[tokenization.printable_text(x) for x in self.tokens]))
		s += "segment_ids: %s\n" % (" ".join([str(x) for x in self.segment_ids]))
		s += "is_random_next: %s\n" % self.is_random_next
		s += "masked_lm_positions: %s\n" % (" ".join(
				[str(x) for x in self.masked_lm_positions]))
		s += "masked_lm_labels: %s\n" % (" ".join(
				[tokenization.printable_text(x) for x in self.masked_lm_labels]))
		s += "\n"
		return s 
Developer: yyht, Project: BERT, Lines of code: 14, Source file: create_pretraining_data.py

Example 6: __repr__

# Required module: import tokenization [as alias]
# Or: from tokenization import printable_text [as alias]
def __repr__(self):
        s = ""
        s += "qas_id: %s" % (tokenization.printable_text(self.qas_id))
        s += ", question_text: %s" % (
            tokenization.printable_text(self.question_text))
        s += ", doc_tokens: [%s]" % (" ".join(self.doc_tokens))
        if self.start_position:
            s += ", start_position: %d" % (self.start_position)
        if self.start_position:
            s += ", end_position: %d" % (self.end_position)
        if self.start_position:
            s += ", is_impossible: %r" % (self.is_impossible)
        return s 
Developer: husseinmozannar, Project: SOQAL, Lines of code: 15, Source file: Bert_model.py

Example 7: __str__

# Required module: import tokenization [as alias]
# Or: from tokenization import printable_text [as alias]
def __str__(self):
        s = ""
        s += "tokens: %s\n" % (" ".join(
            [tokenization.printable_text(x) for x in self.tokens]))
        s += "segment_ids: %s\n" % (" ".join(
            [str(x) for x in self.segment_ids]))
        s += "is_random_next: %s\n" % self.is_random_next
        s += "masked_lm_positions: %s\n" % (" ".join(
            [str(x) for x in self.masked_lm_positions]))
        s += "masked_lm_labels: %s\n" % (" ".join(
            [tokenization.printable_text(x) for x in self.masked_lm_labels]))
        s += "\n"
        return s 
Developer: guoyaohua, Project: BERT-Chinese-Annotation, Lines of code: 15, Source file: create_pretraining_data.py

Example 8: __repr__

# Required module: import tokenization [as alias]
# Or: from tokenization import printable_text [as alias]
# Also needed: import json (used by json.dumps below)
def __repr__(self):
        s = ""
        s += "qas_id: %s" % (tokenization.printable_text(self.qas_id))
        s += ", question_text: %s" % (
            tokenization.printable_text(self.question_text))
        s += ", doc_tokens: [%s]" % (" ".join(self.doc_tokens))
        if self.start_position:
            s += ", start_position: %d" % (self.start_position)
        if self.start_position:
            s += ", end_position: %d" % (self.end_position)
        if self.history_answer_marker:
            s += ', history_answer_marker: {}'.format(json.dumps(self.history_answer_marker))
        if self.metadata:
            s += ', metadata: ' + json.dumps(self.metadata)
        return s 
Developer: prdwb, Project: bert_hae, Lines of code: 17, Source file: cqa_supports.py


Note: The tokenization.printable_text method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by their respective authors, and copyright in the source code remains with those authors; consult each project's license before distributing or reusing the code. Do not reproduce this article without permission.