

Python util.batch_tensor_dicts method code examples

This article collects typical usage examples of the Python method allennlp.nn.util.batch_tensor_dicts. If you are wondering what util.batch_tensor_dicts does, or how to call it in practice, the curated examples below may help. You can also explore further usage examples from its containing module, allennlp.nn.util.


Below are 5 code examples of util.batch_tensor_dicts, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Python code examples.
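Before the examples, it helps to see what batch_tensor_dicts actually computes. The sketch below is a minimal, hypothetical re-implementation (not AllenNLP's actual code, which also handles a trailing-dimension option): given a list of dicts that all share the same keys, it stacks the tensors under each key into one batched tensor.

```python
import torch

def batch_tensor_dicts_sketch(tensor_dicts):
    # Minimal sketch of allennlp.nn.util.batch_tensor_dicts:
    # stack the tensors found under each shared key along a new batch dim.
    batched = {}
    for key in tensor_dicts[0]:
        batched[key] = torch.stack([d[key] for d in tensor_dicts])
    return batched

# Two "instances", each a dict of per-instance tensors:
instance_1 = {"tokens": torch.tensor([1, 2, 3]), "mask": torch.tensor([1, 1, 0])}
instance_2 = {"tokens": torch.tensor([4, 5, 6]), "mask": torch.tensor([1, 1, 1])}
batch = batch_tensor_dicts_sketch([instance_1, instance_2])
print(batch["tokens"].shape)  # torch.Size([2, 3])
```

Every example below delegates to the real batch_tensor_dicts to do exactly this kind of per-key stacking.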

Example 1: batch_tensors

# Required import: from allennlp.nn import util [as alias]
# Alternatively: from allennlp.nn.util import batch_tensor_dicts [as alias]
def batch_tensors(self, tensor_list: List[TextFieldTensors]) -> TextFieldTensors:
        # This is creating a dict of {token_indexer_name: {token_indexer_outputs: batched_tensor}}
        # for each token indexer used to index this field.
        indexer_lists: Dict[str, List[Dict[str, torch.Tensor]]] = defaultdict(list)
        for tensor_dict in tensor_list:
            for indexer_name, indexer_output in tensor_dict.items():
                indexer_lists[indexer_name].append(indexer_output)
        batched_tensors = {
            # NOTE(mattg): if an indexer has its own nested structure, rather than one tensor per
            # argument, then this will break.  If that ever happens, we should move this to an
            # `indexer.batch_tensors` method, with this logic as the default implementation in the
            # base class.
            indexer_name: util.batch_tensor_dicts(indexer_outputs)
            for indexer_name, indexer_outputs in indexer_lists.items()
        }
        return batched_tensors 
Author: allenai | Project: allennlp | Lines: 18 | Source: text_field.py

Example 2: batch_tensors

# Required import: from allennlp.nn import util [as alias]
# Alternatively: from allennlp.nn.util import batch_tensor_dicts [as alias]
def batch_tensors(self, tensor_list: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
        # pylint: disable=no-self-use
        # This is creating a dict of {token_indexer_key: batch_tensor} for each token indexer used
        # to index this field.
        return util.batch_tensor_dicts(tensor_list) 
Author: yuweijiang | Project: HGL-pytorch | Lines: 7 | Source: bert_field.py

Example 3: batch_tensors

# Required import: from allennlp.nn import util [as alias]
# Alternatively: from allennlp.nn.util import batch_tensor_dicts [as alias]
def batch_tensors(self, tensor_list: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
        # pylint: disable=no-self-use
        # This is creating a dict of {token_indexer_key: batch_tensor} for each token indexer used
        # to index this field.
        return util.batch_tensor_dicts(tensor_list) 
Author: plasticityai | Project: magnitude | Lines: 7 | Source: text_field.py

Example 4: batch_tensors

# Required import: from allennlp.nn import util as nn_util
# Alternatively: from allennlp.nn.util import batch_tensor_dicts [as alias]
def batch_tensors(self, tensor_list: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
        # pylint: disable=no-self-use
        batched_text = nn_util.batch_tensor_dicts(tensor[u'text'] for tensor in tensor_list)  # type: ignore
        batched_linking = torch.stack([tensor[u'linking'] for tensor in tensor_list])
        return {u'text': batched_text, u'linking': batched_linking}

    # Below here we have feature extractor functions.  To keep a consistent API for easy logic
    # above, some of these functions have unused arguments.
    # pylint: disable=unused-argument,no-self-use

    # These feature extractors are generally pretty specific to the logical form language and
    # problem setting in WikiTableQuestions.  This whole notion of feature extraction should
    # eventually be made more general (or just removed, if we can replace it with CNN features...).
    # For the feature functions used in the original parser written in PNP, see here:
    # https://github.com/allenai/pnp/blob/wikitables2/src/main/scala/org/allenai/wikitables/SemanticParserFeatureGenerator.scala

    # One notable difference between how the features work here and how they worked in PNP is that
    # we're using the table text when computing string matches, while PNP used the _entity name_.
    # It turns out that the entity name is derived from the table text, so this should be roughly
    # equivalent, except in the case of some numbers.  If there are cells with different text that
    # normalize to the same name, you could get `_2` or similar appended to the name, so the way we
    # do it here should just be better.  But it's a possible minor source of variation from the
    # original parser.

    # Another difference between these features and the PNP features is that the span overlap used
    # a weighting scheme to downweight matches on frequent words (like "the"), and the lemma
    # overlap feature value was calculated a little differently.  I'm guessing that doesn't make a
    # huge difference... 
Author: plasticityai | Project: magnitude | Lines: 30 | Source: knowledge_graph_field.py

Example 5: batch_tensors

# Required import: from allennlp.nn import util as nn_util
# Alternatively: from allennlp.nn.util import batch_tensor_dicts [as alias]
def batch_tensors(self, tensor_list: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
        # pylint: disable=no-self-use
        batched_text = nn_util.batch_tensor_dicts(tensor['text'] for tensor in tensor_list)  # type: ignore
        batched_linking = torch.stack([tensor['linking'] for tensor in tensor_list])
        return {'text': batched_text, 'linking': batched_linking}

    # Below here we have feature extractor functions.  To keep a consistent API for easy logic
    # above, some of these functions have unused arguments.
    # pylint: disable=unused-argument,no-self-use

    # These feature extractors are generally pretty specific to the logical form language and
    # problem setting in WikiTableQuestions.  This whole notion of feature extraction should
    # eventually be made more general (or just removed, if we can replace it with CNN features...).
    # For the feature functions used in the original parser written in PNP, see here:
    # https://github.com/allenai/pnp/blob/wikitables2/src/main/scala/org/allenai/wikitables/SemanticParserFeatureGenerator.scala

    # One notable difference between how the features work here and how they worked in PNP is that
    # we're using the table text when computing string matches, while PNP used the _entity name_.
    # It turns out that the entity name is derived from the table text, so this should be roughly
    # equivalent, except in the case of some numbers.  If there are cells with different text that
    # normalize to the same name, you could get `_2` or similar appended to the name, so the way we
    # do it here should just be better.  But it's a possible minor source of variation from the
    # original parser.

    # Another difference between these features and the PNP features is that the span overlap used
    # a weighting scheme to downweight matches on frequent words (like "the"), and the lemma
    # overlap feature value was calculated a little differently.  I'm guessing that doesn't make a
    # huge difference... 
Author: jcyk | Project: gtos | Lines: 30 | Source: knowledge_graph_field.py
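Examples 4 and 5 batch a mixed structure: the 'text' entry of each instance is itself a dict of tensors (handled by batch_tensor_dicts), while the 'linking' entry is a single tensor (handled by torch.stack). The self-contained sketch below mirrors that pattern; the helper stands in for nn_util.batch_tensor_dicts, and all names and shapes are illustrative.

```python
import torch

def batch_tensor_dicts(tensor_dicts):
    # Stand-in for allennlp.nn.util.batch_tensor_dicts.
    tensor_dicts = list(tensor_dicts)  # accept generators, as the examples pass one
    return {key: torch.stack([d[key] for d in tensor_dicts])
            for key in tensor_dicts[0]}

def batch_tensors(tensor_list):
    # Mirrors the batch_tensors method in examples 4 and 5: dict-valued
    # 'text' entries are batched per key, tensor-valued 'linking' entries
    # are stacked directly.
    batched_text = batch_tensor_dicts(tensor["text"] for tensor in tensor_list)
    batched_linking = torch.stack([tensor["linking"] for tensor in tensor_list])
    return {"text": batched_text, "linking": batched_linking}

# One instance: 4 tokens, plus a (4, 10) linking-feature tensor.
instance = {"text": {"tokens": torch.zeros(4, dtype=torch.long)},
            "linking": torch.zeros(4, 10)}
batched = batch_tensors([instance, instance, instance])
print(batched["text"]["tokens"].shape)  # torch.Size([3, 4])
print(batched["linking"].shape)         # torch.Size([3, 4, 10])
```

This is why the method cannot simply call batch_tensor_dicts on the whole instance dict: the two entries have different nesting depths and need different stacking logic.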


Note: The allennlp.nn.util.batch_tensor_dicts method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are drawn from open-source projects contributed by their authors; copyright remains with the original authors, and any distribution or use should follow the corresponding project's License. Do not reproduce without permission.