

Python util.batch_tensor_dicts Method Code Examples

This article collects typical usage examples of the Python method allennlp.nn.util.batch_tensor_dicts. If you are wondering what util.batch_tensor_dicts does, how to call it, or what real-world uses look like, the curated examples below should help. You can also explore other usage examples from the allennlp.nn.util module.


The following shows 5 code examples of the util.batch_tensor_dicts method, drawn from open-source projects and ordered by popularity by default.
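Before the project examples, a minimal sketch of what the method itself does may help: util.batch_tensor_dicts takes a list of tensor dictionaries with matching keys and returns one dictionary in which the tensors under each key are stacked along a new leading batch dimension (it also accepts a remove_trailing_dimension flag that squeezes a trailing size-1 dimension). The tensor values and shapes below are invented for illustration.

import torch
from allennlp.nn import util

# Two "instances", each already indexed into a dict of same-shaped tensors.
instance_a = {"tokens": torch.tensor([1, 2, 3, 4])}
instance_b = {"tokens": torch.tensor([5, 6, 7, 8])}

batched = util.batch_tensor_dicts([instance_a, instance_b])
print(batched["tokens"].shape)  # torch.Size([2, 4]) -- new leading batch dimension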

Example 1: batch_tensors

# Required module import: from allennlp.nn import util [as alias]
# Or: from allennlp.nn.util import batch_tensor_dicts [as alias]
# Imports needed to run this snippet (TextFieldTensors lives in
# allennlp.data.fields.text_field in AllenNLP >= 1.0):
from collections import defaultdict
from typing import Dict, List

import torch
from allennlp.data.fields.text_field import TextFieldTensors
from allennlp.nn import util

def batch_tensors(self, tensor_list: List[TextFieldTensors]) -> TextFieldTensors:
    # This is creating a dict of {token_indexer_name: {token_indexer_outputs: batched_tensor}}
    # for each token indexer used to index this field.
    indexer_lists: Dict[str, List[Dict[str, torch.Tensor]]] = defaultdict(list)
    for tensor_dict in tensor_list:
        for indexer_name, indexer_output in tensor_dict.items():
            indexer_lists[indexer_name].append(indexer_output)
    batched_tensors = {
        # NOTE(mattg): if an indexer has its own nested structure, rather than one tensor per
        # argument, then this will break.  If that ever happens, we should move this to an
        # `indexer.batch_tensors` method, with this logic as the default implementation in the
        # base class.
        indexer_name: util.batch_tensor_dicts(indexer_outputs)
        for indexer_name, indexer_outputs in indexer_lists.items()
    }
    return batched_tensors
Developer ID: allenai, Project: allennlp, Lines: 18, Source: text_field.py
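To make the nested structure in Example 1 concrete, here is a self-contained sketch of the same grouping-then-batching logic outside the TextField class. The indexer name "tokens" and the keys "token_ids" and "mask" are invented for illustration; real keys depend on which token indexers you configure.

from collections import defaultdict

import torch
from allennlp.nn import util

# Per-instance output: {indexer_name: {output_name: tensor}}.
instance_a = {"tokens": {"token_ids": torch.tensor([1, 2, 3]), "mask": torch.tensor([1, 1, 1])}}
instance_b = {"tokens": {"token_ids": torch.tensor([4, 5, 0]), "mask": torch.tensor([1, 1, 0])}}

# Group each indexer's outputs across instances, then batch each group.
indexer_lists = defaultdict(list)
for tensor_dict in (instance_a, instance_b):
    for indexer_name, indexer_output in tensor_dict.items():
        indexer_lists[indexer_name].append(indexer_output)
batched = {name: util.batch_tensor_dicts(outputs) for name, outputs in indexer_lists.items()}

print(batched["tokens"]["token_ids"].shape)  # torch.Size([2, 3])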

Example 2: batch_tensors

# Required module import: from allennlp.nn import util [as alias]
# Or: from allennlp.nn.util import batch_tensor_dicts [as alias]
from typing import Dict, List

import torch
from allennlp.nn import util

def batch_tensors(self, tensor_list: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
    # pylint: disable=no-self-use
    # This is creating a dict of {token_indexer_key: batched_tensor} for each token indexer used
    # to index this field.
    return util.batch_tensor_dicts(tensor_list)
Developer ID: yuweijiang, Project: HGL-pytorch, Lines: 7, Source: bert_field.py

Example 3: batch_tensors

# Required module import: from allennlp.nn import util [as alias]
# Or: from allennlp.nn.util import batch_tensor_dicts [as alias]
from allennlp.nn import util

# This Python-2-compatible port (magnitude) strips type annotations, which left
# the stray whitespace in the original signature; cleaned up here.
def batch_tensors(self, tensor_list):
    # pylint: disable=no-self-use
    # This is creating a dict of {token_indexer_key: batch_tensor} for each token indexer used
    # to index this field.
    return util.batch_tensor_dicts(tensor_list)
Developer ID: plasticityai, Project: magnitude, Lines: 7, Source: text_field.py

Example 4: batch_tensors

# Required module import: from allennlp.nn import util [as alias]
# Or: from allennlp.nn.util import batch_tensor_dicts [as alias]
import torch
from allennlp.nn import util as nn_util

# As in Example 3, stripped type annotations left stray whitespace in the
# original signature; cleaned up here.
def batch_tensors(self, tensor_list):
    # pylint: disable=no-self-use
    batched_text = nn_util.batch_tensor_dicts(tensor[u'text'] for tensor in tensor_list)  # type: ignore
    batched_linking = torch.stack([tensor[u'linking'] for tensor in tensor_list])
    return {u'text': batched_text, u'linking': batched_linking}

    # Below here we have feature extractor functions.  To keep a consistent API for easy logic
    # above, some of these functions have unused arguments.
    # pylint: disable=unused-argument,no-self-use

    # These feature extractors are generally pretty specific to the logical form language and
    # problem setting in WikiTableQuestions.  This whole notion of feature extraction should
    # eventually be made more general (or just removed, if we can replace it with CNN features...).
    # For the feature functions used in the original parser written in PNP, see here:
    # https://github.com/allenai/pnp/blob/wikitables2/src/main/scala/org/allenai/wikitables/SemanticParserFeatureGenerator.scala

    # One notable difference between how the features work here and how they worked in PNP is that
    # we're using the table text when computing string matches, while PNP used the _entity name_.
    # It turns out that the entity name is derived from the table text, so this should be roughly
    # equivalent, except in the case of some numbers.  If there are cells with different text that
    # normalize to the same name, you could get `_2` or similar appended to the name, so the way we
    # do it here should just be better.  But it's a possible minor source of variation from the
    # original parser.

    # Another difference between these features and the PNP features is that the span overlap used
    # a weighting scheme to downweight matches on frequent words (like "the"), and the lemma
    # overlap feature value was calculated a little differently.  I'm guessing that doesn't make a
    # huge difference... 
Developer ID: plasticityai, Project: magnitude, Lines: 30, Source: knowledge_graph_field.py
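A minimal sketch of what Example 4 (and Example 5 below) computes: each instance pairs a 'text' dict of indexer tensors with a fixed-shape 'linking' feature tensor, so the former is batched with batch_tensor_dicts and the latter with torch.stack. The key names follow the snippet; the shapes and values are invented.

import torch
from allennlp.nn import util as nn_util

# Hypothetical shapes: 3 question tokens, and a (num_entities=4, num_tokens=3,
# num_features=10) linking-feature tensor per instance.
instance_a = {'text': {'tokens': torch.tensor([1, 2, 3])}, 'linking': torch.rand(4, 3, 10)}
instance_b = {'text': {'tokens': torch.tensor([4, 5, 6])}, 'linking': torch.rand(4, 3, 10)}

tensor_list = [instance_a, instance_b]
batched_text = nn_util.batch_tensor_dicts(tensor['text'] for tensor in tensor_list)
batched_linking = torch.stack([tensor['linking'] for tensor in tensor_list])

print(batched_text['tokens'].shape)  # torch.Size([2, 3])
print(batched_linking.shape)         # torch.Size([2, 4, 3, 10])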

Example 5: batch_tensors

# Required module import: from allennlp.nn import util [as alias]
# Or: from allennlp.nn.util import batch_tensor_dicts [as alias]
from typing import Dict, List

import torch
from allennlp.nn import util as nn_util

def batch_tensors(self, tensor_list: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
    # pylint: disable=no-self-use
    batched_text = nn_util.batch_tensor_dicts(tensor['text'] for tensor in tensor_list)  # type: ignore
    batched_linking = torch.stack([tensor['linking'] for tensor in tensor_list])
    return {'text': batched_text, 'linking': batched_linking}

Developer ID: jcyk, Project: gtos, Lines: 30, Source: knowledge_graph_field.py


Note: The allennlp.nn.util.batch_tensor_dicts examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects, and copyright of the source code remains with the original authors; consult each project's license before distributing or using the code, and do not republish without permission.