

Code examples for the Python method scatter_gather.scatter_kwargs

This article collects typical usage examples of the Python method torch.nn.parallel.scatter_gather.scatter_kwargs. If you are wondering what exactly scatter_gather.scatter_kwargs does, or how to use it, the curated examples below may help. You can also look further into other usage examples from the containing module, torch.nn.parallel.scatter_gather.


Six code examples of scatter_gather.scatter_kwargs are shown below, sorted by popularity by default.
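Before the examples, a quick sketch of what scatter_kwargs itself does may be useful: given a tuple of positional inputs, a dict of keyword arguments, and a list of device ids, it chunks every tensor along the given dim and moves each chunk to the corresponding GPU, returning one inputs tuple and one kwargs dict per device. The shapes and device ids below are purely illustrative and assume at least two CUDA devices are available.

import torch
from torch.nn.parallel.scatter_gather import scatter_kwargs

x = torch.randn(8, 16)                    # positional argument, batch dim 0
mask = torch.ones(8, dtype=torch.bool)    # keyword argument with the same batch dim
device_ids = [0, 1]

# Both inputs and kwargs are split along dim=0 into one chunk per device.
inputs, kwargs = scatter_kwargs((x,), {'mask': mask}, device_ids, dim=0)
print(len(inputs))               # 2 -> one tuple of positional args per GPU
print(inputs[0][0].shape)        # torch.Size([4, 16]), placed on cuda:0
print(kwargs[0]['mask'].shape)   # torch.Size([4]), also on cuda:0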

Example 1: _data_parallel_wrapper

# Required import: from torch.nn.parallel import scatter_gather  [as alias]
# Or: from torch.nn.parallel.scatter_gather import scatter_kwargs  [as alias]
def _data_parallel_wrapper(func_name, device_ids, output_device):
    r"""
    Wrapper for a method of ``network`` that needs to run on multiple GPUs.
    Modeled on the forward() of nn.DataParallel.

    :param str func_name: name of the method on ``network`` to run across multiple GPUs
    :param device_ids: same meaning as ``device_ids`` in nn.DataParallel
    :param output_device: same meaning as ``output_device`` in nn.DataParallel
    :return: the wrapper function
    """

    def wrapper(network, *inputs, **kwargs):
        # Split positional and keyword arguments into one chunk per device.
        inputs, kwargs = scatter_kwargs(inputs, kwargs, device_ids, dim=0)
        if len(device_ids) == 1:
            return getattr(network, func_name)(*inputs[0], **kwargs[0])
        replicas = replicate(network, device_ids[:len(inputs)])
        # Note: this `parallel_apply` is fastNLP's own helper, which accepts a
        # method name and calls getattr(replica, func_name) on each device;
        # it is not torch.nn.parallel.parallel_apply.
        outputs = parallel_apply(replicas, func_name, inputs, kwargs, device_ids[:len(replicas)])
        return gather(outputs, output_device)

    return wrapper
Developer: fastnlp | Project: fastNLP | Lines: 21 | Source: _parallel_utils.py
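A hypothetical call site for the wrapper above (the model, method name, and tensor shapes are made up for illustration; 'predict' is assumed to be a method of `network`, a model living on cuda:0, whose tensor arguments share batch dimension 0):

# Hypothetical usage of _data_parallel_wrapper; names are illustrative only.
predict_parallel = _data_parallel_wrapper('predict', device_ids=[0, 1], output_device=0)
batch_x = torch.randn(32, 128)
outputs = predict_parallel(network, batch_x)  # scattered to both GPUs, gathered on cuda:0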

Example 2: scatter

# Required import: from torch.nn.parallel import scatter_gather  [as alias]
# Or: from torch.nn.parallel.scatter_gather import scatter_kwargs  [as alias]
def scatter(self, inputs, kwargs, device_ids):
    return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
Developer: PistonY | Project: torch-toolbox | Lines: 4 | Source: EncodingDataParallel.py
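Overrides like this one normally live on an nn.DataParallel subclass; a minimal sketch of that pattern is shown below (the class name and everything except the scatter() body are assumptions, not taken from torch-toolbox):

import torch
from torch import nn
from torch.nn.parallel.scatter_gather import scatter_kwargs

class CustomDataParallel(nn.DataParallel):
    # Sketch: keep nn.DataParallel's behaviour but route scattering through
    # scatter_kwargs so inputs and kwargs are chunked together along self.dim.
    def scatter(self, inputs, kwargs, device_ids):
        return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)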

Example 3: _data_parallel

# Required import: from torch.nn.parallel import scatter_gather  [as alias]
# Or: from torch.nn.parallel.scatter_gather import scatter_kwargs  [as alias]
def _data_parallel(self, batch):
    u"""
    Do the forward pass using multiple GPUs.  This is a simplification
    of torch.nn.parallel.data_parallel to support the allennlp model
    interface.
    """
    inputs, module_kwargs = scatter_kwargs((), batch, self._cuda_devices, 0)
    used_device_ids = self._cuda_devices[:len(inputs)]
    replicas = replicate(self._model, used_device_ids)
    outputs = parallel_apply(replicas, inputs, module_kwargs, used_device_ids)

    # Only the 'loss' is needed.
    # a (num_gpu, ) tensor with loss on each GPU
    losses = gather([output[u'loss'].unsqueeze(0) for output in outputs], used_device_ids[0], 0)
    return {u'loss': losses.mean()}
Developer: plasticityai | Project: magnitude | Lines: 17 | Source: trainer.py

Example 4: scatter

# Required import: from torch.nn.parallel import scatter_gather  [as alias]
# Or: from torch.nn.parallel.scatter_gather import scatter_kwargs  [as alias]
def scatter(self, inputs, kwargs, device_ids):
    try:
        params = kwargs.pop('params')
    except KeyError:
        # No functional parameters passed: fall back to the default scatter.
        return super(DataParallel, self).scatter(inputs, kwargs, device_ids)

    inputs_, kwargs_ = scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
    # Put a per-device replica of `params` back into each device's kwargs.
    replicas = self._replicate_params(params, inputs_, device_ids,
                                      detach=not torch.is_grad_enabled())
    kwargs_ = tuple(dict(params=replica, **kwarg)
                    for (kwarg, replica) in zip(kwargs_, replicas))
    return inputs_, kwargs_
Developer: tristandeleu | Project: pytorch-meta | Lines: 15 | Source: parallel.py
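A hypothetical call that exercises the params branch above (`meta_model`, `adapted_params`, and `batch_inputs` are illustrative names; in pytorch-meta the wrapped module would be a MetaModule whose forward accepts a params keyword):

# Hypothetical usage; names are illustrative only.
parallel_model = DataParallel(meta_model, device_ids=[0, 1])
outputs = parallel_model(batch_inputs, params=adapted_params)
# scatter() above pops `params`, replicates it once per device, and puts one
# replica back into each device's kwargs before the replicas are run.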

Example 5: _data_parallel

# Required import: from torch.nn.parallel import scatter_gather  [as alias]
# Or: from torch.nn.parallel.scatter_gather import scatter_kwargs  [as alias]
def _data_parallel(self, batch):
    """
    Do the forward pass using multiple GPUs.  This is a simplification
    of torch.nn.parallel.data_parallel to support the allennlp model
    interface.
    """
    inputs, module_kwargs = scatter_kwargs((), batch, self._cuda_devices, 0)
    used_device_ids = self._cuda_devices[:len(inputs)]
    replicas = replicate(self._model, used_device_ids)
    outputs = parallel_apply(replicas, inputs, module_kwargs, used_device_ids)

    # Only the 'loss' is needed.
    # a (num_gpu, ) tensor with loss on each GPU
    losses = gather([output['loss'].unsqueeze(0) for output in outputs], used_device_ids[0], 0)
    return {'loss': losses.mean()}
Developer: allenai | Project: scicite | Lines: 17 | Source: multitask_trainer.py

Example 6: scatter

# Required import: from torch.nn.parallel import scatter_gather  [as alias]
# Or: from torch.nn.parallel.scatter_gather import scatter_kwargs  [as alias]
def scatter(self, inputs, kwargs, device_ids):
    return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) 
Developer: namisan | Project: mt-dnn | Lines: 4 | Source: dataparallel.py


Note: The torch.nn.parallel.scatter_gather.scatter_kwargs examples in this article were collected by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets come from open-source projects contributed by their respective authors, and copyright remains with them; consult each project's license before redistributing or reusing the code. Do not repost without permission.