

Python PyTorch TensorPipeRpcBackendOptions.set_device_map Usage and Code Examples


This article briefly introduces the usage of torch.distributed.rpc.TensorPipeRpcBackendOptions.set_device_map in Python.

Usage:

set_device_map(to, device_map)

Parameters

  • to (str) – Name of the callee worker.

  • device_map (Dict of python int, str, or torch.device) – Device placement mappings from this worker to the callee. This map must be invertible.

Sets the device mapping between each pair of RPC caller and callee. This function can be called multiple times to incrementally add device placement configurations.
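Because the map must be invertible, no two local devices may target the same remote device. The following is a minimal sketch in plain Python (no torch required; the helper name check_invertible is hypothetical, not part of the PyTorch API) of the constraint set_device_map enforces, and of how repeated calls merge device maps incrementally:

```python
def check_invertible(device_map):
    """Raise ValueError if two local devices map to the same remote device."""
    if len(set(device_map.values())) != len(device_map):
        raise ValueError(f"device_map is not invertible: {device_map}")
    return device_map

# Incremental configuration: each update merges into the accumulated map,
# mimicking successive set_device_map calls for the same callee.
maps = {}
for update in ({0: 1}, {1: 2}):
    merged = {**maps, **update}
    check_invertible(merged)
    maps = merged
print(maps)  # {0: 1, 1: 2}

# A non-invertible map is rejected: both cuda:0 and cuda:1 target cuda:2.
try:
    check_invertible({0: 2, 1: 2})
except ValueError as e:
    print("rejected:", e)
```

Invertibility is what allows return values to be placed back on the caller's devices by following the reverse of the same map, as the example below illustrates.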

Example:

>>> # both workers
>>> def add(x, y):
>>>     print(x)  # tensor([1., 1.], device='cuda:1')
>>>     return x + y, (x + y).to(2)
>>>
>>> # on worker 0
>>> options = TensorPipeRpcBackendOptions(
>>>     num_worker_threads=8,
>>>     device_maps={"worker1": {0: 1}}
>>>     # maps worker0's cuda:0 to worker1's cuda:1
>>> )
>>> options.set_device_map("worker1", {1: 2})
>>> # maps worker0's cuda:1 to worker1's cuda:2
>>>
>>> rpc.init_rpc(
>>>     "worker0",
>>>     rank=0,
>>>     world_size=2,
>>>     backend=rpc.BackendType.TENSORPIPE,
>>>     rpc_backend_options=options
>>> )
>>>
>>> x = torch.ones(2)
>>> rets = rpc.rpc_sync("worker1", add, args=(x.to(0), 1))
>>> # The first argument will be moved to cuda:1 on worker1. When
>>> # sending the return value back, it will follow the inverse of
>>> # the device map, and hence will be moved back to cuda:0 and
>>> # cuda:1 on worker0
>>> print(rets[0])  # tensor([2., 2.], device='cuda:0')
>>> print(rets[1])  # tensor([2., 2.], device='cuda:1')



Note: This article was curated and compiled by 純淨天空 from the original English work torch.distributed.rpc.TensorPipeRpcBackendOptions.set_device_map on pytorch.org. Unless otherwise stated, copyright of the original code belongs to the original author; do not reproduce or copy this translation without permission or authorization.