Python PyTorch TensorPipeRpcBackendOptions.set_device_map Usage and Code Example


This article gives a brief overview of torch.distributed.rpc.TensorPipeRpcBackendOptions.set_device_map in Python.

Usage:

set_device_map(to, device_map)

Parameters

  • to (str) – the name of the callee.

  • device_map (Dict of int, str, or torch.device) – the device placement mappings from this worker to the callee. This map must be invertible.

Sets the device mapping between each pair of RPC caller and callee. This function can be called multiple times to incrementally add device placement configurations.
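
The snippet below is a minimal sketch of this incremental behavior and of the invertibility requirement (the worker name "worker1" and the device indices are illustrative, not part of the API):

>>> from torch.distributed.rpc import TensorPipeRpcBackendOptions
>>>
>>> opts = TensorPipeRpcBackendOptions(num_worker_threads=8)
>>> opts.set_device_map("worker1", {0: 0})  # this worker's cuda:0 -> worker1's cuda:0
>>> opts.set_device_map("worker1", {1: 2})  # adds an entry: cuda:1 -> worker1's cuda:2
>>> # Mapping both cuda:0 and cuda:1 to the same remote device would not
>>> # be invertible, so such a map cannot be used for return values.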

Example:

>>> import torch
>>> import torch.distributed.rpc as rpc
>>> from torch.distributed.rpc import TensorPipeRpcBackendOptions
>>>
>>> # both workers
>>> def add(x, y):
>>>     print(x)  # tensor([1., 1.], device='cuda:1')
>>>     return x + y, (x + y).to(2)
>>>
>>> # on worker 0
>>> options = TensorPipeRpcBackendOptions(
>>>     num_worker_threads=8,
>>>     device_maps={"worker1": {0: 1}}
>>>     # maps worker0's cuda:0 to worker1's cuda:1
>>> )
>>> options.set_device_map("worker1", {1: 2})
>>> # maps worker0's cuda:1 to worker1's cuda:2
>>>
>>> rpc.init_rpc(
>>>     "worker0",
>>>     rank=0,
>>>     world_size=2,
>>>     backend=rpc.BackendType.TENSORPIPE,
>>>     rpc_backend_options=options
>>> )
>>>
>>> x = torch.ones(2)
>>> rets = rpc.rpc_sync("worker1", add, args=(x.to(0), 1))
>>> # The first argument will be moved to cuda:1 on worker1. When
>>> # sending the return values back, they follow the inverse of the
>>> # device map, and hence land on cuda:0 and cuda:1 on worker0,
>>> # respectively.
>>> print(rets[0])  # tensor([2., 2.], device='cuda:0')
>>> print(rets[1])  # tensor([2., 2.], device='cuda:1')
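
Note that the example above only shows worker0's side. For the RPC group to form, worker1 must also join; a minimal sketch of the callee side follows (assuming the same two-worker setup and that worker1 needs no device map of its own for this call; TensorPipe is the default backend):

>>> # on worker 1
>>> rpc.init_rpc(
>>>     "worker1",
>>>     rank=1,
>>>     world_size=2,
>>> )
>>>
>>> # when done, on both workers:
>>> rpc.shutdown()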
