

Python PyTorch SparseFeaturesAllToAll Usage and Code Example


This article gives a brief introduction to the usage of torchrec.distributed.embedding_sharding.SparseFeaturesAllToAll in Python.

Usage:

class torchrec.distributed.embedding_sharding.SparseFeaturesAllToAll(pg: torch._C._distributed_c10d.ProcessGroup, id_list_features_per_rank: List[int], id_score_list_features_per_rank: List[int], device: Optional[torch.device] = None, stagger: int = 1)

Parameters

  • pg (dist.ProcessGroup) - the ProcessGroup for the AlltoAll communication.

  • id_list_features_per_rank (List[int]) - number of id list features to send to each rank.

  • id_score_list_features_per_rank (List[int]) - number of id score list features to send to each rank.

  • device (Optional[torch.device]) - the device on which buffers will be allocated.

  • stagger (int) - stagger value to apply to the recat tensor; see the _recat function for more detail (an illustrative sketch of the resulting permutation follows this list).
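
For intuition about stagger, the following is a rough, illustrative sketch of the kind of permutation the recat tensor encodes, loosely modeled on torchrec's internal _recat helper. It is a simplified approximation for illustration, not the library's actual implementation:

from typing import List

def recat_sketch(local_split: int, num_splits: int, stagger: int = 1) -> List[int]:
    # Builds a permutation that interleaves the features received from
    # all ranks after the AlltoAll; stagger > 1 staggers the rank order.
    feature_order = [
        x + num_splits // stagger * y
        for x in range(num_splits // stagger)
        for y in range(stagger)
    ]
    return [i + j * local_split for i in range(local_split) for j in feature_order]

print(recat_sketch(2, 4, stagger=1))  # [0, 2, 4, 6, 1, 3, 5, 7]
print(recat_sketch(2, 4, stagger=2))  # [0, 4, 2, 6, 1, 5, 3, 7]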

Bases: torch.nn.modules.module.Module

Redistributes sparse features across a ProcessGroup with an AlltoAll collective.
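
Before the data-flow example below, here is a minimal construction sketch. It assumes a two-rank job launched with torchrun; the gloo backend and the per-rank feature counts are illustrative assumptions:

import torch
import torch.distributed as dist
from torchrec.distributed.embedding_sharding import SparseFeaturesAllToAll

# Assumed to run under torchrun with WORLD_SIZE=2.
dist.init_process_group(backend="gloo")
pg = dist.group.WORLD

sfa2a = SparseFeaturesAllToAll(
    pg=pg,
    id_list_features_per_rank=[2, 1],        # rank 0 receives 2 id list features, rank 1 receives 1
    id_score_list_features_per_rank=[1, 3],  # rank 0 receives 1 id score list feature, rank 1 receives 3
    device=torch.device("cpu"),
    stagger=1,
)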

Example:

id_list_features_per_rank = [2, 1]
id_score_list_features_per_rank = [1, 3]
sfa2a = SparseFeaturesAllToAll(
    pg,
    id_list_features_per_rank,
    id_score_list_features_per_rank,
)
awaitable = sfa2a(rank0_input)  # rank0_input is a SparseFeatures

# where:
#     rank0_input.id_list_features is KeyedJaggedTensor holding

#             0           1           2
#     'A'    [A.V0]       None        [A.V1, A.V2]
#     'B'    None         [B.V0]      [B.V1]
#     'C'    [C.V0]       [C.V1]      None

#     rank1_input.id_list_features is KeyedJaggedTensor holding

#             0           1           2
#     'A'     [A.V3]      [A.V4]      None
#     'B'     None        [B.V2]      [B.V3, B.V4]
#     'C'     [C.V2]      [C.V3]      None

#     rank0_input.id_score_list_features is KeyedJaggedTensor holding

#             0           1           2
#     'A'    [A.V0]       None        [A.V1, A.V2]
#     'B'    None         [B.V0]      [B.V1]
#     'C'    [C.V0]       [C.V1]      None
#     'D'    None         [D.V0]      None

#     rank1_input.id_score_list_features is KeyedJaggedTensor holding

#             0           1           2
#     'A'     [A.V3]      [A.V4]      None
#     'B'     None        [B.V2]      [B.V3, B.V4]
#     'C'     [C.V2]      [C.V3]      None
#     'D'     [D.V1]      [D.V2]      [D.V3, D.V4]

rank0_output: SparseFeatures = awaitable.wait()

# rank0_output.id_list_features is KeyedJaggedTensor holding

#         0           1           2           3           4           5
# 'A'     [A.V0]      None      [A.V1, A.V2]  [A.V3]      [A.V4]      None
# 'B'     None        [B.V0]    [B.V1]        None        [B.V2]     [B.V3, B.V4]

# rank1_output.id_list_features is KeyedJaggedTensor holding
#         0           1           2           3           4           5
# 'C'     [C.V0]      [C.V1]      None        [C.V2]      [C.V3]      None

# rank0_output.id_score_list_features is KeyedJaggedTensor holding

#         0           1           2           3           4           5
# 'A'     [A.V0]      None      [A.V1, A.V2]  [A.V3]      [A.V4]      None

# rank1_output.id_score_list_features is KeyedJaggedTensor holding

#         0           1           2           3           4           5
# 'B'     None        [B.V0]      [B.V1]      None        [B.V2]      [B.V3, B.V4]
# 'C'     [C.V0]       [C.V1]      None       [C.V2]      [C.V3]      None
# 'D'     None         [D.V0]      None       [D.V1]      [D.V2]      [D.V3, D.V4]
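
The tables above describe jagged per-key layouts: for each key, a batch element's entry is a variable-length list of values, with None meaning length 0. As a hedged sketch of how rank 0's id list input could be built, the following encodes the first table as a KeyedJaggedTensor, using integer stand-ins for the hypothetical values A.V0, A.V1, etc.; the SparseFeatures import path and its field defaults match older torchrec releases and are assumptions:

import torch
from torchrec.sparse.jagged_tensor import KeyedJaggedTensor
from torchrec.distributed.embedding_types import SparseFeatures  # import path is an assumption

# Per-key lengths for the 3 batch elements (None rows become length 0):
#   'A': [A.V0], None, [A.V1, A.V2] -> [1, 0, 2]
#   'B': None, [B.V0], [B.V1]       -> [0, 1, 1]
#   'C': [C.V0], [C.V1], None       -> [1, 1, 0]
id_list_features = KeyedJaggedTensor.from_lengths_sync(
    keys=["A", "B", "C"],
    values=torch.tensor([0, 1, 2, 10, 11, 20, 21]),  # stand-ins for A.V0..C.V1
    lengths=torch.tensor([1, 0, 2, 0, 1, 1, 1, 1, 0]),
)
rank0_input = SparseFeatures(id_list_features=id_list_features)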



Note: This article was compiled by 纯净天空 from the original English documentation for torchrec.distributed.embedding_sharding.SparseFeaturesAllToAll on pytorch.org. Unless otherwise stated, copyright in the original code remains with its authors; please do not reproduce or copy this translation without permission.