This article briefly introduces the usage of torchrec.distributed.embedding_sharding.SparseFeaturesAllToAll
in the Python language.
Usage:
class torchrec.distributed.embedding_sharding.SparseFeaturesAllToAll(pg: torch._C._distributed_c10d.ProcessGroup, id_list_features_per_rank: List[int], id_score_list_features_per_rank: List[int], device: Optional[torch.device] = None, stagger: int = 1)
Parameters:
pg (dist.ProcessGroup) - process group for the AlltoAll communication.
id_list_features_per_rank (List[int]) - number of id-list features to send to each rank.
id_score_list_features_per_rank (List[int]) - number of id-score-list features to send to each rank.
device (Optional[torch.device]) - device on which buffers will be allocated.
stagger (int) - stagger value applied to the recat tensor; see the
_recat
function for more details.
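The stagger parameter controls how the feature blocks received from each rank are permuted back together. The following is a plain-Python sketch of how such a staggered "recat" permutation could be built; it is an illustration under assumptions (the function name recat_sketch is invented here), not torchrec's actual _recat implementation:

```python
def recat_sketch(local_split: int, num_splits: int, stagger: int = 1) -> list:
    """Illustrative permutation that regroups `num_splits` concatenated
    blocks of `local_split` features so that the copies of each feature
    end up adjacent, visiting the splits in a staggered order.

    NOTE: a sketch only -- torchrec's real _recat may differ.
    """
    # Order in which the splits are visited; with stagger == 1 this is
    # simply 0, 1, ..., num_splits - 1.
    feature_order = [
        x + num_splits // stagger * y
        for x in range(num_splits // stagger)
        for y in range(stagger)
    ]
    # For each local feature i, gather its copy from every split j.
    return [i + j * local_split for i in range(local_split) for j in feature_order]


# With 2 local features received from 2 splits, the concatenated layout
# [r0f0, r0f1, r1f0, r1f1] is permuted to group copies of each feature:
perm = recat_sketch(local_split=2, num_splits=2)  # -> [0, 2, 1, 3]
```

The permutation is what turns the rank-major concatenation produced by the AlltoAll into the feature-major layout shown in the output tables of the example below.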
Base class:
torch.nn.modules.module.Module
Redistributes sparse features to a
ProcessGroup
using an AlltoAll collective. Example:
id_list_features_per_rank = [2, 1]
id_score_list_features_per_rank = [1, 3]

sfa2a = SparseFeaturesAllToAll(
    pg,
    id_list_features_per_rank,
    id_score_list_features_per_rank,
)

awaitable = sfa2a(rank0_input: SparseFeatures)

# where:
# rank0_input.id_list_features is KeyedJaggedTensor holding
#         0        1        2
# 'A'  [A.V0]   None     [A.V1, A.V2]
# 'B'  None     [B.V0]   [B.V1]
# 'C'  [C.V0]   [C.V1]   None

# rank1_input.id_list_features is KeyedJaggedTensor holding
#         0        1        2
# 'A'  [A.V3]   [A.V4]   None
# 'B'  None     [B.V2]   [B.V3, B.V4]
# 'C'  [C.V2]   [C.V3]   None

# rank0_input.id_score_list_features is KeyedJaggedTensor holding
#         0        1        2
# 'A'  [A.V0]   None     [A.V1, A.V2]
# 'B'  None     [B.V0]   [B.V1]
# 'C'  [C.V0]   [C.V1]   None
# 'D'  None     [D.V0]   None

# rank1_input.id_score_list_features is KeyedJaggedTensor holding
#         0        1        2
# 'A'  [A.V3]   [A.V4]   None
# 'B'  None     [B.V2]   [B.V3, B.V4]
# 'C'  [C.V2]   [C.V3]   None
# 'D'  [D.V1]   [D.V2]   [D.V3, D.V4]

rank0_output: SparseFeatures = awaitable.wait()

# rank0_output.id_list_features is KeyedJaggedTensor holding
#         0        1        2             3        4        5
# 'A'  [A.V0]   None     [A.V1, A.V2]  [A.V3]   [A.V4]   None
# 'B'  None     [B.V0]   [B.V1]        None     [B.V2]   [B.V3, B.V4]

# rank1_output.id_list_features is KeyedJaggedTensor holding
#         0        1        2        3        4        5
# 'C'  [C.V0]   [C.V1]   None     [C.V2]   [C.V3]   None

# rank0_output.id_score_list_features is KeyedJaggedTensor holding
#         0        1        2             3        4        5
# 'A'  [A.V0]   None     [A.V1, A.V2]  [A.V3]   [A.V4]   None

# rank1_output.id_score_list_features is KeyedJaggedTensor holding
#         0        1        2        3        4        5
# 'B'  None     [B.V0]   [B.V1]   None     [B.V2]   [B.V3, B.V4]
# 'C'  [C.V0]   [C.V1]   None     [C.V2]   [C.V3]   None
# 'D'  None     [D.V0]   None     [D.V1]   [D.V2]   [D.V3, D.V4]
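The regrouping shown above can be modeled without any distributed machinery. The following plain-Python sketch (names like all_to_all_sketch are invented for illustration; real torchrec operates on KeyedJaggedTensor and torch.distributed) mimics the semantics: each destination rank receives a contiguous slice of the feature keys, with the per-sample batches of all source ranks concatenated per feature:

```python
def all_to_all_sketch(per_rank_inputs, features_per_rank):
    """Sketch of SparseFeaturesAllToAll semantics on plain dicts.

    per_rank_inputs: list over source ranks of {feature_key: [per-sample values]}
    features_per_rank: how many feature keys each destination rank receives
    (analogous to id_list_features_per_rank). Illustration only.
    """
    keys = list(per_rank_inputs[0].keys())
    outputs, start = [], 0
    for n in features_per_rank:
        dest_keys = keys[start:start + n]
        start += n
        # Each destination rank gets its feature slice, with every source
        # rank's batch concatenated in rank order (rank0 samples first).
        outputs.append({
            k: [sample for inp in per_rank_inputs for sample in inp[k]]
            for k in dest_keys
        })
    return outputs


# Mirror the id_list_features example above (batch size 3 per rank):
rank0 = {'A': [['A.V0'], None, ['A.V1', 'A.V2']],
         'B': [None, ['B.V0'], ['B.V1']],
         'C': [['C.V0'], ['C.V1'], None]}
rank1 = {'A': [['A.V3'], ['A.V4'], None],
         'B': [None, ['B.V2'], ['B.V3', 'B.V4']],
         'C': [['C.V2'], ['C.V3'], None]}

out = all_to_all_sketch([rank0, rank1], features_per_rank=[2, 1])
# out[0] (rank0's output) holds 'A' and 'B' over 6 samples;
# out[1] (rank1's output) holds 'C' over 6 samples.
```

This matches the tables above: with features_per_rank = [2, 1], rank0 ends up owning features 'A' and 'B' and rank1 owns 'C', each with the two ranks' batches concatenated into columns 0-5.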
Note: this article was selected and compiled by 純淨天空 from the original English work torchrec.distributed.embedding_sharding.SparseFeaturesAllToAll on pytorch.org. Unless otherwise stated, copyright of the original code belongs to its original authors; please do not reproduce or copy this translation without permission or authorization.