This article briefly introduces the usage of torchrec.distributed.planner.partitioners.GreedyPerfPartitioner.partition in the Python language.
Usage:
partition(proposal: List[torchrec.distributed.planner.types.ShardingOption], storage_constraint: torchrec.distributed.planner.types.Topology) → List[torchrec.distributed.planner.types.ShardingOption]
Parameters:
proposal (List[ShardingOption]) – populated list of sharding options.
storage_constraint (Topology) – device topology.
Returns:
List of sharding options for the selected plan.

Return type:
List[ShardingOption]
Places the sharding options on the topology based on each sharding option's partition_by attribute. The topology's storage and perf are updated at the end of placement.

Example:
sharding_options = [
    ShardingOption(partition_by="uniform",
        shards=[
            Shards(storage=1, perf=1),
            Shards(storage=1, perf=1),
        ]),
    ShardingOption(partition_by="uniform",
        shards=[
            Shards(storage=2, perf=2),
            Shards(storage=2, perf=2),
        ]),
    ShardingOption(partition_by="device",
        shards=[
            Shards(storage=3, perf=3),
            Shards(storage=3, perf=3),
        ]),
    ShardingOption(partition_by="device",
        shards=[
            Shards(storage=4, perf=4),
            Shards(storage=4, perf=4),
        ]),
]
topology = Topology(world_size=2)

# First, sharding_options[0] and sharding_options[1] are placed on the
# topology with the uniform strategy, resulting in

topology.devices[0].perf = (1, 2)
topology.devices[1].perf = (1, 2)

# Then sharding_options[2] and sharding_options[3] are placed on the
# topology with the device strategy (see the docstring of `partition_by_device`
# for more details).

topology.devices[0].perf = (1, 2) + (3, 4)
topology.devices[1].perf = (1, 2) + (3, 4)

# The topology updates are done only after all placements finish (the order
# shown in this example is just for clarity).
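As the class name suggests, the "device" strategy places shards greedily based on perf. Purely as an illustration, here is a minimal sketch of least-loaded greedy placement that reproduces the balanced result in the example above. This is not torchrec's actual implementation, which also enforces the Topology's storage constraints:

from typing import List

def greedy_device_placement(shard_perfs: List[float], num_devices: int) -> List[int]:
    """Toy least-loaded placement: each shard goes to the device whose
    accumulated perf is currently smallest."""
    device_perf = [0.0] * num_devices
    placement = [0] * len(shard_perfs)
    # Visit the most expensive shards first so the greedy choice stays balanced.
    for i in sorted(range(len(shard_perfs)), key=lambda i: -shard_perfs[i]):
        dev = min(range(num_devices), key=device_perf.__getitem__)
        placement[i] = dev
        device_perf[dev] += shard_perfs[i]
    return placement

# Shard perfs of the two "device" options above: 3, 3, 4, 4 on a world of 2.
print(greedy_device_placement([3, 3, 4, 4], num_devices=2))  # [0, 1, 0, 1]
# Each device accumulates perf 3 + 4 = 7, matching the balanced (3, 4)
# placement shown in the example.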
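For context, here is a minimal sketch of where partition sits in a typical planning flow, assuming a recent torchrec version. The table configuration, batch size, and the take-the-first-candidate proposal are illustrative stand-ins (a real planner uses a proposer to build the proposal), and the enumerator/sharder constructor arguments can differ across torchrec versions:

from torchrec.distributed.embeddingbag import EmbeddingBagCollectionSharder
from torchrec.distributed.planner.enumerators import EmbeddingEnumerator
from torchrec.distributed.planner.partitioners import GreedyPerfPartitioner
from torchrec.distributed.planner.types import Topology
from torchrec.modules.embedding_configs import EmbeddingBagConfig
from torchrec.modules.embedding_modules import EmbeddingBagCollection

# A single illustrative embedding table.
tables = [
    EmbeddingBagConfig(
        name="t1",
        embedding_dim=64,
        num_embeddings=10_000,
        feature_names=["f1"],
    )
]
model = EmbeddingBagCollection(tables=tables)

topology = Topology(world_size=2, compute_device="cuda")

# Enumerate candidate sharding options for the module.
enumerator = EmbeddingEnumerator(topology=topology, batch_size=256)
search_space = enumerator.enumerate(
    module=model, sharders=[EmbeddingBagCollectionSharder()]
)

# A proposer would normally choose one option per table; for brevity we
# just take the first candidate as the proposal.
proposal = [search_space[0]]

partitioner = GreedyPerfPartitioner()
plan = partitioner.partition(proposal=proposal, storage_constraint=topology)
for option in plan:
    # After partitioning, each shard has been assigned a rank.
    print(option.name, option.sharding_type, [shard.rank for shard in option.shards])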
Related usage
- Python PyTorch Graph.eliminate_dead_code usage and code examples
- Python PyTorch GroupedPositionWeightedModule.named_parameters usage and code examples
- Python PyTorch Graph.inserting_before usage and code examples
- Python PyTorch GradScaler.unscale_ usage and code examples
- Python PyTorch GroupedPooledEmbeddingsLookup.named_buffers usage and code examples
- Python PyTorch Graph.inserting_after usage and code examples
- Python PyTorch GroupNorm usage and code examples
- Python PyTorch Graph usage and code examples
- Python PyTorch GroupedPositionWeightedModule.named_buffers usage and code examples
- Python PyTorch GroupedPooledEmbeddingsLookup.state_dict usage and code examples
- Python PyTorch GriffinLim usage and code examples
- Python PyTorch GroupedPooledEmbeddingsLookup.named_parameters usage and code examples
- Python PyTorch Graph.node_copy usage and code examples
- Python PyTorch GroupedPositionWeightedModule.state_dict usage and code examples
- Python PyTorch GroupedEmbeddingsLookup.state_dict usage and code examples
- Python PyTorch GroupedEmbeddingsLookup.named_parameters usage and code examples
- Python PyTorch GroupedEmbeddingsLookup.named_buffers usage and code examples
- Python PyTorch Grouper usage and code examples
- Python PyTorch Generator.set_state usage and code examples
- Python PyTorch Generator.seed usage and code examples
- Python PyTorch GLU usage and code examples
- Python PyTorch GDriveReader usage and code examples
- Python PyTorch GRUCell usage and code examples
- Python PyTorch Gumbel usage and code examples
- Python PyTorch Generator.get_state usage and code examples
Note: This article was selected and compiled by 纯净天空 from the original English work torchrec.distributed.planner.partitioners.GreedyPerfPartitioner.partition on pytorch.org. Unless otherwise stated, copyright of the original code belongs to the original authors; please do not reproduce or copy this translation without permission or authorization.