

Python muji.Allreduce Code Examples

This article collects typical usage examples of the Python method caffe2.python.muji.Allreduce. If you are wondering what muji.Allreduce does or how to use it, the curated code examples below may help. You can also explore further usage examples of the containing module, caffe2.python.muji.


Below are 2 code examples of the muji.Allreduce method, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Python code examples.

Example 1: _add_allreduce_graph

# Required import: from caffe2.python import muji [as alias]
# Or: from caffe2.python.muji import Allreduce [as alias]
def _add_allreduce_graph(model):
    """Construct the graph that performs Allreduce on the gradients."""
    # Need to all-reduce the per-GPU gradients if training with more than 1 GPU
    all_params = model.TrainableParams()
    assert len(all_params) % cfg.NUM_GPUS == 0
    # The model parameters are replicated on each GPU; get the number of
    # distinct parameter blobs (i.e., the number of parameter blobs on
    # each GPU)
    params_per_gpu = int(len(all_params) / cfg.NUM_GPUS)
    with c2_utils.CudaScope(0):
        # Iterate over distinct parameter blobs
        for i in range(params_per_gpu):
            # Gradients from all GPUs for this parameter blob
            gradients = [
                model.param_to_grad[p] for p in all_params[i::params_per_gpu]
            ]
            if len(gradients) > 0:
                if cfg.USE_NCCL:
                    model.net.NCCLAllreduce(gradients, gradients)
                else:
                    muji.Allreduce(model.net, gradients, reduced_affix='')
Author: yihui-he, Project: KL-Loss, Lines: 23, Source: optimizer.py
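The slicing `all_params[i::params_per_gpu]` in the example above assumes that `model.TrainableParams()` returns the parameters GPU-major (all of GPU 0's blobs first, then GPU 1's, and so on), so striding by `params_per_gpu` gathers the i-th parameter's replicas from every GPU. A minimal sketch of that grouping with plain Python lists (the blob names here are hypothetical, not from the source):

```python
# Hypothetical parameter list: 3 blobs per GPU, replicated on 2 GPUs,
# ordered GPU-major as assumed by the example above.
NUM_GPUS = 2
all_params = [
    'gpu_0/conv_w', 'gpu_0/conv_b', 'gpu_0/fc_w',
    'gpu_1/conv_w', 'gpu_1/conv_b', 'gpu_1/fc_w',
]
assert len(all_params) % NUM_GPUS == 0
params_per_gpu = len(all_params) // NUM_GPUS

# Group the replicas of each logical parameter across GPUs.
groups = [all_params[i::params_per_gpu] for i in range(params_per_gpu)]
print(groups)
# → [['gpu_0/conv_w', 'gpu_1/conv_w'],
#    ['gpu_0/conv_b', 'gpu_1/conv_b'],
#    ['gpu_0/fc_w', 'gpu_1/fc_w']]
```

Each group then corresponds to the `gradients` list that is handed to `NCCLAllreduce` or `muji.Allreduce`.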

Example 2: build_data_parallel_model

# Required import: from caffe2.python import muji [as alias]
# Or: from caffe2.python.muji import Allreduce [as alias]
def build_data_parallel_model(model, single_gpu_build_func):
    if model.train:
        all_loss_gradients = {}  # Will include loss gradients from all GPUs
        # Build the model on each GPU with correct name and device scoping
        for gpu_id in range(cfg.NUM_GPUS):
            with core.NameScope('gpu_{}'.format(gpu_id)):
                with core.DeviceScope(muji.OnGPU(gpu_id)):
                    all_loss_gradients.update(
                        single_gpu_build_func(model))
        # Add backward pass on all GPUs
        model.AddGradientOperators(all_loss_gradients)
        if cfg.NUM_GPUS > 1:
            # Need to all-reduce the per-GPU gradients if training with more
            # than 1 GPU
            all_params = model.TrainableParams()
            assert len(all_params) % cfg.NUM_GPUS == 0, \
                'This should not happen.'
            # The model parameters are replicated on each GPU; get the number
            # of distinct parameter blobs (i.e., the number of parameter blobs
            # on each GPU)
            params_per_gpu = int(len(all_params) / cfg.NUM_GPUS)
            with core.DeviceScope(muji.OnGPU(cfg.ROOT_GPU_ID)):
                # Iterate over distinct parameter blobs
                for i in range(params_per_gpu):
                    # Gradients from all GPUs for this parameter blob
                    gradients = [
                        model.param_to_grad[p]
                        for p in all_params[i::params_per_gpu]
                    ]
                    if len(gradients) > 0:
                        if cfg.USE_NCCL:
                            model.net.NCCLAllreduce(gradients, gradients)
                        else:
                            muji.Allreduce(
                                model.net, gradients, reduced_affix='')
        for gpu_id in range(cfg.NUM_GPUS):
            # After all-reduce, all GPUs perform SGD updates on their identical
            # params and gradients in parallel
            add_parameter_update_ops(model, gpu_id)
    else:
        # Testing only supports running on a single GPU
        with core.NameScope('gpu_{}'.format(cfg.ROOT_GPU_ID)):
            with core.DeviceScope(muji.OnGPU(cfg.ROOT_GPU_ID)):
                single_gpu_build_func(model) 
Author: facebookresearch, Project: DetectAndTrack, Lines: 46, Source: model_builder.py
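Both examples rely on the same allreduce contract: after the operation, every replica holds the identical reduced (summed) gradient, which is why each GPU can then run its own parameter update in parallel and all replicas stay in lockstep. A pure-Python sketch of that contract, with no Caffe2 involved (the in-place write-back mirrors `reduced_affix=''`, which stores the result under the input blob names):

```python
def allreduce_sum(replicas):
    """Sum the per-replica gradients elementwise and write the result
    back into every replica, mimicking an in-place allreduce."""
    total = [sum(vals) for vals in zip(*replicas)]
    for r in replicas:
        r[:] = total
    return replicas

# Two GPUs, each holding its own gradient for the same parameter blob.
grads = [[1.0, 2.0], [3.0, 4.0]]
allreduce_sum(grads)
print(grads)  # → every replica now holds the summed gradient [4.0, 6.0]
```

Because every replica ends up with the same gradient, the subsequent per-GPU SGD updates (added by `add_parameter_update_ops` in the example) produce identical parameters on every GPU without further synchronization.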


Note: The caffe2.python.muji.Allreduce examples in this article were collected by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. Please consult each project's license before distributing or using the code, and do not reproduce this article without permission.