

Python torch.rank Method Code Examples

This article collects typical usage examples of the horovod.torch.rank method in Python. If you are wondering how exactly Python torch.rank works, how to call it, or what real usage looks like, the curated code examples below should help. You can also explore further usage examples from the horovod.torch module that this method belongs to.


Fifteen code examples of the torch.rank method are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
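
Before diving into the examples, here is a minimal sketch, assuming a standard Horovod + PyTorch setup (boilerplate not taken from the projects below), of where hvd.rank() fits in a script:

import torch
import horovod.torch as hvd

# hvd.init() must run before rank()/size()/local_rank() can be called.
hvd.init()
if torch.cuda.is_available():
    # Pin each process to a single GPU, indexed by its rank on this node.
    torch.cuda.set_device(hvd.local_rank())

print('This is process {} of {}'.format(hvd.rank(), hvd.size()))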

Example 1: test

# Required import: from horovod import torch [as alias]
# Or: from horovod.torch import rank [as alias]
def test():
    model.eval()
    test_loss = 0.
    test_accuracy = 0.
    for data, target in test_loader:
        if args.cuda:
            data, target = data.cuda(), target.cuda()
        # Variable/volatile and .data[0] below are legacy PyTorch APIs
        # (newer code would use torch.no_grad() and loss.item()).
        data, target = Variable(data, volatile=True), Variable(target)
        output = model(data)
        # sum up batch loss
        test_loss += F.nll_loss(output, target, size_average=False).data[0]
        # get the index of the max log-probability
        pred = output.data.max(1, keepdim=True)[1]
        test_accuracy += pred.eq(target.data.view_as(pred)).cpu().float().sum()

    test_loss /= len(test_sampler)
    test_accuracy /= len(test_sampler)

    test_loss = metric_average(test_loss, 'avg_loss')
    test_accuracy = metric_average(test_accuracy, 'avg_accuracy')

    if hvd.rank() == 0:
        print('\nTest set: Average loss: {:.4f}, Accuracy: {:.2f}%\n'.format(
            test_loss, 100. * test_accuracy)) 
Developer: mlperf, Project: training_results_v0.6, Lines of code: 26, Source: pytorch_mnist.py
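
The metric_average helper called above is defined elsewhere in that MNIST script; a sketch consistent with the standard Horovod MNIST example (assuming hvd.allreduce with its default averaging) looks like this:

import torch
import horovod.torch as hvd

def metric_average(val, name):
    # Wrap the Python scalar, average it across all workers, and unwrap it.
    tensor = torch.tensor(val)
    avg_tensor = hvd.allreduce(tensor, name=name)
    return avg_tensor.item()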

Example 2: allgather_async

# Required import: from horovod import torch [as alias]
# Or: from horovod.torch import rank [as alias]
def allgather_async(tensor, name=None):
    """
    A function that asynchronously concatenates the input tensor with the same input
    tensor on all other Horovod processes. The input tensor is not modified.

    The concatenation is done on the first dimension, so the input tensors on the
    different processes must have the same rank and shape, except for the first
    dimension, which is allowed to be different.

    Arguments:
        tensor: A tensor to allgather.
        name: A name of the allgather operation.

    Returns:
        A handle to the allgather operation that can be used with `poll()` or
        `synchronize()`.
    """
    output = tensor.new()
    return _allgather_async(tensor, output, name) 
Developer: mlperf, Project: training_results_v0.6, Lines of code: 21, Source: mpi_ops.py
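
A hypothetical usage pattern for the asynchronous variant (the name 'example.gather' is illustrative only): start the gather, overlap other work, then block on the handle.

import torch
import horovod.torch as hvd

hvd.init()
# The first dimension may differ per rank; the remaining dimensions must match.
local = torch.ones(hvd.rank() + 1, 4)
handle = hvd.allgather_async(local, name='example.gather')
# ... overlap unrelated computation here while the allgather runs ...
gathered = hvd.synchronize(handle)   # shape: (sum of first dims, 4)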

Example 3: allgather

# Required import: from horovod import torch [as alias]
# Or: from horovod.torch import rank [as alias]
def allgather(tensor, name=None):
    """
    A function that concatenates the input tensor with the same input tensor on
    all other Horovod processes. The input tensor is not modified.

    The concatenation is done on the first dimension, so the input tensors on the
    different processes must have the same rank and shape, except for the first
    dimension, which is allowed to be different.

    This acts as a thin wrapper around an autograd function.  If your input
    tensor requires gradients, then calling this function will allow gradients
    to be computed and backpropagated.

    Arguments:
        tensor: A tensor to allgather.
        name: A name of the allgather operation.

    Returns:
        A tensor of the same type as `tensor`, concatenated on dimension zero
        across all processes. The shape is identical to the input shape, except for
        the first dimension, which may be greater and is the sum of all first
        dimensions of the tensors in different Horovod processes.
    """
    return HorovodAllgather.apply(tensor, name) 
Developer: mlperf, Project: training_results_v0.6, Lines of code: 26, Source: mpi_ops.py
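
Since this wrapper is autograd-aware, gradients flow back to the local input; a small hypothetical example:

import torch
import horovod.torch as hvd

hvd.init()
x = torch.randn(2, 3, requires_grad=True)
gathered = hvd.allgather(x)          # shape: (2 * hvd.size(), 3)
loss = gathered.sum()
loss.backward()                      # x.grad is populated on each worker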

Example 4: broadcast

# Required import: from horovod import torch [as alias]
# Or: from horovod.torch import rank [as alias]
def broadcast(tensor, root_rank, name=None):
    """
    A function that broadcasts the input tensor on root rank to the same input tensor
    on all other Horovod processes. The input tensor is not modified.

    The broadcast operation is keyed by the name. If name is not provided, an incremented
    auto-generated name is used. The tensor type and shape must be the same on all
    Horovod processes for a given name. The broadcast will not start until all processes
    are ready to send and receive the tensor.

    This acts as a thin wrapper around an autograd function.  If your input
    tensor requires gradients, then calling this function will allow gradients
    to be computed and backpropagated.

    Arguments:
        tensor: A tensor to broadcast.
        root_rank: The rank to broadcast the value from.
        name: A name of the broadcast operation.

    Returns:
        A tensor of the same shape and type as `tensor`, with the value broadcasted
        from root rank.
    """
    return HorovodBroadcast.apply(tensor, root_rank, name) 
Developer: mlperf, Project: training_results_v0.6, Lines of code: 26, Source: mpi_ops.py
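
A hypothetical call (the name 'example.bcast' is illustrative): after the broadcast, every process holds rank 0's values.

import torch
import horovod.torch as hvd

hvd.init()
t = torch.full((3,), float(hvd.rank()))
synced = hvd.broadcast(t, root_rank=0, name='example.bcast')
# synced is all zeros on every rank, because the values came from rank 0.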

Example 5: broadcast_async_

# Required import: from horovod import torch [as alias]
# Or: from horovod.torch import rank [as alias]
def broadcast_async_(tensor, root_rank, name=None):
    """
    A function that asynchronously broadcasts the input tensor on root rank to the same
    input tensor on all other Horovod processes. The operation is performed in-place.

    The broadcast operation is keyed by the name. If name is not provided, an incremented
    auto-generated name is used. The tensor type and shape must be the same on all
    Horovod processes for a given name. The broadcast will not start until all processes
    are ready to send and receive the tensor.

    Arguments:
        tensor: A tensor to broadcast.
        root_rank: The rank to broadcast the value from.
        name: A name of the broadcast operation.

    Returns:
        A handle to the broadcast operation that can be used with `poll()` or
        `synchronize()`.
    """
    return _broadcast_async(tensor, tensor, root_rank, name) 
Developer: mlperf, Project: training_results_v0.6, Lines of code: 22, Source: mpi_ops.py
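
A hypothetical in-place asynchronous broadcast: the tensor itself is overwritten once the returned handle is synchronized.

import torch
import horovod.torch as hvd

hvd.init()
buf = torch.full((3,), float(hvd.rank()))
handle = hvd.broadcast_async_(buf, root_rank=0)
# ... other work can run here while the broadcast is in flight ...
hvd.synchronize(handle)   # buf now holds rank 0's values on every process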

Example 6: broadcast_

# Required import: from horovod import torch [as alias]
# Or: from horovod.torch import rank [as alias]
def broadcast_(tensor, root_rank, name=None):
    """
    A function that broadcasts the input tensor on root rank to the same input tensor
    on all other Horovod processes. The operation is performed in-place.

    The broadcast operation is keyed by the name. If name is not provided, an incremented
    auto-generated name is used. The tensor type and shape must be the same on all
    Horovod processes for a given name. The broadcast will not start until all processes
    are ready to send and receive the tensor.

    Arguments:
        tensor: A tensor to broadcast.
        root_rank: The rank to broadcast the value from.
        name: A name of the broadcast operation.

    Returns:
        A tensor of the same shape and type as `tensor`, with the value broadcasted
        from root rank.
    """
    handle = broadcast_async_(tensor, root_rank, name)
    return synchronize(handle) 
Developer: mlperf, Project: training_results_v0.6, Lines of code: 23, Source: mpi_ops.py
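
In practice the in-place broadcast is usually reached through Horovod's convenience wrappers at the start of training; a minimal sketch (the model and optimizer here are placeholders):

import torch.nn as nn
import torch.optim as optim
import horovod.torch as hvd

hvd.init()
model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=0.01)
# Both helpers broadcast rank 0's state to all other workers in place.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)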

Example 7: test_horovod_allgather_error

# Required import: from horovod import torch [as alias]
# Or: from horovod.torch import rank [as alias]
def test_horovod_allgather_error(self):
        """Test that the allgather returns an error if any dimension besides
        the first is different among the tensors being gathered."""
        hvd.init()
        rank = hvd.rank()
        size = hvd.size()

        # This test does not apply if there is only one worker.
        if size == 1:
            return

        tensor_size = [17] * 3
        tensor_size[1] = 10 * (rank + 1)
        tensor = torch.FloatTensor(*tensor_size).fill_(1).mul_(rank)

        try:
            hvd.allgather(tensor)
            assert False, 'hvd.allgather did not throw error'
        except torch.FatalError:
            pass 
Developer: mlperf, Project: training_results_v0.6, Lines of code: 22, Source: test_torch.py

Example 8: test_horovod_allgather_type_error

# Required import: from horovod import torch [as alias]
# Or: from horovod.torch import rank [as alias]
def test_horovod_allgather_type_error(self):
        """Test that the allgather returns an error if the types being gathered
        differ among the processes"""
        hvd.init()
        rank = hvd.rank()
        size = hvd.size()

        # This test does not apply if there is only one worker.
        if size == 1:
            return

        tensor_size = [17] * 3
        if rank % 2 == 0:
            tensor = torch.IntTensor(*tensor_size)
        else:
            tensor = torch.FloatTensor(*tensor_size)

        try:
            hvd.allgather(tensor)
            assert False, 'hvd.allgather did not throw error'
        except torch.FatalError:
            pass 
Developer: mlperf, Project: training_results_v0.6, Lines of code: 24, Source: test_torch.py

Example 9: test_horovod_broadcast_error

# Required import: from horovod import torch [as alias]
# Or: from horovod.torch import rank [as alias]
def test_horovod_broadcast_error(self):
        """Test that the broadcast returns an error if any dimension besides
        the first is different among the tensors being broadcasted."""
        hvd.init()
        rank = hvd.rank()
        size = hvd.size()

        # This test does not apply if there is only one worker.
        if size == 1:
            return

        tensor_size = [17] * 3
        tensor_size[1] = 10 * (rank + 1)
        tensor = torch.FloatTensor(*tensor_size).fill_(1).mul_(rank)

        try:
            hvd.broadcast(tensor, 0)
            assert False, 'hvd.broadcast did not throw error'
        except torch.FatalError:
            pass 
Developer: mlperf, Project: training_results_v0.6, Lines of code: 22, Source: test_torch.py

Example 10: test_horovod_broadcast_type_error

# Required import: from horovod import torch [as alias]
# Or: from horovod.torch import rank [as alias]
def test_horovod_broadcast_type_error(self):
        """Test that the broadcast returns an error if the types being broadcasted
        differ among the processes"""
        hvd.init()
        rank = hvd.rank()
        size = hvd.size()

        # This test does not apply if there is only one worker.
        if size == 1:
            return

        tensor_size = [17] * 3
        if rank % 2 == 0:
            tensor = torch.IntTensor(*tensor_size)
        else:
            tensor = torch.FloatTensor(*tensor_size)

        try:
            hvd.broadcast(tensor, 0)
            assert False, 'hvd.broadcast did not throw error'
        except torch.FatalError:
            pass 
Developer: mlperf, Project: training_results_v0.6, Lines of code: 24, Source: test_torch.py

Example 11: test_horovod_broadcast_rank_error

# Required import: from horovod import torch [as alias]
# Or: from horovod.torch import rank [as alias]
def test_horovod_broadcast_rank_error(self):
        """Test that the broadcast returns an error if different ranks
        specify different root rank."""
        hvd.init()
        rank = hvd.rank()
        size = hvd.size()

        # This test does not apply if there is only one worker.
        if size == 1:
            return

        tensor = torch.FloatTensor(*([17] * 3)).fill_(1)

        try:
            hvd.broadcast(tensor, rank)
            assert False, 'hvd.broadcast did not throw error'
        except torch.FatalError:
            pass 
Developer: mlperf, Project: training_results_v0.6, Lines of code: 20, Source: test_torch.py

Example 12: get_train_loader

# Required import: from horovod import torch [as alias]
# Or: from horovod.torch import rank [as alias]
def get_train_loader(batch_size=25):
    if hvd.rank() == 0:
        print('Train: ', end="")
    train_dataset = datasets.ImageFolder(root=datapath+'/train',
                                         transform=data_transform)

    train_sampler = torch.utils.data.distributed.DistributedSampler(
        train_dataset, num_replicas=hvd.size(), rank=hvd.rank())

    train_loader = DataLoader(train_dataset, batch_size=batch_size,
                              sampler=train_sampler, num_workers=4, pin_memory=True)

    if hvd.rank() == 0:
        print('Found', len(train_dataset), 'images belonging to',
              len(train_dataset.classes), 'classes')
    return train_loader, train_sampler 
Developer: csc-training, Project: intro-to-dl, Lines of code: 18, Source: pytorch_dvc_cnn_hvd.py
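
A hypothetical caller for get_train_loader (the epoch count is assumed): the returned sampler must be re-seeded each epoch so every worker shuffles the same permutation.

train_loader, train_sampler = get_train_loader(batch_size=25)
for epoch in range(1, 11):            # assumed number of epochs
    # Horovod: set the epoch on the DistributedSampler before iterating.
    train_sampler.set_epoch(epoch)
    for data, target in train_loader:
        pass                          # forward/backward as in the train() example below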

Example 13: _get_distributed_sampler

# Required import: from horovod import torch [as alias]
# Or: from horovod.torch import rank [as alias]
def _get_distributed_sampler(self, dataloader):
        if self.use_tpu:
            kwargs = dict(num_replicas=xm.xrt_world_size(), rank=xm.get_ordinal())
        elif self.use_horovod:
            kwargs = dict(num_replicas=hvd.size(), rank=hvd.rank())
        else:
            world_size = {
                'ddp': self.num_nodes * self.num_processes,
                'ddp_spawn': self.num_nodes * self.num_processes,
                'ddp2': self.num_nodes,
                'ddp_cpu': self.num_processes * self.num_nodes
            }
            assert self.distributed_backend is not None
            kwargs = dict(num_replicas=world_size[self.distributed_backend], rank=self.global_rank)
        sampler = DistributedSampler(dataloader.dataset, **kwargs)
        return sampler 
Developer: PyTorchLightning, Project: pytorch-lightning, Lines of code: 18, Source: data_loading.py

Example 14: train

# Required import: from horovod import torch [as alias]
# Or: from horovod.torch import rank [as alias]
def train(epoch):
    model.train()
    # Horovod: set epoch to sampler for shuffling.
    train_sampler.set_epoch(epoch)
    for batch_idx, (data, target) in enumerate(train_loader):
        if args.cuda:
            data, target = data.cuda(), target.cuda()
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            # Horovod: use train_sampler to determine the number of examples in
            # this worker's partition.
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_sampler),
                100. * batch_idx / len(train_loader), loss.item()))
            if hvd.rank() == 0:
                experiment.log_metrics(step=epoch,
                                       loss=loss.item()) 
Developer: polyaxon, Project: polyaxon, Lines of code: 23, Source: mnist.py
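
This loop assumes the standard Horovod training setup performed earlier in the script; a sketch of that setup (the model and learning rate are placeholders):

import torch.optim as optim
import horovod.torch as hvd

hvd.init()
# `model` is assumed to be defined earlier; scale the learning rate by world size.
optimizer = optim.SGD(model.parameters(), lr=0.01 * hvd.size())
# Wrap the optimizer so gradients are averaged across workers on each step.
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters())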

Example 15: __init__

# Required import: from horovod import torch [as alias]
# Or: from horovod.torch import rank [as alias]
def __init__(self, dataset, batch_size, distributed=False, num_workers=0, timeout=1000):
 
        if not distributed: 
            super(ChunkDataloader, self).__init__(dataset,
                                              batch_size=batch_size,
                                              shuffle=True,
                                              num_workers=num_workers,
                                              collate_fn=self.collate_fn)
        else:
            import horovod.torch as hvd
            sampler = DistributedSampler(dataset, num_replicas=hvd.size(), rank=hvd.rank())
            super(ChunkDataloader, self).__init__(dataset,
                                           batch_size=batch_size,
                                           sampler=sampler,
                                           num_workers=num_workers,
                                           collate_fn=self.collate_fn,
                                           drop_last=False,
                                           timeout=timeout) 
Developer: jzlianglu, Project: pykaldi2, Lines of code: 20, Source: dataloader.py
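
A hypothetical instantiation in distributed mode (the `dataset` object and the subclass's collate_fn are assumed to exist in pykaldi2):

import horovod.torch as hvd

hvd.init()
loader = ChunkDataloader(dataset, batch_size=32, distributed=True, num_workers=2)
if hvd.rank() == 0:
    print('batches per worker:', len(loader))
for batch in loader:
    pass   # consume one epoch of this worker's shard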


Note: The horovod.torch.rank examples in this article were compiled by 纯净天空 (vimsky) from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by various developers, and copyright remains with the original authors; please follow the license of the corresponding project when distributing or using the code, and do not reproduce without permission.