

Python brew.iter Method Code Examples

This article collects typical usage examples of the Python method caffe2.python.brew.iter. If you have been wondering what brew.iter does, how to call it, or what real-world uses look like, the selected code examples below may help. You can also explore further usage examples from the containing module, caffe2.python.brew.


The following presents 5 code examples of the brew.iter method, sorted by popularity by default.
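Before the examples, here is a minimal sketch of the pattern they all share. It is an illustration assembled for this article, not taken from any of the projects below: brew.iter registers an iteration-counter blob on a ModelHelper, and that counter is typically fed to a LearningRate operator to drive a learning-rate schedule.

```python
from caffe2.python import brew, model_helper, workspace

model = model_helper.ModelHelper(name="brew_iter_demo")
ITER = brew.iter(model, "iter")  # iteration counter, incremented each time model.net runs
LR = model.net.LearningRate(
    ITER, "LR", base_lr=-0.1, policy="step", stepsize=1, gamma=0.999)

workspace.RunNetOnce(model.param_init_net)  # creates and zeroes the counter
workspace.CreateNet(model.net)
workspace.RunNet(model.net.Proto().name, num_iter=3)
print(workspace.FetchBlob("iter"), workspace.FetchBlob("LR"))
```

Each of the five examples below embeds this pattern inside a larger training setup.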

Example 1: AddTrainingOperators

# Required import: from caffe2.python import brew [as alias]
# Or: from caffe2.python.brew import iter [as alias]
def AddTrainingOperators(model, softmax, label):
    """Adds training operators to the model."""
    xent = model.LabelCrossEntropy([softmax, label], 'xent')
    # compute the expected loss
    loss = model.AveragedLoss(xent, "loss")
    # track the accuracy of the model
    AddAccuracy(model, softmax, label)
    # use the average loss we just computed to add gradient operators to the model
    model.AddGradientOperators([loss])
    # do a simple stochastic gradient descent
    ITER = brew.iter(model, "iter")
    # set the learning rate schedule; base_lr is negative because the update
    # below adds param_grad * LR to each parameter
    LR = model.LearningRate(
        ITER, "LR", base_lr=-0.1, policy="step", stepsize=1, gamma=0.999)
    # ONE is a constant value that is used in the gradient update. We only need
    # to create it once, so it is explicitly placed in param_init_net.
    ONE = model.param_init_net.ConstantFill([], "ONE", shape=[1], value=1.0)
    # Now, for each parameter, we do the gradient updates.
    for param in model.params:
        # Note how we get the gradient of each parameter - ModelHelper keeps
        # track of that.
        param_grad = model.param_to_grad[param]
        # The update is a simple weighted sum: param = param + param_grad * LR
        model.WeightedSum([param, ONE, param_grad, LR], param) 
Developer: Azure, Project: batch-shipyard, Lines of code: 26, Source file: mnist.py
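
As a usage illustration, here is a hedged end-to-end sketch of driving Example 1's AddTrainingOperators. The toy fully connected model, the random data, and the minimal AddAccuracy stand-in are assumptions made for this article; the original mnist.py builds a LeNet with brew and defines AddAccuracy separately.

```python
import numpy as np
from caffe2.python import brew, model_helper, workspace

def AddAccuracy(model, softmax, label):
    # minimal stand-in for the AddAccuracy helper defined elsewhere in mnist.py
    return brew.accuracy(model, [softmax, label], "accuracy")

# illustrative random inputs in place of the MNIST data reader
workspace.FeedBlob("data", np.random.rand(8, 16).astype(np.float32))
workspace.FeedBlob("label", np.random.randint(0, 10, size=8).astype(np.int32))

model = model_helper.ModelHelper(name="toy_train")
fc = brew.fc(model, "data", "fc", dim_in=16, dim_out=10)
softmax = brew.softmax(model, fc, "softmax")
AddTrainingOperators(model, softmax, "label")   # from Example 1 above

workspace.RunNetOnce(model.param_init_net)       # initialize parameters once
workspace.CreateNet(model.net)                   # build the training net
workspace.RunNet(model.net.Proto().name, num_iter=10)  # ten SGD steps
print("loss:", workspace.FetchBlob("loss"))
```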

Example 2: AddTrainingOperators

# Required import: from caffe2.python import brew [as alias]
# Or: from caffe2.python.brew import iter [as alias]
def AddTrainingOperators(model, softmax, label):
    """Adds training operators to the model."""
    xent = model.LabelCrossEntropy([softmax, label], 'xent')
    # compute the expected loss
    loss = model.AveragedLoss(xent, "loss")
    # track the accuracy of the model
    AddAccuracy(model, softmax, label)
    # use the average loss we just computed to add gradient operators to the
    # model
    model.AddGradientOperators([loss])
    # do a simple stochastic gradient descent
    ITER = brew.iter(model, "iter")
    # set the learning rate schedule
    LR = model.LearningRate(
        ITER, "LR", base_lr=-0.1, policy="step", stepsize=1, gamma=0.999)
    # ONE is a constant value that is used in the gradient update. We only need
    # to create it once, so it is explicitly placed in param_init_net.
    ONE = model.param_init_net.ConstantFill([], "ONE", shape=[1], value=1.0)
    # Now, for each parameter, we do the gradient updates.
    for param in model.params:
        # Note how we get the gradient of each parameter - ModelHelper keeps
        # track of that.
        param_grad = model.param_to_grad[param]
        # The update is a simple weighted sum: param = param + param_grad * LR
        model.WeightedSum([param, ONE, param_grad, LR], param) 
Developer: lanpa, Project: tensorboardX, Lines of code: 27, Source file: demo_caffe2.py

Example 3: add_parameter_update_ops

# Required import: from caffe2.python import brew [as alias]
# Or: from caffe2.python.brew import iter [as alias]
def add_parameter_update_ops(model):
    brew.add_weight_decay(model, weight_decay)
    iter = brew.iter(model, "iter")
    lr = model.net.LearningRate(
        [iter],
        "lr",
        base_lr=base_learning_rate,
        policy="step",
        stepsize=stepsize,
        gamma=0.1,
    )
    for param in model.GetParams():
        param_grad = model.param_to_grad[param]
        param_momentum = model.param_init_net.ConstantFill(
            [param], param + '_momentum', value=0.0
        )

        # Update param_grad and param_momentum in place
        model.net.MomentumSGDUpdate(
            [param_grad, param_momentum, lr, param],
            [param_grad, param_momentum, param],
            momentum=0.9,
            # Nesterov momentum typically works slightly better than
            # standard momentum
            nesterov=1,
        )


# In[ ]:


# SOLUTION for Part 7 
Developer: facebookarchive, Project: tutorials, Lines of code: 35, Source file: Multi-GPU_Training.py
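
Example 3 reads several module-level hyperparameters (weight_decay, base_learning_rate, stepsize) that the tutorial defines in its Part 2 configuration cell, which is not reproduced here; in that tutorial the function is passed as the parameter-update builder to Caffe2's data_parallel_model helpers. The values below are illustrative assumptions, not the tutorial's actual settings.

```python
# Illustrative values only; the real tutorial computes these from its own config.
train_data_count = 50000          # assumed size of the training set
total_batch_size = 128            # assumed global batch size across GPUs
weight_decay = 1e-4               # assumed L2 regularization strength
base_learning_rate = 0.1          # assumed starting learning rate
stepsize = int(10 * train_data_count / total_batch_size)  # LR drop interval
```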

Example 4: add_parameter_update_ops

# Required import: from caffe2.python import brew [as alias]
# Or: from caffe2.python.brew import iter [as alias]
def add_parameter_update_ops(model):
        """A simple parameter update code.

        :param model_helper.ModelHelper model: Model to add update parameters operators for.
        """
        iteration = brew.iter(model, "ITER")
        learning_rate = model.net.LearningRate([iteration], "LR", base_lr=0.01, policy="fixed")
        one = model.param_init_net.ConstantFill([], "ONE", shape=[1], value=1.0)
        for param in model.GetParams():
            grad = model.param_to_grad[param]
            model.WeightedSum([param, one, grad, learning_rate], param) 
Developer: HewlettPackard, Project: dlcookbook-dlbs, Lines of code: 13, Source file: model.py

Example 5: create_resnet50_model_ops

# Required import: from caffe2.python import brew [as alias]
# Or: from caffe2.python.brew import iter [as alias]
def create_resnet50_model_ops(model, loss_scale):
    raise NotImplementedError  # stub: remove this line once you implement the function


# ## Part 6: Make the Network Learn
# 
# 
# The Caffe2 model helper object has several built-in functions that help the network learn via backpropagation, adjusting the weights on each iteration:
# 
# * AddWeightDecay
# * Iter
# * net.LearningRate
# 
# Below is a reference implementation:
# 
# ```python
# def add_parameter_update_ops(model):
#     model.AddWeightDecay(weight_decay)
#     iter = model.Iter("iter")
#     lr = model.net.LearningRate(
#         [iter],
#         "lr",
#         base_lr=base_learning_rate,
#         policy="step",
#         stepsize=stepsize,
#         gamma=0.1,
#     )
#     # Momentum SGD update
#     for param in model.GetParams():
#         param_grad = model.param_to_grad[param]
#         param_momentum = model.param_init_net.ConstantFill(
#             [param], param + '_momentum', value=0.0
#         )
# 
#         # Update param_grad and param_momentum in place
#         model.net.MomentumSGDUpdate(
#             [param_grad, param_momentum, lr, param],
#             [param_grad, param_momentum, param],
#             momentum=0.9,
#             # Nesterov Momentum works slightly better than standard momentum
#             nesterov=1,
#         )
# ```
# 
# ### Task: Implement the add_parameter_update_ops Function
# Several of our configuration variables are used in this step; look back at the Configuration section in Part 2 to refresh your memory. We stubbed out the `add_parameter_update_ops` function; to finish it, use `model.AddWeightDecay` with `weight_decay`, calculate your stepsize with `int(10 * train_data_count / total_batch_size)` (or pull the value from the config), instantiate the learning iterations with `iter = model.Iter("iter")`, and use `model.net.LearningRate()` to finalize the parameter update operators. You can optionally add momentum to the SGD update; it may not make a difference in this small run, but it will matter once you scale up later.
# 
# Refer to the reference implementation for help on this task.
# 

# In[ ]:


# LAB WORK AREA FOR PART 6 
Developer: facebookarchive, Project: tutorials, Lines of code: 57, Source file: Multi-GPU_Training.py
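
The create_resnet50_model_ops stub at the top of Example 5 is left for the reader to fill in. Below is a hedged sketch in the spirit of Caffe2's own ResNet-50 trainer example; the blob names "data" and "label", the num_labels value, and the exact resnet.create_resnet50 signature are assumptions that should be checked against your Caffe2 version.

```python
from caffe2.python import brew
from caffe2.python.models import resnet

num_labels = 1000  # illustrative; the tutorial reads this from its config

def create_resnet50_model_ops(model, loss_scale):
    # build the ResNet-50 forward pass plus softmax cross-entropy loss
    [softmax, loss] = resnet.create_resnet50(
        model,
        "data",
        num_input_channels=3,
        num_labels=num_labels,
        label="label",
    )
    # scale the loss so gradients average correctly when split across GPUs
    loss = model.Scale(loss, scale=loss_scale)
    brew.accuracy(model, [softmax, "label"], "accuracy")
    return [loss]
```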


Note: The caffe2.python.brew.iter method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by their respective developers, and the copyright of the source code belongs to the original authors. Please consult each project's license before distributing or using the code, and do not republish without permission.