

Python optimizers.GradientDescentMomentum Code Examples

This article collects typical usage examples of `optimizers.GradientDescentMomentum` from the Python `neon` library. If you are wondering what `optimizers.GradientDescentMomentum` does or how to use it, the selected examples below may help. You can also explore further usage examples from the enclosing `neon.optimizers` module.


The sections below present 2 code examples of `optimizers.GradientDescentMomentum`, sorted by popularity by default.
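As background for both examples, the update rule that `GradientDescentMomentum` implements (stochastic gradient descent with momentum and optional weight decay) can be sketched in plain Python. This is an illustrative sketch of the math, not neon's actual code; neon's class additionally supports learning-rate schedules and gradient clipping.

```python
# Minimal sketch of SGD with momentum and weight decay: the update rule
# behind neon's GradientDescentMomentum (illustrative, not neon's code).

def momentum_step(param, grad, velocity, lr=0.01, momentum_coef=0.9, wdecay=0.0):
    """One update: velocity = m * velocity - lr * (grad + wdecay * param)."""
    velocity = momentum_coef * velocity - lr * (grad + wdecay * param)
    return param + velocity, velocity

# Example: minimize f(w) = w**2 (gradient 2*w) starting from w = 5.0.
w, v = 5.0, 0.0
for _ in range(200):
    w, v = momentum_step(w, 2 * w, v, lr=0.1, momentum_coef=0.9)
```

With momentum, the iterate oscillates around the minimum while its envelope decays geometrically, so after a few hundred steps `w` is close to 0.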

Example 1: create_model

# Required import: from neon import optimizers [as alias]
# Or: from neon.optimizers import GradientDescentMomentum [as alias]
# Imports assumed by this snippet (standard neon 2.x module layout):
from neon.layers import Conv, Pooling, Affine, Dropout
from neon.initializers import Gaussian, Constant
from neon.transforms import Rectlin, Softmax
from neon.models import Model
from neon.optimizers import GradientDescentMomentum

def create_model(args, hyper_params):
    # setup layers
    imagenet_layers = [
        Conv((11, 11, 64), init=Gaussian(scale=0.01), bias=Constant(0), activation=Rectlin(),
             padding=3, strides=4),
        Pooling(3, strides=2),
        Conv((5, 5, 192), init=Gaussian(scale=0.01), bias=Constant(1), activation=Rectlin(),
             padding=2),
        Pooling(3, strides=2),
        Conv((3, 3, 384), init=Gaussian(scale=0.03), bias=Constant(0), activation=Rectlin(),
             padding=1),
        Conv((3, 3, 256), init=Gaussian(scale=0.03), bias=Constant(1), activation=Rectlin(),
             padding=1),
        Conv((3, 3, 256), init=Gaussian(scale=0.03), bias=Constant(1), activation=Rectlin(),
             padding=1),
        Pooling(3, strides=2),
        Affine(nout=4096, init=Gaussian(scale=0.01), bias=Constant(1), activation=Rectlin()),
        Dropout(keep=0.5),
        Affine(nout=4096, init=Gaussian(scale=0.01), bias=Constant(1), activation=Rectlin()),
        # The following layers are used in Alexnet, but are not used in the new model
        Dropout(keep=0.5),
        # Affine(nout=1000, init=Gaussian(scale=0.01), bias=Constant(-7), activation=Softmax())
    ]
    
    target_layers = imagenet_layers + [    
        Affine(nout=4096, init=Gaussian(scale=0.005), bias=Constant(.1), activation=Rectlin()),
        Dropout(keep=0.5),
        Affine(nout=21, init=Gaussian(scale=0.01), bias=Constant(0), activation=Softmax())]
    
    # setup optimizer
    opt = GradientDescentMomentum(hyper_params.learning_rate_scale, 
                                  hyper_params.momentum, wdecay=0.0005,
                                  schedule=hyper_params.learning_rate_sched)
    
    # setup model
    if args.model_file:
        model = Model(layers=args.model_file)
    else:
        model = Model(layers=target_layers)
    
    return model, opt 
Author: NervanaSystems; Project: ModelZoo; Lines: 43; Source file: transfer_learning.py
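Example 1 passes a learning-rate schedule (`hyper_params.learning_rate_sched`) to the optimizer. A step schedule of the kind neon's `Schedule(step_config, change)` class provides, which multiplies the base rate by `change` at each configured epoch, can be sketched in plain Python. The function name and the step epochs below are illustrative, not taken from the source.

```python
# Minimal sketch of a step learning-rate schedule like neon's
# Schedule(step_config, change): the base rate is multiplied by `change`
# once for each configured step epoch already reached (illustrative).

def stepped_lr(base_lr, epoch, step_epochs=(22, 44, 65), change=0.1):
    """Return the learning rate in effect at `epoch`."""
    n_drops = sum(1 for e in step_epochs if epoch >= e)
    return base_lr * (change ** n_drops)
```

For instance, with a base rate of 0.01 the rate stays at 0.01 until epoch 22, then drops to 0.001, and so on at each step epoch.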

Example 2: get_function

# Required import: from neon import optimizers [as alias]
# Or: from neon.optimizers import GradientDescentMomentum [as alias]
# Imports assumed by this snippet (standard neon 2.x module layout):
import neon.transforms.activation
import neon.transforms.cost
import neon.optimizers.optimizer
from neon.optimizers import GradientDescentMomentum

def get_function(name):
    mapping = {}

    # activation
    mapping['relu'] = neon.transforms.activation.Rectlin
    mapping['sigmoid'] = neon.transforms.activation.Logistic
    mapping['tanh'] = neon.transforms.activation.Tanh
    mapping['linear'] = neon.transforms.activation.Identity

    # loss
    mapping['mse'] = neon.transforms.cost.MeanSquared
    mapping['binary_crossentropy'] = neon.transforms.cost.CrossEntropyBinary
    mapping['categorical_crossentropy'] = neon.transforms.cost.CrossEntropyMulti

    # optimizer
    def SGD(learning_rate=0.01, momentum_coef=0.9, gradient_clip_value=5):
        # Pass gradient_clip_value by keyword: GradientDescentMomentum's
        # third positional parameter is stochastic_round, not clipping.
        return GradientDescentMomentum(learning_rate, momentum_coef,
                                       gradient_clip_value=gradient_clip_value)

    mapping['sgd'] = SGD
    mapping['rmsprop'] = neon.optimizers.optimizer.RMSProp
    mapping['adam'] = neon.optimizers.optimizer.Adam
    mapping['adagrad'] = neon.optimizers.optimizer.Adagrad
    mapping['adadelta'] = neon.optimizers.optimizer.Adadelta

    mapped = mapping.get(name)
    if not mapped:
        raise Exception('No neon function found for "{}"'.format(name))

    return mapped 
Author: ECP-CANDLE; Project: Benchmarks; Lines: 31; Source file: p1b3_baseline_neon.py
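The name-to-callable dispatch pattern in Example 2 works independently of neon. A minimal self-contained sketch, with plain functions standing in for the neon classes (all names below are illustrative placeholders), looks like this:

```python
# Minimal sketch of the string-to-callable dispatch used in get_function,
# with plain functions standing in for the neon classes (illustrative).

def relu(x):
    return max(0.0, x)

def identity(x):
    return x

MAPPING = {'relu': relu, 'linear': identity}

def lookup(name):
    """Return the callable registered under `name`, or raise ValueError."""
    mapped = MAPPING.get(name)
    if mapped is None:
        raise ValueError('No function found for "{}"'.format(name))
    return mapped
```

Returning the callable (rather than calling it) lets the caller decide what constructor arguments to supply, which is exactly how Example 2 hands back optimizer classes.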


Note: the `neon.optimizers.GradientDescentMomentum` examples in this article were compiled from open-source projects hosted on GitHub, MSDocs, and similar platforms. Copyright of each code snippet remains with its original authors; use and redistribution should follow the corresponding project's License.