This article collects typical usage examples of keras.initializers in Python. If you are unsure how exactly keras.initializers is used in practice, the curated code examples below may help. You can also explore further usage examples of the keras package it belongs to.
The two code examples of keras.initializers shown below are sorted by popularity by default. You can upvote examples you like or find useful; your ratings help the system recommend better Python code examples.
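Before the full examples, here is a minimal sketch of the two ways keras.initializers typically appears in layer definitions: as a string alias, and as an initializer instance when non-default arguments are needed (the layer sizes are arbitrary placeholders):

import keras
from keras.layers import Dense

# String alias: Keras resolves 'glorot_uniform' to the Glorot uniform initializer
layer_a = Dense(64, kernel_initializer='glorot_uniform')

# Instance: pass an initializer object to set non-default arguments
layer_b = Dense(64,
                kernel_initializer=keras.initializers.TruncatedNormal(stddev=0.05),
                bias_initializer=keras.initializers.Constant(value=-2))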
Example 1: highway_keras
# Required import: import keras [as alias]
# Or: from keras import initializers [as alias]
import keras
from keras import backend as K
from keras.layers import Dense

def highway_keras(x):
    # Hand-written implementation of a highway layer.
    # Paper: Highway Networks (http://arxiv.org/abs/1505.00387).
    # Formulas:
    #   1. s = sigmoid(Wx + b)
    #   2. z = s * relu(Wx + b) + (1 - s) * x
    # x shape: [N * time_depth, sum(filters)]
    # Note (condensed from Table 1 / Figure 2 of the paper): the transform gate
    # biases were initialized to negative values (-2 and -4 in the paper) so the
    # network initially favors carrying the input through; -2 is used below.
    # Transform gate: s = sigmoid(Wx + b), with bias initialized to -2
    gate_transform = Dense(units=K.int_shape(x)[1],
                           activation='sigmoid',
                           use_bias=True,
                           kernel_initializer='glorot_uniform',
                           bias_initializer=keras.initializers.Constant(value=-2))(x)
    # Carry gate: the complement of the transform gate
    gate_cross = 1 - gate_transform
    # Block state: H = relu(Wx + b)
    block_state = Dense(units=K.int_shape(x)[1],
                        activation='relu',
                        use_bias=True,
                        kernel_initializer='glorot_uniform',
                        bias_initializer='zeros')(x)
    # z = s * H + (1 - s) * x
    high_way = gate_transform * block_state + gate_cross * x
    return high_way
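A minimal usage sketch (assuming a tf.keras-style backend where Python arithmetic on Keras tensors keeps the Keras graph intact; the input width of 128 is an arbitrary placeholder, and the highway layer preserves that width):

from keras.layers import Input
from keras.models import Model

inputs = Input(shape=(128,))
outputs = highway_keras(inputs)
model = Model(inputs=inputs, outputs=outputs)
model.summary()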
Example 2: get_deep_convnet
# Required import: import keras [as alias]
# Or: from keras import initializers [as alias]
import keras
from keras.layers import Input
from keras.models import Model
# ComplexConv1D and ComplexBN are complex-valued layers from the complexnn
# package (Deep Complex Networks); 'complex_independent' is a complex-weight
# initializer understood by those layers:
# from complexnn import ComplexConv1D, ComplexBN

def get_deep_convnet(window_size=4096, channels=2, output_size=84):
    inputs = Input(shape=(window_size, channels))
    outs = inputs
    # Repeated blocks: complex conv -> complex batch norm -> ReLU -> average pooling
    outs = ComplexConv1D(16, 6, strides=2, padding='same',
                         activation='linear',
                         kernel_initializer='complex_independent')(outs)
    outs = ComplexBN(axis=-1)(outs)
    outs = keras.layers.Activation('relu')(outs)
    outs = keras.layers.AveragePooling1D(pool_size=2, strides=2)(outs)
    outs = ComplexConv1D(32, 3, strides=2, padding='same',
                         activation='linear',
                         kernel_initializer='complex_independent')(outs)
    outs = ComplexBN(axis=-1)(outs)
    outs = keras.layers.Activation('relu')(outs)
    outs = keras.layers.AveragePooling1D(pool_size=2, strides=2)(outs)
    outs = ComplexConv1D(64, 3, strides=1, padding='same',
                         activation='linear',
                         kernel_initializer='complex_independent')(outs)
    outs = ComplexBN(axis=-1)(outs)
    outs = keras.layers.Activation('relu')(outs)
    outs = keras.layers.AveragePooling1D(pool_size=2, strides=2)(outs)
    outs = ComplexConv1D(64, 3, strides=1, padding='same',
                         activation='linear',
                         kernel_initializer='complex_independent')(outs)
    outs = ComplexBN(axis=-1)(outs)
    outs = keras.layers.Activation('relu')(outs)
    outs = keras.layers.AveragePooling1D(pool_size=2, strides=2)(outs)
    outs = ComplexConv1D(128, 3, strides=1, padding='same',
                         activation='relu',
                         kernel_initializer='complex_independent')(outs)
    outs = ComplexConv1D(128, 3, strides=1, padding='same',
                         activation='linear',
                         kernel_initializer='complex_independent')(outs)
    outs = ComplexBN(axis=-1)(outs)
    outs = keras.layers.Activation('relu')(outs)
    outs = keras.layers.AveragePooling1D(pool_size=2, strides=2)(outs)
    #outs = keras.layers.MaxPooling1D(pool_size=2)
    #outs = Permute([2, 1])
    outs = keras.layers.Flatten()(outs)
    outs = keras.layers.Dense(2048, activation='relu',
                              kernel_initializer='glorot_normal')(outs)
    # A bias of -5 makes the sigmoid outputs start near zero (sigmoid(-5) ~ 0.007),
    # which suits sparse multi-label targets
    predictions = keras.layers.Dense(output_size, activation='sigmoid',
                                     bias_initializer=keras.initializers.Constant(value=-5))(outs)
    model = Model(inputs=inputs, outputs=predictions)
    model.compile(optimizer=keras.optimizers.Adam(lr=1e-4),
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model
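A minimal usage sketch, assuming the complexnn layers above are importable; the random batch merely checks that input and output shapes line up:

import numpy as np

model = get_deep_convnet(window_size=4096, channels=2, output_size=84)
model.summary()
# One random batch: 8 windows of 4096 samples with 2 channels each
x = np.random.randn(8, 4096, 2).astype('float32')
probs = model.predict(x)
print(probs.shape)  # (8, 84): independent sigmoid scores per output class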