This article collects typical usage examples of the Python method tensorflow.keras.layers.GlobalAveragePooling1D. If you are wondering what layers.GlobalAveragePooling1D does, how to call it, or what it looks like in practice, the curated examples below may help. You can also browse the containing module, tensorflow.keras.layers, for further usage examples.
The following shows 2 code examples of the layers.GlobalAveragePooling1D method, sorted by popularity by default.
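Before the examples, here is a minimal standalone sketch (assuming TensorFlow 2.x) of what the layer computes: GlobalAveragePooling1D takes a 3D tensor of shape (batch, steps, features) and averages over the steps axis, returning a 2D tensor of shape (batch, features). The array values below are illustrative only.

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import GlobalAveragePooling1D

# A batch of 2 sequences, each with 3 time steps and 4 features.
x = np.arange(24, dtype=np.float32).reshape(2, 3, 4)

# The layer averages over the time-step axis (axis=1).
pooled = GlobalAveragePooling1D()(x)
print(pooled.shape)  # (2, 4)

# Without masking, this is equivalent to a plain mean over axis 1.
print(np.allclose(pooled.numpy(), x.mean(axis=1)))  # True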
Example 1: keras_estimator
# Required import: from tensorflow.keras import layers [as alias]
# Or: from tensorflow.keras.layers import GlobalAveragePooling1D [as alias]
# Running this example additionally requires: import tensorflow as tf;
# from tensorflow.keras import models; from tensorflow.keras.layers import Dense, Embedding
def keras_estimator(model_dir, config, learning_rate, vocab_size):
    """Creates a Keras Sequential model and wraps it in an Estimator.

    Args:
      model_dir: (str) File path where training files will be written.
      config: (tf.estimator.RunConfig) Configuration options to save model.
      learning_rate: (float) Learning rate.
      vocab_size: (int) Size of the vocabulary in number of words.

    Returns:
      A tf.estimator.Estimator built from the compiled Keras model.
    """
    model = models.Sequential()
    model.add(Embedding(vocab_size, 16))
    model.add(GlobalAveragePooling1D())
    model.add(Dense(16, activation=tf.nn.relu))
    model.add(Dense(1, activation=tf.nn.sigmoid))

    # Compile model with learning parameters.
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
    model.compile(
        optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])

    # Convert the compiled Keras model into an Estimator.
    estimator = tf.keras.estimator.model_to_estimator(
        keras_model=model, model_dir=model_dir, config=config)
    return estimator
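A hedged usage sketch for this example. It targets the TF 1.x-era Estimator API that the snippet itself uses; the model directory, the numeric values, and train_input_fn are hypothetical.

# Hypothetical usage of keras_estimator.
run_config = tf.estimator.RunConfig(save_checkpoints_steps=500)
estimator = keras_estimator(
    model_dir='/tmp/text_model',   # hypothetical output directory
    config=run_config,
    learning_rate=0.001,
    vocab_size=10000)

# train_input_fn is a hypothetical input_fn yielding (features, labels)
# batches of integer word-id sequences and binary labels.
estimator.train(input_fn=train_input_fn, max_steps=1000)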
Example 2: _build_model
# Required import: from tensorflow.keras import layers [as alias]
# Or: from tensorflow.keras.layers import GlobalAveragePooling1D [as alias]
# Running this example additionally requires: import tensorflow as tf; from tensorflow import keras
def _build_model(self):
    # Embed integer word ids, average over the sequence, then classify.
    model = keras.Sequential([
        layers.Embedding(self.encoder.vocab_size, self.embedding_dim),
        layers.GlobalAveragePooling1D(),
        layers.Dense(16, activation='relu'),
        layers.Dense(1)  # single logit; the loss below uses from_logits=True
    ])
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    model.summary()
    return model
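A brief usage sketch for this second example. The surrounding class is not shown in the source, so classifier, its encoder and embedding_dim attributes, and the train_batches/test_batches datasets are hypothetical; the pattern assumes padded batches of (integer id sequence, binary label) pairs, as in the TFDS text-classification tutorials this snippet resembles.

# Hypothetical usage: `classifier` is an instance of the class that defines
# _build_model, with `encoder` and `embedding_dim` attributes already set.
model = classifier._build_model()

# `train_batches` / `test_batches` are assumed tf.data.Dataset objects of
# (padded integer id sequences, binary label) pairs, e.g. built with
# dataset.padded_batch(32).
history = model.fit(train_batches, epochs=10, validation_data=test_batches)
loss, accuracy = model.evaluate(test_batches)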