This article collects typical usage examples of Python's tensorflow.keras.layers.GlobalAveragePooling1D. If you are unsure what layers.GlobalAveragePooling1D does or how to use it, the curated code examples below may help. You can also explore further usage examples of the module it belongs to, tensorflow.keras.layers.
The following shows 2 code examples of the layers.GlobalAveragePooling1D method, sorted by popularity by default.
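Before the extracted examples, here is a minimal standalone sketch (my own illustration, not taken from the examples below) of what the layer computes: GlobalAveragePooling1D averages over the time-step axis, reducing a (batch, steps, features) tensor to (batch, features).

import numpy as np
from tensorflow.keras.layers import GlobalAveragePooling1D

# (batch=2, steps=3, features=4): each feature is averaged over the 3 steps.
x = np.arange(2 * 3 * 4, dtype='float32').reshape(2, 3, 4)
y = GlobalAveragePooling1D()(x)
print(y.shape)  # (2, 4)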
Example 1: keras_estimator
# Required import: from tensorflow.keras import layers [as alias]
# Or: from tensorflow.keras.layers import GlobalAveragePooling1D [as alias]
import tensorflow as tf
from tensorflow.keras import models
from tensorflow.keras.layers import Dense, Embedding, GlobalAveragePooling1D


def keras_estimator(model_dir, config, learning_rate, vocab_size):
    """Creates a Keras Sequential model and wraps it in an Estimator.

    Args:
      model_dir: (str) file path where training files will be written.
      config: (tf.estimator.RunConfig) Configuration options to save model.
      learning_rate: (float) Learning rate for the Adam optimizer.
      vocab_size: (int) Size of the vocabulary in number of words.

    Returns:
      A tf.estimator.Estimator wrapping the compiled keras.Model.
    """
    model = models.Sequential()
    model.add(Embedding(vocab_size, 16))
    model.add(GlobalAveragePooling1D())
    model.add(Dense(16, activation=tf.nn.relu))
    model.add(Dense(1, activation=tf.nn.sigmoid))

    # Compile model with learning parameters (tf.train.AdamOptimizer is the TF 1.x API).
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
    model.compile(
        optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])
    estimator = tf.keras.estimator.model_to_estimator(
        keras_model=model, model_dir=model_dir, config=config)
    return estimator
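A hypothetical usage sketch for Example 1 (the model directory, vocabulary size and learning rate below are placeholders, not values from the original source; it assumes the TF 1.x Estimator API that the example itself uses):

run_config = tf.estimator.RunConfig(save_checkpoints_steps=500)
estimator = keras_estimator(model_dir='/tmp/text_model',
                            config=run_config,
                            learning_rate=0.001,
                            vocab_size=10000)
# estimator.train(input_fn=...)  # input_fn: a user-supplied tf.data pipeline of (features, labels)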
Example 2: _build_model
# Required import: from tensorflow.keras import layers [as alias]
# Or: from tensorflow.keras.layers import GlobalAveragePooling1D [as alias]
def _build_model(self):
    model = keras.Sequential([
        layers.Embedding(self.encoder.vocab_size, self.embedding_dim),
        layers.GlobalAveragePooling1D(),
        layers.Dense(16, activation='relu'),
        layers.Dense(1)
    ])
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    model.summary()
    return model
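A note on the design of Example 2: the final layers.Dense(1) has no activation, so the model outputs raw logits, and BinaryCrossentropy(from_logits=True) applies the sigmoid internally, which is numerically more stable than stacking a sigmoid layer and using from_logits=False. To turn predictions into probabilities you would apply the sigmoid yourself, for example (a sketch, where x_batch is an assumed batch of integer-encoded sequences, not from the original source):

probs = tf.sigmoid(model.predict(x_batch))  # x_batch: placeholder batch of encoded text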