Usage
experimental_distribute_values_from_function(
    value_fn
)
Args
- value_fn: A function to run to generate values. It is called on each
  replica with tf.distribute.ValueContext as the sole argument. It must
  return a Tensor or a type that can be converted to a Tensor.
Returns
- A tf.distribute.DistributedValues containing a value for each replica.
Generates tf.distribute.DistributedValues from value_fn.

This function is meant to generate tf.distribute.DistributedValues to pass
into run, reduce, or other methods that take distributed values, when you
are not using datasets.
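For instance, a minimal sketch (assuming a two-GPU MirroredStrategy, as in
the examples below; the names value_fn and square are illustrative) of
passing the generated values into run and reducing the per-replica results:

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
  # Give each replica a different input value.
  return tf.constant(float(ctx.replica_id_in_sync_group))
distributed_values = (
    strategy.experimental_distribute_values_from_function(value_fn))

def square(x):
  return x * x

# Each replica squares its own component; the per-replica results are
# then summed across replicas.
per_replica_result = strategy.run(square, args=(distributed_values,))
total = strategy.reduce(
    tf.distribute.ReduceOp.SUM, per_replica_result, axis=None)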
Example usage:
- Return a constant value per replica:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
  return tf.constant(1.)
distributed_values = (
    strategy.experimental_distribute_values_from_function(
        value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,
 <tf.Tensor: shape=(), dtype=float32, numpy=1.0>)
- Distribute values from an array based on replica_id:
import numpy as np

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
array_value = np.array([3., 2., 1.])
def value_fn(ctx):
  # Pick this replica's entry out of the array.
  return array_value[ctx.replica_id_in_sync_group]
distributed_values = (
    strategy.experimental_distribute_values_from_function(
        value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(3.0, 2.0)
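Values generated this way can also be reduced directly, without a run step;
a small sketch under the same two-GPU assumption:

# Average the two per-replica values (3.0 and 2.0) across replicas.
mean_value = strategy.reduce(
    tf.distribute.ReduceOp.MEAN, distributed_values, axis=None)
# mean_value should be a scalar tensor equal to 2.5 under this assumption.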
- Specify values using num_replicas_in_sync:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
  return ctx.num_replicas_in_sync
distributed_values = (
    strategy.experimental_distribute_values_from_function(
        value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(2, 2)
- Place values on devices and distribute:
strategy = tf.distribute.TPUStrategy()
worker_devices = strategy.extended.worker_devices
multiple_values = []
# Place one value on each worker device ahead of time.
for i in range(strategy.num_replicas_in_sync):
  with tf.device(worker_devices[i]):
    multiple_values.append(tf.constant(1.0))
def value_fn(ctx):
  # Hand each replica the value already placed on its device.
  return multiple_values[ctx.replica_id_in_sync_group]
distributed_values = strategy.experimental_distribute_values_from_function(
    value_fn)
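A follow-up sketch of consuming these values; assuming, per the usual TPU
pattern, that strategy.run is invoked from inside a tf.function:

@tf.function
def step():
  # Each replica squares its own component of distributed_values.
  return strategy.run(tf.math.square, args=(distributed_values,))

per_replica_result = step()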
Related
- Python tf.distribute.experimental.TPUStrategy.experimental_distribute_dataset usage and code examples
- Python tf.distribute.experimental.TPUStrategy.gather usage and code examples
- Python tf.distribute.experimental.TPUStrategy.reduce usage and code examples
- Python tf.distribute.experimental.TPUStrategy.scope usage and code examples
- Python tf.distribute.experimental.TPUStrategy usage and code examples
- Python tf.distribute.experimental.MultiWorkerMirroredStrategy.gather usage and code examples
- Python tf.distribute.experimental.MultiWorkerMirroredStrategy usage and code examples
- Python tf.distribute.experimental.rpc.Server.create usage and code examples
- Python tf.distribute.experimental.MultiWorkerMirroredStrategy.experimental_distribute_dataset usage and code examples
- Python tf.distribute.experimental.partitioners.Partitioner.__call__ usage and code examples
- Python tf.distribute.experimental.MultiWorkerMirroredStrategy.run usage and code examples
- Python tf.distribute.experimental.partitioners.MaxSizePartitioner.__call__ usage and code examples
- Python tf.distribute.experimental.partitioners.FixedShardsPartitioner usage and code examples
- Python tf.distribute.experimental.ParameterServerStrategy.gather usage and code examples
- Python tf.distribute.experimental.MultiWorkerMirroredStrategy.scope usage and code examples
- Python tf.distribute.experimental.MultiWorkerMirroredStrategy.reduce usage and code examples
- Python tf.distribute.experimental.partitioners.MinSizePartitioner usage and code examples
- Python tf.distribute.experimental.ParameterServerStrategy.experimental_distribute_values_from_function usage and code examples
- Python tf.distribute.experimental.CentralStorageStrategy.experimental_distribute_values_from_function usage and code examples
- Python tf.distribute.experimental.CentralStorageStrategy usage and code examples
Note: This article was compiled by 純淨天空 from the original English work tf.distribute.experimental.TPUStrategy.experimental_distribute_values_from_function on tensorflow.org. Unless otherwise stated, copyright in the original code remains with its authors; please do not reproduce or copy this translation without permission.