Usage
fetch(
    val
)
Args
- val: The value to fetch the results from. If this is a structure of tf.distribute.experimental.coordinator.RemoteValue, fetch() will be called on the individual tf.distribute.experimental.coordinator.RemoteValue to get the result.
Returns
- If val is a tf.distribute.experimental.coordinator.RemoteValue or a structure of tf.distribute.experimental.coordinator.RemoteValue, return the fetched tf.distribute.experimental.coordinator.RemoteValue values immediately if they are available, or block the call until they are available, and return the fetched values in the same structure. If val is of other types, return it as-is.
Blocking call to fetch results from the remote values.

This is a wrapper around tf.distribute.experimental.coordinator.RemoteValue.fetch for a structure of RemoteValues; it returns the execution results of the RemoteValues. If they are not yet ready, it waits for them while blocking the caller.
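The blocking semantics described above can be illustrated with a small pure-Python analogy (this is not TensorFlow code): concurrent.futures.Future.result() plays a role similar to RemoteValue.fetch(), blocking until the scheduled computation completes, and the results come back in the same structure the values were held in.

```python
# Sketch: mimics the blocking-fetch pattern with concurrent.futures.
# Future.result() stands in for RemoteValue.fetch(); this is an analogy,
# not the TensorFlow implementation.
from concurrent.futures import ThreadPoolExecutor
import time

def slow_square(x):
    time.sleep(0.1)  # simulate remote work
    return x * x

with ThreadPoolExecutor(max_workers=2) as pool:
    # Schedule work; each Future is analogous to a RemoteValue.
    futures = {"a": pool.submit(slow_square, 3), "b": pool.submit(slow_square, 4)}
    # "fetch": block until every value is available, preserving the structure.
    results = {k: f.result() for k, f in futures.items()}

print(results)  # {'a': 9, 'b': 16}
```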
Example:
strategy = ...
coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(
    strategy)
def dataset_fn():
  return tf.data.Dataset.from_tensor_slices([1, 1, 1])
with strategy.scope():
  v = tf.Variable(initial_value=0)
@tf.function
def worker_fn(iterator):
  def replica_fn(x):
    v.assign_add(x)
    return v.read_value()
  return strategy.run(replica_fn, args=(next(iterator),))
distributed_dataset = coordinator.create_per_worker_dataset(dataset_fn)
distributed_iterator = iter(distributed_dataset)
result = coordinator.schedule(worker_fn, args=(distributed_iterator,))
assert coordinator.fetch(result) == 1
Note: This article is compiled from the original English work tf.distribute.experimental.coordinator.ClusterCoordinator.fetch on tensorflow.org. Unless otherwise stated, copyright of the original code belongs to its author; do not reproduce or copy this translation without permission.
