

Python tf.distribute.TPUStrategy.experimental_replicate_to_logical_devices Usage and Code Example


Usage

experimental_replicate_to_logical_devices(
    tensor
)

Returns

  • Annotated tensor with the same value as tensor.

Adds an annotation so that tensor will be replicated to all logical devices.

This adds an annotation to tensor specifying that operations on tensor will be invoked on all logical devices.

import tensorflow as tf

# Initializing TPU system with 2 logical devices and 4 replicas.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
topology = tf.tpu.experimental.initialize_tpu_system(resolver)
device_assignment = tf.tpu.experimental.DeviceAssignment.build(
    topology,
    computation_shape=[1, 1, 1, 2],
    num_replicas=4)
strategy = tf.distribute.TPUStrategy(
    resolver, experimental_device_assignment=device_assignment)

iterator = iter(inputs)

@tf.function()
def step_fn(inputs):
  images, labels = inputs
  # Split the image batch across the 2 logical devices of each replica
  # (a 2-way split along the width dimension of the NHWC batch).
  images = strategy.experimental_split_to_logical_devices(
      images, [1, 1, 2, 1])

  # model() will be executed on the 2 logical devices of each replica,
  # with `images` split 2 ways.
  output = model(images)

  # For loss calculation, all logical devices share the same logits
  # and labels.
  labels = strategy.experimental_replicate_to_logical_devices(labels)
  output = strategy.experimental_replicate_to_logical_devices(output)
  loss = loss_fn(labels, output)

  return loss

strategy.run(step_fn, args=(next(iterator),))
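
The example above assumes that model, loss_fn, and inputs are already defined before step_fn runs. The sketch below shows one way those pieces might be set up; the layer sizes, dataset shapes, and batch size are illustrative assumptions and are not part of the original documentation.

# Build the model under the strategy scope so its variables are created on
# the TPU devices.
with strategy.scope():
  model = tf.keras.Sequential([
      tf.keras.layers.Conv2D(32, 3, activation='relu'),
      tf.keras.layers.GlobalAveragePooling2D(),
      tf.keras.layers.Dense(10),
  ])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# A small random (images, labels) dataset, distributed so that each call to
# strategy.run feeds every replica its own per-replica batch.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.uniform([64, 32, 32, 3]),
     tf.random.uniform([64], maxval=10, dtype=tf.int32))).batch(8)
inputs = strategy.experimental_distribute_dataset(dataset)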

Args

  • tensor: Input tensor to annotate.



Note: This article was selected and compiled by 纯净天空 from the original English work tf.distribute.TPUStrategy.experimental_replicate_to_logical_devices on tensorflow.org. Unless otherwise specifically stated, the copyright of the original code belongs to the original author; please do not reproduce or copy this translation without permission or authorization.