

Python context.context Function Code Examples

This article collects typical usage examples of the Python function tensorflow.python.eager.context.context. If you are unsure what context does, how to call it, or what real-world uses look like, the curated examples below should help.


Fifteen code examples of the context function are shown below, sorted by popularity by default.

Example 1: set_optimizer_experimental_options

def set_optimizer_experimental_options(options):
  """Set experimental optimizer options.

  Note that optimizations are only applied in graph mode (within tf.function).
  In addition, as these are experimental options, the list is subject to change.

  Args:
    options: Dictionary of experimental optimizer options to configure.
      Valid keys:
      - layout_optimizer: Optimize tensor layouts
        e.g. This will try to use NCHW layout on GPU which is faster.
      - constant_folding: Fold constants
        Statically infer the value of tensors when possible, and materialize the
        result using constants.
      - shape_optimization: Simplify computations made on shapes.
      - remapping: Remap subgraphs onto more efficient implementations.
      - arithmetic_optimization: Simplify arithmetic ops with common
        sub-expression elimination and arithmetic simplification.
      - dependency_optimization: Control dependency optimizations. Remove
        redundant control dependencies, which may enable other optimization.
        This optimizer is also essential for pruning Identity and NoOp nodes.
      - loop_optimization: Loop optimizations.
      - function_optimization: Function optimizations and inlining.
      - debug_stripper: Strips debug-related nodes from the graph.
      - disable_model_pruning: Disable removal of unnecessary ops from the graph
      - scoped_allocator_optimization: Try to allocate some independent Op
        outputs contiguously in order to merge or eliminate downstream Ops.
      - pin_to_host_optimization: Force small ops onto the CPU.
      - implementation_selector: Enable the swap of kernel implementations based
        on the device placement.
      - disable_meta_optimizer: Disable the entire meta optimizer.
      - min_graph_nodes: The minimum number of nodes a graph must have for the
        optimizer to run on it. For smaller graphs, optimization is skipped.
  """
  context.context().set_optimizer_experimental_options(options)
Contributor: adit-chandra, Project: tensorflow, Lines: 35, Source: config.py
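
A minimal usage sketch of this entry point through its public alias (assuming tf.config.optimizer.set_experimental_options and get_experimental_options, which wrap the function above in TensorFlow 2.x; key names follow the docstring):

import tensorflow as tf

# Turn on two of the documented optimizations; unspecified keys keep their defaults.
tf.config.optimizer.set_experimental_options({
    'layout_optimizer': True,
    'constant_folding': True,
})
# Inspect the options currently in effect.
print(tf.config.optimizer.get_experimental_options())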

Example 2: set_visible_devices

def set_visible_devices(devices, device_type=None):
  """Set the list of visible devices.

  Sets the list of PhysicalDevices to be marked as visible to the runtime. For
  any device that is not marked as visible, TensorFlow will not allocate memory
  on it and will not be able to place any operations on it, as no LogicalDevice
  will be created for it. By default all discovered devices are marked as
  visible.

  The following example demonstrates disabling the first GPU on the machine.

  ```python
  physical_devices = config.experimental.list_physical_devices('GPU')
  assert len(physical_devices) > 0, "Not enough GPU hardware devices available"
  # Disable first GPU
  tf.config.experimental.set_visible_devices(physical_devices[1:], 'GPU')
  logical_devices = config.experimental.list_logical_devices('GPU')
  # Logical device was not created for first GPU
  assert len(logical_devices) == len(physical_devices) - 1
  ```

  Args:
    devices: (optional) List of PhysicalDevice objects to make visible
    device_type: (optional) Device types to limit visibility configuration to.
      Other device types will be left unaltered.
  """
  context.context().set_visible_devices(devices, device_type)
Contributor: aritratony, Project: tensorflow, Lines: 27, Source: config.py

Example 3: testBadConstructorArgs

  def testBadConstructorArgs(self):
    ctx = context.context()
    handle = ctx._handle
    device = ctx.device_name
    # Missing context.
    with self.assertRaisesRegexp(
        TypeError, r"Required argument 'context' \(pos 2\) not found"):
      ops.EagerTensor(1, device=device)
    # Missing device.
    with self.assertRaisesRegexp(
        TypeError, r"Required argument 'device' \(pos 3\) not found"):
      ops.EagerTensor(1, context=handle)
    # Bad dtype type.
    with self.assertRaisesRegexp(TypeError,
                                 "Expecting a DataType value for dtype. Got"):
      ops.EagerTensor(1, context=handle, device=device, dtype="1")
    # Following errors happen when trying to copy to GPU.
    if not context.context().num_gpus():
      self.skipTest("No GPUs found")
    with ops.device("/device:GPU:0"):
      device = ctx.device_name
      # Bad context.
      with self.assertRaisesRegexp(
          TypeError, "Expecting a PyCapsule encoded context handle. Got"):
        ops.EagerTensor(1.0, context=1, device=device)
      # Bad device.
      with self.assertRaisesRegexp(
          TypeError, "Error parsing device argument to CopyToDevice"):
        ops.EagerTensor(1.0, context=handle, device=1)
Contributor: marcomarchesi, Project: tensorflow, Lines: 29, Source: tensor_test.py

Example 4: testGpuInvalidConfig

  def testGpuInvalidConfig(self):
    gpus = config.list_physical_devices('GPU')
    self.assertNotEqual(len(gpus), 0)

    for gpu in gpus:
      config.set_memory_growth(gpu, True)

    c = context.context().config
    self.assertTrue(c.gpu_options.allow_growth)

    with self.assertRaisesRegexp(ValueError, 'memory limit'):
      config.set_virtual_device_configuration(gpus[-1], [
          context.VirtualDeviceConfiguration(),
          context.VirtualDeviceConfiguration()
      ])

    self.assertIsNone(config.get_virtual_device_configuration(gpus[-1]))
    config.set_virtual_device_configuration(gpus[-1], [
        context.VirtualDeviceConfiguration(memory_limit=10),
        context.VirtualDeviceConfiguration(memory_limit=10)
    ])

    c = context.context().config
    self.assertFalse(c.gpu_options.allow_growth)

    with self.assertRaisesRegexp(ValueError, 'virtual devices'):
      config.set_memory_growth(gpus[-1], False)
Contributor: adit-chandra, Project: tensorflow, Lines: 27, Source: config_test.py
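
The wrapper exercised by this test can also be used on its own; a minimal sketch (assuming tf.config.experimental.set_virtual_device_configuration and VirtualDeviceConfiguration, as exposed in the same TensorFlow versions) splits one physical GPU into two fixed-size logical devices:

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  # Virtual devices must be configured before the runtime initializes the GPUs.
  # memory_limit is in MB; two 1 GB logical devices is an illustrative choice.
  tf.config.experimental.set_virtual_device_configuration(
      gpus[0],
      [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024),
       tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
  logical_gpus = tf.config.experimental.list_logical_devices('GPU')
  print(len(gpus), 'physical GPUs,', len(logical_gpus), 'logical GPUs')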

Example 5: __init__

  def __init__(self, dataset):
    """Creates a new iterator over the given dataset.

    For example:
    ```python
    dataset = tf.data.Dataset.range(4)
    for x in Iterator(dataset):
      print(x)
    ```

    Tensors produced will be placed on the device on which this iterator object
    was created.

    Args:
      dataset: A `tf.data.Dataset` object.

    Raises:
      TypeError: If `dataset` is an unsupported type.
      RuntimeError: When invoked without eager execution enabled.
    """
    if not context.context().device_spec.device_type:
      is_remote_device = False
    else:
      is_remote_device = context.context().device_spec.device_type != "CPU"
    if is_remote_device:
      with ops.device(None):
        # Let the placer figure out where to place the various functions etc.
        # created by the CopyToDeviceDataset.
        dataset = dataset.apply(prefetching_ops.copy_to_device(
            context.context().device_name))
        dataset = dataset.prefetch(1)
    super(Iterator, self).__init__(dataset)
Contributor: Albert-Z-Guo, Project: tensorflow, Lines: 32, Source: datasets.py

Example 6: testJit

  def testJit(self):
    self.assertEqual(config.get_optimizer_jit(), False)

    # the following function should cause Op fusion to occur. However, there is
    # unfortunately no straightforward way to ensure this. We will just have to
    # settle for creating a test that can trigger JIT.
    @def_function.function
    def fun(a, b):
      c = a * b
      d = c + a
      return d

    a = constant_op.constant([2., 2.])
    b = constant_op.constant([2., 2.])

    self.evaluate(fun(a, b))

    config.set_optimizer_jit(True)
    self.assertEqual(config.get_optimizer_jit(), True)
    self.assertEqual(config.get_optimizer_jit(),
                     context.context().optimizer_jit)

    self.evaluate(fun(a, b))

    config.set_optimizer_jit(False)
    self.assertEqual(config.get_optimizer_jit(), False)
    self.assertEqual(config.get_optimizer_jit(),
                     context.context().optimizer_jit)

    self.evaluate(fun(a, b))
Contributor: adit-chandra, Project: tensorflow, Lines: 30, Source: config_test.py
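
Outside the test harness the same switch is a one-liner; a sketch assuming the public wrapper tf.config.optimizer.set_jit, which backs the config.set_optimizer_jit call above:

import tensorflow as tf

# Ask the optimizer to JIT-compile graphs built by tf.function.
tf.config.optimizer.set_jit(True)

@tf.function
def fun(a, b):
  c = a * b
  return c + a

print(fun(tf.constant([2., 2.]), tf.constant([2., 2.])))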

Example 7: testSoftPlacement

  def testSoftPlacement(self):
    if context.executing_eagerly():
      self.assertTrue(config.get_soft_device_placement())
    else:
      self.assertFalse(config.get_soft_device_placement())

    @def_function.function
    def mod():
      with ops.device('/device:GPU:0'):
        a = constant_op.constant(1.0)
        b = constant_op.constant(1.0)
        return math_ops.mod(a, b)

    config.set_soft_device_placement(True)
    self.assertEqual(config.get_soft_device_placement(), True)
    self.assertEqual(
        config.get_soft_device_placement(),
        context.context().soft_device_placement)

    # Since soft placement is enabled, the mod operation should work with CPU
    mod()

    config.set_soft_device_placement(False)
    self.assertEqual(config.get_soft_device_placement(), False)
    self.assertEqual(
        config.get_soft_device_placement(),
        context.context().soft_device_placement)

    # Since soft placement is disabled, the mod operation should fail on GPU
    with self.assertRaises(errors.InvalidArgumentError):
      mod()
Contributor: adit-chandra, Project: tensorflow, Lines: 31, Source: config_test.py
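
The behavior being tested can be reproduced directly with the public wrapper (assuming tf.config.set_soft_device_placement): with soft placement enabled, an op pinned to an unavailable device falls back to the CPU instead of raising InvalidArgumentError.

import tensorflow as tf

tf.config.set_soft_device_placement(True)

@tf.function
def mod():
  with tf.device('/device:GPU:0'):
    a = tf.constant(1.0)
    b = tf.constant(1.0)
    return tf.math.mod(a, b)

# Succeeds even without a usable GPU kernel, because soft placement is enabled.
print(mod())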

Example 8: testCopyScope

  def testCopyScope(self):
    if not context.context().num_gpus():
      self.skipTest('No GPUs found')
    constant = constant_op.constant(1.0)
    with ops.device('gpu:0'):
      with context.context().device_policy(context.DEVICE_PLACEMENT_SILENT):
        c = constant + 1.0
    self.assertAllEqual(c, 2.0)
Contributor: StephenOman, Project: tensorflow, Lines: 8, Source: core_test.py

Example 9: __del__

  def __del__(self):
    if self._created_eagerly:
      try:
        context.context().end_step()
      except AttributeError:
        pass
      except TypeError:
        pass
Contributor: adit-chandra, Project: tensorflow, Lines: 8, Source: backprop.py

Example 10: __init__

  def __init__(self, dataset):
    """Creates a new iterator over the given dataset.

    For example:
    ```python
    dataset = tf.data.Dataset.range(4)
    for x in Iterator(dataset):
      print(x)
    ```

    Tensors produced will be placed on the device on which this iterator object
    was created.

    Args:
      dataset: A `tf.data.Dataset` object.

    Raises:
      TypeError: If `dataset` is an unsupported type.
      RuntimeError: When invoked without eager execution enabled.
    """
    if isinstance(dataset, prefetching_ops._PrefetchToDeviceDataset):  # pylint: disable=protected-access
      raise TypeError(
          "`tf.contrib.data.prefetch_to_device()` is not compatible with "
          "`tf.contrib.eager.Iterator`. Use `for ... in dataset:` to iterate "
          "over the dataset instead.")

    super(Iterator, self).__init__(dataset)
    if not context.context().device_spec.device_type:
      is_remote_device = False
    else:
      is_remote_device = context.context().device_spec.device_type != "CPU"
    self._buffer_resource_handle = None
    if is_remote_device:
      with ops.device("/device:CPU:0"):
        iter_string_handle = gen_dataset_ops.iterator_to_string_handle(
            self._resource)

        @function.Defun(dtypes.string)
        def remote_fn(h):
          remote_iterator = iterator_ops.Iterator.from_string_handle(
              h, self.output_types, self.output_shapes, self.output_classes)
          return remote_iterator.get_next()

        remote_fn.add_to_graph(None)
        target = constant_op.constant("/device:CPU:0")
      with ops.device(self._device):
        self._buffer_resource_handle = prefetching_ops.function_buffering_resource(  # pylint: disable=line-too-long
            string_arg=iter_string_handle,
            output_types=self._flat_output_types,
            f=remote_fn,
            target_device=target,
            buffer_size=10,
            container="",
            shared_name=_generate_shared_name(
                "contrib_eager_iterator_function_buffer_resource"))
        self._buffer_resource_deleter = resource_variable_ops.EagerResourceDeleter(  # pylint: disable=line-too-long
            handle=self._buffer_resource_handle,
            handle_device=self._device)
Contributor: Eagle732, Project: tensorflow, Lines: 58, Source: datasets.py

Example 11: testV1CompatibilityDummyInivisibleDeviceList

  def testV1CompatibilityDummyInivisibleDeviceList(self):
    gpus = config.list_physical_devices('GPU')
    if gpus:
      self.skipTest('Test requires no GPUs')

    # Ensure GPU options left untouched on CPU only environments
    context.context()._physical_devices = None
    context.context()._config = config_pb2.ConfigProto(
        gpu_options=config_pb2.GPUOptions(visible_device_list='0'))
    new_config = context.context().config
    self.assertEqual(new_config.gpu_options.visible_device_list, '0')
Contributor: aritratony, Project: tensorflow, Lines: 11, Source: config_test.py

Example 12: as_default

  def as_default(self):
    """Enables summary writing within a `with` block."""
    if self._resource is None:
      yield self
    else:
      old = context.context().summary_writer_resource
      context.context().summary_writer_resource = self._resource
      yield self
      # Flushes the summary writer in eager mode or in graph functions, but not
      # in legacy graph mode (you're on your own there).
      self.flush()
      context.context().summary_writer_resource = old
Contributor: kimr843, Project: tensorflow, Lines: 12, Source: summary_ops_v2.py
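
In user code this context manager is normally entered through the tf.summary v2 API that summary_ops_v2.py implements; a minimal sketch (assuming tf.summary.create_file_writer and tf.summary.scalar, and that './logs' is writable):

import tensorflow as tf

writer = tf.summary.create_file_writer('./logs')
with writer.as_default():          # swaps in this writer's resource, as above
  for step in range(3):
    tf.summary.scalar('loss', 0.1 * step, step=step)
writer.flush()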

Example 13: testBenchmarks

  def testBenchmarks(self):
    # This isn't actually a test, but benchmarks packaged as a test
    # so that continuous integration runs catch any breakages.
    print(context.context())
    benchmark_create_tensor(FLAGS.iters or 30000)
    benchmark_matmul([2, 2], FLAGS.iters or 30000)
    benchmark_matmul([100, 28 * 28], FLAGS.iters or 1000)

    if context.context().num_gpus() > 0:
      print("---- RUNNING ON GPU NOW ----")
      benchmark_matmul([2, 2], FLAGS.iters or 30000, use_gpu=True)
      benchmark_matmul([100, 28 * 28], FLAGS.iters or 1000, use_gpu=True)
Contributor: 1000sprites, Project: tensorflow, Lines: 12, Source: benchmarks_test.py

Example 14: as_default

  def as_default(self):
    if self._resource is None:
      yield
    else:
      old = context.context().summary_writer_resource
      context.context().summary_writer_resource = self._resource
      yield
      # Flushes the summary writer in eager mode or in graph functions, but not
      # in legacy graph mode (you're on your own there).
      with ops.device("cpu:0"):
        gen_summary_ops.flush_summary_writer(self._resource)
      context.context().summary_writer_resource = old
Contributor: SylChan, Project: tensorflow, Lines: 12, Source: summary_ops.py

Example 15: __init__

  def __init__(self, persistent=False):
    """Creates a new GradientTape.

    Args:
      persistent: Boolean controlling whether a persistent gradient tape
        is created. False by default, which means at most one call can
        be made to the gradient() method on this object.
    """
    self._tape = None
    self._persistent = persistent
    self._recording = False
    context.context().start_step()
Contributor: mbrukman, Project: tensorflow, Lines: 12, Source: backprop.py
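
The start_step() call here is paired with the end_step() call in Example 9's __del__, bracketing the tape's lifetime. A typical use of the public class is sketched below:

import tensorflow as tf

x = tf.constant(3.0)
with tf.GradientTape() as tape:   # persistent=False: gradient() may be called once
  tape.watch(x)                   # constants are not watched automatically
  y = x * x
print(tape.gradient(y, x))        # tf.Tensor(6.0, shape=(), dtype=float32)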


Note: The tensorflow.python.eager.context.context examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their respective developers, and the source code remains the copyright of its original authors; refer to each project's License before distributing or reusing it. Do not reproduce this article without permission.