

Python dragnn_ops.extract_link_features method code examples

This article collects typical usage examples of the dragnn.python.dragnn_ops.extract_link_features method in Python. If you are unsure what dragnn_ops.extract_link_features does or how to call it, the curated examples below should help. You can also explore other usage examples from its module, dragnn.python.dragnn_ops.


Three code examples of dragnn_ops.extract_link_features are shown below, sorted by popularity by default.

Example 1: activation_lookup_other

# Required module import: from dragnn.python import dragnn_ops [as alias]
# Or: from dragnn.python.dragnn_ops import extract_link_features [as alias]
def activation_lookup_other(component, state, channel_id, source_tensor,
                            source_layer_size):
  """Looks up activations from tensors.

  If the linked feature's embedding_dim is set to -1, the feature vectors are
  not passed through (i.e. multiplied by) an embedding matrix.

  Args:
    component: Component object in which to look up the linked features.
    state: MasterState object for the live ComputeSession.
    channel_id: int id of the linked feature to look up.
    source_tensor: Tensor from which to fetch feature vectors. Expected to
        have shape [steps + 1, stride, D].
    source_layer_size: int length of feature vectors before embedding (D). It
        would in principle be possible to get this dimension dynamically from
        the second dimension of source_tensor. However, having it statically is
        more convenient.

  Returns:
    NamedTensor object containing the embedding vectors.
  """
  feature_spec = component.spec.linked_feature[channel_id]

  with tf.name_scope('activation_lookup_other_%s' % feature_spec.name):
    # Linked features are returned as a pair of tensors, one indexing into
    # steps, and one indexing within the stride (beam x batch) of each step.
    step_idx, idx = dragnn_ops.extract_link_features(
        state.handle, component=component.name, channel_id=channel_id)

    # The first element of each tensor array is reserved for an
    # initialization variable, so we offset all step indices by +1.
    indices = tf.stack([step_idx + 1, idx], axis=1)
    act_block = tf.gather_nd(source_tensor, indices)
    act_block = tf.reshape(act_block, [-1, source_layer_size])

    if feature_spec.embedding_dim != -1:
      embedding_matrix = component.get_variable(
          linked_embeddings_name(channel_id))
      act_block = pass_through_embedding_matrix(act_block, embedding_matrix,
                                                step_idx)
      dim = feature_spec.size * feature_spec.embedding_dim
    else:
      # If embedding_dim is -1, just output concatenation of activations.
      dim = feature_spec.size * source_layer_size

    return NamedTensor(
        tf.reshape(act_block, [-1, dim]), feature_spec.name, dim=dim) 
Author: ringringyi, Project: DOTA_models, Lines of code: 49, Source file: network_units.py
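
The indexing pattern in the example above is easy to check in isolation. The toy sketch below (plain TensorFlow in the 1.x style of the listing, not dragnn code) builds a fake [steps + 1, stride, D] activation tensor and hand-written index tensors standing in for the pair returned by dragnn_ops.extract_link_features; all shapes and values are made up for illustration only.

import tensorflow as tf

# Toy [steps + 1, stride, D] activation tensor; row 0 along the first axis
# plays the role of the reserved initialization slot.
steps, stride, D = 3, 2, 4
source_tensor = tf.reshape(
    tf.range((steps + 1) * stride * D, dtype=tf.float32),
    [steps + 1, stride, D])

# Stand-ins for the pair returned by dragnn_ops.extract_link_features:
# one index into steps, one index within the stride of each step.
step_idx = tf.constant([0, 2, 1])
idx = tf.constant([1, 0, 1])

# Offset step indices by +1 past the reserved slot, then gather one
# length-D feature vector per linked feature.
indices = tf.stack([step_idx + 1, idx], axis=1)   # shape [3, 2]
act_block = tf.gather_nd(source_tensor, indices)  # shape [3, D]
act_block = tf.reshape(act_block, [-1, D])

with tf.Session() as sess:  # TF 1.x style, matching the listings above
  print(sess.run(act_block))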

Example 2: activation_lookup_other

# Required module import: from dragnn.python import dragnn_ops [as alias]
# Or: from dragnn.python.dragnn_ops import extract_link_features [as alias]
def activation_lookup_other(component, state, channel_id, source_tensor,
                            source_layer_size):
  """Looks up activations from tensors.

  If the linked feature's embedding_dim is set to -1, the feature vectors are
  not passed through (i.e. multiplied by) an embedding matrix.

  Args:
    component: Component object in which to look up the linked features.
    state: MasterState object for the live ComputeSession.
    channel_id: int id of the linked feature to look up.
    source_tensor: Tensor from which to fetch feature vectors. Expected to
        have shape [steps + 1, stride, D].
    source_layer_size: int length of feature vectors before embedding (D). It
        would in principle be possible to get this dimension dynamically from
        the second dimension of source_tensor. However, having it statically is
        more convenient.

  Returns:
    NamedTensor object containing the embedding vectors.
  """
  feature_spec = component.spec.linked_feature[channel_id]

  with tf.name_scope('activation_lookup_other_%s' % feature_spec.name):
    # Linked features are returned as a pair of tensors, one indexing into
    # steps, and one indexing within the stride (beam x batch) of each step.
    step_idx, idx = dragnn_ops.extract_link_features(
        state.handle, component=component.name, channel_id=channel_id)

    # The first element of each tensor array is reserved for an
    # initialization variable, so we offset all step indices by +1.
    indices = tf.stack([step_idx + 1, idx], axis=1)
    act_block = tf.gather_nd(source_tensor, indices)
    act_block = tf.reshape(act_block, [-1, source_layer_size])

    if feature_spec.embedding_dim != -1:
      embedding_matrix = component.get_variable(
          linked_embeddings_name(channel_id))
      act_block = pass_through_embedding_matrix(act_block, embedding_matrix,
                                                step_idx)
      dim = feature_spec.size * feature_spec.embedding_dim
    else:
      # If embedding_dim is -1, just output concatenation of activations.
      dim = feature_spec.size * source_layer_size

    return NamedTensor(
        tf.reshape(act_block, [-1, dim]), feature_spec.name, dim=dim) 
Author: rky0930, Project: yolo_v2, Lines of code: 49, Source file: network_units.py
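
The embedding branch (embedding_dim != -1) is easiest to read as shape bookkeeping: each gathered row of width source_layer_size is mapped to embedding_dim, and the final reshape concatenates the feature_spec.size vectors belonging to each element. The sketch below imitates only that bookkeeping with a plain matmul and made-up sizes; the real pass_through_embedding_matrix helper is not shown in these listings, so none of its internals are reproduced here.

import tensorflow as tf

# Assumed toy sizes, for illustration only.
batch, size, source_layer_size, embedding_dim = 2, 3, 4, 5

# act_block as produced above: one row per (element, linked feature) pair.
act_block = tf.ones([batch * size, source_layer_size])

# Stand-in for the linked embedding matrix; in the real code it comes from
# component.get_variable(linked_embeddings_name(channel_id)).
embedding_matrix = tf.ones([source_layer_size, embedding_dim])

embedded = tf.matmul(act_block, embedding_matrix)  # [batch * size, embedding_dim]
dim = size * embedding_dim
output = tf.reshape(embedded, [-1, dim])           # [batch, size * embedding_dim]

with tf.Session() as sess:  # TF 1.x style, matching the listings above
  print(sess.run(tf.shape(output)))  # [2, 15]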

Example 3: activation_lookup_other

# Required module import: from dragnn.python import dragnn_ops [as alias]
# Or: from dragnn.python.dragnn_ops import extract_link_features [as alias]
def activation_lookup_other(component, state, channel_id, source_tensor,
                            source_layer_size):
  """Looks up activations from tensors.

  If the linked feature's embedding_dim is set to -1, the feature vectors are
  not passed through (i.e. multiplied by) an embedding matrix.

  Args:
    component: Component object in which to look up the linked features.
    state: MasterState object for the live ComputeSession.
    channel_id: int id of the linked feature to look up.
    source_tensor: Tensor from which to fetch feature vectors. Expected to
        have shape [steps + 1, stride, D].
    source_layer_size: int length of feature vectors before embedding (D). It
        would in principle be possible to get this dimension dynamically from
        the second dimension of source_tensor. However, having it statically is
        more convenient.

  Returns:
    NamedTensor object containing the embedding vectors.
  """
  feature_spec = component.spec.linked_feature[channel_id]

  with tf.name_scope('activation_lookup_other_%s' % feature_spec.name):
    # Linked features are returned as a pair of tensors, one indexing into
    # steps, and one indexing within the stride (beam x batch) of each step.
    step_idx, idx = dragnn_ops.extract_link_features(
        state.handle, component=component.name, channel_id=channel_id)

    # The first element of each tensor array is reserved for an
    # initialization variable, so we offset all step indices by +1.
    indices = tf.stack([step_idx + 1, idx], axis=1)
    act_block = tf.gather_nd(source_tensor, indices)
    act_block = tf.reshape(act_block, [-1, source_layer_size])

    if component.master.build_runtime_graph:
      act_block = component.add_cell_input(act_block.dtype, [
          feature_spec.size, source_layer_size
      ], 'linked_channel_{}_activations'.format(channel_id))

    if feature_spec.embedding_dim != -1:
      embedding_matrix = component.get_variable(
          linked_embeddings_name(channel_id))
      act_block = pass_through_embedding_matrix(component, channel_id,
                                                feature_spec.size, act_block,
                                                embedding_matrix, step_idx)
      dim = feature_spec.size * feature_spec.embedding_dim
    else:
      # If embedding_dim is -1, just output concatenation of activations.
      dim = feature_spec.size * source_layer_size

    return NamedTensor(
        tf.reshape(act_block, [-1, dim]), feature_spec.name, dim=dim) 
Author: generalized-iou, Project: g-tensorflow-models, Lines of code: 55, Source file: network_units.py
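
The build_runtime_graph/add_cell_input branch in this third example is dragnn-internal and is not illustrated here. The pass-through branch (embedding_dim == -1), however, reduces to a single reshape: the feature_spec.size activation vectors of each element are concatenated side by side. A minimal sketch with made-up sizes:

import tensorflow as tf

# Assumed toy sizes, for illustration only.
batch, size, source_layer_size = 2, 2, 3

# act_block as produced above: shape [batch * size, source_layer_size],
# with consecutive `size` rows belonging to the same element.
act_block = tf.reshape(
    tf.range(batch * size * source_layer_size, dtype=tf.float32),
    [batch * size, source_layer_size])

# embedding_dim == -1: no embedding matrix, just concatenate the `size`
# activation vectors of each element into one row of width size * D.
dim = size * source_layer_size
output = tf.reshape(act_block, [-1, dim])  # shape [batch, size * D]

with tf.Session() as sess:  # TF 1.x style, matching the listings above
  print(sess.run(output))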


Note: The dragnn.python.dragnn_ops.extract_link_features examples in this article were collected by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their respective developers, and copyright in the source code remains with the original authors. Please consult the corresponding project's License before distributing or using the code. Do not reproduce without permission.