

Python network_units.get_input_tensor Method Code Examples

This article collects typical usage examples of the Python method dragnn.python.network_units.get_input_tensor. If you are unsure how network_units.get_input_tensor is used in practice, the curated code examples below may help. You can also explore other usages of the dragnn.python.network_units module.


Two code examples of the network_units.get_input_tensor method are shown below; by default they are sorted by popularity.
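For orientation, both examples below build their recurrent cell's input by calling get_input_tensor(fixed_embeddings, linked_embeddings), which concatenates the component's fixed and linked embedding tensors into a single input tensor. The following is a minimal, hypothetical sketch of that call pattern; the NamedTensor wrappers, placeholder shapes, and feature names are illustrative assumptions, not taken from the examples on this page.

# A minimal sketch of calling network_units.get_input_tensor directly.
# The placeholder tensors, feature names, and dimensions below are
# assumptions made for illustration only.
import tensorflow as tf
from dragnn.python import network_units

# Hypothetical fixed and linked embeddings, wrapped as NamedTensor objects.
fixed_embeddings = [
    network_units.NamedTensor(tf.zeros([4, 32]), 'words', dim=32),
]
linked_embeddings = [
    network_units.NamedTensor(tf.zeros([4, 64]), 'prev_layer', dim=64),
]

# Concatenates the embedding tensors along the feature axis, producing a
# single [batch, 32 + 64] input tensor for the recurrent cell.
input_tensor = network_units.get_input_tensor(fixed_embeddings,
                                              linked_embeddings)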

Example 1: create

# Required import: from dragnn.python import network_units [as alias]
# Or: from dragnn.python.network_units import get_input_tensor [as alias]
# Note: the snippet below refers to the module through the alias `dragnn`,
# i.e. `from dragnn.python import network_units as dragnn`, and also uses
# `tensorflow as tf` and the `check` assertion helper.
def create(self,
             fixed_embeddings,
             linked_embeddings,
             context_tensor_arrays,
             attention_tensor,
             during_training,
             stride=None):
    """See base class."""
    # NB: This cell pulls the lstm's h and c vectors from context_tensor_arrays
    # instead of through linked features.
    check.Eq(
        len(context_tensor_arrays), 2 * len(self._hidden_layer_sizes),
        'require two context tensors per hidden layer')

    # Rearrange the context tensors into a tuple of LSTM sub-states.
    length = context_tensor_arrays[0].size()
    substates = []
    for index, num_units in enumerate(self._hidden_layer_sizes):
      state_c = context_tensor_arrays[2 * index].read(length - 1)
      state_h = context_tensor_arrays[2 * index + 1].read(length - 1)

      # Fix shapes that, for an unknown reason, are not set properly.
      # TODO(googleuser): Why are the shapes not set?
      state_c.set_shape([tf.Dimension(None), num_units])
      state_h.set_shape([tf.Dimension(None), num_units])
      substates.append(tf.contrib.rnn.LSTMStateTuple(state_c, state_h))
    state = tuple(substates)

    input_tensor = dragnn.get_input_tensor(fixed_embeddings, linked_embeddings)
    cell = self._train_cell if during_training else self._inference_cell

    def _cell_closure(scope):
      """Applies the LSTM cell to the current inputs and state."""
      return cell(input_tensor, state, scope)

    unused_h, state = self._apply_with_captured_variables(_cell_closure)

    # Return tensors to be put into the tensor arrays / used to compute
    # objective.
    output_tensors = []
    for new_substate in state:
      new_c, new_h = new_substate
      output_tensors.append(new_c)
      output_tensors.append(new_h)
    return self._append_base_layers(output_tensors) 
Developer: ringringyi, Project: DOTA_models, Lines of code: 47, Source: wrapped_units.py

Example 2: create

# Required import: from dragnn.python import network_units [as alias]
# Or: from dragnn.python.network_units import get_input_tensor [as alias]
# (Same aliasing as in Example 1: `network_units` is referenced as `dragnn`.)
def create(self,
             fixed_embeddings,
             linked_embeddings,
             context_tensor_arrays,
             attention_tensor,
             during_training,
             stride=None):
    """See base class."""
    # NB: This cell pulls the lstm's h and c vectors from context_tensor_arrays
    # instead of through linked features.
    check.Eq(
        len(context_tensor_arrays), 2 * len(self._hidden_layer_sizes),
        'require two context tensors per hidden layer')

    # Rearrange the context tensors into a tuple of LSTM sub-states.
    length = context_tensor_arrays[0].size()
    substates = []
    for index, num_units in enumerate(self._hidden_layer_sizes):
      state_c = context_tensor_arrays[2 * index].read(length - 1)
      state_h = context_tensor_arrays[2 * index + 1].read(length - 1)

      # Fix shapes that, for an unknown reason, are not set properly.
      # TODO(googleuser): Why are the shapes not set?
      state_c.set_shape([tf.Dimension(None), num_units])
      state_h.set_shape([tf.Dimension(None), num_units])
      substates.append(tf.contrib.rnn.LSTMStateTuple(state_c, state_h))
    state = tuple(substates)

    input_tensor = dragnn.get_input_tensor(fixed_embeddings, linked_embeddings)
    cell = self._train_cell if during_training else self._inference_cell

    def _cell_closure(scope):
      """Applies the LSTM cell to the current inputs and state."""
      return cell(input_tensor, state, scope=scope)

    unused_h, state = self._apply_with_captured_variables(_cell_closure)

    # Return tensors to be put into the tensor arrays / used to compute
    # objective.
    output_tensors = []
    for new_substate in state:
      new_c, new_h = new_substate
      output_tensors.append(new_c)
      output_tensors.append(new_h)
    return self._append_base_layers(output_tensors) 
Developer: generalized-iou, Project: g-tensorflow-models, Lines of code: 47, Source: wrapped_units.py


Note: The dragnn.python.network_units.get_input_tensor examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are selected from open-source projects contributed by their authors, and copyright of the source code remains with the original authors. Please consult the corresponding project's License before distributing or using the code; do not reproduce without permission.