This article collects typical usage examples of the Python method tensorflow.python.ops.rnn_cell.RNNCell. If you are wondering how exactly rnn_cell.RNNCell is used, or what it looks like in practice, the curated code examples below may help. You can also explore further usage of the containing module, tensorflow.python.ops.rnn_cell.
The following presents 10 code examples of rnn_cell.RNNCell, ordered by popularity by default.
Example 1: basic_rnn_seq2seq

# Required import: from tensorflow.python.ops import rnn_cell [as alias]
# Or: from tensorflow.python.ops.rnn_cell import RNNCell [as alias]
def basic_rnn_seq2seq(
    encoder_inputs, decoder_inputs, cell, dtype=dtypes.float32, scope=None):
  """Basic RNN sequence-to-sequence model.

  This model first runs an RNN to encode encoder_inputs into a state vector,
  then runs a decoder, initialized with the last encoder state, on
  decoder_inputs. Encoder and decoder use the same RNN cell type, but don't
  share parameters.

  Args:
    encoder_inputs: A list of 2D Tensors [batch_size x input_size].
    decoder_inputs: A list of 2D Tensors [batch_size x input_size].
    cell: rnn_cell.RNNCell defining the cell function and size.
    dtype: The dtype of the initial state of the RNN cell (default: tf.float32).
    scope: VariableScope for the created subgraph; default: "basic_rnn_seq2seq".

  Returns:
    A tuple of the form (outputs, state), where:
      outputs: A list of the same length as decoder_inputs of 2D Tensors with
        shape [batch_size x output_size] containing the generated outputs.
      state: The state of each decoder cell in the final time-step.
        It is a 2D Tensor of shape [batch_size x cell.state_size].
  """
  with variable_scope.variable_scope(scope or "basic_rnn_seq2seq"):
    _, enc_state = rnn.rnn(cell, encoder_inputs, dtype=dtype)
    return rnn_decoder(decoder_inputs, enc_state, cell)
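
A minimal call might look like the sketch below. This assumes a TensorFlow 0.x-era environment where the list-of-tensors seq2seq interface still exists; the batch size, input size, and step count are made up for the example.

# Hypothetical usage sketch (TF 0.x-era APIs; shapes are illustrative).
import tensorflow as tf
from tensorflow.python.ops import rnn_cell

batch_size, input_size, num_units, steps = 32, 16, 64, 10
# One 2D placeholder per time step, as the list-based interface expects.
enc_inputs = [tf.placeholder(tf.float32, [batch_size, input_size])
              for _ in range(steps)]
dec_inputs = [tf.placeholder(tf.float32, [batch_size, input_size])
              for _ in range(steps)]
cell = rnn_cell.GRUCell(num_units)
outputs, final_state = basic_rnn_seq2seq(enc_inputs, dec_inputs, cell)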
Example 2: state_saving_rnn

# Required import: from tensorflow.python.ops import rnn_cell [as alias]
# Or: from tensorflow.python.ops.rnn_cell import RNNCell [as alias]
def state_saving_rnn(cell, inputs, state_saver, state_name,
                     sequence_length=None, scope=None):
  """RNN that accepts a state saver for time-truncated RNN calculation.

  Args:
    cell: An instance of RNNCell.
    inputs: A length T list of inputs, each a tensor of shape
      [batch_size, input_size].
    state_saver: A state saver object with methods `state` and `save_state`.
    state_name: The name to use with the state_saver.
    sequence_length: (optional) An int32/int64 vector of size [batch_size].
      See the documentation for rnn() for more details about sequence_length.
    scope: VariableScope for the created subgraph; defaults to "RNN".

  Returns:
    A pair (outputs, state) where:
      outputs is a length T list of outputs (one for each input)
      state is the final state

  Raises:
    TypeError: If "cell" is not an instance of RNNCell.
    ValueError: If inputs is None or an empty list.
  """
  initial_state = state_saver.state(state_name)
  (outputs, state) = rnn(cell, inputs, initial_state=initial_state,
                         sequence_length=sequence_length, scope=scope)
  save_state = state_saver.save_state(state_name, state)
  with ops.control_dependencies([save_state]):
    outputs[-1] = array_ops.identity(outputs[-1])
  return (outputs, state)
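
The control dependency on save_state guarantees the stored state is written whenever the last output is consumed. A minimal, hypothetical state saver satisfying the `state`/`save_state` protocol could be sketched as follows; it is a plain variable-backed stand-in, not the library's own saver:

import tensorflow as tf

class SimpleStateSaver(object):
  """Hypothetical variable-backed saver exposing state()/save_state()."""

  def __init__(self, batch_size, state_size):
    # Non-trainable storage that persists across session runs.
    self._state = tf.Variable(
        tf.zeros([batch_size, state_size]), trainable=False)

  def state(self, state_name):
    # Return the stored state tensor for this name (single slot here).
    return self._state

  def save_state(self, state_name, value):
    # Return an op that writes the new state back into the variable.
    return tf.assign(self._state, value)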
Example 3: __init__

# Required import: from tensorflow.python.ops import rnn_cell [as alias]
# Or: from tensorflow.python.ops.rnn_cell import RNNCell [as alias]
def __init__(self, cell, input_keep_prob=1.0, output_keep_prob=1.0,
             seed=None, is_train=None):
  """Create a cell with added input and/or output dropout.

  Dropout is never used on the state.

  Args:
    cell: an RNNCell, a projection to output_size is added to it.
    input_keep_prob: unit Tensor or float between 0 and 1, input keep
      probability; if it is a float and 1, no input dropout will be added.
    output_keep_prob: unit Tensor or float between 0 and 1, output keep
      probability; if it is a float and 1, no output dropout will be added.
    seed: (optional) integer, the randomness seed.
    is_train: boolean tensor (often a placeholder). If given, dropout is
      applied only when is_train is True.

  Raises:
    TypeError: if cell is not an RNNCell.
    ValueError: if a keep_prob is not between 0 and 1.
  """
  if not isinstance(cell, RNNCell):
    raise TypeError("The parameter cell is not an RNNCell.")
  if (isinstance(input_keep_prob, float) and
      not 0.0 <= input_keep_prob <= 1.0):
    raise ValueError("Parameter input_keep_prob must be between 0 and 1: %g"
                     % input_keep_prob)
  if (isinstance(output_keep_prob, float) and
      not 0.0 <= output_keep_prob <= 1.0):
    raise ValueError("Parameter output_keep_prob must be between 0 and 1: %g"
                     % output_keep_prob)
  self._cell = cell
  self._input_keep_prob = input_keep_prob
  self._output_keep_prob = output_keep_prob
  self._seed = seed
  self._is_train = is_train
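
Wiring this into a graph might look like the sketch below. The class name SwitchableDropoutWrapper is an assumption (the snippet shows only the __init__), as is the use of a boolean placeholder to toggle dropout between training and inference.

# Hypothetical usage; the wrapper class name is assumed, not shown above.
import tensorflow as tf
from tensorflow.python.ops import rnn_cell

is_train = tf.placeholder(tf.bool, [], name="is_train")
base_cell = rnn_cell.GRUCell(128)
cell = SwitchableDropoutWrapper(base_cell,
                                input_keep_prob=0.8,
                                output_keep_prob=0.5,
                                is_train=is_train)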
Example 4: tied_rnn_seq2seq

# Required import: from tensorflow.python.ops import rnn_cell [as alias]
# Or: from tensorflow.python.ops.rnn_cell import RNNCell [as alias]
def tied_rnn_seq2seq(encoder_inputs, decoder_inputs, cell,
                     loop_function=None, dtype=dtypes.float32, scope=None):
  """RNN sequence-to-sequence model with tied encoder and decoder parameters.

  This model first runs an RNN to encode encoder_inputs into a state vector,
  and then runs a decoder, initialized with the last encoder state, on
  decoder_inputs. Encoder and decoder use the same RNN cell and share
  parameters.

  Args:
    encoder_inputs: A list of 2D Tensors [batch_size x input_size].
    decoder_inputs: A list of 2D Tensors [batch_size x input_size].
    cell: rnn_cell.RNNCell defining the cell function and size.
    loop_function: If not None, this function will be applied to the i-th
      output in order to generate the i+1-th input, and decoder_inputs will be
      ignored, except for the first element ("GO" symbol); see rnn_decoder
      for details.
    dtype: The dtype of the initial state of the rnn cell (default: tf.float32).
    scope: VariableScope for the created subgraph; default: "tied_rnn_seq2seq".

  Returns:
    A tuple of the form (outputs, state), where:
      outputs: A list of the same length as decoder_inputs of 2D Tensors with
        shape [batch_size x output_size] containing the generated outputs.
      state: The state of each decoder cell in each time-step. This is a list
        with length len(decoder_inputs) -- one item for each time-step.
        It is a 2D Tensor of shape [batch_size x cell.state_size].
  """
  with variable_scope.variable_scope("combined_tied_rnn_seq2seq"):
    scope = scope or "tied_rnn_seq2seq"
    _, enc_state = rnn.rnn(
        cell, encoder_inputs, dtype=dtype, scope=scope)
    variable_scope.get_variable_scope().reuse_variables()
    return rnn_decoder(decoder_inputs, enc_state, cell,
                       loop_function=loop_function, scope=scope)
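
A hedged sketch of how a loop_function might be supplied for greedy decoding. It assumes an output projection (proj_w, proj_b), an embedding table, and pre-built input lists exist in the surrounding graph; all of these names are illustrative, not part of the snippet above.

import tensorflow as tf

def greedy_loop_function(prev, _):
  # Project the previous output to logits, pick the argmax token, and feed
  # its embedding back as the next decoder input (illustrative names).
  logits = tf.matmul(prev, proj_w) + proj_b
  next_symbol = tf.argmax(logits, 1)
  return tf.nn.embedding_lookup(embedding, next_symbol)

outputs, state = tied_rnn_seq2seq(enc_inputs, dec_inputs, cell,
                                  loop_function=greedy_loop_function)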
Example 5: _get_rnn_cell

# Required import: from tensorflow.python.ops import rnn_cell [as alias]
# Or: from tensorflow.python.ops.rnn_cell import RNNCell [as alias]
def _get_rnn_cell(cell_type, num_units, num_layers):
  """Constructs and returns an `RNNCell`.

  Args:
    cell_type: either a string identifying the `RNNCell` type, or a subclass
      of `RNNCell`.
    num_units: the number of units in the `RNNCell`.
    num_layers: the number of layers in the RNN.

  Returns:
    An initialized `RNNCell`.

  Raises:
    ValueError: `cell_type` is an invalid `RNNCell` name.
    TypeError: `cell_type` is not a string or a subclass of `RNNCell`.
  """
  if isinstance(cell_type, str):
    cell_class = _CELL_TYPES.get(cell_type)
    if cell_class is None:
      raise ValueError('The supported cell types are {}; got {}'.format(
          list(_CELL_TYPES.keys()), cell_type))
    cell_type = cell_class
  if not issubclass(cell_type, rnn_cell.RNNCell):
    raise TypeError(
        'cell_type must be a subclass of RNNCell or one of {}.'.format(
            list(_CELL_TYPES.keys())))
  cell = cell_type(num_units=num_units)
  if num_layers > 1:
    # Note: reusing one cell object across layers was valid in the TF
    # versions this code targets; newer TF requires distinct cell instances.
    cell = rnn_cell.MultiRNNCell(
        [cell] * num_layers, state_is_tuple=True)
  return cell
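
A quick call, assuming _CELL_TYPES maps names like 'lstm' and 'gru' to their cell classes (the mapping itself is not shown in the snippet):

# Hypothetical: build a 2-layer LSTM stack with 64 units per layer.
cell = _get_rnn_cell('lstm', num_units=64, num_layers=2)
# Or pass a class directly instead of a string:
cell = _get_rnn_cell(rnn_cell.GRUCell, num_units=64, num_layers=1)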
Example 6: __init__

# Required import: from tensorflow.python.ops import rnn_cell [as alias]
# Or: from tensorflow.python.ops.rnn_cell import RNNCell [as alias]
def __init__(self, cell_fn, partition_size=128, partitions=1, layers=2):
  """Create an RNN cell composed sequentially of a number of RNNCells.

  Args:
    cell_fn: reference to an RNNCell constructor used to create each
      partition in each layer.
    partition_size: how many horizontal cells to include in each partition.
    partitions: how many horizontal partitions to include in each layer.
    layers: how many layers to include in the net.
  """
  super(PartitionedMultiRNNCell, self).__init__()
  self._cells = []
  for _ in range(layers):
    self._cells.append([cell_fn(partition_size) for _ in range(partitions)])
  self._partitions = partitions
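
Constructing such a partitioned cell might look like the following sketch, assuming the surrounding PartitionedMultiRNNCell class wires these partitions together (only the __init__ is shown above); the layer and partition counts are illustrative.

import tensorflow as tf

# Hypothetical: 3 layers, each made of 4 independent 128-unit GRU partitions.
cell = PartitionedMultiRNNCell(tf.nn.rnn_cell.GRUCell,
                               partition_size=128,
                               partitions=4,
                               layers=3)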
Example 7: lm_rnn

# Required import: from tensorflow.python.ops import rnn_cell [as alias]
# Or: from tensorflow.python.ops.rnn_cell import RNNCell [as alias]
def lm_rnn(x, t, token_embed, layers, seq_len=None, context_vector=None,
           cell=tf.nn.rnn_cell.BasicLSTMCell):
  """Token level LSTM language model that uses a sentence level context vector.

  :param x: (tensor) Input to the rnn.
  :param t: (tensor) Targets for language model predictions (typically the
      next token in the sequence).
  :param token_embed: (tensor) Token embedding matrix,
      ALPHABET_SIZE X EMBEDDING_SIZE.
  :param layers: A list of hidden layer sizes for the stacked lstm.
  :param seq_len: A 1D tensor of mini-batch size for variable length sequences.
  :param context_vector: (tensor) MB X 2*CONTEXT_LSTM_OUTPUT_DIM. Optional
      context to append to each token embedding.
  :param cell: (class) A tensorflow RNNCell subclass.
  :return: (tuple) token_losses (tensor), hidden_states (list of tensors),
      final_hidden (tensor)
  """
  token_set_size = token_embed.get_shape().as_list()[0]
  cells = [cell(num_units) for num_units in layers]
  cell = tf.nn.rnn_cell.MultiRNNCell(cells, state_is_tuple=True)
  # mb X sentence_length X embedding_size
  x_lookup = tf.nn.embedding_lookup(token_embed, x)
  # List of mb X embedding_size tensors
  input_features = tf.unstack(x_lookup, axis=1)
  # input_features: list, one entry per time step, of tensors
  # (mb X embedding_size+context_size)
  if context_vector is not None:
    input_features = [tf.concat([embedding, context_vector], 1)
                      for embedding in input_features]
  # hidden_states: sentence-length-long list of tensors (mb X final_layer_size)
  # cell_state: structure holding the per-layer cell state for a mini-batch
  hidden_states, cell_state = tf.nn.static_rnn(cell, input_features,
                                               initial_state=None,
                                               dtype=tf.float32,
                                               sequence_length=seq_len,
                                               scope='language_model')
  # batch_size X sequence_length (see tf_ops for def)
  token_losses = batch_softmax_dist_loss(t, hidden_states, token_set_size)
  final_hidden = cell_state[-1].h
  return token_losses, hidden_states, final_hidden
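
An invocation sketch under stated assumptions: batch_softmax_dist_loss comes from the project's own tf_ops module, and the vocabulary size, embedding width, sentence length, and layer sizes below are illustrative only.

import tensorflow as tf

# Illustrative shapes: vocab of 10k tokens, 128-dim embeddings,
# sentences padded to 40 tokens, two 256-unit LSTM layers.
x = tf.placeholder(tf.int32, [None, 40])
t = tf.placeholder(tf.int32, [None, 40])
seq_len = tf.placeholder(tf.int32, [None])
token_embed = tf.get_variable('token_embed', [10000, 128])
token_losses, hidden_states, final_hidden = lm_rnn(
    x, t, token_embed, layers=[256, 256], seq_len=seq_len)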
Example 8: rnn_decoder

# Required import: from tensorflow.python.ops import rnn_cell [as alias]
# Or: from tensorflow.python.ops.rnn_cell import RNNCell [as alias]
def rnn_decoder(decoder_inputs, initial_state, cell, loop_function=None,
                scope=None):
  """RNN decoder for the sequence-to-sequence model.

  Args:
    decoder_inputs: A list of 2D Tensors [batch_size x input_size].
    initial_state: 2D Tensor with shape [batch_size x cell.state_size].
    cell: rnn_cell.RNNCell defining the cell function and size.
    loop_function: If not None, this function will be applied to the i-th
      output in order to generate the i+1-st input, and decoder_inputs will
      be ignored, except for the first element ("GO" symbol). This can be
      used for decoding, but also for training to emulate
      http://arxiv.org/abs/1506.03099.
      Signature -- loop_function(prev, i) = next
        * prev is a 2D Tensor of shape [batch_size x output_size],
        * i is an integer, the step number (when advanced control is needed),
        * next is a 2D Tensor of shape [batch_size x input_size].
    scope: VariableScope for the created subgraph; defaults to "rnn_decoder".

  Returns:
    A tuple of the form (outputs, state), where:
      outputs: A list of the same length as decoder_inputs of 2D Tensors with
        shape [batch_size x output_size] containing generated outputs.
      state: The state of each cell at the final time-step.
        It is a 2D Tensor of shape [batch_size x cell.state_size].
        (Note that in some cases, like basic RNN cell or GRU cell, outputs and
        states can be the same. They are different for LSTM cells though.)
  """
  with variable_scope.variable_scope(scope or "rnn_decoder"):
    state = initial_state
    outputs = []
    prev = None
    for i, inp in enumerate(decoder_inputs):
      if loop_function is not None and prev is not None:
        with variable_scope.variable_scope("loop_function", reuse=True):
          inp = loop_function(prev, i)
      if i > 0:
        variable_scope.get_variable_scope().reuse_variables()
      output, state = cell(inp, state)
      outputs.append(output)
      if loop_function is not None:
        prev = output
    return outputs, state
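
A minimal direct call, again assuming TF 0.x-era list-based inputs; the cell choice and shapes are illustrative.

import tensorflow as tf
from tensorflow.python.ops import rnn_cell

batch_size, input_size, num_units = 32, 16, 64
dec_inputs = [tf.placeholder(tf.float32, [batch_size, input_size])
              for _ in range(10)]
cell = rnn_cell.GRUCell(num_units)
# Start from the cell's all-zeros state rather than an encoder state.
init_state = cell.zero_state(batch_size, tf.float32)
outputs, final_state = rnn_decoder(dec_inputs, init_state, cell)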
Example 9: __init__

# Required import: from tensorflow.python.ops import rnn_cell [as alias]
# Or: from tensorflow.python.ops.rnn_cell import RNNCell [as alias]
def __init__(self, cell, attn_length, attn_size=None, attn_vec_size=None,
             input_size=None, state_is_tuple=False):
  """Create a cell with attention.

  Args:
    cell: an RNNCell, attention is added to it.
    attn_length: integer, the size of an attention window.
    attn_size: integer, the size of an attention vector. Equal to
      cell.output_size by default.
    attn_vec_size: integer, the number of convolutional features calculated
      on the attention state and the size of the hidden layer built from the
      base cell state. Equal to attn_size by default.
    input_size: integer, the size of a hidden linear layer, built from inputs
      and attention. Derived from the input tensor by default.
    state_is_tuple: If True, accepted and returned states are n-tuples, where
      `n = len(cells)`. By default (False), the states are all concatenated
      along the column axis.

  Raises:
    TypeError: if cell is not an RNNCell.
    ValueError: if cell returns a state tuple but the flag `state_is_tuple`
      is `False`, or if attn_length is zero or less.
  """
  if not isinstance(cell, rnn_cell.RNNCell):
    raise TypeError("The parameter cell is not an RNNCell.")
  if nest.is_sequence(cell.state_size) and not state_is_tuple:
    raise ValueError("Cell returns tuple of states, but the flag "
                     "state_is_tuple is not set. State size is: %s"
                     % str(cell.state_size))
  if attn_length <= 0:
    raise ValueError("attn_length should be greater than zero, got %s"
                     % str(attn_length))
  if not state_is_tuple:
    logging.warn(
        "%s: Using a concatenated state is slower and will soon be "
        "deprecated. Use state_is_tuple=True." % self)
  if attn_size is None:
    attn_size = cell.output_size
  if attn_vec_size is None:
    attn_vec_size = attn_size
  self._state_is_tuple = state_is_tuple
  self._cell = cell
  self._attn_vec_size = attn_vec_size
  self._input_size = input_size
  self._attn_size = attn_size
  self._attn_length = attn_length
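
Wrapping a base cell might look like the sketch below. The wrapper class name is an assumption (only the __init__ is shown above; TensorFlow's contrib version of this wrapper is called AttentionCellWrapper), and the unit count and window size are illustrative.

import tensorflow as tf
from tensorflow.python.ops import rnn_cell

# Hypothetical usage: add a 10-step attention window over an LSTM.
base_cell = rnn_cell.LSTMCell(128, state_is_tuple=True)
attn_cell = AttentionCellWrapper(base_cell, attn_length=10,
                                 state_is_tuple=True)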
Example 10: multi_value_rnn_classifier

# Required import: from tensorflow.python.ops import rnn_cell [as alias]
# Or: from tensorflow.python.ops.rnn_cell import RNNCell [as alias]
def multi_value_rnn_classifier(num_classes,
                               num_units,
                               sequence_feature_columns,
                               context_feature_columns=None,
                               cell_type='basic_rnn',
                               cell_dtype=dtypes.float32,
                               num_rnn_layers=1,
                               optimizer_type='SGD',
                               learning_rate=0.1,
                               momentum=None,
                               gradient_clipping_norm=10.0,
                               model_dir=None,
                               config=None):
  """Creates an RNN `Estimator` that predicts sequences of labels.

  Args:
    num_classes: the number of classes for categorization.
    num_units: the size of the RNN cells.
    sequence_feature_columns: An iterable containing all the feature columns
      describing sequence features. All items in the set should be instances
      of classes derived from `FeatureColumn`.
    context_feature_columns: An iterable containing all the feature columns
      describing context features, i.e. features that apply across all time
      steps. All items in the set should be instances of classes derived from
      `FeatureColumn`.
    cell_type: subclass of `RNNCell` or one of 'basic_rnn', 'lstm' or 'gru'.
    cell_dtype: the dtype of the state and output for the given `cell_type`.
    num_rnn_layers: number of RNN layers.
    optimizer_type: the type of optimizer to use. Either a subclass of
      `Optimizer` or a string.
    learning_rate: learning rate.
    momentum: momentum value. Only used if `optimizer_type` is 'Momentum'.
    gradient_clipping_norm: parameter used for gradient clipping. If `None`,
      then no clipping is performed.
    model_dir: The directory in which to save and restore the model graph,
      parameters, etc.
    config: A `RunConfig` instance.

  Returns:
    An initialized instance of `_MultiValueRNNEstimator`.
  """
  optimizer = _get_optimizer(optimizer_type, learning_rate, momentum)
  cell = _get_rnn_cell(cell_type, num_units, num_rnn_layers)
  target_column = layers.multi_class_target(n_classes=num_classes)
  return _MultiValueRNNEstimator(cell,
                                 target_column,
                                 optimizer,
                                 sequence_feature_columns,
                                 context_feature_columns,
                                 model_dir,
                                 config,
                                 gradient_clipping_norm,
                                 dtype=cell_dtype)
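
A construction sketch under stated assumptions: it relies on the contrib feature-column API of the same TF era, and the column definition, class count, and model directory below are illustrative only.

from tensorflow.contrib import layers

# Illustrative feature columns for a sequence-labeling task: each time step
# carries a 16-dimensional real-valued feature vector.
seq_cols = [layers.real_valued_column('token_features', dimension=16)]
estimator = multi_value_rnn_classifier(num_classes=5,
                                       num_units=64,
                                       sequence_feature_columns=seq_cols,
                                       cell_type='lstm',
                                       num_rnn_layers=2,
                                       model_dir='/tmp/rnn_model')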