This page collects typical usage examples of the Python method tensorflow.contrib.slim.python.slim.data.data_provider.DataProvider. If you are unsure how to use data_provider.DataProvider, the curated code examples below may help. You can also explore other members of the containing module, tensorflow.contrib.slim.python.slim.data.data_provider.
1 code example of the data_provider.DataProvider method is shown below, sorted by popularity by default.
Example 1: make_parallel_data_provider
# Required import: from tensorflow.contrib.slim.python.slim.data import data_provider [as alias]
# Or: from tensorflow.contrib.slim.python.slim.data.data_provider import DataProvider [as alias]
def make_parallel_data_provider(data_sources_source,
                                data_sources_target,
                                reader=tf.TextLineReader,
                                num_samples=None,
                                source_delimiter=" ",
                                target_delimiter=" ",
                                **kwargs):
  """Creates a DataProvider that reads parallel text data.

  Args:
    data_sources_source: A list of data sources for the source text files.
    data_sources_target: A list of data sources for the target text files.
      Can be None for inference mode.
    reader: The reader class to use. Defaults to tf.TextLineReader.
    num_samples: Optional number of records in the dataset.
    source_delimiter: Delimiter used to split tokens in the source data.
      Defaults to space.
    target_delimiter: Delimiter used to split tokens in the target data.
      Defaults to space.
    **kwargs: Additional arguments (shuffle, num_epochs, etc.) that are
      passed to the data provider.

  Returns:
    A ParallelDataProvider instance.
  """
  decoder_source = split_tokens_decoder.SplitTokensDecoder(
      tokens_feature_name="source_tokens",
      length_feature_name="source_len",
      append_token="SEQUENCE_END",
      delimiter=source_delimiter)
  dataset_source = tf.contrib.slim.dataset.Dataset(
      data_sources=data_sources_source,
      reader=reader,
      decoder=decoder_source,
      num_samples=num_samples,
      items_to_descriptions={})
  dataset_target = None
  if data_sources_target is not None:
    decoder_target = split_tokens_decoder.SplitTokensDecoder(
        tokens_feature_name="target_tokens",
        length_feature_name="target_len",
        prepend_token="SEQUENCE_START",
        append_token="SEQUENCE_END",
        delimiter=target_delimiter)
    dataset_target = tf.contrib.slim.dataset.Dataset(
        data_sources=data_sources_target,
        reader=reader,
        decoder=decoder_target,
        num_samples=num_samples,
        items_to_descriptions={})
  return ParallelDataProvider(
      dataset1=dataset_source, dataset2=dataset_target, **kwargs)
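To make the decoder configuration above concrete, here is a minimal pure-Python sketch of the token-splitting behavior that SplitTokensDecoder applies to each text line: split on a delimiter, optionally prepend/append special tokens, and record the resulting length. The `split_tokens` helper is hypothetical (it is not part of TensorFlow or the project above); it only illustrates why the source decoder appends SEQUENCE_END while the target decoder additionally prepends SEQUENCE_START.

```python
def split_tokens(line, delimiter=" ", prepend_token=None, append_token=None):
    """Illustrative stand-in for SplitTokensDecoder's per-line decoding."""
    tokens = line.split(delimiter)
    if prepend_token is not None:
        tokens = [prepend_token] + tokens
    if append_token is not None:
        tokens = tokens + [append_token]
    # Mirrors the tokens/length feature pair (e.g. source_tokens/source_len).
    return {"tokens": tokens, "length": len(tokens)}

# Source side: only SEQUENCE_END is appended.
source = split_tokens("the cat sat", append_token="SEQUENCE_END")
# Target side: SEQUENCE_START is prepended and SEQUENCE_END is appended,
# marking both decoding boundaries for the sequence-to-sequence model.
target = split_tokens("le chat",
                      prepend_token="SEQUENCE_START",
                      append_token="SEQUENCE_END")
```

The asymmetry matters at training time: the decoder network consumes the target sequence shifted by the start token and learns to emit the end token, whereas the encoder only needs to know where the source ends.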
Author: akanimax | Project: natural-language-summary-generation-from-structured-data | Lines of code: 55 | Source file: parallel_data_provider.py