This page collects typical usage examples of the Python method tensorflow.contrib.slim.python.slim.data.data_provider.DataProvider. If you are wondering how data_provider.DataProvider is used in practice, the curated examples below may help. You can also explore the containing module, tensorflow.contrib.slim.python.slim.data.data_provider, for more detail.
Below is 1 code example of the data_provider.DataProvider method, sorted by popularity by default. Upvoting the examples you find useful helps the system recommend better Python code examples.
Example 1: make_parallel_data_provider
# Required import: from tensorflow.contrib.slim.python.slim.data import data_provider [as alias]
# Or: from tensorflow.contrib.slim.python.slim.data.data_provider import DataProvider [as alias]
def make_parallel_data_provider(data_sources_source,
                                data_sources_target,
                                reader=tf.TextLineReader,
                                num_samples=None,
                                source_delimiter=" ",
                                target_delimiter=" ",
                                **kwargs):
  """Creates a DataProvider that reads parallel text data.

  Args:
    data_sources_source: A list of data sources for the source text files.
    data_sources_target: A list of data sources for the target text files.
      Can be None for inference mode.
    reader: A reader class for the data files. Defaults to tf.TextLineReader.
    num_samples: Optional number of records in the dataset.
    source_delimiter: Token delimiter for the source data. Defaults to space.
    target_delimiter: Token delimiter for the target data. Defaults to space.
    kwargs: Additional arguments (shuffle, num_epochs, etc.) that are passed
      to the data provider.

  Returns:
    A DataProvider instance.
  """
  decoder_source = split_tokens_decoder.SplitTokensDecoder(
      tokens_feature_name="source_tokens",
      length_feature_name="source_len",
      append_token="SEQUENCE_END",
      delimiter=source_delimiter)
  dataset_source = tf.contrib.slim.dataset.Dataset(
      data_sources=data_sources_source,
      reader=reader,
      decoder=decoder_source,
      num_samples=num_samples,
      items_to_descriptions={})
  dataset_target = None
  if data_sources_target is not None:
    decoder_target = split_tokens_decoder.SplitTokensDecoder(
        tokens_feature_name="target_tokens",
        length_feature_name="target_len",
        prepend_token="SEQUENCE_START",
        append_token="SEQUENCE_END",
        delimiter=target_delimiter)
    dataset_target = tf.contrib.slim.dataset.Dataset(
        data_sources=data_sources_target,
        reader=reader,
        decoder=decoder_target,
        num_samples=num_samples,
        items_to_descriptions={})
  return ParallelDataProvider(
      dataset1=dataset_source, dataset2=dataset_target, **kwargs)
Author: akanimax, Project: natural-language-summary-generation-from-structured-data, Lines: 55, Source: parallel_data_provider.py
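To see what the two SplitTokensDecoder instances above produce per line, here is a plain-Python sketch (the `split_tokens` helper is hypothetical, not part of TensorFlow or the project): it splits a line on the delimiter, optionally adds the SEQUENCE_START/SEQUENCE_END markers, and reports the resulting token count, mirroring the `*_tokens`/`*_len` feature pair.

```python
def split_tokens(line, delimiter=" ", prepend_token=None, append_token=None):
    """Sketch of one decoded record: token list plus its length."""
    tokens = line.split(delimiter)
    if prepend_token is not None:
        tokens = [prepend_token] + tokens
    if append_token is not None:
        tokens = tokens + [append_token]
    return tokens, len(tokens)

# Source side: only SEQUENCE_END is appended.
src_tokens, src_len = split_tokens("the cat sat", append_token="SEQUENCE_END")

# Target side: SEQUENCE_START is prepended and SEQUENCE_END appended,
# matching the decoder_target configuration above.
tgt_tokens, tgt_len = split_tokens(
    "le chat", prepend_token="SEQUENCE_START", append_token="SEQUENCE_END")
```

In the real pipeline this work happens inside the TensorFlow graph via the decoders, but the shape of the output is the same: a token sequence with the special markers and an integer length feature.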