

Python skip_thoughts_encoder.SkipThoughtsEncoder Code Examples

This article collects typical usage examples of the SkipThoughtsEncoder class from the Python module skip_thoughts.skip_thoughts_encoder. If you have been wondering what skip_thoughts_encoder.SkipThoughtsEncoder does, how to use it, or where to find working examples, the curated snippets below should help. You can also explore the rest of the skip_thoughts.skip_thoughts_encoder module for related usage.


Three code examples of skip_thoughts_encoder.SkipThoughtsEncoder are shown below, ordered by popularity.
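Before diving into the excerpts, here is a minimal end-to-end sketch of how these load_model implementations are typically driven. It follows the EncoderManager interface from the tensorflow/models skip_thoughts code; the file paths and checkpoint name are illustrative assumptions, not values taken from this article.

from skip_thoughts import configuration
from skip_thoughts import encoder_manager

# Illustrative paths; substitute your own trained model artifacts.
manager = encoder_manager.EncoderManager()
manager.load_model(configuration.model_config(),
                   vocabulary_file="/tmp/skip_thoughts/vocab.txt",
                   embedding_matrix_file="/tmp/skip_thoughts/embeddings.npy",
                   checkpoint_path="/tmp/skip_thoughts/model.ckpt-500008")

# encode() returns one fixed-length vector per input sentence.
encodings = manager.encode(["the quick brown fox jumps over the lazy dog"])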

Example 1: load_model

# Required import: from skip_thoughts import skip_thoughts_encoder
# Alternatively: from skip_thoughts.skip_thoughts_encoder import SkipThoughtsEncoder
import collections

import numpy as np
import tensorflow as tf

from skip_thoughts import skip_thoughts_encoder
def load_model(self, model_config, vocabulary_file, embedding_matrix_file,
                 checkpoint_path):
    """Loads a skip-thoughts model.

    Args:
      model_config: Object containing parameters for building the model.
      vocabulary_file: Path to vocabulary file containing a list of newline-
        separated words where the word id is the corresponding 0-based index in
        the file.
      embedding_matrix_file: Path to a serialized numpy array of shape
        [vocab_size, embedding_dim].
      checkpoint_path: SkipThoughtsModel checkpoint file or a directory
        containing a checkpoint file.
    """
    tf.logging.info("Reading vocabulary from %s", vocabulary_file)
    with tf.gfile.GFile(vocabulary_file, mode="r") as f:
      lines = list(f.readlines())
    reverse_vocab = [line.decode("utf-8").strip() for line in lines]
    tf.logging.info("Loaded vocabulary with %d words.", len(reverse_vocab))

    tf.logging.info("Loading embedding matrix from %s", embedding_matrix_file)
    # Note: tf.gfile.GFile doesn't work here because np.load() calls f.seek()
    # with 3 arguments.
    embedding_matrix = np.load(embedding_matrix_file)
    tf.logging.info("Loaded embedding matrix with shape %s",
                    embedding_matrix.shape)

    word_embeddings = collections.OrderedDict(
        zip(reverse_vocab, embedding_matrix))

    g = tf.Graph()
    with g.as_default():
      encoder = skip_thoughts_encoder.SkipThoughtsEncoder(word_embeddings)
      restore_model = encoder.build_graph_from_config(model_config,
                                                      checkpoint_path)

    sess = tf.Session(graph=g)
    restore_model(sess)

    self.encoders.append(encoder)
    self.sessions.append(sess) 
Source: itsamitgoel/Gun-Detector, file: encoder_manager.py
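The docstring above fixes the on-disk format of the two data files: the word id equals the 0-based line index in the vocabulary file, and the embedding matrix is a serialized numpy array of shape [vocab_size, embedding_dim]. As a hypothetical sketch, files in that format could be produced like this (the toy vocabulary is made up; 620 is the word-embedding width used by the original skip-thoughts models):

import numpy as np

# Toy vocabulary: word id = 0-based line index in the file.
vocab = ["<unk>", "the", "quick", "brown", "fox"]
with open("/tmp/skip_thoughts/vocab.txt", "wb") as f:
    f.write("\n".join(vocab).encode("utf-8"))

# Matching embedding matrix: row i is the vector for word i.
embedding_matrix = np.random.rand(len(vocab), 620).astype(np.float32)
np.save("/tmp/skip_thoughts/embeddings.npy", embedding_matrix)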

Example 2: load_model

# Required import: from skip_thoughts import skip_thoughts_encoder
# Alternatively: from skip_thoughts.skip_thoughts_encoder import SkipThoughtsEncoder
# (Module-level imports as in Example 1: collections, numpy as np, tensorflow as tf.)
def load_model(self, model_config, vocabulary_file, embedding_matrix_file,
                 checkpoint_path):
    """Loads a skip-thoughts model.

    Args:
      model_config: Object containing parameters for building the model.
      vocabulary_file: Path to vocabulary file containing a list of newline-
        separated words where the word id is the corresponding 0-based index in
        the file.
      embedding_matrix_file: Path to a serialized numpy array of shape
        [vocab_size, embedding_dim].
      checkpoint_path: SkipThoughtsModel checkpoint file or a directory
        containing a checkpoint file.
    """
    tf.logging.info("Reading vocabulary from %s", vocabulary_file)
    with tf.gfile.GFile(vocabulary_file, mode="r") as f:
      lines = list(f.readlines())
    reverse_vocab = [line.decode("utf-8").strip() for line in lines]
    tf.logging.info("Loaded vocabulary with %d words.", len(reverse_vocab))

    tf.logging.info("Loading embedding matrix from %s", embedding_matrix_file)
    # Note: tf.gfile.GFile doesn't work here because np.load() calls f.seek()
    # with 3 arguments.
    with open(embedding_matrix_file, "r") as f:
      embedding_matrix = np.load(f)
    tf.logging.info("Loaded embedding matrix with shape %s",
                    embedding_matrix.shape)

    word_embeddings = collections.OrderedDict(
        zip(reverse_vocab, embedding_matrix))

    g = tf.Graph()
    with g.as_default():
      encoder = skip_thoughts_encoder.SkipThoughtsEncoder(word_embeddings)
      restore_model = encoder.build_graph_from_config(model_config,
                                                      checkpoint_path)

    sess = tf.Session(graph=g)
    restore_model(sess)

    self.encoders.append(encoder)
    self.sessions.append(sess) 
Source: ringringyi/DOTA_models, file: encoder_manager.py
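Example 2's only real difference from Example 1 is that it hands np.load an open file handle instead of a path. The in-code note about np.load() calling f.seek() with three arguments explains why a TF1 tf.gfile.GFile cannot be that handle. If the matrix lives somewhere only tf.gfile can reach (for example a GCS bucket), one workaround is to buffer the bytes in memory first; a sketch with an illustrative path:

import io

import numpy as np
import tensorflow as tf

# Read everything through tf.gfile, then give np.load a fully seekable
# in-memory buffer, sidestepping the 3-argument f.seek() incompatibility.
with tf.gfile.GFile("gs://my-bucket/embeddings.npy", mode="rb") as f:
    embedding_matrix = np.load(io.BytesIO(f.read()))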

Example 3: load_model

# Required import: from skip_thoughts import skip_thoughts_encoder
# Alternatively: from skip_thoughts.skip_thoughts_encoder import SkipThoughtsEncoder
# (Module-level imports as in Example 1: collections, numpy as np, tensorflow as tf.)
def load_model(self, model_config, vocabulary_file, embedding_matrix_file,
                 checkpoint_path):
    """Loads a skip-thoughts model.

    Args:
      model_config: Object containing parameters for building the model.
      vocabulary_file: Path to vocabulary file containing a list of newline-
        separated words where the word id is the corresponding 0-based index in
        the file.
      embedding_matrix_file: Path to a serialized numpy array of shape
        [vocab_size, embedding_dim].
      checkpoint_path: SkipThoughtsModel checkpoint file or a directory
        containing a checkpoint file.
    """
    tf.logging.info("Reading vocabulary from %s", vocabulary_file)
    with tf.gfile.GFile(vocabulary_file, mode="rb") as f:
      lines = list(f.readlines())
    reverse_vocab = [line.decode("utf-8").strip() for line in lines]
    tf.logging.info("Loaded vocabulary with %d words.", len(reverse_vocab))

    tf.logging.info("Loading embedding matrix from %s", embedding_matrix_file)
    # Note: tf.gfile.GFile doesn't work here because np.load() calls f.seek()
    # with 3 arguments.
    embedding_matrix = np.load(embedding_matrix_file)
    tf.logging.info("Loaded embedding matrix with shape %s",
                    embedding_matrix.shape)

    word_embeddings = collections.OrderedDict(
        zip(reverse_vocab, embedding_matrix))

    g = tf.Graph()
    with g.as_default():
      encoder = skip_thoughts_encoder.SkipThoughtsEncoder(word_embeddings)
      restore_model = encoder.build_graph_from_config(model_config,
                                                      checkpoint_path)

    sess = tf.Session(graph=g)
    restore_model(sess)

    self.encoders.append(encoder)
    self.sessions.append(sess) 
Source: tensorflow/models, file: encoder_manager.py
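All three excerpts finish by appending the encoder and its session to parallel lists on self. For completeness, here is a hedged sketch of what an encode() method consuming those lists might look like, modeled on (but not quoted from) the EncoderManager in tensorflow/models; SkipThoughtsEncoder.encode(sess, data) is the per-encoder entry point:

import numpy as np

def encode(self, data):
    # Run each loaded encoder in its own session and concatenate the
    # resulting sentence vectors feature-wise (one row per input sentence).
    encoded = []
    for encoder, sess in zip(self.encoders, self.sessions):
        encoded.append(np.array(encoder.encode(sess, data)))
    return np.concatenate(encoded, axis=1)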

