This article collects typical usage examples of the `TokenEmbedding` method from the Python module `syntaxnet.dictionary_pb2`. If you are wondering how `dictionary_pb2.TokenEmbedding` is used in practice, the curated code examples below may help. You can also explore other usage examples from the containing module, `syntaxnet.dictionary_pb2`.
Below are 2 code examples of `dictionary_pb2.TokenEmbedding`, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Python code examples.
Example 1: testWordEmbeddingInitializer
# Required import: from syntaxnet import dictionary_pb2
# Alternatively: from syntaxnet.dictionary_pb2 import TokenEmbedding
def testWordEmbeddingInitializer(self):
  def _TokenEmbedding(token, embedding):
    # Build a TokenEmbedding proto and return its serialized bytes.
    e = dictionary_pb2.TokenEmbedding()
    e.token = token
    e.vector.values.extend(embedding)
    return e.SerializeToString()

  # Provide embeddings for the first three words in the word map.
  records_path = os.path.join(FLAGS.test_tmpdir, 'sstable-00000-of-00001')
  writer = tf.python_io.TFRecordWriter(records_path)
  writer.write(_TokenEmbedding('.', [1, 2]))
  writer.write(_TokenEmbedding(',', [3, 4]))
  writer.write(_TokenEmbedding('the', [5, 6]))
  del writer  # Closes the writer, flushing the records to disk.

  with self.test_session():
    embeddings = gen_parser_ops.word_embedding_initializer(
        vectors=records_path,
        task_context=self._task_context).eval()
    self.assertAllClose(
        np.array([[1. / (1 + 4) ** .5, 2. / (1 + 4) ** .5],
                  [3. / (9 + 16) ** .5, 4. / (9 + 16) ** .5],
                  [5. / (25 + 36) ** .5, 6. / (25 + 36) ** .5]]),
        embeddings[:3,])
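The expected values in the assertion above imply that `word_embedding_initializer` L2-normalizes each embedding vector it reads: every row is divided by its Euclidean norm (e.g. `[1, 2]` becomes `[1/sqrt(5), 2/sqrt(5)]`). A minimal sketch of that normalization in plain Python, with `l2_normalize` as a hypothetical helper name not taken from SyntaxNet:

```python
import math

def l2_normalize(vector):
    """Scale a vector to unit Euclidean length, as the expected
    values in the test above suggest the initializer does."""
    norm = math.sqrt(sum(v * v for v in vector))
    return [v / norm for v in vector]

# The record written for '.' carried [1, 2]; normalized it matches
# the first expected row: [1/sqrt(5), 2/sqrt(5)].
row = l2_normalize([1, 2])
```

Checking `l2_normalize([3, 4])` against the second expected row gives `[3/5, 4/5]`, i.e. `[0.6, 0.8]`, consistent with the test's `[3. / 25 ** .5, 4. / 25 ** .5]`.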
Example 2: _token_embedding
# Required import: from syntaxnet import dictionary_pb2
# Alternatively: from syntaxnet.dictionary_pb2 import TokenEmbedding
def _token_embedding(self, token, embedding):
  # Build a TokenEmbedding proto for one token and serialize it.
  e = dictionary_pb2.TokenEmbedding()
  e.token = token
  e.vector.values.extend(embedding)
  return e.SerializeToString()