This page collects typical usage examples of the Python method syntaxnet.dictionary_pb2.TokenEmbedding. If you are unsure how to use dictionary_pb2.TokenEmbedding, the curated code samples below may help. You can also explore other usages of the containing module, syntaxnet.dictionary_pb2.
Two code examples of dictionary_pb2.TokenEmbedding are shown below, sorted by popularity by default.
Example 1: testWordEmbeddingInitializer
# Required import: from syntaxnet import dictionary_pb2 [as alias]
# Or: from syntaxnet.dictionary_pb2 import TokenEmbedding [as alias]
def testWordEmbeddingInitializer(self):
  def _TokenEmbedding(token, embedding):
    e = dictionary_pb2.TokenEmbedding()
    e.token = token
    e.vector.values.extend(embedding)
    return e.SerializeToString()

  # Provide embeddings for the first three words in the word map.
  records_path = os.path.join(FLAGS.test_tmpdir, 'sstable-00000-of-00001')
  writer = tf.python_io.TFRecordWriter(records_path)
  writer.write(_TokenEmbedding('.', [1, 2]))
  writer.write(_TokenEmbedding(',', [3, 4]))
  writer.write(_TokenEmbedding('the', [5, 6]))
  del writer  # deleting the writer flushes and closes the record file

  with self.test_session():
    embeddings = gen_parser_ops.word_embedding_initializer(
        vectors=records_path,
        task_context=self._task_context).eval()
    self.assertAllClose(
        np.array([[1. / (1 + 4) ** .5, 2. / (1 + 4) ** .5],
                  [3. / (9 + 16) ** .5, 4. / (9 + 16) ** .5],
                  [5. / (25 + 36) ** .5, 6. / (25 + 36) ** .5]]),
        embeddings[:3,])
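The expected matrix in the assertion above is just each raw embedding vector scaled to unit L2 norm (e.g. [1, 2] becomes [1/√5, 2/√5]). A minimal pure-Python sketch of that normalization, independent of TensorFlow and SyntaxNet, illustrates the arithmetic the test checks:

```python
import math

def l2_normalize(row):
    """Scale a vector to unit L2 norm, matching the test's expected values."""
    norm = math.sqrt(sum(v * v for v in row))
    return [v / norm for v in row]

# The raw embeddings written to the TFRecord file in the test above.
raw = [[1, 2], [3, 4], [5, 6]]
normalized = [l2_normalize(row) for row in raw]
# Each normalized row now has unit length, e.g. [1, 2] -> [1/sqrt(5), 2/sqrt(5)]
```

Note this is an illustrative sketch of the normalization the assertion encodes, not the internals of the word_embedding_initializer op itself.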
Example 2: _token_embedding
# Required import: from syntaxnet import dictionary_pb2 [as alias]
# Or: from syntaxnet.dictionary_pb2 import TokenEmbedding [as alias]
def _token_embedding(self, token, embedding):
  e = dictionary_pb2.TokenEmbedding()
  e.token = token
  e.vector.values.extend(embedding)
  return e.SerializeToString()