

Python gen_parser_ops.lexicon_builder Method Code Examples

This article collects typical usage examples of the Python method syntaxnet.ops.gen_parser_ops.lexicon_builder. If you are wondering how gen_parser_ops.lexicon_builder is used in practice, the curated examples below should help. You can also explore further usage examples from the enclosing module, syntaxnet.ops.gen_parser_ops.


The following sections present five code examples of the gen_parser_ops.lexicon_builder method, sorted by popularity by default.
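Before the examples, here is a minimal sketch of the common calling pattern distilled from the snippets below. It assumes a working SyntaxNet installation and a hypothetical task-context file at /tmp/context.pbtxt that declares a corpus named 'training-corpus':

import tensorflow as tf
from syntaxnet.ops import gen_parser_ops

# Hypothetical path: a task context that declares the corpus inputs and
# the output locations for the generated term maps.
task_context_path = '/tmp/context.pbtxt'

with tf.Session() as sess:
  # Build the lexicon (term maps) from the corpus registered as
  # 'training-corpus' in the task context.
  sess.run(gen_parser_ops.lexicon_builder(
      task_context=task_context_path,
      corpus_name='training-corpus'))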

Example 1: setUp

# Required import: from syntaxnet.ops import gen_parser_ops [as alias]
# Or: from syntaxnet.ops.gen_parser_ops import lexicon_builder [as alias]
def setUp(self):
    # Creates a task context with the correct testing paths.
    initial_task_context = os.path.join(FLAGS.test_srcdir,
                                        'syntaxnet/'
                                        'testdata/context.pbtxt')
    self._task_context = os.path.join(FLAGS.test_tmpdir, 'context.pbtxt')
    with open(initial_task_context, 'r') as fin:
      with open(self._task_context, 'w') as fout:
        fout.write(fin.read().replace('SRCDIR', FLAGS.test_srcdir)
                   .replace('OUTPATH', FLAGS.test_tmpdir))

    # Creates necessary term maps.
    with self.test_session() as sess:
      gen_parser_ops.lexicon_builder(task_context=self._task_context,
                                     corpus_name='training-corpus').run()
      self._num_features, self._num_feature_ids, _, self._num_actions = (
          sess.run(gen_parser_ops.feature_size(task_context=self._task_context,
                                               arg_prefix='brain_parser'))) 
Developer: ringringyi, Project: DOTA_models, Lines: 20, Source: beam_reader_ops_test.py

Example 2: setUp

# Required import: from syntaxnet.ops import gen_parser_ops [as alias]
# Or: from syntaxnet.ops.gen_parser_ops import lexicon_builder [as alias]
def setUp(self):
    # Creates a task context with the correct testing paths.
    initial_task_context = os.path.join(
        FLAGS.test_srcdir,
        'syntaxnet/'
        'testdata/context.pbtxt')
    self._task_context = os.path.join(FLAGS.test_tmpdir, 'context.pbtxt')
    with open(initial_task_context, 'r') as fin:
      with open(self._task_context, 'w') as fout:
        fout.write(fin.read().replace('SRCDIR', FLAGS.test_srcdir)
                   .replace('OUTPATH', FLAGS.test_tmpdir))

    # Creates necessary term maps.
    with self.test_session() as sess:
      gen_parser_ops.lexicon_builder(task_context=self._task_context,
                                     corpus_name='training-corpus').run()
      self._num_features, self._num_feature_ids, _, self._num_actions = (
          sess.run(gen_parser_ops.feature_size(task_context=self._task_context,
                                               arg_prefix='brain_parser'))) 
Developer: coderSkyChen, Project: Action_Recognition_Zoo, Lines: 21, Source: beam_reader_ops_test.py

Example 3: setUp

# Required import: from syntaxnet.ops import gen_parser_ops [as alias]
# Or: from syntaxnet.ops.gen_parser_ops import lexicon_builder [as alias]
def setUp(self):
    # Creates a task context with the correct testing paths.
    initial_task_context = os.path.join(test_flags.source_root(),
                                        'syntaxnet/'
                                        'testdata/context.pbtxt')
    self._task_context = os.path.join(test_flags.temp_dir(), 'context.pbtxt')
    with open(initial_task_context, 'r') as fin:
      with open(self._task_context, 'w') as fout:
        fout.write(fin.read().replace('SRCDIR', test_flags.source_root())
                   .replace('OUTPATH', test_flags.temp_dir()))

    # Creates necessary term maps.
    with self.test_session() as sess:
      gen_parser_ops.lexicon_builder(task_context=self._task_context,
                                     corpus_name='training-corpus').run()
      self._num_features, self._num_feature_ids, _, self._num_actions = (
          sess.run(gen_parser_ops.feature_size(task_context=self._task_context,
                                               arg_prefix='brain_parser'))) 
Developer: generalized-iou, Project: g-tensorflow-models, Lines: 20, Source: beam_reader_ops_test.py

Example 4: build_lexicon

# Required import: from syntaxnet.ops import gen_parser_ops [as alias]
# Or: from syntaxnet.ops.gen_parser_ops import lexicon_builder [as alias]
def build_lexicon(output_path,
                  training_corpus_path,
                  tf_master='',
                  training_corpus_format='conll-sentence',
                  morph_to_pos=False,
                  **kwargs):
  """Constructs a SyntaxNet lexicon at the given path.

  Args:
    output_path: Location to construct the lexicon.
    training_corpus_path: Path to CONLL formatted training data.
    tf_master: TensorFlow master executor (string, defaults to '' to use the
      local instance).
    training_corpus_format: Format of the training corpus (defaults to CONLL;
      search for REGISTER_SYNTAXNET_DOCUMENT_FORMAT for other formats).
    morph_to_pos: Whether to serialize morph attributes to the tag field,
      combined with category and fine POS tag.
    **kwargs: Forwarded to the LexiconBuilder op.
  """
  context = create_lexicon_context(output_path)
  if morph_to_pos:
    context.parameter.add(name='join_category_to_pos', value='true')
    context.parameter.add(name='add_pos_as_attribute', value='true')
    context.parameter.add(name='serialize_morph_to_pos', value='true')

  # Add the training data to the context.
  resource = context.input.add()
  resource.name = 'corpus'
  resource.record_format.extend([training_corpus_format])
  part = resource.part.add()
  part.file_pattern = training_corpus_path

  # Run the lexicon builder op.
  with tf.Session(tf_master) as sess:
    sess.run(
        gen_parser_ops.lexicon_builder(
            task_context_str=str(context), corpus_name='corpus', **kwargs)) 
Developer: ringringyi, Project: DOTA_models, Lines: 39, Source: lexicon.py
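A usage sketch for the helper above; the paths are placeholders rather than values from the original project:

# Hypothetical paths; point these at your own output directory and
# CoNLL-formatted training corpus.
build_lexicon(
    output_path='/tmp/lexicon',
    training_corpus_path='/data/train.conll',
    morph_to_pos=True)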

Example 5: BuildLexicon

# Required import: from syntaxnet.ops import gen_parser_ops [as alias]
# Or: from syntaxnet.ops.gen_parser_ops import lexicon_builder [as alias]
def BuildLexicon(self):
    with self.test_session():
      gen_parser_ops.lexicon_builder(
          task_context=self.context_file,
          lexicon_max_char_ngram_length=2,
          lexicon_char_ngram_mark_boundaries=True).run() 
Developer: ringringyi, Project: DOTA_models, Lines: 8, Source: lexicon_builder_test.py
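Outside a test harness, the same character-n-gram attributes can be passed through an ordinary session. A sketch, assuming context.pbtxt is a valid task context in the working directory:

import tensorflow as tf
from syntaxnet.ops import gen_parser_ops

with tf.Session() as sess:
  # Limit generated character n-grams to length <= 2 and mark
  # word-boundary n-grams, mirroring the test above.
  sess.run(gen_parser_ops.lexicon_builder(
      task_context='context.pbtxt',
      lexicon_max_char_ngram_length=2,
      lexicon_char_ngram_mark_boundaries=True))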


Note: The syntaxnet.ops.gen_parser_ops.lexicon_builder method examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets are selected from open-source projects contributed by their developers, and copyright in the source code remains with the original authors; consult each project's License before distributing or using the code. Do not reproduce this article without permission.