

Python gen_parser_ops.lexicon_builder method: code examples

This article collects typical usage examples of the Python method syntaxnet.ops.gen_parser_ops.lexicon_builder. If you are wondering what gen_parser_ops.lexicon_builder does or how to use it, the curated examples below may help. You can also explore further usage examples from the syntaxnet.ops.gen_parser_ops module.


The following shows 5 code examples of the gen_parser_ops.lexicon_builder method, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python examples.
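Before diving into the examples, here is a minimal sketch of the pattern they all share: point the op at a task context file and run it to build the term maps. The context path below is a placeholder assumption, not taken from any of the projects:

import tensorflow as tf
from syntaxnet.ops import gen_parser_ops

# Placeholder path: a SyntaxNet task context protobuf describing the corpus.
task_context = '/tmp/context.pbtxt'

with tf.Session() as sess:
  # Builds the lexicon (term maps) from the corpus named 'training-corpus'
  # in the task context.
  sess.run(gen_parser_ops.lexicon_builder(
      task_context=task_context, corpus_name='training-corpus'))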

Example 1: setUp

# Required imports: from syntaxnet.ops import gen_parser_ops [as alias]
# Or: from syntaxnet.ops.gen_parser_ops import lexicon_builder [as alias]
def setUp(self):
    # Creates a task context with the correct testing paths.
    initial_task_context = os.path.join(FLAGS.test_srcdir,
                                        'syntaxnet/'
                                        'testdata/context.pbtxt')
    self._task_context = os.path.join(FLAGS.test_tmpdir, 'context.pbtxt')
    with open(initial_task_context, 'r') as fin:
      with open(self._task_context, 'w') as fout:
        fout.write(fin.read().replace('SRCDIR', FLAGS.test_srcdir)
                   .replace('OUTPATH', FLAGS.test_tmpdir))

    # Creates necessary term maps.
    with self.test_session() as sess:
      gen_parser_ops.lexicon_builder(task_context=self._task_context,
                                     corpus_name='training-corpus').run()
      self._num_features, self._num_feature_ids, _, self._num_actions = (
          sess.run(gen_parser_ops.feature_size(task_context=self._task_context,
                                               arg_prefix='brain_parser'))) 
Developer ID: ringringyi, Project: DOTA_models, Lines of code: 20, Source: beam_reader_ops_test.py

Example 2: setUp

# Required imports: from syntaxnet.ops import gen_parser_ops [as alias]
# Or: from syntaxnet.ops.gen_parser_ops import lexicon_builder [as alias]
def setUp(self):
    # Creates a task context with the correct testing paths.
    initial_task_context = os.path.join(
        FLAGS.test_srcdir,
        'syntaxnet/'
        'testdata/context.pbtxt')
    self._task_context = os.path.join(FLAGS.test_tmpdir, 'context.pbtxt')
    with open(initial_task_context, 'r') as fin:
      with open(self._task_context, 'w') as fout:
        fout.write(fin.read().replace('SRCDIR', FLAGS.test_srcdir)
                   .replace('OUTPATH', FLAGS.test_tmpdir))

    # Creates necessary term maps.
    with self.test_session() as sess:
      gen_parser_ops.lexicon_builder(task_context=self._task_context,
                                     corpus_name='training-corpus').run()
      self._num_features, self._num_feature_ids, _, self._num_actions = (
          sess.run(gen_parser_ops.feature_size(task_context=self._task_context,
                                               arg_prefix='brain_parser'))) 
Developer ID: coderSkyChen, Project: Action_Recognition_Zoo, Lines of code: 21, Source: beam_reader_ops_test.py

Example 3: setUp

# Required imports: from syntaxnet.ops import gen_parser_ops [as alias]
# Or: from syntaxnet.ops.gen_parser_ops import lexicon_builder [as alias]
def setUp(self):
    # Creates a task context with the correct testing paths.
    initial_task_context = os.path.join(test_flags.source_root(),
                                        'syntaxnet/'
                                        'testdata/context.pbtxt')
    self._task_context = os.path.join(test_flags.temp_dir(), 'context.pbtxt')
    with open(initial_task_context, 'r') as fin:
      with open(self._task_context, 'w') as fout:
        fout.write(fin.read().replace('SRCDIR', test_flags.source_root())
                   .replace('OUTPATH', test_flags.temp_dir()))

    # Creates necessary term maps.
    with self.test_session() as sess:
      gen_parser_ops.lexicon_builder(task_context=self._task_context,
                                     corpus_name='training-corpus').run()
      self._num_features, self._num_feature_ids, _, self._num_actions = (
          sess.run(gen_parser_ops.feature_size(task_context=self._task_context,
                                               arg_prefix='brain_parser'))) 
Developer ID: generalized-iou, Project: g-tensorflow-models, Lines of code: 20, Source: beam_reader_ops_test.py

Example 4: build_lexicon

# Required imports: from syntaxnet.ops import gen_parser_ops [as alias]
# Or: from syntaxnet.ops.gen_parser_ops import lexicon_builder [as alias]
def build_lexicon(output_path,
                  training_corpus_path,
                  tf_master='',
                  training_corpus_format='conll-sentence',
                  morph_to_pos=False,
                  **kwargs):
  """Constructs a SyntaxNet lexicon at the given path.

  Args:
    output_path: Location to construct the lexicon.
    training_corpus_path: Path to CONLL formatted training data.
    tf_master: TensorFlow master executor (string, defaults to '' to use the
      local instance).
    training_corpus_format: Format of the training corpus (defaults to CONLL;
      search for REGISTER_SYNTAXNET_DOCUMENT_FORMAT for other formats).
    morph_to_pos: Whether to serialize morph attributes to the tag field,
      combined with category and fine POS tag.
    **kwargs: Forwarded to the LexiconBuilder op.
  """
  context = create_lexicon_context(output_path)
  if morph_to_pos:
    context.parameter.add(name='join_category_to_pos', value='true')
    context.parameter.add(name='add_pos_as_attribute', value='true')
    context.parameter.add(name='serialize_morph_to_pos', value='true')

  # Add the training data to the context.
  resource = context.input.add()
  resource.name = 'corpus'
  resource.record_format.extend([training_corpus_format])
  part = resource.part.add()
  part.file_pattern = training_corpus_path

  # Run the lexicon builder op.
  with tf.Session(tf_master) as sess:
    sess.run(
        gen_parser_ops.lexicon_builder(
            task_context_str=str(context), corpus_name='corpus', **kwargs)) 
Developer ID: ringringyi, Project: DOTA_models, Lines of code: 39, Source: lexicon.py
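A hypothetical invocation of build_lexicon (both paths are illustrative placeholders, not taken from the original project):

# Sketch only: builds a lexicon under /tmp/lexicon from a CoNLL training file.
build_lexicon('/tmp/lexicon', '/data/train.conll')

This function wraps the raw gen_parser_ops.lexicon_builder op behind a friendlier interface: the caller supplies paths, and the function assembles the task context protobuf before running the op.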

Example 5: BuildLexicon

# Required imports: from syntaxnet.ops import gen_parser_ops [as alias]
# Or: from syntaxnet.ops.gen_parser_ops import lexicon_builder [as alias]
def BuildLexicon(self):
    with self.test_session():
      gen_parser_ops.lexicon_builder(
          task_context=self.context_file,
          lexicon_max_char_ngram_length=2,
          lexicon_char_ngram_mark_boundaries=True).run() 
Developer ID: ringringyi, Project: DOTA_models, Lines of code: 8, Source: lexicon_builder_test.py
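For comparison, a sketch of the same op with a different character n-gram configuration; the length of 3 is an arbitrary illustration, not taken from the original test:

def BuildTrigramLexicon(self):
  # Sketch only: character n-grams up to length 3, without boundary markers.
  with self.test_session():
    gen_parser_ops.lexicon_builder(
        task_context=self.context_file,
        lexicon_max_char_ngram_length=3).run()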


Note: The syntaxnet.ops.gen_parser_ops.lexicon_builder examples in this article were compiled by 纯净天空 from open-source code and documentation hosted on platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their original authors; copyright remains with those authors, and distribution and use are subject to each project's license. Do not reproduce without permission.