This article collects typical usage examples of the Python attribute official.utils.logs.logger.RUN_STATUS_SUCCESS. If you are wondering what logger.RUN_STATUS_SUCCESS does or how to use it, the curated code examples below may help. You can also explore the module it belongs to, official.utils.logs.logger, for further usage.
Four code examples of the logger.RUN_STATUS_SUCCESS attribute are shown below, sorted by popularity by default.
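Before the examples: RUN_STATUS_SUCCESS is one of the run-status values the benchmark logger reports when a run ends. The real implementation lives in official/utils/logs/logger.py; the sketch below is a simplified stand-in (the constant values and the function signature are illustrative, not the library's) showing the context-manager pattern that the tests in Examples 2 and 4 exercise.

```python
import contextlib
from unittest import mock

# Illustrative stand-ins for the logger module's status constants;
# the real values are defined in official.utils.logs.logger.
RUN_STATUS_SUCCESS = "success"
RUN_STATUS_FAILURE = "failure"

@contextlib.contextmanager
def benchmark_context(benchmark_logger):
    """Simplified sketch: report success on clean exit, failure on exception."""
    try:
        yield
        benchmark_logger.on_finish(RUN_STATUS_SUCCESS)
    except Exception:
        benchmark_logger.on_finish(RUN_STATUS_FAILURE)
        raise

# Usage mirrors the tests below: a MagicMock stands in for the logger.
mock_logger = mock.MagicMock()
with benchmark_context(mock_logger):
    pass  # the benchmarked work would run here
mock_logger.on_finish.assert_called_once_with(RUN_STATUS_SUCCESS)
```

Because the status is reported from the context manager's exit path, every caller gets consistent success/failure reporting without repeating try/except boilerplate.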
Example 1: main
# Required import: from official.utils.logs import logger [as alias]
# Or: from official.utils.logs.logger import RUN_STATUS_SUCCESS [as alias]
def main(_):
  if not flags.FLAGS.benchmark_log_dir:
    print("Usage: benchmark_uploader.py --benchmark_log_dir=/some/dir")
    sys.exit(1)
  uploader = benchmark_uploader.BigQueryUploader(
      gcp_project=flags.FLAGS.gcp_project)
  run_id = str(uuid.uuid4())
  run_json_file = os.path.join(
      flags.FLAGS.benchmark_log_dir, logger.BENCHMARK_RUN_LOG_FILE_NAME)
  metric_json_file = os.path.join(
      flags.FLAGS.benchmark_log_dir, logger.METRIC_LOG_FILE_NAME)
  uploader.upload_benchmark_run_file(
      flags.FLAGS.bigquery_data_set, flags.FLAGS.bigquery_run_table, run_id,
      run_json_file)
  uploader.upload_metric_file(
      flags.FLAGS.bigquery_data_set, flags.FLAGS.bigquery_metric_table, run_id,
      metric_json_file)
  # Assume the run finished successfully before the user invoked the upload
  # script.
  uploader.insert_run_status(
      flags.FLAGS.bigquery_data_set, flags.FLAGS.bigquery_run_status_table,
      run_id, logger.RUN_STATUS_SUCCESS)
Developer: ShivangShekhar, Project: Live-feed-object-device-identification-using-Tensorflow-and-OpenCV, Lines: 25, Source: benchmark_uploader_main.py
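The uploader in Example 1 reads newline-delimited JSON log files from benchmark_log_dir. The exact file names and field schema come from official.utils.logs.logger; the sketch below fabricates a metric log with hypothetical field names purely to illustrate the file layout an uploader of this kind would consume.

```python
import json
import os
import tempfile

# Hypothetical metric entries; the real schema is defined by the
# benchmark logger, and these field names are illustrative only.
metrics = [
    {"name": "accuracy", "value": 0.91, "global_step": 1000},
    {"name": "loss", "value": 0.31, "global_step": 1000},
]

log_dir = tempfile.mkdtemp()
metric_file = os.path.join(log_dir, "metric.log")  # file name assumed

# One JSON object per line (newline-delimited JSON).
with open(metric_file, "w") as f:
    for entry in metrics:
        f.write(json.dumps(entry) + "\n")

# An uploader would parse the file back line by line before inserting rows.
with open(metric_file) as f:
    parsed = [json.loads(line) for line in f]
```

Newline-delimited JSON keeps each metric record independently parseable, which is why the uploader can stream the file into BigQuery row by row.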
Example 2: test_benchmark_context
# Required import: from official.utils.logs import logger [as alias]
# Or: from official.utils.logs.logger import RUN_STATUS_SUCCESS [as alias]
def test_benchmark_context(self, mock_config_benchmark_logger):
  mock_logger = mock.MagicMock()
  mock_config_benchmark_logger.return_value = mock_logger
  with logger.benchmark_context(None):
    tf.compat.v1.logging.info("start benchmarking")
  mock_logger.on_finish.assert_called_once_with(logger.RUN_STATUS_SUCCESS)
Example 3: test_on_finish
# Required import: from official.utils.logs import logger [as alias]
# Or: from official.utils.logs.logger import RUN_STATUS_SUCCESS [as alias]
def test_on_finish(self):
  self.logger.on_finish(logger.RUN_STATUS_SUCCESS)
  # log_metric will call upload_benchmark_metric_json in a separate thread.
  # Give the new thread a grace period before asserting.
  time.sleep(1)
  self.mock_bq_uploader.update_run_status.assert_called_once_with(
      "dataset", "run_status_table", "run_id", logger.RUN_STATUS_SUCCESS)
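Example 3's time.sleep(1) gives the logger's background upload thread a grace period before the assertion runs. When the code under test exposes its thread handle, joining the thread is a deterministic alternative; the helper below is a hypothetical reconstruction of that pattern, not code from the library.

```python
import threading
from unittest import mock

uploader = mock.MagicMock()

def upload_run_status_async(run_id, status):
    # Fire the upload on a background thread, as the benchmark logger does,
    # but return the thread so callers (and tests) can join it.
    t = threading.Thread(
        target=uploader.update_run_status,
        args=("dataset", "run_status_table", run_id, status))
    t.start()
    return t

thread = upload_run_status_async("run_id", "success")
thread.join()  # deterministic: no sleep-based grace period needed
uploader.update_run_status.assert_called_once_with(
    "dataset", "run_status_table", "run_id", "success")
```

Joining removes the race that a fixed sleep only papers over, at the cost of requiring the production code to hand back its thread object.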
Example 4: test_benchmark_context
# Required import: from official.utils.logs import logger [as alias]
# Or: from official.utils.logs.logger import RUN_STATUS_SUCCESS [as alias]
def test_benchmark_context(self, mock_config_benchmark_logger):
  mock_logger = mock.MagicMock()
  mock_config_benchmark_logger.return_value = mock_logger
  with logger.benchmark_context(None):
    tf.logging.info("start benchmarking")
  mock_logger.on_finish.assert_called_once_with(logger.RUN_STATUS_SUCCESS)