

Python gym.upload Method: Code Examples

This article collects typical usage examples of the `gym.upload` method in Python. If you are wondering what exactly `gym.upload` does, how to call it, or what it looks like in practice, the curated examples below may help. You can also explore further usage examples from the `gym` package that provides this method.


Four code examples of the `gym.upload` method are shown below, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Python code examples.

Example 1: upload

# Required import: import gym [as alias]
# Or: from gym import upload [as alias]
def upload():
    """
    Upload the results of training (as automatically recorded by
    your env's monitor) to OpenAI Gym.

    Parameters:
        - training_dir: A directory containing the results of a
          training run.
        - api_key: Your OpenAI API key
        - algorithm_id (default=None): An arbitrary string
          indicating the particular version of the algorithm
          (including choices of parameters) you are running.
    """
    j = request.get_json()
    training_dir = get_required_param(j, 'training_dir')
    api_key      = get_required_param(j, 'api_key')
    algorithm_id = get_optional_param(j, 'algorithm_id', None)

    try:
        gym.upload(training_dir, algorithm_id, writeup=None, api_key=api_key,
                   ignore_open_monitors=False)
        return ('', 204)
    except gym.error.AuthenticationError:
        raise InvalidUsage('You must provide an OpenAI Gym API key')
Developer: openai | Project: gym-http-api | Lines: 26 | Source file: gym_http_server.py
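The helpers `get_required_param` and `get_optional_param` used in Example 1 are defined elsewhere in `gym_http_server.py` and are not shown above. A minimal sketch of what they might look like, assuming `InvalidUsage` is the usual Flask-style request exception (the real gym-http-api implementations may differ):

```python
class InvalidUsage(Exception):
    """Request-level error, following the common Flask error-handler pattern."""
    def __init__(self, message, status_code=400):
        super().__init__(message)
        self.message = message
        self.status_code = status_code

def get_required_param(json_body, name):
    """Return json_body[name], raising InvalidUsage if the key is missing or empty."""
    value = json_body.get(name) if json_body is not None else None
    if value is None or value == '':
        raise InvalidUsage('A value for {} is required'.format(name))
    return value

def get_optional_param(json_body, name, default):
    """Return json_body[name] if present and non-empty, otherwise the default."""
    value = json_body.get(name) if json_body is not None else None
    if value is None or value == '':
        return default
    return value
```

With these helpers, a missing `training_dir` or `api_key` turns into a 400 response instead of a `KeyError`, while `algorithm_id` quietly falls back to `None`.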

Example 2: close

# Required import: import gym [as alias]
# Or: from gym import upload [as alias]
def close(self):
    """Flush all monitor data to disk and close any open rendering windows."""
    super(Monitor, self).close()

    if not self.enabled:
        return
    self.stats_recorder.close()
    if self.video_recorder is not None:
        self._close_video_recorder()
    self._flush(force=True)

    # Stop tracking this for autoclose
    monitor_closer.unregister(self._monitor_id)
    self.enabled = False

    logger.info('''Finished writing results. You can upload them to the scoreboard via gym.upload(%r)''', self.directory)
Developer: hust512 | Project: DQN-DDPG_Stock_Trading | Lines: 18 | Source file: monitor.py
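The `monitor_closer.unregister(...)` call in Example 2 is part of a register/unregister pattern gym uses to ensure every monitor is flushed before the interpreter exits. A minimal, illustrative sketch of such a registry (the real implementation in gym's `utils/closer.py` is more elaborate, with thread safety and weak references):

```python
class Closer(object):
    """Track objects whose close() must run before interpreter exit."""
    def __init__(self):
        self._next_id = 0
        self._closeables = {}

    def register(self, closeable):
        """Record a closeable object and return its tracking id."""
        self._next_id += 1
        self._closeables[self._next_id] = closeable
        return self._next_id

    def unregister(self, registered_id):
        """Stop tracking an id; safe to call for an already-closed monitor."""
        self._closeables.pop(registered_id, None)

    def close(self):
        """Close everything still registered (e.g. from an atexit hook)."""
        for closeable in list(self._closeables.values()):
            closeable.close()
        self._closeables.clear()
```

Unregistering in `Monitor.close()` is what prevents a double `close()` when the registry's own cleanup fires at shutdown.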

Example 3: close

# Required import: import gym [as alias]
# Or: from gym import upload [as alias]
def close(self):
    """Flush all monitor data to disk and close any open rendering windows."""
    if not self.enabled:
        return
    self.stats_recorder.close()
    if self.video_recorder is not None:
        self._close_video_recorder()
    self._flush(force=True)

    # Stop tracking this for autoclose
    monitor_closer.unregister(self._monitor_id)
    self.enabled = False

    logger.info('''Finished writing results. You can upload them to the scoreboard via gym.upload(%r)''', self.directory)
Developer: ArztSamuel | Project: DRL_DeliveryDuel | Lines: 16 | Source file: monitor.py

Example 4: play

# Required import: import gym [as alias]
# Or: from gym import upload [as alias]
# Excerpted class method; the original agent.py imports numpy as np,
# tensorflow as tf, and tqdm at module level.
def play(self, test_ep, n_step=10000, n_episode=100):
    tf.initialize_all_variables().run()

    self.stat.load_model()
    self.target_network.run_copy()

    if not self.env.display:
      gym_dir = '/tmp/%s-%s' % (self.env_name, get_time())
      env = gym.wrappers.Monitor(self.env.env, gym_dir)

    best_reward, best_idx, best_count = 0, 0, 0
    try:
      itr = xrange(n_episode)  # Python 2
    except NameError:
      itr = range(n_episode)   # Python 3
    for idx in itr:
      observation, reward, terminal = self.new_game()
      current_reward = 0

      for _ in range(self.history_length):
        self.history.add(observation)

      for self.t in tqdm(range(n_step), ncols=70):
        # 1. predict
        action = self.predict(self.history.get(), test_ep)
        # 2. act
        observation, reward, terminal, info = self.env.step(action, is_training=False)
        # 3. observe
        q, loss, is_update = self.observe(observation, reward, action, terminal)

        logger.debug("a: %d, r: %d, t: %d, q: %.4f, l: %.2f" % \
            (action, reward, terminal, np.mean(q), loss))
        current_reward += reward

        if terminal:
          break

      if current_reward > best_reward:
        best_reward = current_reward
        best_idx = idx
        best_count = 0
      elif current_reward == best_reward:
        best_count += 1

      print("=" * 30)
      print(" [%d] Best reward : %d (dup-count: %d/%d)" % (best_idx, best_reward, best_count, n_episode))
      print("=" * 30)

    # if not self.env.display:
    #   gym.upload(gym_dir, writeup='https://github.com/devsisters/DQN-tensorflow', api_key='')
Developer: carpedm20 | Project: deep-rl-tensorflow | Lines: 52 | Source file: agent.py


Note: The `gym.upload` examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by various developers; copyright remains with the original authors, and distribution and use should follow each project's license. Do not republish without permission.