

Python env.Env code examples

This article collects typical usage examples of Python's env.Env. If you are wondering how env.Env works, how to use it, or what it looks like in practice, the curated examples below may help. You can also look further into other usage examples from the env module it belongs to.


Four code examples of env.Env are shown below, ordered by popularity by default.
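All four examples drive the environment through the same minimal loop: reset() produces an initial state, and step(action) returns the next state, a reward, and a done flag. As a point of reference only, here is a hypothetical skeleton of the interface the examples rely on; the actual env.Env in each project (spinning-up-basic, Rainbow) is more involved, and this sketch is not taken from either repository.

# Hypothetical Env skeleton illustrating the interface assumed by the examples below
import torch


class Env:
  def reset(self):
    # Start a new episode and return the initial state (a tensor)
    return torch.zeros(1, 4)  # placeholder observation

  def step(self, action):
    # Apply an action and return (next_state, reward, done)
    next_state = torch.zeros(1, 4)  # placeholder observation
    reward, done = 0.0, True  # placeholder transition that ends immediately
    return next_state, reward, done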

Example 1: test

# Required module: env (import env, or: from env import Env)
import torch

from env import Env


def test(actor):
  with torch.no_grad():  # No gradients needed for evaluation
    env = Env()
    state, done, total_reward = env.reset(), False, 0
    while not done:
      action = torch.clamp(actor(state), min=-1, max=1)  # Use purely exploitative policy at test time
      state, reward, done = env.step(action)
      total_reward += reward
    return total_reward  # Undiscounted return of one evaluation episode
Author: Kaixhin, Project: spinning-up-basic, Lines: 11, Source: td3.py
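For context, this is how the function above might be called. The actor below is only an illustrative stand-in with assumed input and output sizes; the project's real TD3 actor is defined elsewhere in the repository.

import torch
from torch import nn

# Illustrative deterministic actor: 4-dimensional observation, 1-dimensional action (assumed sizes)
actor = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 1))
# total_reward = test(actor)  # runs one greedy evaluation episode against Env()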

Example 2: test

# Required module: env (import env, or: from env import Env)
import torch

from env import Env


def test(agent):
  with torch.no_grad():  # No gradients needed for evaluation
    env = Env()
    state, done, total_reward = env.reset(), False, 0
    while not done:
      action = agent(state).argmax(dim=1, keepdim=True)  # Use purely exploitative (greedy) policy at test time
      # convert_discrete_to_continuous_action maps the discrete index back to the continuous action space (defined elsewhere in the project)
      state, reward, done = env.step(convert_discrete_to_continuous_action(action))
      total_reward += reward
    return total_reward  # Undiscounted return of one evaluation episode
Author: Kaixhin, Project: spinning-up-basic, Lines: 11, Source: dqn.py
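This DQN variant picks a discrete action index with argmax and then converts it back to a continuous action before stepping the environment. The project's convert_discrete_to_continuous_action is not shown on this page; the version below is only a plausible sketch of such a conversion (evenly spaced actions over [-1, 1], with an assumed number of bins), not the repository's actual implementation.

import torch

ACTION_DISCRETISATION = 5  # assumed number of discrete bins; illustrative only


def convert_discrete_to_continuous_action(action):
  # Map an integer index in {0, ..., N - 1} to an evenly spaced value in [-1, 1]
  return action.to(dtype=torch.float32) * 2 / (ACTION_DISCRETISATION - 1) - 1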

Example 3: test

# Required module: env (import env, or: from env import Env)
import torch

from env import Env


def test(actor):
  with torch.no_grad():  # No gradients needed for evaluation
    env = Env()
    state, done, total_reward = env.reset(), False, 0
    while not done:
      action = actor(state).mean  # Use purely exploitative policy at test time (mean of the policy distribution)
      state, reward, done = env.step(action)
      total_reward += reward
    return total_reward  # Undiscounted return of one evaluation episode
Author: Kaixhin, Project: spinning-up-basic, Lines: 11, Source: sac.py
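Here actor(state) returns a probability distribution over actions, and .mean selects its deterministic mean at test time. A minimal sketch of an actor with that behaviour, assuming a Gaussian policy with illustrative layer sizes (not the project's actual model), could look like this:

import torch
from torch import nn
from torch.distributions import Normal


class GaussianActor(nn.Module):
  def __init__(self, state_size=4, action_size=1, hidden_size=32):
    super().__init__()
    self.hidden = nn.Linear(state_size, hidden_size)
    self.mean_head = nn.Linear(hidden_size, action_size)
    self.log_std = nn.Parameter(torch.zeros(action_size))

  def forward(self, state):
    h = torch.tanh(self.hidden(state))
    return Normal(self.mean_head(h), self.log_std.exp())  # .mean yields the greedy action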

Example 4: test

# Required module: env (import env, or: from env import Env)
import os

import torch

from env import Env


def test(args, T, dqn, val_mem, metrics, results_dir, evaluate=False):
  env = Env(args)
  env.eval()  # Put the environment wrapper into evaluation mode
  metrics['steps'].append(T)
  T_rewards, T_Qs = [], []

  # Test performance over several episodes
  done = True
  for _ in range(args.evaluation_episodes):
    while True:
      if done:
        state, reward_sum, done = env.reset(), 0, False

      action = dqn.act_e_greedy(state)  # Choose an action ε-greedily
      state, reward, done = env.step(action)  # Step
      reward_sum += reward
      if args.render:
        env.render()

      if done:
        T_rewards.append(reward_sum)
        break
  env.close()

  # Test Q-values over validation memory
  for state in val_mem:  # Iterate over valid states
    T_Qs.append(dqn.evaluate_q(state))

  avg_reward, avg_Q = sum(T_rewards) / len(T_rewards), sum(T_Qs) / len(T_Qs)
  if not evaluate:
    # Save model parameters if improved
    if avg_reward > metrics['best_avg_reward']:
      metrics['best_avg_reward'] = avg_reward
      dqn.save(results_dir)

    # Append to results and save metrics
    metrics['rewards'].append(T_rewards)
    metrics['Qs'].append(T_Qs)
    torch.save(metrics, os.path.join(results_dir, 'metrics.pth'))

    # Plot
    _plot_line(metrics['steps'], metrics['rewards'], 'Reward', path=results_dir)
    _plot_line(metrics['steps'], metrics['Qs'], 'Q', path=results_dir)

  # Return average reward and Q-value
  return avg_reward, avg_Q


# Plots min, max and mean + standard deviation bars of a population over time 
Author: Kaixhin, Project: Rainbow, Lines: 51, Source: test.py
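The function above delegates plotting to _plot_line, which, per the trailing comment, plots the min, max and mean with standard-deviation bars of a population over time. That helper is not reproduced on this page; the following is a simplified matplotlib stand-in written to match that description (the actual Rainbow helper may use a different plotting backend):

import os

import matplotlib
matplotlib.use('Agg')  # render to file without a display
import matplotlib.pyplot as plt
import torch


def _plot_line(xs, ys_population, title, path=''):
  # ys_population: one list of values (e.g. episode rewards) per evaluation step
  ys = torch.tensor(ys_population, dtype=torch.float32)
  ys_min, ys_max, ys_mean, ys_std = ys.min(1)[0], ys.max(1)[0], ys.mean(1), ys.std(1)

  plt.figure()
  plt.fill_between(xs, (ys_mean - ys_std).numpy(), (ys_mean + ys_std).numpy(), alpha=0.3, label='Mean ± std')
  plt.plot(xs, ys_min.numpy(), linestyle='--', label='Min')
  plt.plot(xs, ys_max.numpy(), linestyle='--', label='Max')
  plt.plot(xs, ys_mean.numpy(), label='Mean')
  plt.title(title)
  plt.xlabel('Step')
  plt.legend()
  plt.savefig(os.path.join(path, title + '.png'))
  plt.close()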


Note: the env.Env examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub/MSDocs. The code snippets are taken from open-source projects contributed by their respective developers, and copyright of the source code remains with the original authors. Please consult the corresponding project's License before distributing or using the code; do not reproduce this article without permission.