This article collects typical usage examples of the Python method agent.Agent.update_model. If you are wondering how Agent.update_model is used in practice, the curated code examples below may help. You can also explore the containing class, agent.Agent, for further usage examples.

Below is 1 code example of Agent.update_model, sorted by popularity by default. You can upvote examples you like or find useful; your feedback helps the system recommend better Python code examples.
Example 1: main
# Required import: from agent import Agent [as alias]
# Method shown: agent.Agent.update_model [as alias]
import time

import gym
import numpy as np
import matplotlib.pyplot as plt

from agent import Agent


def main(args):
    env = gym.make('Pendulum-v0').unwrapped
    agent = Agent(env, args)
    reward_history = []
    start_time = time.time()
    # main loop
    for ep in range(args.max_ep):
        buffer_s, buffer_a, buffer_r = [], [], []
        s = env.reset()
        ep_reward = 0
        for t in range(args.ep_len):
            # env.render()
            a = agent.sample_action(s)
            next_s, r, done, _ = env.step(a)
            buffer_s.append(s)
            buffer_a.append(a)
            buffer_r.append((r + 8) / 8)  # rescale Pendulum reward from [-16, 0] toward [-1, 1]
            s = next_s
            ep_reward += r
            # update agent every batch_size steps and at the end of the episode
            if (t + 1) % args.batch_size == 0 or t == args.ep_len - 1:
                next_s_value = agent.get_value(next_s)
                # calculate discounted rewards, bootstrapping from the next state's value
                discounted_r = []
                for r in buffer_r[::-1]:
                    next_s_value = r + args.gamma * next_s_value
                    discounted_r.append(next_s_value)
                discounted_r.reverse()
                b_s = np.vstack(buffer_s)
                b_a = np.vstack(buffer_a)
                b_r = np.asarray(discounted_r)[:, np.newaxis]
                buffer_s, buffer_a, buffer_r = [], [], []
                agent.update_model(b_s, b_a, b_r)
        # track an exponential moving average of episode reward for a smooth curve
        if ep == 0:
            reward_history.append(ep_reward)
        else:
            reward_history.append(reward_history[-1] * 0.99 + ep_reward * 0.01)
        print('Ep %d reward: %d' % (ep, ep_reward))
    print('train finished. time cost: %.4fs' % (time.time() - start_time))
    plt.plot(np.arange(len(reward_history)), reward_history)
    plt.xlabel('Episode')
    plt.ylabel('Moving averaged episode reward')
    plt.savefig('result.png')
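The update step in the example walks the buffered rewards in reverse, bootstrapping from the value estimate of the state after the last buffered step. A minimal standalone sketch of that computation (function and variable names here are illustrative, not from the original example):

```python
def discounted_returns(rewards, next_value, gamma):
    """Discounted returns, bootstrapped from the value of the state
    that follows the last buffered step."""
    returns = []
    running = next_value
    for r in reversed(rewards):
        # G_t = r_t + gamma * G_{t+1}, seeded with the bootstrap value
        running = r + gamma * running
        returns.append(running)
    returns.reverse()  # restore chronological order
    return returns


print(discounted_returns([1.0, 1.0, 1.0], 0.0, 0.5))  # [1.75, 1.5, 1.0]
```

Bootstrapping from `agent.get_value(next_s)` rather than assuming a return of zero lets the example truncate an episode mid-way (every `batch_size` steps) without biasing the targets passed to `update_model`.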
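The `reward_history` bookkeeping at the end of each episode is an exponential moving average, which smooths the noisy per-episode rewards before plotting. Sketched as its own helper (the name and default weight are mine, chosen to match the 0.99/0.01 split in the example):

```python
def ema_update(prev, new, alpha=0.01):
    """One step of an exponential moving average: keep (1 - alpha) of the
    previous smoothed value and blend in alpha of the new observation."""
    return prev * (1 - alpha) + new * alpha


print(ema_update(100.0, 0.0, alpha=0.5))  # 50.0
```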