This article collects and summarizes typical usage examples of the Python buffer.Buffer.addBatch method. If you are wondering what exactly Buffer.addBatch does, how to use it, or what real code that calls it looks like, the curated examples here may help. You can also explore further usage of the containing class, buffer.Buffer.
The following shows 1 code example of the Buffer.addBatch method, sorted by popularity by default. You can upvote examples you like or find useful; your ratings help the system recommend better Python code examples.
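Since the snippet below only shows addBatch being called, here is a minimal sketch of what a replay buffer exposing add, addBatch, count and sample could look like. The class body is an assumption for illustration; only the argument order (pre-states, actions, rewards, post-states, terminals) is taken from the calls in the example, and the real buffer.Buffer may well differ (for instance, Bold.sample in the example takes an extra timestep bound).

import random

# Minimal sketch of a replay buffer with a batched add. Assumes a plain
# list-backed store; only the add/addBatch argument order comes from the
# example below -- the real buffer.Buffer may be implemented differently.
class Buffer(object):
    def __init__(self):
        self.experiences = []  # (preobs, action, reward, postobs, terminal) tuples

    @property
    def count(self):
        return len(self.experiences)

    def add(self, preobs, action, reward, postobs, terminal):
        # store a single transition
        self.experiences.append((preobs, action, reward, postobs, terminal))

    def addBatch(self, preobs, actions, rewards, postobs, terminals):
        # store one transition per batch element
        for experience in zip(preobs, actions, rewards, postobs, terminals):
            self.experiences.append(experience)

    def sample(self, batch_size):
        # draw a random minibatch and regroup it column-wise into
        # (preobs, actions, rewards, postobs, terminals)
        batch = random.sample(self.experiences, batch_size)
        return zip(*batch)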
Example 1: xrange
# Required import: from buffer import Buffer [as alias]
# Alternatively: from buffer.Buffer import addBatch [as alias]
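# Note: the buffer objects are defined outside this snippet; their roles,
# inferred from usage below, appear to be:
#   R    - replay memory of real transitions
#   B    - buffer feeding the imagination-rollout (IR) model
#   Bold - an older snapshot of that buffer, used to pick rollout start states
#   Rf   - "fictional" replay memory filled with model-generated transitions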
#print "reward:", reward
#print "poststate:", observation
# add experience to replay memory and IR buffer
R.add(x[0], action, reward, observation, done)
B.add(x[0], action, reward, observation, done)
# perform imagination rollouts
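# (inferred from the calls below) The rollout samples start states from
# Bold, steps the learned dynamics model ir_model forward ir_steps times
# under the current policy mu, and appends each batch of synthetic
# transitions to the fictional replay memory Rf via addBatch.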
if (i_episode * args.max_timesteps + t) % args.batch_size == 0 and \
ir_model.supported_timesteps() > args.ir_steps:
print "Performing imagination rollout for", args.ir_steps, "steps"
preobs, timesteps = Bold.sample(args.batch_size, ir_model.supported_timesteps() - args.ir_steps)
for i in xrange(args.ir_steps):
actions = mu(preobs) # TODO: add noise?
postobs, rewards, terminals = ir_model.predict(preobs, actions, timesteps + i)
Rf.addBatch(preobs, actions, rewards, postobs, terminals)
#print "prediction:", preobs[0], timesteps[0], postobs[0]
preobs = postobs
print "Done, fictional replay memory now contains", Rf.count, "experiences"
print "For comparison, real replay memory contains", R.count, "experiences"
loss = 0
# perform train_repeat Q-updates
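# (inferred) Of the train_repeat * (ir_steps + 1) iterations, the first
# train_repeat * ir_steps draw minibatches from the fictional memory Rf
# and the rest from the real memory R, i.e. fictional and real data are
# mixed at a ratio of ir_steps : 1.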
for k in xrange(args.train_repeat*(args.ir_steps + 1)):
# sample minibatches from fictional replay memory first and then from real
if k < args.train_repeat * args.ir_steps:
if Rf.count == 0:
continue
#print "Sampling from fictional replay memory", args.batch_size, "samples"
preobs, actions, rewards, postobs, terminals = Rf.sample(args.batch_size)
else: