

Python Agent.play_random Method Code Examples

This article collects typical usage examples of the Python method agent.Agent.play_random, drawn from open-source projects. If you are unsure what Agent.play_random does, how to call it, or what real-world usage looks like, the curated examples below may help. You can also browse further usage examples of the enclosing class, agent.Agent.


Two code examples of Agent.play_random are shown below, sorted by popularity by default.

Example 1: xrange

# Required import: from agent import Agent [as alias]
# Alternatively: from agent.Agent import play_random [as alias]
  if args.visualization_file:
    from visualization import visualize
    # use states recorded during gameplay. NB! Check buffer size, that it can accommodate one game!
    states = [agent.mem.getState(i) for i in xrange(agent.history_length, agent.mem.current - agent.random_starts)]
    logger.info("Collected %d game states" % len(states))
    import numpy as np
    states = np.array(states)
    states = states / 255.
    visualize(net.model, states, args.visualization_filters, args.visualization_file)
  sys.exit()

if args.random_steps:
  # populate replay memory with random steps
  logger.info("Populating replay memory with %d random moves" % args.random_steps)
  stats.reset()
  agent.play_random(args.random_steps)
  stats.write(0, "random")

# loop over epochs
for epoch in xrange(args.epochs):
  logger.info("Epoch #%d" % (epoch + 1))

  if args.train_steps:
    logger.info(" Training for %d steps" % args.train_steps)
    stats.reset()
    agent.train(args.train_steps, epoch)
    stats.write(epoch + 1, "train")

    if args.save_weights_prefix:
      filename = args.save_weights_prefix + "_%d.prm" % (epoch + 1)
      logger.info("Saving weights to %s" % filename)
Contributor: Deanout, Project: simple_dqn, Lines: 33, Source file: main.py
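Neither example shows the body of play_random itself. As a rough illustration of what such a method typically does in a DQN pipeline (fill the replay memory by taking uniformly random actions), here is a minimal, self-contained sketch; the Agent, ReplayMemory, and ToyEnv classes and their attribute names are assumptions for illustration, not the actual simple_dqn code:

```python
import random

class ReplayMemory:
    """Minimal replay buffer storing (state, action, reward, done) tuples."""
    def __init__(self):
        self.items = []
        self.current = 0  # number of stored transitions

    def add(self, state, action, reward, done):
        self.items.append((state, action, reward, done))
        self.current += 1

class Agent:
    def __init__(self, env, mem):
        self.env = env
        self.mem = mem

    def play_random(self, random_steps):
        """Populate replay memory with uniformly random actions."""
        self.env.reset()
        for _ in range(random_steps):
            action = random.randrange(self.env.num_actions)
            state, reward, done = self.env.step(action)
            self.mem.add(state, action, reward, done)
            if done:
                self.env.reset()

class ToyEnv:
    """Trivial stand-in environment: never terminates, zero reward."""
    num_actions = 4
    def reset(self):
        return 0
    def step(self, action):
        return 0, 0.0, False  # (state, reward, done)

agent = Agent(ToyEnv(), ReplayMemory())
agent.play_random(100)  # buffer now holds 100 random transitions
```

This mirrors how Example 1 uses the call: `agent.play_random(args.random_steps)` runs before any training epoch so the first minibatches can be sampled from a non-empty buffer.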

Example 2: Agent

# Required import: from agent import Agent [as alias]
# Alternatively: from agent.Agent import play_random [as alias]
        print "Pre-trained network not found!"
        sys.exit(1)
    else:
        network = cPickle.load(open(args.load_weights, 'r'))
        if network is None:  # TODO: change to try/except later
            print "Loading network failed!"
        print "Network loaded successfully!"


agent = Agent(env, mem, network)


if args.train_model:
    #stats = Statistics(agent, network, mem, env)

    agent.play_random(random_steps=default_random_steps)

    print "Training started..."

    for i in range(EPOCHS):
        #stats.reset()
        a = datetime.datetime.now().replace(microsecond=0)
        agent.train(train_steps=STEPS_PER_EPOCH, epoch=1)
        agent.test(test_steps=STEPS_PER_TEST, epoch=1)
        save_path = args.save_model_dir
        if args.save_models:
            path_file = args.save_model_dir+'/dep-q-shooter-nipscuda-8movectrl-'+str(i)+'-epoch.pkl'
            #print path_file
            net_file = open(path_file, 'w')
            cPickle.dump(network, net_file, -1)
            net_file.close()
Contributor: pavitrakumar78, Project: Playing-custom-games-using-Deep-Learning, Lines: 33, Source file: stester8.py
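One caveat about Example 2: it opens the pickle files in text mode (`'w'` and `'r'`). Pickled data is binary, so `'wb'`/`'rb'` is the safe choice (and required on Python 3, where `cPickle` is also merged into `pickle`). A small round-trip sketch, using a placeholder dict in place of the real network object:

```python
import os
import pickle
import tempfile

# Placeholder standing in for the trained network object from Example 2.
network = {"weights": [0.1, 0.2, 0.3]}

path = os.path.join(tempfile.mkdtemp(), "model.pkl")

# Always use binary mode for pickle files.
with open(path, "wb") as net_file:
    pickle.dump(network, net_file, protocol=pickle.HIGHEST_PROTOCOL)

with open(path, "rb") as net_file:
    restored = pickle.load(net_file)
```

The `with` blocks also guarantee the file handles are closed, replacing the manual `open`/`close` pair in the original snippet.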


Note: the agent.Agent.play_random examples in this article were collected from open-source code hosted on GitHub, MSDocs, and similar platforms. The snippets were selected from projects contributed by open-source authors, and copyright remains with the original authors; refer to each project's License before using or redistributing the code. Do not reproduce this article without permission.