This article collects typical usage examples of the Python method experiment.Experiment.doEpisodeWithMemory. If you have been wondering what Experiment.doEpisodeWithMemory does, how to call it, or where to find examples of it, the curated code examples here may help. You can also explore further usage examples of the containing class, experiment.Experiment.
Below, 1 code example of the Experiment.doEpisodeWithMemory method is shown, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
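Before the full example, here is a minimal sketch of the call pattern, pieced together from how Example 1 below uses the API. The environment and agent classes are project-specific; the names and signatures here are assumptions copied from that example, not documented interfaces.

from experiment import Experiment

# env and agents come from the project: Example 1 constructs env as
# RunFastEnvironment() and agents as a list of RunFastAgentWithMemory.
exp = Experiment(env, agents, type=1)
# Play one episode while keeping an experience memory capped at 1000
# entries; the return value is the episode's winner.
winner = exp.doEpisodeWithMemory(capacity=1000)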
Example 1: trainDeepNetworkWithMemory
# Required import: from experiment import Experiment [as alias]
# Or: from experiment.Experiment import doEpisodeWithMemory [as alias]
import os
import pickle

def trainDeepNetworkWithMemory(loopNum=30000, startTurn=0, history_filename='train_winners_dn_with_memory_1000', inputNum=192, type=1):
    '''
    Train the deep neural network using experience memory.
    '''
    agents = []
    winners = {}
    # Load the pickled history of past match winners, if one was saved
    if os.path.isfile(history_filename):
        with open(history_filename, 'rb') as f:
            winners = pickle.load(f)
        # Resume from the total number of episodes already played
        startTurn = sum(winners.values())
        print(startTurn)
    # Build the three agents, each backed by its own deep network
    for i in range(0, 3):
        playerName = PLAYER_LIST[i]
        nw = RunFastDeepNetwork(playerName, inputNum=inputNum, hidden1Num=inputNum,
                                hidden2Num=inputNum, hidden3Num=inputNum, outNum=1)
        nw.loadNet(playerName, startTurn)
        rfa = RunFastAgentWithMemory(playerName, nw)
        agents.append(rfa)
    env = RunFastEnvironment()
    exp = Experiment(env, agents, type=type)
    for i in range(startTurn, startTurn + loopNum):
        # exp.setTurn(i)
        # Checkpoint the networks and winner history every 200 episodes
        if i % 200 == 0:
            for agent in agents:
                agent.saveNet()
            with open(history_filename, 'wb') as f:
                pickle.dump(winners, f)
        # Play one episode with an experience memory of 1000 entries
        winner = exp.doEpisodeWithMemory(capacity=1000)
        if winner in winners:
            winners[winner] += 1
        else:
            winners[winner] = 1
    # Final save once the training loop finishes
    for agent in agents:
        agent.saveNet()
    with open(history_filename, 'wb') as f:
        pickle.dump(winners, f)
    print(winners)
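For completeness, a hypothetical way to invoke this training routine, assuming the project modules used above (experiment, the RunFast* classes, PLAYER_LIST) are importable in the current environment; the argument values simply mirror the function's defaults.

if __name__ == '__main__':
    # Starts fresh, or resumes automatically if the pickled winner
    # history file already exists on disk.
    trainDeepNetworkWithMemory(loopNum=30000, startTurn=0)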