This article collects typical usage examples of the Python method agent.Agent.monte_carlo_control: what the method does, how to call it, and what real code that uses it looks like. For more context, see the enclosing class agent.Agent.
Two code examples of Agent.monte_carlo_control are shown below, sorted by popularity by default.
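Before the examples, here is a minimal sketch of what a monte_carlo_control implementation might look like, for orientation only. This is not the actual agent.Agent source: it assumes a tabular Q indexed as (dealer, player, action), an epsilon-greedy policy whose epsilon decays with state visit counts, and a hypothetical environment API with reset() and step() methods.

import numpy as np

class Agent:  # illustrative only; not the agent.Agent used in the examples below
    def __init__(self, env, n0=100):
        self.env = env
        self.n0 = n0  # epsilon schedule constant: eps = n0 / (n0 + N(s))
        shape = (env.dealer_values, env.player_values, env.action_values)
        self.Q = np.zeros(shape)  # tabular action-value estimates
        self.N = np.zeros(shape)  # visit counts per (state, action)

    def epsilon_greedy(self, dealer, player):
        eps = self.n0 / (self.n0 + self.N[dealer, player].sum())
        if np.random.rand() < eps:
            return np.random.randint(self.env.action_values)
        return int(np.argmax(self.Q[dealer, player]))

    def monte_carlo_control(self, iterations):
        # GLIE Monte-Carlo control: sample a full episode with the current
        # epsilon-greedy policy, then move each visited Q(s, a) towards the
        # episode return with step size 1 / N(s, a).
        for _ in range(iterations):
            episode, total_reward = [], 0.0
            state, done = self.env.reset(), False  # hypothetical env API
            while not done:
                dealer, player = state
                action = self.epsilon_greedy(dealer, player)
                state, reward, done = self.env.step((dealer, player), action)
                episode.append((dealer, player, action))
                total_reward += reward  # undiscounted episode return
            for dealer, player, action in episode:
                self.N[dealer, player, action] += 1
                step = 1.0 / self.N[dealer, player, action]
                self.Q[dealer, player, action] += step * (
                    total_reward - self.Q[dealer, player, action])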
Example 1: Environment
# Required import: from agent import Agent [as alias]
# Or: from agent.Agent import monte_carlo_control [as alias]
import numpy as np
from environment import Environment
from agent import Agent, Trace  # Trace is assumed to be exported alongside Agent
# compute_mse(Q_a, Q_b, plot) is a project helper; a sketch is given after this example.

env = Environment()
agent = Agent(env)

print('the learning curve of mean-squared error against episode number for')
print('lambda = 0')
agent.td_learning(10000, 0.0, True, trace=Trace.accumulating)
agent.reset()
print('lambda = 1')
agent.td_learning(10000, 1.0, True, trace=Trace.accumulating)
agent.reset()

print('The mean-squared error against lambda')
monte_carlo_iterations = 1000000
td_iterations = 10000

# A long Monte-Carlo control run serves as the ground-truth Q.
agent.monte_carlo_control(monte_carlo_iterations)
Q_monte_carlo = agent.Q.copy()  # copy in case reset() mutates Q in place

lambdas = np.linspace(0, 1, 11)  # the second td_learning argument is lambda
mse_all = []
avg_iters = 1  # increase to average over more runs per lambda
for lam in lambdas:
    mse_current = 0.0
    for _ in range(avg_iters):
        agent.reset()
        agent.td_learning(td_iterations, lam)
        Q_td = agent.Q
        mse_current += compute_mse(Q_td, Q_monte_carlo, False)
    mse_all.append(mse_current / avg_iters)
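The example above relies on a compute_mse helper that is not shown on this page. A minimal sketch of what it might compute, assuming both Q tables are numpy arrays of equal shape and the third argument toggles plotting (ignored here):

import numpy as np

def compute_mse(Q_a, Q_b, plot=False):
    # Mean squared error between two action-value tables of the same shape.
    return float(np.mean((Q_a - Q_b) ** 2))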
Example 2: Environment
# Required import: from agent import Agent [as alias]
# Or: from agent.Agent import monte_carlo_control [as alias]
from __future__ import print_function
from environment import Environment
from agent import Agent
import itertools
import os

if __name__ == '__main__':
    env = Environment()
    agent = Agent(env)
    agent.monte_carlo_control(1000000)

    if not os.path.isdir("output"):
        os.makedirs("output")  # ensure the output directory exists
    # Open the file once in write mode instead of removing it up front and
    # reopening it in append mode on every loop iteration.
    with open("output/checkQ.txt", "w") as f:
        for dealer, player, action in itertools.product(
                range(env.dealer_values), range(env.player_values), range(env.action_values)):
            print("%d\t %d\t %d\t %.5f" % (dealer + 1, player + 1, action,
                                           agent.Q[dealer, player, action]), file=f)