

Python Agent.td_learning Method Code Examples

This article collects representative usage examples of the Python method agent.Agent.td_learning. If you are unsure what Agent.td_learning does, how to call it, or where to find it used in practice, the hand-picked code samples below should help; you can also explore further usage examples of the enclosing class, agent.Agent.


The following presents 2 code examples of the Agent.td_learning method, ordered by popularity.
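Judging purely from the call sites in the two examples, td_learning appears to take an episode count, a lambda value, an optional positional flag (plausibly enabling plotting or verbose output), and a trace keyword selecting the eligibility-trace type. The stub below is an inferred sketch of that signature; the parameter names are guesses, not confirmed by the repository source:

from utils import Trace  # enum with accumulating / replacing / dutch members, as used below

class Agent:
    def td_learning(self, num_episodes, lmbda, plot=False, trace=Trace.accumulating):
        # Inferred stub: run SARSA(lambda) for num_episodes episodes.
        # Parameter names are guesses based on the example call sites only.
        raise NotImplementedError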

Example 1: Environment

# Required import: from agent import Agent
# (or: from agent.Agent import td_learning)
from agent import Agent
from environment import Environment  # assumed module name; Environment is used below but never imported in the original snippet
import numpy as np
from utils import compute_mse, Trace

if __name__ == '__main__':
    """Test the SARSA(lambda) algorithm."""

    # Learning curve of the mean-squared error against episode number
    # for lambda = 0 and lambda = 1.
    env = Environment()
    agent = Agent(env)
    print('the learning curve of mean-squared error against episode number for')
    print('lambda = 0')
    agent.td_learning(10000, 0.0, True, trace=Trace.accumulating)

    agent.reset()
    print('lambda = 1')
    agent.td_learning(10000, 1.0, True, trace=Trace.accumulating)
    
    agent.reset()
    print('The mean-squared error against lambda')
    monte_carlo_iterations = 1000000
    td_iterations = 10000

    # Baseline Q from Monte Carlo control, treated as ground truth.
    agent.monte_carlo_control(monte_carlo_iterations)
    Q_monte_carlo = agent.Q

    alphas = np.linspace(0, 1, 11)  # despite the name, apparently the lambda values to sweep
    mse_all = []
    # (the original listing is truncated here)
Author: mdaniluk · Project: Blackjack_Reinforcement · Lines: 33 · Source: test_td_learning.py
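For context, a tabular SARSA(lambda) episode with accumulating eligibility traces generally follows the pattern below. This is a generic textbook sketch of the algorithm the example exercises, not the repository's implementation: the env interface, the epsilon_greedy helper, and the use of defaultdict(float) tables for Q and E are all assumptions.

import random
from collections import defaultdict

def epsilon_greedy(Q, state, epsilon, actions=(0, 1)):
    # Assumed helper: random action with probability epsilon, else greedy.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def sarsa_lambda_episode(env, Q, lmbda, alpha=0.05, epsilon=0.1):
    # One episode of tabular SARSA(lambda) with accumulating traces.
    # Assumes env.reset() -> state and env.step(action) -> (state, reward, done),
    # and that Q is a defaultdict(float) keyed by (state, action).
    E = defaultdict(float)                 # eligibility traces, fresh each episode
    state = env.reset()
    action = epsilon_greedy(Q, state, epsilon)
    done = False
    while not done:
        next_state, reward, done = env.step(action)
        if done:
            td_error = reward - Q[(state, action)]
        else:
            next_action = epsilon_greedy(Q, next_state, epsilon)
            td_error = reward + Q[(next_state, next_action)] - Q[(state, action)]
        E[(state, action)] += 1.0          # accumulating trace: +1 on every visit
        for sa in E:                       # credit the TD error to all traced pairs
            Q[sa] += alpha * td_error * E[sa]
            E[sa] *= lmbda                 # decay (gamma = 1 in episodic Blackjack)
        if not done:
            state, action = next_state, next_action
    return Q

With gamma = 1, as is standard for episodic Blackjack, the trace decay reduces to multiplying by lambda, so lambda = 0 collapses to one-step SARSA and lambda = 1 credits whole-episode returns in Monte Carlo style; these are the two extremes the example plots.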

Example 2: Environment

# Required import: from agent import Agent
# (or: from agent.Agent import td_learning)
from agent import Agent
from environment import Environment  # assumed module name; not imported in the original snippet
import numpy as np
from utils import compute_mse, Trace

monte_carlo_iterations = 1000000  # not defined in the original snippet; value taken from Example 1
td_iterations = 10000
env = Environment()
agent = Agent(env)

# Baseline Q from Monte Carlo control, treated as ground truth.
agent.monte_carlo_control(monte_carlo_iterations)
Q_monte_carlo = agent.Q

alphas = np.linspace(0, 1, 11)  # despite the name, apparently the lambda values swept below
mse_all_acc = []
mse_all_replace = []
mse_all_dutch = []
avg_iters = 10
for alpha in alphas:
    # Accumulating traces, MSE averaged over avg_iters runs.
    mse_current = 0
    for i in range(avg_iters):
        agent.reset()
        agent.td_learning(td_iterations, alpha, trace=Trace.accumulating)
        Q_tf = agent.Q
        mse_current += compute_mse(Q_tf, Q_monte_carlo, True)

    mse_all_acc.append(mse_current / avg_iters)

    # Replacing traces.
    mse_current = 0
    for i in range(avg_iters):
        agent.reset()
        agent.td_learning(td_iterations, alpha, trace=Trace.replacing)
        Q_tf = agent.Q
        mse_current += compute_mse(Q_tf, Q_monte_carlo, True)

    mse_all_replace.append(mse_current / avg_iters)

    # Dutch traces (the original listing is truncated here).
    mse_current = 0
Author: mdaniluk · Project: Blackjack_Reinforcement · Lines: 33 · Source: test_traces.py
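Example 2 compares accumulating and replacing traces across lambda values (the truncated tail presumably repeats the loop with Trace.dutch, given the unused mse_all_dutch list). The variants differ only in how the eligibility of the just-visited state-action pair is bumped; the sketch below gives common textbook forms, where step_size is the learning rate rather than the lambda value the example sweeps. These are generic definitions, not the repository's exact code:

from utils import Trace  # enum used by the examples above

def bump_trace(E, sa, trace, step_size):
    # Update eligibility E[sa] for the just-visited state-action pair `sa`.
    # Textbook variants; `step_size` is the learning rate, not lambda.
    if trace == Trace.accumulating:
        E[sa] += 1.0                      # repeated visits add up without bound
    elif trace == Trace.replacing:
        E[sa] = 1.0                       # eligibility is capped at 1
    elif trace == Trace.dutch:
        E[sa] += 1.0 - step_size * E[sa]  # partial accumulation, between the two

Replacing traces cap eligibility at 1, which avoids the unbounded growth accumulating traces can exhibit when a state-action pair is revisited within an episode; that difference is the behavior the averaged MSE comparison is designed to probe.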


Note: the agent.Agent.td_learning examples above were compiled from open-source code hosted on GitHub; the snippets remain the copyright of their original authors, so consult the corresponding project's License before reusing them.