

Python LearningAgent.name Code Examples

This article collects typical usage examples of LearningAgent.name from the Python library pybrain (pybrain.rl.agents). If you are unsure what LearningAgent.name is, what it does, or how to use it, the curated examples below should help. You can also explore further usage examples of the containing class, pybrain.rl.agents.LearningAgent.
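
Strictly speaking, name is not a method at all: LearningAgent inherits it as an attribute from pybrain's Named utility class, so it is simply assigned and read back as a string label. A minimal sketch, where the table sizes and the label are made up for illustration:

from pybrain.rl.agents import LearningAgent
from pybrain.rl.learners import Q
from pybrain.rl.learners.valuebased import ActionValueTable

# Build a small value-based agent; the state/action counts are arbitrary here.
module = ActionValueTable(4, 2)
module.initialize(0.0)
agent = LearningAgent(module, Q())

# `name` behaves like an ordinary attribute, handy for labelling agents,
# e.g. after the market participant they represent.
agent.name = 'generator-1'
print(agent.name)  # -> generator-1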


Two code examples of LearningAgent.name are shown below, sorted by popularity by default.

Example 1: ENAC

# Required import: from pybrain.rl.agents import LearningAgent
# (name is an attribute on the LearningAgent instance; it is not separately importable.)
    # Create an agent and select an episodic learner.
#    learner = ENAC()
    learner = Reinforce()
    learner.gd.rprop = False
    # Only relevant for plain backprop (BP), i.e. when rprop is disabled:
#    learner.learningRate = 0.001 # (0.1-0.001, down to 1e-7 for RNNs, default: 0.1)
    learner.gd.alpha = 0.01
#    learner.gd.alphadecay = 0.9
#    learner.gd.momentum = 0.9
    # Only relevant for resilient propagation (RProp):
#    learner.gd.deltamin = 0.0001

    agent = LearningAgent(net, learner)
    # Name the agent according to its first generator's name.
    agent.name = gen.name

    # Adjust the NormalExplorer's exploration parameters. The sigma values
    # are raw parameters that pybrain passes through its expln transform,
    # so a negative value here corresponds to a small positive std. dev.
    if manual_sigma:
        sigma = [-5.0] * env.indim
        learner.explorer.sigma = sigma
    # Add the task and agent to the experiment.
    experiment.tasks.append(task)
    experiment.agents.append(agent)

# The remaining generators become passive "takers", handled by pyreto's
# simple NegOneAgent rather than by a learning agent.
takers = case.generators[1:]
for g in takers:
    env = pyreto.continuous.MarketEnvironment([g], market, numOffbids)
    task = pyreto.continuous.ProfitTask(env, maxSteps=len(p1h))
    agent = pyreto.util.NegOneAgent(env.outdim, env.indim)
    experiment.tasks.append(task)
    experiment.agents.append(agent)  # cut off in the source snippet, but presumably needed to pair with the task
Developer: Waqquas, Project: pylon, Lines: 32, Source: episodic.py
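
Once the tasks and agents are registered, an episodic experiment like this is normally driven episode by episode. The loop below is only a sketch, assuming the pyreto experiment follows pybrain's EpisodicExperiment interface (doEpisodes on the experiment, learn/reset on each agent) and using an arbitrary episode count:

# Hypothetical driver loop; assumes a pybrain-style EpisodicExperiment API.
for episode in range(100):
    experiment.doEpisodes(1)   # roll out one episode per task/agent pair
    for agent in experiment.agents:
        agent.learn()          # gradient update from the collected samples
        agent.reset()          # clear the agent's history for the next episode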

Example 2: MarketExperiment

# Required import: from pybrain.rl.agents import LearningAgent
# (name is an attribute on the LearningAgent instance; it is not separately importable.)
# Construct an experiment to test the market.
experiment = MarketExperiment([], [], market)

# Add the agents and their tasks.
for g in case.generators:
    env = DiscreteMarketEnvironment([g], market, dimState, markups, numOffbids)
    task = ProfitTask(env)
    module = ActionValueTable(dimState, dimAction)
    module.initialize(1.0)
#    learner = SARSA(gamma=0.9)
    learner = Q()
#    learner = QLambda()
#    learner.explorer = BoltzmannExplorer() # default is e-greedy.
    agent = LearningAgent(module, learner)

    agent.name = g.name
    experiment.tasks.append(task)
    experiment.agents.append(agent)

# Prepare for plotting.
pylab.figure(1)  # figsize=(16, 8)
pylab.ion()
pl = MultilinePlotter(autoscale=1.1, xlim=[0, 24], ylim=[0, 1],
                      maxLines=len(experiment.agents))
pl.setLineStyle(linewidth=2)
pl.setLegend([a.name for a in experiment.agents], loc='upper left')

pylab.figure(2)
pylab.ion()
pl2 = MultilinePlotter(autoscale=1.1, xlim=[0, 24], ylim=[0, 1],
                       maxLines=len(experiment.agents))
Developer: Waqquas, Project: pylon, Lines: 33, Source: auction.py
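
With both plotters prepared, a main loop would feed them as the simulation advances. The sketch below rests on several assumptions: that the pyreto experiment exposes pybrain's doInteractions, that one interaction corresponds to one hour (matching the 0-24 x-range above), and that each agent's latest reward is available via the lastreward attribute recorded by pybrain's LoggingAgent:

# Hypothetical day-long run; the API details are assumptions, see above.
for hour in range(24):
    experiment.doInteractions(1)               # one auction round per hour
    for i, agent in enumerate(experiment.agents):
        pl.addData(i, hour, agent.lastreward)  # per-agent reward curve
    pl.update()                                # redraw the reward figure

# Afterwards, let each agent update its Q-table and discard its history.
for agent in experiment.agents:
    agent.learn()
    agent.reset()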


Note: The pybrain.rl.agents.LearningAgent.name examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code fragments are taken from open-source projects contributed by their respective authors; copyright remains with the original authors, and any distribution or use should follow the corresponding project's license. Please do not reproduce without permission.