

Python history.History Method Code Examples

This article collects typical usage examples of the Python history.History method. If you are wondering how to use history.History, what it does, or what it looks like in practice, the curated examples below may help. You can also explore further usage examples of the history module it belongs to.


The following presents 4 code examples of the history.History method, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.

Example 1: history

# Required import: import history [as alias]
# Or: from history import History [as alias]
def history(self, interval=None, start=None, end=None):
    return History(self._symbol, interval, start, end)
Developer: rleonard21, Project: PyTradier, Lines: 4, Source: company.py
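Example 1 shows a common delegation pattern: the wrapping object binds its own symbol and forwards the remaining query parameters to the History constructor. A minimal, self-contained sketch of that pattern follows; the `History` and `Company` classes here are simplified hypothetical stand-ins, not PyTradier's actual implementation.

```python
# Minimal sketch of the wrapper pattern in Example 1. These classes are
# hypothetical stand-ins, NOT PyTradier's real History/Company classes.
class History:
    def __init__(self, symbol, interval=None, start=None, end=None):
        self.symbol = symbol
        self.interval = interval
        self.start = start
        self.end = end

class Company:
    def __init__(self, symbol):
        self._symbol = symbol

    def history(self, interval=None, start=None, end=None):
        # Delegate to History, binding this company's own symbol
        return History(self._symbol, interval, start, end)

h = Company("AAPL").history(interval="daily")
print(h.symbol, h.interval)
```

The caller never passes the symbol explicitly; the wrapper guarantees every History it produces is scoped to its own instrument.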

Example 2: history

# Required import: import history [as alias]
# Or: from history import History [as alias]
def history(self):
    return History()
Developer: rleonard21, Project: PyTradier, Lines: 4, Source: account.py

Example 3: __init__

# Required import: import history [as alias]
# Or: from history import History [as alias]
def __init__(self):
    smoothing = config.get_entry('smoothing', default_value=True)
    self.history = History(log.get_battery(), smoothing=smoothing)
    self.future = Future(self.history)
Developer: maks-a, Project: batterym, Lines: 6, Source: plotter.py
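Example 3 passes a `smoothing` flag when constructing a History over battery-charge samples. As an illustration of what such smoothing might look like, here is a simple centered moving average; this is an assumption for demonstration only, since batterym's History may smooth its data differently.

```python
# Hypothetical sketch of the smoothing idea in Example 3: a centered
# moving average over battery-charge samples. This is an assumption;
# batterym's actual History class may use a different algorithm.
def smooth(samples, window=3):
    """Centered moving average; edges average the available neighbors."""
    out = []
    n = len(samples)
    half = window // 2
    for i in range(n):
        lo = max(0, i - half)
        hi = min(n, i + half + 1)
        chunk = samples[lo:hi]
        out.append(sum(chunk) / len(chunk))
    return out

print(smooth([100, 90, 95, 80, 70]))
```

Smoothing like this removes sampling jitter from the charge curve, which in turn gives the `Future` extrapolation in the example a cleaner trend to work from.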

Example 4: main

# Required import: import history [as alias]
# Or: from history import History [as alias]
def main():
    env = RunEnv(visualize=False)
    env.reset(difficulty=0)
    agent = RDPG(env)

    returns = []
    rewards = []

    for episode in range(EPISODES):
        state = env.reset(difficulty=0)
        reward_episode = []
        print("episode:", episode)
        # Initialize an empty history seeded with the starting state
        history = History(state)
        # Train
        for step in range(env.spec.timestep_limit):
            action = agent.noise_action(history)
            next_state, reward, done, _ = env.step(action)
            # Append the transition to the history
            history.append(next_state, action, reward)
            reward_episode.append(reward)
            if done:
                break
        # Store the history in the replay buffer; once the number of stored
        # history sequences exceeds the threshold, training starts
        agent.perceive(history)
        # Testing:
        # if episode % 1000 == 0 and episode > 50:
        #     agent.save_model(PATH, episode)

        #     total_return = 0
        #     ave_reward = 0
        #     for i in range(TEST):
        #         state = env.reset()
        #         reward_per_step = 0
        #         for j in range(env.spec.timestep_limit):
        #             action = agent.action(state)  # direct action for test
        #             state, reward, done, _ = env.step(action)
        #             total_return += reward
        #             if done:
        #                 break
        #             reward_per_step += (reward - reward_per_step) / (j + 1)
        #         ave_reward += reward_per_step

        #     ave_return = total_return / TEST
        #     ave_reward = ave_reward / TEST
        #     returns.append(ave_return)
        #     rewards.append(ave_reward)

        #     print('episode:', episode, 'Evaluation Average Return:', ave_return,
        #           'Evaluation Average Reward:', ave_reward)
Developer: kyleliang919, Project: -NIPS-2017-Learning-to-Run, Lines: 52, Source: gym_rdpg.py
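In Example 4, History is seeded with the episode's initial state and then accumulates one (next_state, action, reward) transition per step, so the whole episode can be handed to the agent's replay buffer at once. A self-contained sketch of that recording pattern follows; this `History` class is a hypothetical stand-in for the repository's own implementation.

```python
# Self-contained sketch of the episode-History pattern in Example 4.
# This History class is a hypothetical stand-in for the repo's own; it
# records one episode's (state, action, reward) transitions.
class History:
    def __init__(self, initial_state):
        # The state list is seeded with the episode's starting state,
        # so it is always one longer than the action/reward lists.
        self.states = [initial_state]
        self.actions = []
        self.rewards = []

    def append(self, next_state, action, reward):
        self.states.append(next_state)
        self.actions.append(action)
        self.rewards.append(reward)

    def total_reward(self):
        return sum(self.rewards)

h = History(initial_state=0)
h.append(next_state=1, action="a", reward=1.0)
h.append(next_state=2, action="b", reward=0.5)
print(len(h.states), h.total_reward())
```

Storing whole-episode histories (rather than individual transitions) is what lets a recurrent agent like RDPG train on full state sequences.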


Note: the history.History method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are selected from open-source projects contributed by various developers; copyright remains with the original authors. Please consult each project's license before distributing or using the code, and do not republish without permission.