

Python history.History Method Code Examples

This article collects typical code examples of the history.History method in Python. If you are wondering how history.History is used in practice, how to call it, or what real examples look like, the curated samples below may help. You can also explore further usage examples of the history module to which the method belongs.


A total of 4 code examples of the history.History method are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
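Before the examples, here is a minimal sketch of the two import styles referenced in the snippets below; the constructor arguments are placeholders, since each project defines its own History signature:

# Option 1: import the module and reference the class through it
import history
h = history.History()  # constructor arguments vary by project

# Option 2: import the class directly
from history import History
h = History()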

Example 1: history

# Required import: import history [as alias]
# Or: from history import History [as alias]
def history(self, interval=None, start=None, end=None):
        return History(self._symbol, interval, start, end) 
Developer: rleonard21, Project: PyTradier, Lines of code: 4, Source: company.py
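Example 1 shows a common wrapper pattern: an object stores its ticker symbol and delegates to History together with the requested interval and date range. A minimal self-contained sketch of how such a wrapper might be used follows; the Company class and the argument values here are hypothetical illustrations, not PyTradier's documented API:

# Hypothetical wrapper for illustration only -- not PyTradier's actual class.
from history import History

class Company:
    def __init__(self, symbol):
        self._symbol = symbol

    def history(self, interval=None, start=None, end=None):
        # Delegate to History, scoped to this company's symbol.
        return History(self._symbol, interval, start, end)

aapl = Company('AAPL')
daily = aapl.history(interval='daily', start='2019-01-01', end='2019-12-31')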

Example 2: history

# Required import: import history [as alias]
# Or: from history import History [as alias]
def history(self):
        return History() 
Developer: rleonard21, Project: PyTradier, Lines of code: 4, Source: account.py

Example 3: __init__

# Required import: import history [as alias]
# Or: from history import History [as alias]
def __init__(self):
        smoothing = config.get_entry('smoothing', default_value=True)
        self.history = History(log.get_battery(), smoothing=smoothing)
        self.future = Future(self.history) 
Developer: maks-a, Project: batterym, Lines of code: 6, Source: plotter.py

Example 4: main

# Required import: import history [as alias]
# Or: from history import History [as alias]
def main():
    env = RunEnv(visualize=False)
    env.reset(difficulty = 0)
    agent = RDPG(env)

    returns = []
    rewards = []

    for episode in xrange(EPISODES):
        state = env.reset(difficulty = 0)
        reward_episode = []
        print "episode:",episode
        # Initialize the history with the initial state
        history = History(state)
        # Train
        for step in xrange(env.spec.timestep_limit):
            action = agent.noise_action(history)
            next_state,reward,done,_ = env.step(action)
            # appending to history
            history.append(next_state,action,reward)
            reward_episode.append(reward)
            if done:
                break
        # Store the episode history in the replay buffer; once the number of stored histories exceeds the threshold, training starts
        agent.perceive(history)
        # Testing:
        #if episode % 1 == 0:
        # if episode % 1000 == 0 and episode > 50:
        #     agent.save_model(PATH, episode)

        #     total_return = 0
        #     ave_reward = 0
        #     for i in xrange(TEST):
        #         state = env.reset()
        #         reward_per_step = 0
        #         for j in xrange(env.spec.timestep_limit):
        #             action = agent.action(state) # direct action for test
        #             state,reward,done,_ = env.step(action)
        #             total_return += reward
        #             if done:
        #                 break
        #             reward_per_step += (reward - reward_per_step)/(j+1)
        #         ave_reward += reward_per_step

        #     ave_return = total_return/TEST
        #     ave_reward = ave_reward/TEST
        #     returns.append(ave_return)
        #     rewards.append(ave_reward)

        #     print 'episode: ',episode,'Evaluation Average Return:',ave_return, '  Evaluation Average Reward: ', ave_reward 
Developer: kyleliang919, Project: -NIPS-2017-Learning-to-Run, Lines of code: 52, Source: gym_rdpg.py
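Example 4 treats History as a per-episode container: it is created from the initial state, grows by one (next_state, action, reward) transition per step, and is then handed to the agent for the replay buffer. A minimal sketch of a container with that interface follows; it is an assumption based on the calls above, not the repository's actual implementation:

# Minimal episode container matching the interface used in Example 4
# (an assumption, not the repository's actual History class).
class History:
    def __init__(self, initial_state):
        self.states = [initial_state]  # s_0, s_1, ..., s_T
        self.actions = []              # a_0, ..., a_{T-1}
        self.rewards = []              # r_1, ..., r_T

    def append(self, next_state, action, reward):
        # Record one transition: the action taken, the reward received,
        # and the state the environment moved to.
        self.states.append(next_state)
        self.actions.append(action)
        self.rewards.append(reward)

    def __len__(self):
        return len(self.actions)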


Note: The history.History method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are taken from open-source projects contributed by various developers, and copyright of the source code belongs to the original authors. For distribution and use, please refer to the corresponding project's License; do not reproduce without permission.