

Python replay_buffer.PrioritizedReplayBuffer code examples

This article collects typical usage examples of the PrioritizedReplayBuffer class from baselines.deepq.replay_buffer in Python. If you are wondering what PrioritizedReplayBuffer does or how to use it in practice, the curated examples below should help. You can also explore further usage examples from its containing module, baselines.deepq.replay_buffer.


Three code examples of replay_buffer.PrioritizedReplayBuffer are shown below, sorted by popularity by default.
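
Before the per-project examples, the sketch below walks through the basic PrioritizedReplayBuffer API (constructor, add, sample, update_priorities) as exposed by baselines.deepq.replay_buffer; the buffer size, alpha, beta, and the dummy transitions are illustrative values, not taken from the projects shown on this page.

import numpy as np
from baselines.deepq.replay_buffer import PrioritizedReplayBuffer

# Construct the buffer: alpha controls how strongly priorities skew
# the sampling distribution (alpha = 0 recovers uniform sampling).
buffer = PrioritizedReplayBuffer(size=50000, alpha=0.6)

# Store transitions as (obs_t, action, reward, obs_tp1, done).
obs = np.zeros(4, dtype=np.float32)
for _ in range(100):
    buffer.add(obs, 0, 1.0, obs, False)

# Sample a prioritized batch; beta sets the strength of the
# importance-sampling correction and is usually annealed towards 1.
(obses_t, actions, rewards, obses_tp1, dones,
 weights, idxes) = buffer.sample(32, beta=0.4)

# After computing TD errors, write the new priorities back.
td_errors = np.random.rand(len(idxes))  # stand-in for real TD errors
buffer.update_priorities(idxes, np.abs(td_errors) + 1e-6)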

Example 1: __init__

# Required import: from baselines.deepq import replay_buffer [as alias]
# Or: from baselines.deepq.replay_buffer import PrioritizedReplayBuffer [as alias]
# (Environment, Network and parameters are modules local to the
# Deep-RL-agents project; tf is a TF1-style `import tensorflow as tf`.)
def __init__(self, sess):
    print("Initializing the agent...")

    self.sess = sess
    self.env = Environment()
    self.state_size = self.env.get_state_size()[0]
    self.action_size = self.env.get_action_size()
    self.low_bound, self.high_bound = self.env.get_bounds()

    # Prioritized replay buffer: ALPHA controls how strongly TD-error
    # priorities skew the sampling distribution.
    self.buffer = PrioritizedReplayBuffer(parameters.BUFFER_SIZE,
                                          parameters.ALPHA)

    print("Creation of the actor-critic network...")
    self.network = Network(self.state_size, self.action_size,
                           self.low_bound, self.high_bound)
    print("Network created!\n")

    self.epsilon = parameters.EPSILON_START
    self.beta = parameters.BETA_START

    self.best_run = -1e10

    self.sess.run(tf.global_variables_initializer())
Developer ID: SuReLI, Project: Deep-RL-agents, Line count: 25, Source file: Agent.py
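
Example 1 only constructs the buffer; when a batch is later drawn with buffer.sample(batch_size, beta), the returned importance weights are meant to scale the per-sample loss, and the resulting TD errors feed back as new priorities. Below is a minimal numpy sketch of that pattern, with synthetic predictions and targets standing in for the project's actor-critic outputs:

import numpy as np

q_pred = np.random.rand(64)    # synthetic Q-value predictions
q_target = np.random.rand(64)  # synthetic Bellman targets
weights = np.random.rand(64)   # as returned by buffer.sample(..., beta)

td_errors = q_target - q_pred
# Importance-sampling-weighted loss: each sample's squared error is
# scaled by its weight before averaging, correcting the sampling bias.
loss = np.mean(weights * td_errors ** 2)
# Absolute TD errors (plus a small constant) become the new priorities.
new_priorities = np.abs(td_errors) + 1e-6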

Example 2: __init__

# Required import: from baselines.deepq import replay_buffer [as alias]
# Or: from baselines.deepq.replay_buffer import PrioritizedReplayBuffer [as alias]
# (Environment, QNetwork, parameters and updateTargetGraph are local to
# the Deep-RL-agents project; tf is a TF1-style `import tensorflow as tf`.)
def __init__(self, sess):
    print("Initializing the agent...")

    self.sess = sess
    self.env = Environment()
    self.state_size = self.env.get_state_size()
    self.action_size = self.env.get_action_size()

    print("Creation of the main QNetwork...")
    self.mainQNetwork = QNetwork(self.state_size, self.action_size, 'main')
    print("Main QNetwork created!\n")

    print("Creation of the target QNetwork...")
    self.targetQNetwork = QNetwork(self.state_size, self.action_size,
                                   'target')
    print("Target QNetwork created!\n")

    # Prioritized replay buffer feeding both networks' training.
    self.buffer = PrioritizedReplayBuffer(parameters.BUFFER_SIZE,
                                          parameters.ALPHA)

    self.epsilon = parameters.EPSILON_START
    self.beta = parameters.BETA_START

    self.initial_learning_rate = parameters.LEARNING_RATE

    # Ops that copy the main network's weights into the target network.
    trainables = tf.trainable_variables()
    self.update_target_ops = updateTargetGraph(trainables)

    self.nb_ep = 1
    self.best_run = -1e10
Developer ID: SuReLI, Project: Deep-RL-agents, Line count: 32, Source file: Agent.py
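
updateTargetGraph is not defined on this page. In many TF1 DQN implementations whose style this code follows, it builds assign ops that softly move the target network's variables towards the main network's. The sketch below shows that common pattern; the tau value and the first-half/second-half variable ordering are assumptions about how the helper might work, not the project's actual code:

import tensorflow as tf

def updateTargetGraph(tfVars, tau=0.001):
    # Assumes the first half of tfVars belongs to the 'main' network and
    # the second half to 'target', created in matching order.
    total_vars = len(tfVars)
    op_holder = []
    for idx, var in enumerate(tfVars[:total_vars // 2]):
        target_var = tfVars[idx + total_vars // 2]
        # Soft update: target <- tau * main + (1 - tau) * target.
        op_holder.append(
            target_var.assign(tau * var.value()
                              + (1 - tau) * target_var.value()))
    return op_holder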

Example 3: __init__

# Required import: from baselines.deepq import replay_buffer [as alias]
# Or: from baselines.deepq.replay_buffer import PrioritizedReplayBuffer [as alias]
# (Environment, QNetwork, parameters and updateTargetGraph are local to
# the Deep-RL-agents project; tf is a TF1-style `import tensorflow as tf`.)
def __init__(self, sess):
    print("Initializing the agent...")

    self.sess = sess
    self.env = Environment()
    self.state_size = self.env.get_state_size()
    self.action_size = self.env.get_action_size()

    print("Creation of the main QNetwork...")
    self.mainQNetwork = QNetwork(self.state_size, self.action_size, 'main')
    print("Main QNetwork created!\n")

    print("Creation of the target QNetwork...")
    self.targetQNetwork = QNetwork(self.state_size, self.action_size,
                                   'target')
    print("Target QNetwork created!\n")

    self.buffer = PrioritizedReplayBuffer(parameters.BUFFER_SIZE,
                                          parameters.ALPHA)

    self.epsilon = parameters.EPSILON_START
    self.beta = parameters.BETA_START

    # Ops that copy the main network's weights into the target network.
    trainables = tf.trainable_variables()
    self.update_target_ops = updateTargetGraph(trainables)

    self.nb_ep = 1
Developer ID: SuReLI, Project: Deep-RL-agents, Line count: 29, Source file: Agent.py
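
All three constructors initialize self.epsilon from parameters.EPSILON_START, which suggests epsilon-greedy exploration with a decaying epsilon. The following is a hypothetical sketch of such a policy; the decay constants and the choose_action helper are illustrative names, not taken from the Deep-RL-agents sources:

import numpy as np

EPSILON_STOP = 0.01   # hypothetical final exploration rate
EPSILON_DECAY = 1e-5  # hypothetical per-step decay

def choose_action(q_values, epsilon):
    # With probability epsilon explore uniformly, otherwise act greedily.
    if np.random.rand() < epsilon:
        return np.random.randint(len(q_values))
    return int(np.argmax(q_values))

epsilon = 1.0  # plays the role of parameters.EPSILON_START
for step in range(5):
    action = choose_action(np.random.rand(4), epsilon)
    epsilon = max(EPSILON_STOP, epsilon - EPSILON_DECAY)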


Note: The baselines.deepq.replay_buffer.PrioritizedReplayBuffer examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their respective authors; copyright of the source code remains with the original authors, and any distribution or use should follow the corresponding project's License. Do not republish without permission.