This article collects typical usage examples of the Python method baselines.deepq.replay_buffer.PrioritizedReplayBuffer. If you are wondering what replay_buffer.PrioritizedReplayBuffer does and how to use it, the curated code examples below may help; you can also read further about its containing module, baselines.deepq.replay_buffer.
Three code examples of replay_buffer.PrioritizedReplayBuffer are shown below.
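Before the examples, here is a minimal sketch of the PrioritizedReplayBuffer API as exposed by OpenAI baselines: the constructor takes a capacity and a prioritization exponent alpha, add() stores one transition, sample() takes a batch size and an importance-sampling exponent beta and returns the batch together with importance weights and tree indices, and update_priorities() writes new priorities (typically absolute TD errors) back for the sampled indices. The capacity, alpha, beta, and dummy transitions below are illustrative values, not from the original code.

import numpy as np
from baselines.deepq.replay_buffer import PrioritizedReplayBuffer

# capacity and alpha are illustrative; alpha=0 would reduce to uniform sampling
buffer = PrioritizedReplayBuffer(50000, alpha=0.6)

# fill with dummy transitions: (obs_t, action, reward, obs_tp1, done)
for _ in range(1000):
    buffer.add(np.random.rand(4), 0, 1.0, np.random.rand(4), False)

# sampling returns the batch plus importance-sampling weights and indices
(obs_t, actions, rewards, obs_tp1, dones,
 weights, idxes) = buffer.sample(32, beta=0.4)

# after computing TD errors for the batch, write them back as new priorities
td_errors = np.random.rand(len(idxes))  # stand-in for real TD errors
buffer.update_priorities(idxes, np.abs(td_errors) + 1e-6)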
Example 1: __init__
# Required import: from baselines.deepq import replay_buffer [as alias]
# Or: from baselines.deepq.replay_buffer import PrioritizedReplayBuffer [as alias]
def __init__(self, sess):
    print("Initializing the agent...")
    self.sess = sess
    self.env = Environment()
    self.state_size = self.env.get_state_size()[0]
    self.action_size = self.env.get_action_size()
    self.low_bound, self.high_bound = self.env.get_bounds()

    # alpha controls how strongly sampling favors high-priority transitions
    self.buffer = PrioritizedReplayBuffer(parameters.BUFFER_SIZE,
                                          parameters.ALPHA)

    print("Creation of the actor-critic network...")
    self.network = Network(self.state_size, self.action_size,
                           self.low_bound, self.high_bound)
    print("Network created !\n")

    self.epsilon = parameters.EPSILON_START
    self.beta = parameters.BETA_START  # importance-sampling exponent for the buffer
    self.best_run = -1e10

    self.sess.run(tf.global_variables_initializer())
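Example 1 only constructs the buffer; the beta it initializes matters at sampling time. Below is a hedged sketch of how a training step might consume the buffer. Network.train_step, parameters.BATCH_SIZE, and parameters.BETA_INCREMENT are assumptions (not part of the original code), and numpy is assumed imported as np.

def train(self):
    # anneal beta toward 1 so importance-sampling corrections become unbiased
    self.beta = min(1.0, self.beta + parameters.BETA_INCREMENT)  # BETA_INCREMENT is assumed

    states, actions, rewards, next_states, dones, weights, idxes = \
        self.buffer.sample(parameters.BATCH_SIZE, self.beta)

    # hypothetical Network method that applies one gradient step, scaling
    # each sample's loss by `weights`, and returns per-sample TD errors
    td_errors = self.network.train_step(self.sess, states, actions, rewards,
                                        next_states, dones, weights)

    # feed |TD error| back as the new priority for each sampled transition
    self.buffer.update_priorities(idxes, np.abs(td_errors) + 1e-6)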
Example 2: __init__
# Required import: from baselines.deepq import replay_buffer [as alias]
# Or: from baselines.deepq.replay_buffer import PrioritizedReplayBuffer [as alias]
def __init__(self, sess):
    print("Initializing the agent...")
    self.sess = sess
    self.env = Environment()
    self.state_size = self.env.get_state_size()
    self.action_size = self.env.get_action_size()

    print("Creation of the main QNetwork...")
    self.mainQNetwork = QNetwork(self.state_size, self.action_size, 'main')
    print("Main QNetwork created !\n")

    print("Creation of the target QNetwork...")
    self.targetQNetwork = QNetwork(self.state_size, self.action_size,
                                   'target')
    print("Target QNetwork created !\n")

    self.buffer = PrioritizedReplayBuffer(parameters.BUFFER_SIZE,
                                          parameters.ALPHA)

    self.epsilon = parameters.EPSILON_START
    self.beta = parameters.BETA_START
    self.initial_learning_rate = parameters.LEARNING_RATE

    # ops that copy the main network's weights into the target network
    trainables = tf.trainable_variables()
    self.update_target_ops = updateTargetGraph(trainables)

    self.nb_ep = 1
    self.best_run = -1e10
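Examples 2 and 3 call an updateTargetGraph helper that is not shown in this excerpt. In DQN code structured this way, the helper usually pairs each trainable variable of the 'main' network with its counterpart in the 'target' network and returns assign ops; the sketch below assumes that convention, and the soft-update rate tau is a hypothetical default.

import tensorflow as tf

def updateTargetGraph(tfVars, tau=0.001):
    # assumes the first half of tfVars belongs to the main network and the
    # second half to the target network, created in the same variable order
    half = len(tfVars) // 2
    ops = []
    for idx, main_var in enumerate(tfVars[:half]):
        target_var = tfVars[idx + half]
        # soft update: target <- tau * main + (1 - tau) * target
        ops.append(target_var.assign(tau * main_var.value() +
                                     (1 - tau) * target_var.value()))
    return ops

The returned ops would then be run periodically during training, e.g. sess.run(self.update_target_ops).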
Example 3: __init__
# Required import: from baselines.deepq import replay_buffer [as alias]
# Or: from baselines.deepq.replay_buffer import PrioritizedReplayBuffer [as alias]
def __init__(self, sess):
    print("Initializing the agent...")
    self.sess = sess
    self.env = Environment()
    self.state_size = self.env.get_state_size()
    self.action_size = self.env.get_action_size()

    print("Creation of the main QNetwork...")
    self.mainQNetwork = QNetwork(self.state_size, self.action_size, 'main')
    print("Main QNetwork created !\n")

    print("Creation of the target QNetwork...")
    self.targetQNetwork = QNetwork(self.state_size, self.action_size,
                                   'target')
    print("Target QNetwork created !\n")

    self.buffer = PrioritizedReplayBuffer(parameters.BUFFER_SIZE,
                                          parameters.ALPHA)

    self.epsilon = parameters.EPSILON_START
    self.beta = parameters.BETA_START

    trainables = tf.trainable_variables()
    self.update_target_ops = updateTargetGraph(trainables)

    self.nb_ep = 1
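All three constructors also initialize self.epsilon, whose use falls outside the excerpt. A typical epsilon-greedy action selection with decay might look like the sketch below; the mainQNetwork.Qvalues and .inputs tensors and the parameters.EPSILON_STOP / parameters.EPSILON_DECAY names are hypothetical, and numpy/random are assumed imported.

def act(self, state):
    # epsilon-greedy: explore with probability epsilon, otherwise act greedily
    if random.random() < self.epsilon:
        action = random.randrange(self.action_size)
    else:
        q_values = self.sess.run(self.mainQNetwork.Qvalues,  # hypothetical output tensor
                                 feed_dict={self.mainQNetwork.inputs: [state]})
        action = int(np.argmax(q_values))
    # decay epsilon toward a floor (both parameter names are assumptions)
    self.epsilon = max(parameters.EPSILON_STOP,
                       self.epsilon * parameters.EPSILON_DECAY)
    return action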