Why is the TensorFlow optimizer object not recognized?



I am trying to augment the REINFORCE algorithm from scratch with Proximal Policy Optimization. The problem I am facing is that the optimizer object in my graph is not recognized when I try to call it with session.run(). The error I get is "NameError: name 'optimizer' is not defined". I am trying to call it from a method inside the class that defines the LSTM network. I do effectively the same thing from other methods to get predictions from the network, but I do not understand why this call does not work.

def __init__(self, input_size, output_size, session):
    """
    input_size: dimension of input environment - OpenAI cartpole = 4
    output_size: dimension of action space - OpenAI cartpole = 2
    """
    # LSTM expects input to be 3D Tensor
    # Reshape input for LSTM
    self.env = tf.placeholder(dtype=tf.float32, shape=[1, None, input_size])
    #self.environment = tf.reshape(self.env, shape=[1, None, input_size])
    #obsp = tf.placeholder(dtype=tf.float32, shape=[None,])
    #delta = tf.reshape(obsp, shape=[1, 1, 4])
    self.lstm1 = tf.keras.layers.LSTM(8, return_sequences=False)(self.env)
    # Softmax returns probability distribution of actions and what NN predicts will be best
    self.actor = tf.keras.layers.Dense(output_size, activation="softmax")(self.lstm1)
    self.critic = tf.keras.layers.Dense(1, activation=None)(self.lstm1)
    return_ = tf.placeholder(dtype=tf.float32, shape=[None, 1])
    actor_loss = tf.placeholder(dtype=tf.float32, shape=[None, 1])
    entropy = tf.placeholder(dtype=tf.float32, shape=[None, 1])
    critic_loss = (return_ - self.critic)**2
    loss = 0.5 * critic_loss + actor_loss - 0.001 * entropy
    # There is no connection between the optimizer and the rest of the graph
    optimizer = tf.train.AdamOptimizer(0.001).minimize(loss)
    init = tf.global_variables_initializer()
    initlocal = tf.local_variables_initializer()
    session.run([init, initlocal])
def update(self, session, epochs, batch_size, states, actions, log_probs, returns, advantages):
    """
    Update neural network with experience buffers
    ##############################################
    epochs: number of epochs to train neural network for
    batch_size: size of data batches used for training
    states: array of environment states
    actions: array of actions taken
    log_probs: log probability of actions
    returns: array of estimated advantages at given times
    advantages: array of advantages, calculated as the difference between predicted and estimated returns
    """
    # Clipping value for PPO
    clip = 0.2
    for e in range(epochs):
        for state, action, old_log_prob, return_, advantage in self.make_batches(states, actions, log_probs, returns, advantages):
            new_actions, value = self.forward(state, session)
            new_log_prob = np.log(new_actions[action])
            ent = entropy(actions)    # NOTE: from scipy.stats import entropy
            #ratio = np.exp(new_log_prob - old_log_prob)
            ratio = np.exp(np.mean(np.mean(new_log_prob - old_log_prob, axis=1), axis=1))
            a = np.mean(ratio * advantage)
            b = np.mean(np.clip(ratio, 1 - clip, 1 + clip) * advantage)
            actor_loss = - np.min([a, b])
            session.run(optimizer, feed_dict={actor_loss: actor_loss, return_: returns, self.env: state, entropy: ent})

a) I assume the two functions __init__ and update are in the same class. If so, do the following:

1) In the __init__ function, add the self. prefix to optimizer:

You get: self.optimizer = tf.train.AdamOptimizer(0.001).minimize(loss)

2) In the update function, add the self. prefix to optimizer as well:

You get: session.run(self.optimizer, feed_dict={actor_loss: actor_loss, return_: returns, self.env: state, entropy: ent})
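
Putting both changes together, here is a minimal sketch of the idea (assuming TF 1.x graph mode as in the question; the class name Network, the trimmed placeholders, and the simplified loss are illustrative, not your original code):

    import tensorflow as tf

    class Network:
        def __init__(self, input_size, output_size, session):
            self.env = tf.placeholder(dtype=tf.float32, shape=[1, None, input_size])
            lstm = tf.keras.layers.LSTM(8, return_sequences=False)(self.env)
            self.critic = tf.keras.layers.Dense(1, activation=None)(lstm)
            self.return_ = tf.placeholder(dtype=tf.float32, shape=[None, 1])
            loss = tf.reduce_mean((self.return_ - self.critic) ** 2)
            # Stored as an attribute, so other methods can reach the train op later
            self.optimizer = tf.train.AdamOptimizer(0.001).minimize(loss)
            session.run(tf.global_variables_initializer())

        def update(self, session, state, return_batch):
            # self.optimizer resolves here; a bare `optimizer` would raise NameError
            session.run(self.optimizer,
                        feed_dict={self.env: state, self.return_: return_batch})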

b) If they are not in the same class, you have to pass optimizer as an argument to the update function.

You get: def update(self, session, epochs, batch_size, states, actions, log_probs, returns, advantages, optimizer):
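
The caller then builds the training op wherever the graph is defined and hands it in. A hypothetical call site (network and optimizer_op are placeholder names, not from your code) could look like:

    # Build the op next to the loss it minimizes, then pass it to update()
    optimizer_op = tf.train.AdamOptimizer(0.001).minimize(loss)
    network.update(session, epochs, batch_size, states, actions,
                   log_probs, returns, advantages, optimizer_op)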
