Policy network returns different outputs for a batch of states vs. individual states



I am implementing REINFORCE for the OpenAI Gym CartPole-v0 environment. I am comparing two different ways of computing the same quantity, and I cannot resolve the following problem:

When I pass a single state to the policy network, I get an output tensor of size 2 containing the action probabilities for the 2 actions. However, when I pass a "batch of states" to the policy network to compute the action probabilities for all of those states at once, the values I get are very different from the values I get when I pass each state to the network individually.
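To isolate what I mean from the full training loop, here is a minimal sketch of the comparison (a toy network with the same layer sizes as my policy network below, and random tensors standing in for CartPole's 4-dimensional observations):

import torch
import torch.nn as nn

# toy model with the same shape as the policy network below (4 state dims, 2 actions)
net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2), nn.Softmax(dim=0))

states = torch.randn(5, 4)          # stand-in for a batch of 5 CartPole states
single = net(states[0])             # action probabilities for the first state alone
batched = net(states)[0]            # first row of the batched output
print(single)                       # sums to 1, as expected
print(batched)                      # very different values -- this is what confuses me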

Can someone help me understand this issue?

My code is below. (Note: this is not the complete REINFORCE algorithm; I know I still need to compute the loss from the probabilities. But before going further, I am trying to understand the difference between the two probability computations, which I think should be identical.)

import gym
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim


# architecture of the Policy Network
class PolicyNetwork(nn.Module):
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.n_actions = n_actions
        self.model = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),
            nn.Softmax(dim=0)
        ).float()

    def forward(self, X):
        return self.model(X)

def train_reinforce_agent(env, episode_length, max_episodes, gamma, visualize_step, learning_rate=0.003):
    # define the parametric model for the Policy: this is an instantiation of the PolicyNetwork class
    model = PolicyNetwork(env.observation_space.shape[0], env.action_space.n)
    # define the optimizer for updating the weights of the Policy Network
    optimizer = optim.Adam(model.parameters(), lr=learning_rate)

    # hyperparameters of the reinforce agent
    EPISODE_LENGTH = episode_length
    MAX_EPISODES = max_episodes
    GAMMA = gamma
    VISUALIZE_STEP = max(1, visualize_step)
    score = []

    for episode in range(MAX_EPISODES):
        # reset the environment
        curr_state = env.reset()
        done = False
        episode_t = []

        # rollout an entire episode from the Policy Network
        pred_vals = []
        for t in range(EPISODE_LENGTH):
            act_prob = model(torch.from_numpy(curr_state).float())
            pred_vals.append(act_prob)
            action = np.random.choice(np.array(list(range(env.action_space.n))), p=act_prob.data.numpy())
            prev_state = curr_state
            curr_state, _, done, info = env.step(action)
            episode_t.append((prev_state, action, t+1))
            if done:
                break
        score.append(len(episode_t))
        # reward_batch = torch.Tensor([r for (s,a,r) in episode_t]).flip(dims=(0,))
        reward_batch = torch.Tensor([r for (s, a, r) in episode_t])

        # compute the return for every state-action pair from the rewards at every time-step
        batch_Gvals = []
        for i in range(len(episode_t)):
            new_Gval = 0
            power = 0
            for j in range(i, len(episode_t)):
                new_Gval = new_Gval + ((GAMMA ** power) * reward_batch[j]).numpy()
                power += 1
            batch_Gvals.append(new_Gval)

        # normalize the returns for the batch
        expected_returns_batch = torch.FloatTensor(batch_Gvals)
        if torch.is_nonzero(expected_returns_batch.max()):
            expected_returns_batch /= expected_returns_batch.max()

        # batch the states, actions, prob after the episode
        state_batch = torch.Tensor([s for (s, a, r) in episode_t])
        print("State batch:", state_batch)
        all_states = [s for (s, a, r) in episode_t]
        print("All states:", all_states)
        action_batch = torch.Tensor([a for (s, a, r) in episode_t])
        pred_batch_v1 = model(state_batch)
        pred_batch_v2 = torch.stack(pred_vals)
        print("Batched state pred_vals:", pred_batch_v1)
        print("Individual state pred_vals:", pred_batch_v2)  ### Why is this different from the above predicted values??

My main function, where I set up the environment, is:

def main():
    env = gym.make('CartPole-v0')
    # train a REINFORCE-agent to learn the optimal policy
    episode_length = 500
    n_episodes = 500
    gamma = 0.99
    vis_steps = 50
    train_reinforce_agent(env, episode_length, n_episodes, gamma, vis_steps)

In your policy network, the softmax is taken over dim 0. For a single (1-D) state that happens to be the action dimension, but for a batch of states dim 0 is the batch dimension, so each action's probability gets normalized across the batch instead of across the actions. You want to apply the softmax across the actions, i.e. dim=1 for batched input (or dim=-1, which handles both shapes).
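As a quick sketch with toy tensors (same layer sizes as your network), switching the softmax to the last dimension makes the per-state and batched outputs agree, because the normalization is then always over the action dimension:

import torch
import torch.nn as nn

# softmax over the last dimension: the action dimension for both a single
# 1-D state and a 2-D batch of states
net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2), nn.Softmax(dim=-1))

states = torch.randn(5, 4)
single = net(states[0])
batched = net(states)

print(torch.allclose(single, batched[0]))   # True: the two computations now match
print(batched.sum(dim=-1))                  # every row sums to 1

With dim=1 the batched case also works, but the single-state forward pass would then need an explicit batch dimension (e.g. state.unsqueeze(0)); dim=-1 covers both shapes.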
