How and when to merge and split time_steps and action_steps with TF-Agents



I am trying to use TF-Agents in a simple multi-agent, non-cooperative, parallel game. To keep things simple, I have two agents, each defined with TF-Agents. I defined a custom gym environment that takes the agents' combined actions as input and returns the combined observations. Each agent's policy should not take the full observation as input, but only its own part of it. So I need to do two things:

  • Split the time_step instance returned by the TF-Agents environment wrapper, so that each part can be fed independently to an agent's policy
  • Merge the action_step instances coming from the agents' policies, so that the combined action can be fed to the environment

If agent1_policy and agent2_policy are two TF-Agents policies, and environment is a TF-Agents environment, I would like to be able to collect steps like this:

from tf_agents.trajectories import trajectory
time_step = environment.current_time_step()
# Split the time_step to have partial observability
time_step1, time_step2 = split(time_step)
# Get action from each agent
action_step1 = agent1_policy.action(time_step1)
action_step2 = agent2_policy.action(time_step2)
# Merge the independent actions
action_merged = merge(action_step1, action_step2)
# Use the merged actions to have the next step
next_time_step = environment.step(action_merged)
# Split the next step too
next_time_step1, next_time_step2 = split(next_time_step)
# Build two distinct trajectories
traj1 = trajectory.from_transition(time_step1, action_step1, next_time_step1)
traj2 = trajectory.from_transition(time_step2, action_step2, next_time_step2)

traj1 and traj2 would then be used to train each agent independently.

In this example, how should I define the functions merge and split?

This can be achieved by defining an appropriate action_spec and observation_spec in the environment class. See this document for an example of an environment whose observations are a dictionary of tensors. A similar approach can be used for actions that are dictionaries or tuples.
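With dict-structured specs, split and merge reduce to indexing into and assembling those dicts. Below is a minimal, library-free sketch. In TF-Agents, TimeStep is a namedtuple with fields (step_type, reward, discount, observation) and PolicyStep has fields (action, state, info); stand-in namedtuples are defined here so the example runs without TensorFlow installed. The dict keys 'agent1' and 'agent2' are assumptions about how the environment's specs might be structured, not part of any TF-Agents API:

```python
import collections

# Stand-ins mirroring the field layout of tf_agents.trajectories.TimeStep
# and PolicyStep, so the sketch is self-contained.
TimeStep = collections.namedtuple(
    'TimeStep', ['step_type', 'reward', 'discount', 'observation'])
PolicyStep = collections.namedtuple('PolicyStep', ['action', 'state', 'info'])


def split(time_step):
    """Split a joint TimeStep into one partially-observable TimeStep per agent.

    Assumes observation and reward are dicts keyed by agent name.
    step_type and discount are shared, so they are copied to both.
    """
    per_agent = []
    for key in ('agent1', 'agent2'):
        per_agent.append(TimeStep(
            step_type=time_step.step_type,
            reward=time_step.reward[key],
            discount=time_step.discount,
            observation=time_step.observation[key]))
    return tuple(per_agent)


def merge(action_step1, action_step2):
    """Merge per-agent PolicySteps into the joint dict action the env expects."""
    return {'agent1': action_step1.action, 'agent2': action_step2.action}


# Usage with toy values:
joint = TimeStep(step_type=1,
                 reward={'agent1': 1.0, 'agent2': -1.0},
                 discount=1.0,
                 observation={'agent1': [0, 1], 'agent2': [2, 3]})
ts1, ts2 = split(joint)
merged = merge(PolicyStep(0, (), ()), PolicyStep(1, (), ()))
```

In a real environment the same slicing would operate on tensors, and the keys would come from the environment's observation_spec and action_spec rather than being hard-coded.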
