Action-Graph Policies: Learning Action Co-dependencies in Multi-Agent Reinforcement Learning

Coordinating actions is the most fundamental form of cooperation in multi-agent reinforcement learning (MARL). Successful decentralized decision-making often depends not only on good individual actions, but on selecting compatible actions across agents to synchronize behavior, avoid conflicts, and satisfy global constraints. In this paper, we propose Action-Graph Policies (AGP), which model dependencies among agents' available action choices. AGP constructs what we call coordination contexts, which enable agents to condition their decisions on global action dependencies. Theoretically, we show that AGP induces a strictly more expressive joint policy class than fully independent policies and can realize coordinated joint actions that provably improve on those obtained by greedy execution, even of centralized value-decomposition methods. Empirically, we show that AGP achieves 80-95% success on canonical coordination tasks with partial observability and anti-coordination penalties, where competing MARL methods reach only 10-25%. We further demonstrate that AGP consistently outperforms these baselines in diverse multi-agent environments.
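To make the core idea concrete, the sketch below illustrates one plausible way a policy could condition each agent's action logits on a coordination context derived from all agents' action preferences. This is not the authors' implementation; the class name, network layout, and the use of attention as a stand-in for the action-dependency graph are assumptions made purely for illustration.

```python
# Illustrative sketch only (not the paper's method): each agent scores its own
# actions from its local observation, a shared "coordination context" is built
# from all agents' preference vectors (a stand-in for the action-dependency
# graph), and the final per-agent logits condition on that context.
import torch
import torch.nn as nn

class ActionGraphPolicySketch(nn.Module):  # hypothetical name
    def __init__(self, n_agents: int, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        # Per-agent scorer of local action preferences from the local observation.
        self.local_scorer = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions)
        )
        # Attention over all agents' preference vectors yields a coordination
        # context capturing cross-agent action dependencies.
        self.context_attn = nn.MultiheadAttention(n_actions, num_heads=1, batch_first=True)
        # Final head mixes local preferences with the coordination context.
        self.mix = nn.Linear(2 * n_actions, n_actions)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, n_agents, obs_dim) -> logits: (batch, n_agents, n_actions)
        prefs = self.local_scorer(obs)                    # local action preferences
        ctx, _ = self.context_attn(prefs, prefs, prefs)   # coordination context
        return self.mix(torch.cat([prefs, ctx], dim=-1))  # context-conditioned logits

# Usage: sample a coordinated joint action for 3 agents with 5 actions each.
policy = ActionGraphPolicySketch(n_agents=3, obs_dim=8, n_actions=5)
logits = policy(torch.randn(1, 3, 8))
joint_action = torch.distributions.Categorical(logits=logits).sample()
```

The key design point this sketch tries to convey is that each agent's final action distribution depends not only on its own observation but also on a summary of the other agents' action preferences, which is what allows compatible joint actions to be selected.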
