Abstract:
Cooperative multi-agent reinforcement learning (MARL) tasks rely on efficient coordination among agents, which work collectively as a team to address diverse challenges. However, treating the team as a single cohesive entity imposes a flat cooperation structure. Grouping addresses this issue by decomposing the team, thereby providing a more compact representation of the team's structure. While grouping has proven effective, many grouping methods are limited to specific composition structures and struggle to incorporate diverse group patterns into their frameworks. In this paper, we propose SGCD, a subgroup contribution decomposition method that builds on the idea of subgroups and inner subgroups and leverages the Shapley Value to distribute contributions. This approach decomposes the contributions of subgroups to the collective onto individual agents, enabling the high-level network to remain consistent across various grouping patterns and thereby fostering continued cooperation among agents. Notably, our decomposition method is not confined to a specific team decomposition, making it adaptable to different grouping structures. The effectiveness of SGCD is demonstrated through experiments in the Google Research Football (GRF) and StarCraft Multi-Agent Challenge (SMAC) environments.
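To make the Shapley Value mechanism concrete, the sketch below computes exact Shapley values for a small subgroup by averaging each agent's marginal contribution over all join orders. This is an illustrative toy, not SGCD's actual decomposition: the agent names and the coalition value function `v` (a base reward plus a synergy bonus) are hypothetical assumptions for demonstration only.

```python
from itertools import permutations

def shapley_values(agents, value):
    """Exact Shapley values: average each agent's marginal
    contribution over every ordering in which agents join."""
    phi = {a: 0.0 for a in agents}
    perms = list(permutations(agents))
    for order in perms:
        coalition = []
        for a in order:
            before = value(frozenset(coalition))
            coalition.append(a)
            # Marginal contribution of `a` given who joined before it.
            phi[a] += value(frozenset(coalition)) - before
    return {a: total / len(perms) for a, total in phi.items()}

# Hypothetical subgroup value function: per-agent base reward
# plus a flat synergy bonus once at least two agents cooperate.
def v(coalition):
    base = {"a1": 1.0, "a2": 2.0, "a3": 3.0}
    bonus = 0.5 if len(coalition) >= 2 else 0.0
    return sum(base[a] for a in coalition) + bonus

phi = shapley_values(["a1", "a2", "a3"], v)
```

The Shapley Value's efficiency property guarantees the per-agent shares sum exactly to the subgroup's total value, which is what allows a subgroup's contribution to the collective to be distributed onto individual agents without loss. Exact enumeration is factorial in the subgroup size; practical methods use sampled permutations for larger teams.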
Date of Conference: 30 June 2024 - 05 July 2024
Date Added to IEEE Xplore: 09 September 2024