ABSTRACT
Traditional optimization and control techniques treat the dynamic interactions between individuals separately; as the number of agents grows, modeling cooperative attack-defense problems becomes increasingly complex, and solving for the optimal strategy becomes significantly harder. Moreover, accurate real-time control of agents typically requires high-dimensional state variables to characterize their kinematics. To overcome these challenges, we formulate the cooperative attack-defense evolution of large-scale agents as a multi-population high-dimensional stochastic mean-field game (MPHD-MFG). Numerical methods for MPHD-MFGs are practically non-existent: the heterogeneity of the multi-population model increases the complexity of the sequential game, and grid-based spatial discretization leads to a dimension explosion. We therefore propose a generative adversarial network (GAN)-based method that uses a coupled alternating neural network, composed of multiple generators and multiple discriminators, to solve MPHD-MFGs tractably. Simulation experiments on various attack-defense scenarios verify the feasibility and effectiveness of the proposed model and algorithm.
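The core computational pattern the abstract describes (alternating updates between per-population generators and discriminators coupled through a mean-field term) can be illustrated with a deliberately tiny sketch. Everything here is illustrative, not the paper's method: the neural networks are replaced by scalar parameters, the Hamilton-Jacobi-Bellman/Fokker-Planck structure by a toy saddle objective, and the coupling rule (`targets`) is a hypothetical attack-defense interaction.

```python
import numpy as np

# Toy sketch: two coupled populations (attackers, defenders), each with a
# "generator" parameter g[i] (the population's mean state) and a
# "discriminator" parameter d[i] (a linear value-function weight).
# The saddle objective for population i is
#   L_i = d[i] * (g[i] - target_i(g)) - 0.5 * d[i]**2,
# where target_i couples to the OTHER population's mean (the mean-field term).
# Discriminators do gradient ascent on L_i, generators gradient descent --
# the same alternating pattern the paper scales up with neural networks.

def targets(g):
    # Hypothetical coupling: attackers (population 0) chase the defenders'
    # mean; defenders (population 1) hold a fixed guard point at 1.0.
    return np.array([g[1], 1.0])

def solve(steps=5000, lr=0.05):
    g = np.array([0.0, 0.0])   # generator parameters (population means)
    d = np.array([0.0, 0.0])   # discriminator parameters (value weights)
    for _ in range(steps):
        t = targets(g)
        # Discriminator ascent step: dL/dd = (g - t) - d.
        d += lr * ((g - t) - d)
        # Generator descent step: dL/dg = d (coupling t held fixed per step).
        g -= lr * d
    return g, d

g, d = solve()
print(np.round(g, 3))  # both population means approach the guard point 1.0
```

In this linearized toy the alternating iteration is a stable spiral, so both means converge to the guard point and the discriminator weights decay to zero; the paper's contribution is making this alternation work when `g` and `d` are high-dimensional neural networks and each population's objective involves stochastic dynamics.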