ABSTRACT
This paper considers a non-cooperative real-time strategy game between two teams, each consisting of multiple homogeneous players with identical capabilities. Specifically, the first team comprises land vehicles under attack by a team of drones; the vehicles are equipped with weapons to counterattack the drones. As the number of drones increases, however, it may become difficult for human operators to coordinate actions across vehicles in a timely manner. To address this problem, we explore a coevolutionary approach that simultaneously evolves competitive weapon-target assignment strategies for both the land vehicles and the drone threats. Scenarios with varying numbers of land vehicles and drone threats were used to evaluate the performance of the proposed approach. The results show some advantages of applying such a coevolutionary approach.
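The competitive coevolutionary setup described above can be illustrated with a minimal sketch: two populations (defender weapon-target assignments and attacker target-value profiles) are evolved against each other, each individual's fitness computed by sampling opponents from the rival population. All names, payoff rules, and parameters below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of competitive coevolution for a toy weapon-target
# assignment (WTA) game. Payoff model and parameters are hypothetical.
import random

N_WEAPONS, N_TARGETS = 4, 6   # defender weapons vs. attacking drones
POP, GENS, MUT, SAMPLE = 20, 30, 0.2, 5

def random_strategy():
    # A defender strategy assigns each weapon to one drone (target index).
    return [random.randrange(N_TARGETS) for _ in range(N_WEAPONS)]

def random_priorities():
    # An attacker individual weights how valuable each drone is to cover.
    return [random.random() for _ in range(N_TARGETS)]

def engage(defender, priorities):
    # Toy zero-sum payoff: defender scores the summed value of the
    # distinct drones its weapons cover under the attacker's weighting.
    return sum(priorities[t] for t in set(defender))

def select_and_mutate(pop, fitness, mutate):
    # Truncation selection: keep the top half, refill with mutated elites.
    ranked = [p for _, p in sorted(zip(fitness, pop), key=lambda x: -x[0])]
    elite = ranked[: len(pop) // 2]
    return elite + [mutate(random.choice(elite)) for _ in range(len(pop) - len(elite))]

def mutate_strategy(s):
    return [random.randrange(N_TARGETS) if random.random() < MUT else g for g in s]

def mutate_priorities(a):
    return [min(1.0, max(0.0, v + random.gauss(0, 0.1)))
            if random.random() < MUT else v for v in a]

def coevolve():
    defenders = [random_strategy() for _ in range(POP)]
    attackers = [random_priorities() for _ in range(POP)]
    for _ in range(GENS):
        # Evaluate each individual against a sample of the rival population.
        d_fit = [sum(engage(d, a) for a in random.sample(attackers, SAMPLE))
                 for d in defenders]
        a_fit = [-sum(engage(d, a) for d in random.sample(defenders, SAMPLE))
                 for a in attackers]  # zero-sum: attacker minimizes coverage
        defenders = select_and_mutate(defenders, d_fit, mutate_strategy)
        attackers = select_and_mutate(attackers, a_fit, mutate_priorities)
    return defenders, attackers
```

The sampled-opponent evaluation is one common way to keep fitness computation tractable in competitive coevolution; alternatives from the literature include hall-of-fame archives, which evaluate individuals against stored past champions to reduce cycling.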
Index Terms: Coevolutionary Algorithm for Evolving Competitive Strategies in the Weapon Target Assignment Problem