ABSTRACT
Dynamic Difficulty Adjustment (DDA) is a technique for automatically adjusting game factors, such as items, maps, or opponent behavior, to provide players with a challenging and engaging experience. The goal is to maintain a balance that ensures an optimal level of enjoyment. In this work, we propose a reinforcement learning agent for a fighting game that acts as an opponent matching the player's skill level. We propose a reward function that keeps the player and the opponent at similar relative skill, maintaining a balanced match. Additionally, we introduce a penalty applied to the agent during training to constrain its win rate, creating an opponent that is neither too weak nor too strong. We also explore regularization techniques to improve the agent's performance and adaptability, and show that regularization improves over the baseline in generalizing the agent's behavior to opponents not encountered during training.
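The paper's exact reward formulation is not reproduced here, but the two ideas in the abstract, rewarding even matches and penalizing deviation from a target win rate, can be sketched as follows. All names, coefficients, and the health-difference shaping term are illustrative assumptions, not the authors' definitions:

```python
# Hypothetical sketch of a balance-oriented DDA reward with a win-rate
# constraint. Names and coefficients are illustrative, not from the paper.
from collections import deque


class BalanceReward:
    """Reward favoring even matches while constraining the agent's win rate."""

    def __init__(self, target_win_rate=0.5, penalty_coef=1.0, window=100):
        self.target_win_rate = target_win_rate  # desired long-run win rate
        self.penalty_coef = penalty_coef        # strength of the win-rate penalty
        self.results = deque(maxlen=window)     # rolling wins (1) / losses (0)

    def step_reward(self, agent_hp, opponent_hp):
        # Per-step shaping: highest when both fighters' health stays close,
        # which encourages matches between players of similar relative skill.
        return -abs(agent_hp - opponent_hp)

    def episode_penalty(self, agent_won):
        # Terminal penalty: push the empirical win rate toward the target,
        # so the trained opponent is neither too weak nor too strong.
        self.results.append(1 if agent_won else 0)
        win_rate = sum(self.results) / len(self.results)
        return -self.penalty_coef * abs(win_rate - self.target_win_rate)
```

In this sketch the shaping term alone would reward a passive agent that mirrors the player; the terminal win-rate penalty counteracts that by forcing the agent's outcomes toward the target rate over a rolling window.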
Investigating Reinforcement Learning for Dynamic Difficulty Adjustment