DOI: 10.1145/3472538.3472583
Short paper

Modular Reinforcement Learning Framework for Learners and Educators

Published: 21 October 2021

Abstract

Reinforcement learning algorithms have been applied to many research areas, in both gaming and non-gaming applications. In game artificial intelligence (AI), however, existing reinforcement learning tools are aimed at experienced developers and are not readily accessible to learners. The unpredictable nature of online learning is a further barrier for casual users. This paper proposes EasyGameRL, a novel framework for teaching reinforcement learning in games through modular visual design patterns. The EasyGameRL framework and its software implementation in Unreal Engine are modular, reusable, and applicable to multiple game scenarios. The pattern-based approach lets users apply reinforcement learning effectively in their games and visualize each component of the learning process, benefiting AI learners, educators, designers, and casual users alike.
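The abstract describes reusable, modular learning components that game developers can drop into different scenarios. The paper's actual implementation uses visual design patterns in Unreal Engine; purely as an illustration of the kind of self-contained module the abstract describes, the sketch below shows a minimal tabular Q-learning component in Python. All class and parameter names here are assumptions for illustration, not the paper's API.

```python
import random

class QLearningModule:
    """A minimal, reusable tabular Q-learning component.

    Illustrative sketch only: names and defaults are assumptions,
    not EasyGameRL's actual interface.
    """

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = list(actions)   # discrete action set
        self.alpha = alpha             # learning rate
        self.gamma = gamma             # discount factor
        self.epsilon = epsilon         # exploration rate
        self.q = {}                    # (state, action) -> estimated value

    def value(self, state, action):
        # Unseen state-action pairs default to 0.
        return self.q.get((state, action), 0.0)

    def choose(self, state):
        # Epsilon-greedy action selection: explore with probability
        # epsilon, otherwise pick the highest-valued action.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value(state, a))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update (Watkins and Dayan, 1992):
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.value(next_state, a) for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] = (
            self.value(state, action)
            + self.alpha * (td_target - self.value(state, action))
        )
```

Because the module holds no game-specific logic, the same component could be wired into different game loops by supplying different state encodings and reward signals, which is the reusability property the abstract emphasizes.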



Published In

FDG '21: Proceedings of the 16th International Conference on the Foundations of Digital Games
August 2021, 534 pages

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. computer game
  2. design pattern
  3. education
  4. reinforcement learning
  5. visual design

Qualifiers

  • Short-paper
  • Research
  • Refereed limited

Conference

FDG'21

Acceptance Rates

Overall Acceptance Rate: 152 of 415 submissions, 37%

