ABSTRACT
Usertesting is commonly employed in games user research (GUR) to understand the experience of players interacting with digital games. However, recruitment and testing with human users can be laborious and resource-intensive, particularly for independent developers. To help mitigate these obstacles, we are developing a framework for simulated testing sessions with agents driven by artificial intelligence (AI). Specifically, we aim to imitate the navigation of human players in a virtual world. By mimicking the tendency of users to wander, explore, become lost, and so on, these agents may be used to identify basic issues with a game's world and level design, enabling informed iteration earlier in the development process. Here, we detail our progress on configurable agent navigation and simple visualization of simulated data. Ultimately, we hope to provide a basis for a simulation-driven usability testing tool for games.
Artificial Playfulness: A Tool for Automated Agent-Based Playtesting