
Learning believable game agents using sensor noise and action histogram

  • Regular research paper
  • Published in: Memetic Computing

Abstract

A believable game agent helps players immerse themselves in the game world and maintain their suspension of disbelief, thereby making the game more enjoyable and satisfying. This paper explores two main ideas for acquiring believable reactionary behaviours in game agents by learning low-level human tendencies in pressing input keys. First, sensor noise is introduced to simulate errors in human judgment, and its parameters are optimized through the evolution of the game agent. Second, human reactionary behaviour tendencies are modeled indirectly by using input action histograms as optimization objectives. Two types of histograms are explored: the action histogram and the action sequence histogram. The resulting game agents with evolved sensor noise improve their believability without compromising gaming performance in training, and demonstrate good generalization on four previously unseen test tracks. In a user study involving 58 respondents, the same game agents are also judged more believable than one evolved for performance alone.
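The full method is described in the paper itself; the two ideas named in the abstract can nonetheless be sketched in a few lines. Everything below is an illustrative assumption, not the authors' implementation: Gaussian noise with per-sensor standard deviations standing in for judgment error, a normalized key-press histogram, an action-sequence (pair) histogram that also captures ordering, and an L1 distance between agent and human histograms serving as the believability objective.

```python
import numpy as np

def noisy_sensors(readings, sigma, rng):
    # Simulate human judgment error by perturbing each sensor reading
    # with zero-mean Gaussian noise; sigma is a per-sensor parameter
    # that an evolutionary algorithm could tune (an assumption here).
    return readings + rng.normal(0.0, sigma)

def action_histogram(actions, n_actions):
    # Normalized frequency of each discrete input action (key press).
    counts = np.bincount(actions, minlength=n_actions).astype(float)
    return counts / counts.sum()

def action_sequence_histogram(actions, length=2):
    # Normalized frequency of consecutive action subsequences
    # (here pairs), capturing ordering as well as frequency.
    hist = {}
    for i in range(len(actions) - length + 1):
        key = tuple(actions[i:i + length])
        hist[key] = hist.get(key, 0) + 1
    total = sum(hist.values())
    return {k: v / total for k, v in hist.items()}

def histogram_objective(agent_actions, human_actions, n_actions):
    # Distance between the agent's and a human player's action
    # histograms; minimizing this alongside a performance objective
    # pushes the agent's key-press statistics toward human-like ones
    # (the L1 distance is an assumption, not the paper's metric).
    h_agent = action_histogram(agent_actions, n_actions)
    h_human = action_histogram(human_actions, n_actions)
    return float(np.abs(h_agent - h_human).sum())
```

In a multiobjective setup such as the NSGA-II style evolution common in this literature, `histogram_objective` would be one objective and in-game performance (e.g. lap time) another, so believability need not trade off against performance.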



Acknowledgments

This work was supported by the Singapore Ministry of Education Academic Research Fund Tier 1.

Author information


Corresponding author

Correspondence to Kay Chen Tan.


About this article


Cite this article

Tan, C.H., Tan, K.C. & Shim, V.A. Learning believable game agents using sensor noise and action histogram. Memetic Comp. 6, 215–232 (2014). https://doi.org/10.1007/s12293-014-0142-x
