
Machine learning in digital games: a survey

Published in Artificial Intelligence Review.

Abstract

Artificial intelligence for digital games constitutes the implementation of a set of algorithms and techniques from both traditional and modern artificial intelligence in order to provide solutions to a range of game-dependent problems. However, the majority of current approaches lead to predefined, static and predictable game agent responses, with no ability to adjust during game-play to the behaviour or playing style of the player. Machine learning techniques provide a way to improve the behavioural dynamics of computer-controlled game agents by facilitating the automated generation and selection of behaviours, thus enhancing the capabilities of digital game artificial intelligence and providing the opportunity to create more engaging and entertaining game-play experiences. This paper surveys the current state of academic machine learning research for digital game environments, focusing on techniques from neural networks, evolutionary computation and reinforcement learning for game agent control.
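To make the reinforcement learning idea mentioned above concrete, the following is a minimal, illustrative sketch (not taken from the survey) of tabular Q-learning applied to a toy game-agent problem: the agent observes a player's last move and learns which counter-move to select. All names (`train_agent`, `q_table`, the reward scheme) are hypothetical choices for this example.

```python
import random

ACTIONS = [0, 1]                  # the agent's two possible counter-moves
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def train_agent(player_moves, seed=0):
    """Return a Q-table mapping last observed player move -> action values."""
    rng = random.Random(seed)
    q = {0: [0.0, 0.0], 1: [0.0, 0.0]}
    for t in range(len(player_moves) - 1):
        state = player_moves[t]                  # last observed player move
        # epsilon-greedy action selection
        if rng.random() < EPSILON:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[state][a])
        next_state = player_moves[t + 1]
        # reward the agent for countering the player's next move
        reward = 1.0 if action == next_state else -1.0
        # standard Q-learning update rule
        q[state][action] += ALPHA * (
            reward + GAMMA * max(q[next_state]) - q[state][action]
        )
    return q

# Against a player who always repeats move 1, the agent learns that
# action 1 (the counter-move) is the better response after seeing move 1.
q = train_agent([1] * 200)
assert q[1][1] > q[1][0]
```

The point of such an approach, as the abstract notes, is that the agent's response is learned from the player's observed behaviour during play rather than scripted in advance.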



Author information


Corresponding author

Correspondence to Leo Galway.


Cite this article

Galway, L., Charles, D. & Black, M. Machine learning in digital games: a survey. Artif Intell Rev 29, 123–161 (2008). https://doi.org/10.1007/s10462-009-9112-y
