
Learning Human-Like Opponent Behavior for Interactive Computer Games

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2781)

Abstract

Compared to their ancestors in the early 1970s, present-day computer games are of incredible complexity and show magnificent graphical performance. However, in programming intelligent opponents, the game industry still applies techniques developed some 30 years ago. In this paper, we investigate whether opponent programming can be treated as a problem of behavior learning. To this end, we assume the behavior of game characters to be a function that maps the current game state onto a reaction. We will show that neural network architectures are well suited to learn such functions, and by means of a popular commercial game we demonstrate that agent behaviors can be learned from observation.
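To make the setting concrete, the sketch below illustrates the general idea described in the abstract: record (game state, reaction) pairs while observing a human player and fit a small feed-forward network to that mapping by supervised learning. It is a minimal illustration only, not the authors' implementation; the state features, action set, network size, and the synthetic "recorded" data are all assumptions.

```python
# Minimal sketch (assumed setup, not the paper's architecture): treat opponent
# behavior as a function from game state to reaction and fit it with a small
# feed-forward network trained on observed (state, action) pairs.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 6   # e.g. relative enemy position, own health, ammo ... (assumed features)
N_ACTIONS = 4   # e.g. advance, strafe, shoot, retreat ... (assumed action set)
HIDDEN = 32

# Stand-in for states and reactions recorded while watching a human play.
X = rng.normal(size=(2000, STATE_DIM))
y = (X @ rng.normal(size=(STATE_DIM, N_ACTIONS))).argmax(axis=1)  # synthetic "human" policy
Y = np.eye(N_ACTIONS)[y]                                          # one-hot targets

# One-hidden-layer MLP: state -> tanh hidden layer -> softmax over reactions.
W1 = rng.normal(scale=0.1, size=(STATE_DIM, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(HIDDEN, N_ACTIONS)); b2 = np.zeros(N_ACTIONS)

def forward(S):
    H = np.tanh(S @ W1 + b1)
    logits = H @ W2 + b2
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    return H, P / P.sum(axis=1, keepdims=True)

lr = 0.1
for epoch in range(200):
    H, P = forward(X)
    dlogits = (P - Y) / len(X)          # softmax + cross-entropy gradient
    dW2 = H.T @ dlogits; db2 = dlogits.sum(axis=0)
    dH = dlogits @ W2.T * (1 - H**2)    # backprop through tanh
    dW1 = X.T @ dH; db1 = dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# At play time the trained network acts as the opponent: map the current
# game state onto the most probable reaction.
_, probs = forward(X[:1])
print("chosen reaction:", probs.argmax(axis=1)[0])
```

In this reading, the learned mapping replaces hand-scripted opponent rules: whatever the human demonstrator did in a given situation becomes the reaction the network reproduces in similar game states.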




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bauckhage, C., Thurau, C., Sagerer, G. (2003). Learning Human-Like Opponent Behavior for Interactive Computer Games. In: Michaelis, B., Krell, G. (eds) Pattern Recognition. DAGM 2003. Lecture Notes in Computer Science, vol 2781. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-45243-0_20


  • DOI: https://doi.org/10.1007/978-3-540-45243-0_20

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40861-1

  • Online ISBN: 978-3-540-45243-0

  • eBook Packages: Springer Book Archive
