
Intelligent virtual agents keeping watch in the battlefield

  • Original Article

Abstract

One of the first areas where virtual reality found a practical application was military training. Two fairly obvious reasons have driven the military to explore and employ this kind of technique in their training: to reduce exposure to hazards and to increase stealth. Many aspects of combat operations are very hazardous, and they become even more dangerous as the combatant seeks to improve his performance. Some smart weapons are autonomous, while others are remotely controlled after they are launched. This allows the weapon controller to launch the weapon and immediately seek cover, thus decreasing his exposure to return fire. Before launching a weapon, the person who controls that weapon must acquire and perceive as much information as he can, not only from the environment but also from the people who inhabit that environment. Intelligent virtual agents (IVAs) are used in a wide variety of simulation environments, especially to simulate realistic situations such as high-fidelity virtual environments (VEs) for military training that allow thousands of agents to interact in battlefield scenarios. In this paper, we propose a perceptual model that seeks to introduce more coherence between IVA perception and human perception, increasing the psychological “coherence” between real life and the VE experience. Agents lacking such a perceptual model could react in an unrealistic way, hearing or seeing things that are too far away or hidden behind other objects. The perceptual model we propose in this paper introduces human limitations into the agent’s perception with the aim of reflecting human perception.
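The full model is developed in the body of the article; as a rough illustration of the kind of limitation the abstract describes, the sketch below filters an agent’s percepts by sight range, field of view, and occlusion, and by distance-attenuated loudness for hearing. All names and parameter values here (view_distance, fov, hearing_threshold_db, the inverse-square 6 dB-per-doubling falloff) are illustrative assumptions, not the authors’ actual model.

```python
import math
from dataclasses import dataclass


@dataclass
class Stimulus:
    x: float
    y: float
    loudness_db: float = 0.0   # sound level at 1 m, for auditory stimuli
    occluded: bool = False     # True if a solid object blocks the line of sight


@dataclass
class Agent:
    x: float
    y: float
    heading: float                        # facing direction, radians
    view_distance: float = 50.0           # assumed maximum sight range (m)
    fov: float = math.radians(120)        # assumed horizontal field of view
    hearing_threshold_db: float = 10.0    # assumed quietest noticeable level

    def can_see(self, s: Stimulus) -> bool:
        """Sight is limited by distance, occlusion, and field of view."""
        dx, dy = s.x - self.x, s.y - self.y
        if math.hypot(dx, dy) > self.view_distance:
            return False                  # too far away
        if s.occluded:
            return False                  # hidden behind another object
        bearing = math.atan2(dy, dx)
        # Signed angle between bearing and heading, wrapped into [-pi, pi)
        off_axis = abs((bearing - self.heading + math.pi) % (2 * math.pi) - math.pi)
        return off_axis <= self.fov / 2   # inside the visual cone

    def can_hear(self, s: Stimulus) -> bool:
        """Hearing falls off with distance: ~6 dB loss per doubling of range."""
        d = max(math.hypot(s.x - self.x, s.y - self.y), 1.0)
        level_at_agent = s.loudness_db - 20 * math.log10(d)
        return level_at_agent >= self.hearing_threshold_db


# Example: a shout (~70 dB at 1 m) 40 m away, behind a wall
agent = Agent(x=0.0, y=0.0, heading=0.0)
noise = Stimulus(x=40.0, y=0.0, loudness_db=70.0, occluded=True)
print(agent.can_see(noise))   # False: line of sight is blocked
print(agent.can_hear(noise))  # True: 70 - 20*log10(40) ≈ 38 dB > threshold
```

An agent gated this way ignores the sight of the hidden source but still reacts to its sound, which is the kind of human-like behaviour the abstract argues a perceptual model should produce.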



Acknowledgements

The work presented in this paper was supported by the Communication Research Group (CRG), led by Steve Benford and Chris Greenhalgh at the School of Computer Science and Information Technology, University of Nottingham, UK.

Author information


Corresponding author

Correspondence to Pilar Herrero.


About this article

Cite this article

Herrero, P., De Antonio, A. Intelligent virtual agents keeping watch in the battlefield. Virtual Reality 8, 185–193 (2005). https://doi.org/10.1007/s10055-004-0148-7
