Abstract
One of the first practical applications of virtual reality was military training. Two fairly obvious motivations have driven the military to explore and employ this kind of technique in its training: to reduce exposure to hazards and to increase stealth. Many aspects of combat operations are extremely hazardous, and they become even more dangerous when a combatant seeks to improve his or her performance. Some smart weapons are autonomous, while others are remotely controlled after launch; this allows the shooter to launch the weapon and immediately seek cover, decreasing exposure to return fire. Before launching a weapon, the person controlling it must acquire and perceive as much information as possible, not only about the environment but also about the people who inhabit it. Intelligent virtual agents (IVAs) are used in a wide variety of simulation environments, especially to simulate realistic situations such as high-fidelity virtual environments (VEs) for military training that allow thousands of agents to interact in battlefield scenarios. In this paper, we propose a perceptual model that seeks to bring IVA perception closer to human perception, increasing the psychological coherence between real life and the VE experience. Agents lacking such a perceptual model may react unrealistically, hearing or seeing things that are too far away or hidden behind other objects. The perceptual model we propose introduces human limitations into the agent's perceptual model with the aim of reflecting human perception.
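To illustrate the kind of human limitations such a perceptual model imposes, the following sketch filters what an agent can see or hear by distance, field of view, and occlusion. This is a hypothetical minimal example, not the authors' implementation; all names, signatures, and thresholds are assumptions for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class Agent:
    x: float
    y: float
    heading: float        # radians, direction the agent faces
    sight_range: float    # max distance at which objects are visible
    fov: float            # total field of view, in radians
    hearing_range: float  # max distance at which sounds are audible

def can_see(agent, target_x, target_y, occluded=False):
    """Visibility test with three human-like limits: occlusion, range, field of view."""
    if occluded:
        return False
    dx, dy = target_x - agent.x, target_y - agent.y
    distance = math.hypot(dx, dy)
    if distance > agent.sight_range:
        return False
    # Smallest angle between the agent's heading and the direction to the target
    angle = abs((math.atan2(dy, dx) - agent.heading + math.pi) % (2 * math.pi) - math.pi)
    return angle <= agent.fov / 2

def can_hear(agent, source_x, source_y, loudness=1.0):
    """Audibility test: hearing is omnidirectional but attenuates with distance."""
    distance = math.hypot(source_x - agent.x, source_y - agent.y)
    return distance <= agent.hearing_range * loudness

a = Agent(x=0, y=0, heading=0.0, sight_range=50.0,
          fov=math.radians(120), hearing_range=30.0)
print(can_see(a, 40, 0))                 # in range, straight ahead
print(can_see(a, 40, 0, occluded=True))  # hidden behind another object
print(can_see(a, 0, 40))                 # 90 degrees off-axis, outside the FOV
print(can_hear(a, 0, 40))                # beyond hearing range
```

An agent whose sensing is filtered this way will ignore a sound source 40 m away or a target hidden behind cover, rather than reacting with the omniscience of an unconstrained agent.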
Acknowledgements
The work presented in this paper was supported by the Communication Research Group (CRG), led by Steve Benford and Chris Greenhalgh at the School of Computer Science and Information Technology, University of Nottingham, UK.
Cite this article
Herrero, P., Antonio, A.d. Intelligent virtual agents keeping watch in the battlefield. Virtual Reality 8, 185–193 (2005). https://doi.org/10.1007/s10055-004-0148-7