
Where to Look? Automating Attending Behaviors of Virtual Human Characters

Published in: Autonomous Agents and Multi-Agent Systems

Abstract

This research proposes a computational framework for generating visual attending behavior in an embodied simulated human agent. These behaviors directly control eye and head motions and guide other actions such as locomotion and reach. The implementation of these concepts, referred to as the AVA, draws on empirical and qualitative observations from psychology, human factors, and computer vision. Deliberate behaviors, the analogs of scanpaths in visual psychology, compete with involuntary attention capture and lapses into idling or free viewing. Implementing this framework yields several insights: a defined set of parameters that affect the observable effects of attention, a defined vocabulary of looking behaviors for particular motor and cognitive activities, a defined hierarchy of three levels of eye behavior (endogenous, exogenous, and idling), and a proposed method for how these levels interact.
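
To make the three-level interaction concrete, the sketch below shows one plausible arbitration scheme in Python. It is a minimal illustration only: the class, method, field, and threshold names are assumptions introduced here, not the paper's AVA implementation, which parameterizes attention far more richly.

```python
# Minimal sketch (not the AVA implementation) of a three-level gaze arbiter:
# exogenous capture pre-empts endogenous (task-driven) looking, which in turn
# pre-empts idling/free viewing. All names and thresholds are illustrative.

from dataclasses import dataclass, field
from typing import List, Tuple
import random

Point3 = Tuple[float, float, float]

@dataclass
class Stimulus:
    position: Point3
    salience: float        # e.g. an abrupt onset or fast motion scores high

@dataclass
class GazeArbiter:
    capture_threshold: float = 0.8                         # illustrative cutoff
    scanpath: List[Point3] = field(default_factory=list)   # endogenous targets
    _scan_index: int = 0

    def next_gaze_target(self, stimuli: List[Stimulus]) -> Point3:
        # 1. Exogenous: involuntary capture by a sufficiently salient event.
        salient = [s for s in stimuli if s.salience >= self.capture_threshold]
        if salient:
            return max(salient, key=lambda s: s.salience).position

        # 2. Endogenous: deliberate, task-driven scanpath targets.
        if self._scan_index < len(self.scanpath):
            target = self.scanpath[self._scan_index]
            self._scan_index += 1
            return target

        # 3. Idling / free viewing: wander within a neutral region.
        return (random.uniform(-1, 1), random.uniform(-0.5, 0.5), 5.0)

# Usage: poll the arbiter once per animation tick; eye and head controllers
# would then orient toward the returned target.
arbiter = GazeArbiter(scanpath=[(0.0, 1.6, 2.0), (1.0, 1.5, 2.5)])
print(arbiter.next_gaze_target([Stimulus((3.0, 1.0, 1.0), salience=0.9)]))
print(arbiter.next_gaze_target([]))
```

The fixed priority order here is only one design choice; the framework described in the paper also models competition and lapses between these levels rather than strict pre-emption.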






Cite this article

Khullar, S.C., Badler, N.I. Where to Look? Automating Attending Behaviors of Virtual Human Characters. Autonomous Agents and Multi-Agent Systems 4, 9–23 (2001). https://doi.org/10.1023/A:1010010528443
