Research article
DOI: 10.1145/1643928.1643973

A saliency-based method of simulating visual attention in virtual scenes

Published: 18 November 2009

Abstract

Complex interactions occur in virtual reality systems, requiring next-generation attention models to produce believable virtual human animations. This paper presents a saliency model that is neither domain- nor task-specific and that is used to animate the gaze of virtual characters. It addresses a critical question: what types of saliency attract attention in virtual environments, and how can they be weighted to drive an avatar's gaze? Saliency effects were measured as a function of their total frequency, and scores were then generated for each object in the field of view within each frame to determine the most salient object in the virtual environment. The paper compares the resulting saliency gaze model against three alternatives: tracked gaze, in which avatars' eyes are controlled by head-mounted mobile eye-trackers worn by human subjects; a random gaze model that uses head orientation to inform saccade generation; and a static gaze model with non-moving, centred eyes. Results from the evaluation experiment and graphical analysis demonstrate a promising saliency gaze model that is not only believable and realistic but also target-relevant and adaptable to varying tasks. Notably, the saliency model uses no prior knowledge of the content or description of the virtual scene.
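
To make the selection step concrete, the following is a minimal sketch in Python of per-frame, per-object saliency scoring as the abstract describes it. The object type, feature names, and weight values (`Obj`, `motion`, `proximity`, `eccentricity`, `WEIGHTS`) are hypothetical placeholders; the paper's actual saliency effects and its frequency-derived weighting are not reproduced here.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Obj:
    """A scene object visible in the avatar's field of view (hypothetical)."""
    name: str
    # Per-frame saliency feature values in [0, 1]; placeholder feature names.
    features: Dict[str, float] = field(default_factory=dict)

# Hypothetical weights. The paper derives its weighting from the measured
# frequency of each saliency effect; these numbers are illustrative only.
WEIGHTS = {"motion": 0.5, "proximity": 0.3, "eccentricity": 0.2}

def saliency(obj: Obj) -> float:
    """Weighted sum of an object's feature values for the current frame."""
    return sum(w * obj.features.get(name, 0.0) for name, w in WEIGHTS.items())

def gaze_target(objects_in_fov: List[Obj]) -> Optional[Obj]:
    """Return the most salient visible object this frame, or None if empty."""
    return max(objects_in_fov, key=saliency, default=None)

# Example frame: score two visible objects and pick the gaze target.
frame = [
    Obj("ball", {"motion": 0.9, "proximity": 0.4}),
    Obj("chair", {"proximity": 0.8, "eccentricity": 0.6}),
]
target = gaze_target(frame)
print(target.name if target else "nothing to look at")
```

In a pipeline like the one the abstract outlines, this selection would run once per rendered frame, with the returned object serving as the gaze target while saccade and fixation timing are handled by the animation layer.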

      Published In

      VRST '09: Proceedings of the 16th ACM Symposium on Virtual Reality Software and Technology
      November 2009
      277 pages
      ISBN: 9781605588698
      DOI: 10.1145/1643928

      Publisher

      Association for Computing Machinery, New York, NY, United States

      Author Tags

      1. behavioural realism
      2. character animation
      3. facial animation
      4. gaze modeling
      5. target saliency
      6. visual attention

      Conference

      VRST '09

      Acceptance Rates

      Overall acceptance rate: 66 of 254 submissions, 26%

      Cited By

      • (2024) A Virtual Reality Scene Taxonomy: Identifying and Designing Accessible Scene-Viewing Techniques. ACM Transactions on Computer-Human Interaction 31(2), 1-44. DOI: 10.1145/3635142. Online publication date: 5-Feb-2024.
      • (2024) Real-Time Multi-Map Saliency-Driven Gaze Behavior for Non-Conversational Characters. IEEE Transactions on Visualization and Computer Graphics 30(7), 3871-3883. DOI: 10.1109/TVCG.2023.3244679. Online publication date: Jul-2024.
      • (2023) Real-Time Conversational Gaze Synthesis for Avatars. Proceedings of the 16th ACM SIGGRAPH Conference on Motion, Interaction and Games, 1-7. DOI: 10.1145/3623264.3624446. Online publication date: 15-Nov-2023.
      • (2023) Mapping and Recognition of Facial Expressions on Another Person's Look-Alike Avatars. SIGGRAPH Asia 2023 Technical Communications, 1-4. DOI: 10.1145/3610543.3626159. Online publication date: 28-Nov-2023.
      • (2022) The Eye in Extended Reality: A Survey on Gaze Interaction and Eye Tracking in Head-worn Extended Reality. ACM Computing Surveys 55(3), 1-39. DOI: 10.1145/3491207. Online publication date: 25-Mar-2022.
      • (2022) Automatic estimation of parametric saliency maps (PSMs) for autonomous pedestrians. Computers & Graphics 104, 86-94. DOI: 10.1016/j.cag.2022.03.010. Online publication date: May-2022.
      • (2022) Learning based versus heuristic based: A comparative analysis of visual saliency prediction in immersive virtual reality. Computer Animation and Virtual Worlds 33(6). DOI: 10.1002/cav.2106. Online publication date: 24-Jul-2022.
      • (2021) PSM: Parametric Saliency Maps for Autonomous Pedestrians. Proceedings of the 14th ACM SIGGRAPH Conference on Motion, Interaction and Games, 1-7. DOI: 10.1145/3487983.3488299. Online publication date: 10-Nov-2021.
      • (2021) Exploring How Saliency Affects Attention in Virtual Reality. Human-Computer Interaction – INTERACT 2021, 147-155. DOI: 10.1007/978-3-030-85607-6_10. Online publication date: 27-Aug-2021.
      • (2018) A group-based approach for gaze behavior of virtual crowds incorporating personalities. Computer Animation and Virtual Worlds 29(5). DOI: 10.1002/cav.1806. Online publication date: 17-Apr-2018.
