
Saccade landing position prediction for gaze-contingent rendering

Published: 20 July 2017

Abstract

Gaze-contingent rendering shows promise in improving perceived quality by better matching image quality to the requirements of the human visual system. For example, information about fixation allows rendering quality to be reduced in peripheral vision, and the saved resources can be used to improve quality in the foveal region. Gaze-contingent rendering can also compensate for certain limitations of display devices, such as reduced dynamic range or lack of accommodation cues. Despite this potential and the recent drop in eye-tracker prices, the adoption of such solutions is hampered by system latency, which leads to a mismatch between image quality and the actual gaze location. This is especially apparent during fast saccadic movements, when the information about gaze location is significantly delayed and the quality mismatch can be noticed. To address this problem, we suggest a new way of updating images in gaze-contingent rendering during saccades. Instead of rendering according to the current gaze position, our technique predicts where the saccade is likely to land and provides an image for the new fixation location as soon as the prediction is available. While the quality mismatch during the saccade remains unnoticed due to saccadic suppression, a correct image for the new fixation is provided before the fixation is established. This paper describes the derivation of a model for predicting saccade landing positions and demonstrates how it can be used in gaze-contingent rendering to reduce the influence of system latency on perceived quality. The technique is validated in a series of experiments for various combinations of display frame rate and eye-tracker sampling rate.
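As a rough illustration of this update scheme, the Python sketch below shows a toy gaze-contingent loop that switches from tracking the measured gaze to rendering for a predicted landing point once a saccade is detected. It is a minimal sketch under loudly stated assumptions: the 130 deg/s velocity threshold, the fixed 3x amplitude extrapolation, and all function names (is_saccade, predict_landing, gaze_target) are illustrative placeholders, not the prediction model derived in the paper.

import numpy as np

# Toy sketch of saccade landing prediction for gaze-contingent rendering.
# The threshold and scale factor below are illustrative assumptions only.
SACCADE_VEL_THRESHOLD = 130.0  # deg/s; a common velocity-based (I-VT) cutoff

def is_saccade(velocity):
    # Flag a sample as saccadic when gaze speed exceeds the threshold.
    return velocity > SACCADE_VEL_THRESHOLD

def predict_landing(samples):
    # Extrapolate the landing position from the first few saccade samples
    # (an (n, 2) array of gaze positions in visual degrees). This toy
    # predictor assumes the eye keeps moving along its initial direction
    # and that the displacement observed so far covers about one third of
    # the final amplitude; the paper derives a proper model instead.
    displacement = samples[-1] - samples[0]
    traveled = float(np.linalg.norm(displacement))
    if traveled == 0.0:
        return samples[-1].copy()
    direction = displacement / traveled
    return samples[0] + (3.0 * traveled) * direction  # assumed 3x scale

def gaze_target(gaze, velocity, recent_samples):
    # During a saccade, render for the predicted landing point: the
    # mismatch is masked by saccadic suppression, and a correct image is
    # ready before the new fixation is established. Otherwise track gaze.
    if is_saccade(velocity):
        return predict_landing(recent_samples)
    return gaze

if __name__ == "__main__":
    # Synthetic 500 Hz samples from the start of a rightward saccade:
    # 2 ms steps at 400 deg/s, so 3.2 degrees covered over five samples.
    t = np.arange(5) / 500.0
    recent = np.outer(400.0 * t, np.array([1.0, 0.0]))
    print(gaze_target(recent[-1], 400.0, recent))  # -> [9.6 0.], ahead of gaze

In a real system the prediction would be refreshed on every eye-tracker sample and passed to the renderer as the foveation center, which is where the interplay of display frame rate and tracker sampling rate examined in the paper comes in.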

Supplementary Material

ZIP File (a50-arabadzhiyskaa.zip)
Supplemental files.
MP4 File (papers-0281.mp4)



Information

Published In

ACM Transactions on Graphics, Volume 36, Issue 4 (August 2017), 2155 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/3072959

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 20 July 2017
Published in TOG Volume 36, Issue 4


Author Tags

1. gaze-contingent rendering
2. new display technology
3. perception
4. saccade prediction
5. saccadic suppression
6. virtual reality

Qualifiers

• Research-article

Article Metrics

• Downloads (last 12 months): 106
• Downloads (last 6 weeks): 16

Reflects downloads up to 13 Feb 2025.

Cited By

• (2025) Oculomotor Plant Mathematical Model in Kalman Filter Form With Peak Velocity-Based Neural Pulse for Continuous Gaze Prediction. IEEE Access 13, 11544-11559. DOI: 10.1109/ACCESS.2025.3528104
• (2024) Assessing the data quality of AdHawk MindLink eye-tracking glasses. Behavior Research Methods 56(6), 5771-5787. DOI: 10.3758/s13428-023-02310-2
• (2024) Theia: Gaze-driven and Perception-aware Volumetric Content Delivery for Mixed Reality Headsets. Proc. 22nd Annual International Conference on Mobile Systems, Applications and Services (MobiSys), 70-84. DOI: 10.1145/3643832.3661858
• (2024) Accelerating Saccadic Response through Spatial and Temporal Cross-Modal Misalignments. ACM SIGGRAPH 2024 Conference Papers, 1-12. DOI: 10.1145/3641519.3657432
• (2024) Saccade-Contingent Rendering. ACM SIGGRAPH 2024 Conference Papers, 1-9. DOI: 10.1145/3641519.3657420
• (2024) Measuring and Predicting Multisensory Reaction Latency: A Probabilistic Model for Visual-Auditory Integration. IEEE Transactions on Visualization and Computer Graphics 30(11), 7364-7374. DOI: 10.1109/TVCG.2024.3456185
• (2024) tSPM-Net: A probabilistic spatio-temporal approach for scanpath prediction. Computers & Graphics 122, 103983. DOI: 10.1016/j.cag.2024.103983
• (2024) Eye Tracking in Virtual Reality. Encyclopedia of Computer Graphics and Games, 681-688. DOI: 10.1007/978-3-031-23161-2_170
• (2023) Enriching Telepresence with Semantic-driven Holographic Communication. Proc. 22nd ACM Workshop on Hot Topics in Networks (HotNets), 147-156. DOI: 10.1145/3626111.3628184
• (2023) The Shortest Route is Not Always the Fastest: Probability-Modeled Stereoscopic Eye Movement Completion Time in VR. ACM Transactions on Graphics 42(6), 1-14. DOI: 10.1145/3618334
