DOI: 10.1145/3317959.3321488

GeoGCD: improved visual search via gaze-contingent display

Published: 25 June 2019

Abstract

Gaze-Contingent Displays (GCDs) can improve visual search performance on large displays. GCDs, a Level-of-Detail (LOD) management technique, discard redundant peripheral detail using models of human visual perception. Models of depth and contrast perception (e.g., depth-of-field and foveation) have often been studied to address the trade-off between the computational and perceptual benefits of GCDs, but color perception models, and combinations of multiple models, have received less attention. In this paper, we present GeoGCD, which uses individual contrast, color, and depth perception models, and their combination, to render scenes without perceptible latency. As a proof of concept, we present a three-stage user evaluation built upon geographic image interpretation tasks. GeoGCD neither impairs users' visual search performance nor affects their display preferences; on the contrary, in some cases it can significantly improve users' performance.
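The paper's own rendering pipeline is not reproduced here, but the core GCD idea the abstract describes (keep full detail near the gaze point, progressively discard peripheral detail) can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' implementation; all function names, parameters, and thresholds are hypothetical.

```python
import math


def eccentricity_deg(gaze_px, point_px, screen_dist_mm, px_pitch_mm):
    """Angular distance (degrees) between the gaze point and a screen
    location, given viewing distance and pixel pitch (illustrative)."""
    dx = (point_px[0] - gaze_px[0]) * px_pitch_mm
    dy = (point_px[1] - gaze_px[1]) * px_pitch_mm
    return math.degrees(math.atan2(math.hypot(dx, dy), screen_dist_mm))


def lod_level(ecc_deg, foveal_radius_deg=2.0, n_levels=4):
    """Map eccentricity to a mipmap-style LOD level: level 0 (full
    detail) inside a nominal foveal radius, coarser levels further out.
    The 2-degree radius and 4 levels are placeholder values."""
    if ecc_deg <= foveal_radius_deg:
        return 0
    return min(n_levels - 1, int(math.log2(ecc_deg / foveal_radius_deg)) + 1)
```

A renderer would evaluate something like `lod_level` per tile or fragment each frame, which is why the abstract's "without perceptible latency" constraint matters: the whole loop must complete within the eye tracker's update interval.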

Supplementary Material

ZIP File (a84-bektas.zip)
Supplemental files.



Published In

ETRA '19: Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications
June 2019
623 pages
ISBN:9781450367097
DOI:10.1145/3314111
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. color
  2. contrast
  3. depth perception
  4. depth-of-field simulation
  5. gaze-contingent displays
  6. visual crowding
  7. visual search

Qualifiers

  • Research-article

Funding Sources

  • the Swiss National Science Foundation

Conference

ETRA '19

Acceptance Rates

Overall Acceptance Rate 69 of 137 submissions, 50%

Bibliometrics & Citations

Article Metrics

  • Downloads (last 12 months): 25
  • Downloads (last 6 weeks): 7
Reflects downloads up to 12 Feb 2025

Cited By

  • (2024) NeighboAR: Efficient Object Retrieval using Proximity- and Gaze-based Object Grouping with an AR System. Proceedings of the ACM on Human-Computer Interaction, 8(ETRA), 1–19. DOI: 10.1145/3655599. Online: 28 May 2024.
  • (2024) ShoppingCoach: Using Diminished Reality to Prevent Unhealthy Food Choices in an Offline Supermarket Scenario. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1–8. DOI: 10.1145/3613905.3650795. Online: 11 May 2024.
  • (2024) Making maps & visualizations for mobile devices: A research agenda for mobile-first and responsive cartographic design. Journal of Location Based Services, 1–71. DOI: 10.1080/17489725.2023.2251423. Online: 3 Apr 2024.
  • (2024) Evaluating the performance of gaze interaction for map target selection. Cartography and Geographic Information Science, 52(1), 82–102. DOI: 10.1080/15230406.2024.2335331. Online: 9 Apr 2024.
  • (2024) Gaze-enabled activity recognition for augmented reality feedback. Computers and Graphics, 119(C). DOI: 10.1016/j.cag.2024.103909. Online: 1 Apr 2024.
  • (2023) Evaluating the Usability of a Gaze-Adaptive Approach for Identifying and Comparing Raster Values between Multilayers. ISPRS International Journal of Geo-Information, 12(10), 412. DOI: 10.3390/ijgi12100412. Online: 8 Oct 2023.
  • (2023) Eye Tracking-Based Adaptive Displays: A Review of the Recent Literature. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 67(1), 927–932. DOI: 10.1177/21695067231192631. Online: 25 Oct 2023.
  • (2020) Toward A Pervasive Gaze-Contingent Assistance System. ACM Symposium on Eye Tracking Research and Applications, 1–3. DOI: 10.1145/3379157.3391657. Online: 2 Jun 2020.
