DOI: 10.1145/3491102.3501977

Lattice Menu: A Low-Error Gaze-Based Marking Menu Utilizing Target-Assisted Gaze Gestures on a Lattice of Visual Anchors

Published: 29 April 2022

ABSTRACT

We present Lattice Menu, a gaze-based marking menu utilizing a lattice of visual anchors that helps perform accurate gaze pointing for menu item selection. Users who know the location of the desired item can leverage target-assisted gaze gestures for multilevel item selection by looking at visual anchors over the gaze trajectories. Our evaluation showed that Lattice Menu exhibits a considerably low error rate (~1%) and a quick menu selection time (1.3-1.6 s) for expert usage across various menu structures (4 × 4 × 4 and 6 × 6 × 6) and sizes (8, 10 and 12°). In comparison with a traditional gaze-based marking menu that does not utilize visual targets, Lattice Menu showed remarkably (~5 times) fewer menu selection errors for expert usage. In a post-interview, all 12 subjects preferred Lattice Menu, and most subjects (8 out of 12) commented that the provisioning of visual targets facilitated more stable menu selections with reduced eye fatigue.
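To make the interaction concept above concrete, the sketch below shows one way target-assisted gaze gestures over a lattice of visual anchors could be decoded: raw gaze samples are snapped to the nearest anchor within a tolerance, and the ordered sequence of visited anchors represents the multilevel selection path. This is a minimal Python sketch, not the authors' implementation; the lattice layout, the 1.5° snapping tolerance, and the path decoding are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation)
# of decoding a gaze trajectory over a lattice of visual anchors.
from dataclasses import dataclass
from math import hypot

@dataclass(frozen=True)
class Anchor:
    row: int
    col: int
    x: float  # position in degrees of visual angle
    y: float

def build_lattice(rows: int, cols: int, spacing_deg: float) -> list[Anchor]:
    """Lay out a rows x cols lattice of visual anchors centered on (0, 0)."""
    return [
        Anchor(r, c,
               (c - (cols - 1) / 2) * spacing_deg,
               (r - (rows - 1) / 2) * spacing_deg)
        for r in range(rows) for c in range(cols)
    ]

def nearest_anchor(gx: float, gy: float, anchors: list[Anchor],
                   tolerance_deg: float = 1.5) -> Anchor | None:
    """Snap a gaze sample to the closest anchor if it lies within tolerance."""
    best = min(anchors, key=lambda a: hypot(a.x - gx, a.y - gy))
    return best if hypot(best.x - gx, best.y - gy) <= tolerance_deg else None

def anchor_sequence(gaze_samples: list[tuple[float, float]],
                    anchors: list[Anchor]) -> list[Anchor]:
    """Reduce a raw gaze trajectory to the ordered anchors it passed over."""
    visited: list[Anchor] = []
    for gx, gy in gaze_samples:
        a = nearest_anchor(gx, gy, anchors)
        if a is not None and (not visited or visited[-1] != a):
            visited.append(a)
    return visited

if __name__ == "__main__":
    lattice = build_lattice(rows=3, cols=3, spacing_deg=4.0)
    # Synthetic gaze sweep from the center anchor toward the top-right anchor.
    trajectory = [(0.0, 0.0), (2.1, 2.0), (3.9, 4.1)]
    print([(a.row, a.col) for a in anchor_sequence(trajectory, lattice)])
```

In a full menu, the decoded anchor sequence would be mapped to an item in the multilevel hierarchy; how that mapping is defined is left out here, since it depends on the menu structure.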

Supplemental Material

• 3491102.3501977-video-preview.mp4 (MP4, 28 MB)
• 3491102.3501977-talk-video.mp4 (MP4, 20.4 MB)

Published in

      CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
      April 2022
      10459 pages
ISBN: 9781450391573
DOI: 10.1145/3491102

      Copyright © 2022 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Qualifiers

      • research-article
      • Research
      • Refereed limited

Acceptance Rates

Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%
