ABSTRACT
We present Lattice Menu, a gaze-based marking menu that uses a lattice of visual anchors to support accurate gaze pointing for menu item selection. Users who know the location of the desired item can perform target-assisted gaze gestures for multilevel item selection by looking at the visual anchors along the gaze trajectory. Our evaluation showed that Lattice Menu achieves a very low error rate (~1%) and fast menu selection times (1.3-1.6 s) for expert usage across two menu structures (4 × 4 × 4 and 6 × 6 × 6) and three menu sizes (8, 10, and 12°). Compared with a traditional gaze-based marking menu that does not provide visual targets, Lattice Menu produced markedly (~5 times) fewer menu selection errors for expert usage. In a post-interview, all 12 subjects preferred Lattice Menu, and most subjects (8 out of 12) commented that the provision of visual targets enabled more stable menu selections with reduced eye fatigue.
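To make the selection mechanism concrete, the Python sketch below shows one plausible way to decode a target-assisted gaze gesture over an anchor lattice: raw gaze samples are snapped to the nearest anchor, and the ordered sequence of distinct anchors visited after the menu root identifies one item per menu level. This is a minimal sketch under assumed conditions, not the paper's implementation: the grid spacing (`ANCHOR_SPACING_DEG`), the gaze-sample format (degrees of visual angle), and the helpers `snap_to_anchor` and `decode_path` are all illustrative assumptions.

```python
# Minimal sketch of decoding a target-assisted gaze gesture on a lattice of
# visual anchors. Assumptions (not from the paper): gaze samples arrive in
# degrees of visual angle, anchors sit on a regular grid, and the selection
# path is the ordered sequence of distinct anchors the gaze snaps to.

ANCHOR_SPACING_DEG = 4.0  # assumed angular spacing between adjacent anchors


def snap_to_anchor(gx: float, gy: float) -> tuple[int, int]:
    """Snap one gaze sample to the nearest lattice anchor (grid index)."""
    return round(gx / ANCHOR_SPACING_DEG), round(gy / ANCHOR_SPACING_DEG)


def decode_path(samples: list[tuple[float, float]], levels: int = 3):
    """Collapse a gaze trajectory into one anchor per menu level.

    For a 4 x 4 x 4 menu, levels=3: the three distinct anchors visited
    after the menu root identify the selected item.
    """
    path: list[tuple[int, int]] = []
    for gx, gy in samples:
        anchor = snap_to_anchor(gx, gy)
        if not path or anchor != path[-1]:
            path.append(anchor)
    return path[1:levels + 1] if len(path) > levels else None


if __name__ == "__main__":
    # Noisy gaze trajectory passing over three anchors after the root.
    trajectory = [(0.1, -0.2), (3.8, 0.1), (4.2, 3.9), (8.1, 4.0)]
    print(decode_path(trajectory))  # [(1, 0), (1, 1), (2, 1)]
```

Snapping to explicit anchors is what distinguishes this scheme from a stroke-direction decoder: because each segment of the gesture terminates on a visible target, small fixation jitter is absorbed by the nearest-anchor quantization rather than accumulating into a misclassified stroke angle.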