
Dwell Selection with ML-based Intent Prediction Using Only Gaze Data

Published: 07 September 2022

Abstract

We developed a dwell selection system with ML-based prediction of a user's intent to select. Because a user perceives visual information through the eyes, precise prediction of the user's intent is essential to establishing gaze-based interaction. Our system first detects a dwell to roughly screen the user's intent to select and then predicts that intent with an ML-based prediction model. We built the intent prediction model from the results of an experiment with five different gaze-only tasks representing everyday situations. The model achieved an overall area under the receiver operating characteristic (ROC) curve (AUC) of 0.903. Moreover, it performed independently of the user (AUC = 0.898) and of the eye tracker (AUC = 0.880). In a performance evaluation experiment with real interactive situations, our dwell selection method outperformed previously proposed dwell selection methods both qualitatively and quantitatively.
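To make the two-stage design concrete, here is a minimal sketch of the pipeline the abstract describes: a dwell detector that screens raw gaze samples, followed by a binary intent classifier trained on gaze-only features. The thresholds (DWELL_MS, DISPERSION_PX), the feature set, and the RandomForest classifier are all illustrative assumptions, not the authors' actual implementation.

```python
# Sketch of the two-stage pipeline: (1) detect a dwell from raw gaze
# samples, then (2) classify the dwell as an intentional selection with
# a trained ML model. Thresholds, features, and the classifier are
# illustrative assumptions, not the paper's implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

DWELL_MS = 400        # assumed minimum dwell duration
DISPERSION_PX = 40    # assumed maximum gaze dispersion during a dwell

def detect_dwell(ts_ms, xs, ys):
    """Return (start, end) indices of the first dwell, or None.
    A dwell here is a window of at least DWELL_MS whose gaze points stay
    inside a DISPERSION_PX bounding box (a simple I-DT-style rule)."""
    start = 0
    for end in range(len(ts_ms)):
        # shrink the window from the left until the dispersion limit holds
        while (max(xs[start:end + 1]) - min(xs[start:end + 1]) > DISPERSION_PX
               or max(ys[start:end + 1]) - min(ys[start:end + 1]) > DISPERSION_PX):
            start += 1
        if ts_ms[end] - ts_ms[start] >= DWELL_MS:
            return start, end
    return None

def dwell_features(ts_ms, xs, ys):
    """Hypothetical gaze-only features computed over one detected dwell."""
    dt = np.clip(np.diff(ts_ms), 1, None)          # avoid division by zero
    vel = np.hypot(np.diff(xs), np.diff(ys)) / dt  # sample-to-sample speed
    return [ts_ms[-1] - ts_ms[0], np.std(xs), np.std(ys),
            float(vel.mean()), float(vel.max())]

# Training: X holds one feature row per detected dwell; y records whether
# the user actually intended to select (labels from data-collection tasks).
clf = RandomForestClassifier(n_estimators=200, random_state=0)
# clf.fit(X_train, y_train)
# print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```

Evaluating with ROC AUC, as the paper does, keeps the decision threshold on the predicted probability a deployment-time choice rather than a property baked into the model.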




    • Published in

      Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 6, Issue 3
      September 2022
      1612 pages
      EISSN: 2474-9567
      DOI: 10.1145/3563014

      Copyright © 2022 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Qualifiers

      • research-article
      • Research
      • Refereed
