ABSTRACT
Although gaze has been widely studied for mobile interaction, eyelid-based gestures remain relatively understudied and are limited to a few basic gestures (e.g., blinks). In this work, we propose a gesture grammar for constructing both basic and compound eyelid gestures. We present an algorithm that detects nine eyelid gestures in real time on mobile devices and evaluate its performance with 12 participants. Results show that our algorithm recognizes the nine eyelid gestures with 83% and 78% average accuracy using user-dependent and user-independent models, respectively. Further, we design a gesture-mapping scheme that allows users to navigate between and within mobile apps using only eyelid gestures. Moreover, we show how eyelid gestures can enable cross-application and sensitive interactions. Finally, we highlight future research directions.
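The idea of a gesture grammar — basic eyelid gestures as short sequences of eye states, compound gestures as concatenations of basic ones — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the state symbols, gesture names, and exact-match recognizer are all assumptions made for clarity.

```python
# Hypothetical gesture-grammar sketch: a basic eyelid gesture is a short
# sequence of eye states, and a compound gesture concatenates basic ones.
# State symbols and gesture names here are illustrative, not the paper's.
from typing import List, Optional

OPEN, L_CLOSED, R_CLOSED, BOTH_CLOSED = "O", "L", "R", "B"

# Basic gestures as state sequences; each starts and ends with both eyes open.
BASIC = {
    "blink":      [OPEN, BOTH_CLOSED, OPEN],
    "left-wink":  [OPEN, L_CLOSED, OPEN],
    "right-wink": [OPEN, R_CLOSED, OPEN],
}

def compound(*names: str) -> List[str]:
    """Concatenate basic gestures, merging the shared both-open boundary."""
    seq: List[str] = [OPEN]
    for name in names:
        seq += BASIC[name][1:]  # drop the leading OPEN to avoid duplication
    return seq

def recognize(observed: List[str]) -> Optional[str]:
    """Match an observed state sequence against the gesture vocabulary."""
    vocabulary = dict(BASIC)
    vocabulary["double-blink"] = compound("blink", "blink")
    vocabulary["left-then-right-wink"] = compound("left-wink", "right-wink")
    for name, seq in vocabulary.items():
        if observed == seq:
            return name
    return None
```

Under this grammar, the sequence open → both-closed → open → both-closed → open is recognized as a double blink, while an unterminated sequence (one that never returns to both eyes open) matches nothing.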
iWink: Exploring Eyelid Gestures on Mobile Devices