
iWink: Exploring Eyelid Gestures on Mobile Devices

Published: 12 October 2020

ABSTRACT

Although gaze has been widely studied for mobile interactions, eyelid-based gestures are relatively understudied and limited to a few basic gestures (e.g., blink). In this work, we propose a gesture grammar for constructing both basic and compound eyelid gestures. We present an algorithm that detects nine eyelid gestures in real time on mobile devices and evaluate its performance with 12 participants. Results show that our algorithm recognizes the nine eyelid gestures with 83% and 78% average accuracy using user-dependent and user-independent models, respectively. Further, we design a gesture mapping scheme that allows users to navigate between and within mobile apps using only eyelid gestures. Moreover, we show how eyelid gestures can enable cross-application and sensitive interactions. Finally, we highlight future research directions.
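This page carries no code, and the paper's algorithm is described only at a high level here. As a rough, hypothetical sketch of the general idea (not the authors' implementation), the Python below thresholds per-eye openness probabilities, as reported by common mobile face trackers, into a per-frame eyelid state, then collapses runs of states into a basic gesture label. The threshold, frame duration, and blink/hold cutoff are illustrative assumptions; under a grammar like the paper's, compound gestures would be built by sequencing such basic gestures.

# Hypothetical sketch: per-frame eyelid states -> basic gesture label.
from enum import Enum

class EyelidState(Enum):
    BOTH_OPEN = "both open"
    BOTH_CLOSED = "both closed"
    LEFT_CLOSED = "left closed"    # i.e., a left-eye wink
    RIGHT_CLOSED = "right closed"  # i.e., a right-eye wink

def classify_frame(left_open_prob: float, right_open_prob: float,
                   threshold: float = 0.5) -> EyelidState:
    """Threshold per-eye open probabilities into a coarse eyelid state."""
    left_open = left_open_prob >= threshold
    right_open = right_open_prob >= threshold
    if left_open and right_open:
        return EyelidState.BOTH_OPEN
    if not left_open and not right_open:
        return EyelidState.BOTH_CLOSED
    return EyelidState.RIGHT_CLOSED if left_open else EyelidState.LEFT_CLOSED

def label_gesture(states: list, frame_ms: int = 33) -> str:
    """Collapse a per-frame state sequence into one basic gesture label.
    Durations are illustrative placeholders, not the paper's tuned values."""
    # Merge consecutive identical states into (state, duration_ms) segments.
    segments: list = []
    for s in states:
        if segments and segments[-1][0] == s:
            segments[-1] = (s, segments[-1][1] + frame_ms)
        else:
            segments.append((s, frame_ms))
    for state, dur in segments:
        if state == EyelidState.BOTH_CLOSED:
            # A short closed run reads as a blink, a long one as a hold.
            return "blink" if dur < 400 else "close-both-eyes hold"
        if state == EyelidState.LEFT_CLOSED:
            return "left wink"
        if state == EyelidState.RIGHT_CLOSED:
            return "right wink"
    return "no gesture"

# Example: ~100 ms of both eyes closed amid open frames reads as a blink.
frames = [(0.9, 0.9)] * 5 + [(0.1, 0.1)] * 3 + [(0.9, 0.9)] * 5
states = [classify_frame(l, r) for l, r in frames]
print(label_gesture(states))  # -> "blink"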


Published in

HuMA'20: Proceedings of the 1st International Workshop on Human-centric Multimedia Analysis
October 2020, 108 pages
ISBN: 9781450381512
DOI: 10.1145/3422852
Copyright © 2020 ACM

Publisher

Association for Computing Machinery, New York, NY, United States
