DOI: 10.1145/3025453.3025790

Smartphone-Based Gaze Gesture Communication for People with Motor Disabilities

Published: 02 May 2017

Abstract

Current eye-tracking input systems for people with ALS or other motor impairments are expensive, not robust under sunlight, and require frequent re-calibration and substantial, relatively immobile setups. Eye-gaze transfer (e-tran) boards, a low-tech alternative, are challenging to master and offer slow communication rates. To mitigate the drawbacks of these two status-quo approaches, we created GazeSpeak, an eye gesture communication system that runs on a smartphone and is designed to be low-cost, robust, portable, and easy to learn, with higher communication bandwidth than an e-tran board. GazeSpeak interprets eye gestures in real time, decodes them into predicted utterances, and facilitates communication, with different user interfaces for speakers and interpreters. Our evaluations demonstrate that GazeSpeak is robust, achieves good user satisfaction, and provides a speed improvement over an e-tran board; we also identify avenues for further improvement to low-cost, low-effort gaze-based communication technologies.
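
The decoding step described in the abstract, turning ambiguous eye gestures into predicted utterances, is conceptually similar to T9-style predictive text: each gesture selects a group of letters rather than a single letter, and a dictionary disambiguates the gesture sequence into candidate words. The Python sketch below illustrates that idea only; the four-direction gesture alphabet, the letter grouping, and the toy vocabulary are illustrative assumptions, not GazeSpeak's actual design.

```python
# Illustrative T9-style decoding of ambiguous eye-gesture input.
# Assumptions (not from the paper): each gesture (up/down/left/right)
# selects one of four letter groups, similar to the quadrants of an
# e-tran board; GazeSpeak's real grouping and prediction model may differ.

GROUPS = {
    "up":    set("abcdef"),
    "down":  set("ghijkl"),
    "left":  set("mnopqr"),
    "right": set("stuvwxyz"),
}

# Toy stand-in vocabulary; a real system would rank candidates using
# word frequencies from a large corpus.
VOCAB = ["hello", "help", "water", "yes", "no", "thanks"]


def matches(word: str, gestures: list[str]) -> bool:
    """True if `word` is consistent with the gesture sequence entered so far."""
    if len(word) < len(gestures):
        return False
    return all(ch in GROUPS[g] for ch, g in zip(word, gestures))


def candidates(gestures: list[str]) -> list[str]:
    """All vocabulary words whose prefix matches the ambiguous gestures."""
    return [w for w in VOCAB if matches(w, gestures)]


if __name__ == "__main__":
    # 'h' is in the "down" group and 'e' in the "up" group, so two
    # gestures already narrow the prediction to hello/help.
    print(candidates(["down", "up"]))                  # ['hello', 'help']
    print(candidates(["down", "up", "down", "left"]))  # ['help']
```

Because the matching is prefix-based, whole utterances can be predicted before every letter is entered, which is plausibly where the bandwidth gain over letter-by-letter e-tran spelling would come from.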

Supplementary Material

suppl.mov (pn2655-file3.mp4): supplemental video
suppl.mov (pn2655p.mp4): supplemental video

      Published In

      CHI '17: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems
      May 2017
      7138 pages
ISBN: 9781450346559
DOI: 10.1145/3025453

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. accessibility
      2. amyotrophic lateral sclerosis (ALS)
      3. augmentative and alternative communication (AAC)
      4. eye gesture

      Qualifiers

      • Research-article

      Conference

CHI '17

      Acceptance Rates

CHI '17 paper acceptance rate: 600 of 2,400 submissions (25%)
Overall acceptance rate: 6,199 of 26,314 submissions (24%)

      Article Metrics

• Downloads (last 12 months): 106
• Downloads (last 6 weeks): 3
      Reflects downloads up to 13 Feb 2025

      Cited By

• (2024) 40 Years of Eye Typing: Challenges, Gaps, and Emergent Strategies. Proceedings of the ACM on Human-Computer Interaction 8(ETRA), 1–19. DOI: 10.1145/3655596. Online publication date: 28-May-2024.
• (2024) A3C: An Image-Association-Based Computing Device Authentication Framework for People with Upper Extremity Impairments. ACM Transactions on Accessible Computing 17(2), 1–37. DOI: 10.1145/3652522. Online publication date: 28-May-2024.
• (2024) Exploring Potential of Electromyography-based Avatar Operation Using Residual Muscles of ALS Individuals: Case Study on Avatar DJ Performance. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1–7. DOI: 10.1145/3613905.3650857. Online publication date: 11-May-2024.
• (2024) Designing Upper-Body Gesture Interaction with and for People with Spinal Muscular Atrophy in VR. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–19. DOI: 10.1145/3613904.3642884. Online publication date: 11-May-2024.
• (2024) A Systematic Review of Human Activity Recognition Based on Mobile Devices: Overview, Progress and Trends. IEEE Communications Surveys & Tutorials 26(2), 890–929. DOI: 10.1109/COMST.2024.3357591. Online publication date: Oct-2025.
• (2024) Privacy-Preserving Eye Movement Classification with Camera-Free, Non-Invasive EOG-EEG Glasses. 2024 IEEE 20th International Conference on Body Sensor Networks (BSN), 1–4. DOI: 10.1109/BSN63547.2024.10780485. Online publication date: 15-Oct-2024.
• (2024) Netravaad: Interactive Eye Based Communication System for People With Speech Issues. IEEE Access 12, 69838–69852. DOI: 10.1109/ACCESS.2024.3402334. Online publication date: 2024.
• (2024) Personalized Facial Gesture Recognition for Accessible Mobile Gaming. Computers Helping People with Special Needs, 120–127. DOI: 10.1007/978-3-031-62846-7_15. Online publication date: 5-Jul-2024.
• (2024) HoloAAC: A Mixed Reality AAC Application for People with Expressive Language Difficulties. Virtual, Augmented and Mixed Reality, 304–324. DOI: 10.1007/978-3-031-61047-9_20. Online publication date: 1-Jun-2024.
• (2023) Perspective and the Use of Eye Tracking in Human-Computer Interaction. Highlights in Science, Engineering and Technology 39, 525–528. DOI: 10.54097/hset.v39i.6581. Online publication date: 1-Apr-2023.
