
VIPBoard: Improving Screen-Reader Keyboard for Visually Impaired People with Character-Level Auto Correction

Published: 02 May 2019 Publication History

Abstract

Modern touchscreen keyboards all rely on word-level auto-correction to handle input errors. Unfortunately, visually impaired users are deprived of this benefit: a screen-reader keyboard offers only character-level input and provides no correction ability. In this paper, we present VIPBoard, a smart keyboard for visually impaired people that improves the underlying keyboard algorithm without altering the current input interaction. Upon each tap, VIPBoard predicts the probability of each key from both the touch location and a language model, and reads aloud the most likely key, saving calibration time when the touchdown point misses the target key. Meanwhile, the keyboard layout automatically scales according to the user's touch location, which makes selecting other keys easy. A user study shows that, compared with the current keyboard technique, VIPBoard reduces the touch error rate by 63.0% and increases text entry speed by 12.6%.
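The abstract's core mechanism — combining the touch location with a language model to pick the most likely key — can be sketched as a naive Bayesian combination. The key positions, Gaussian touch model, and probability table below are hypothetical illustrations, not the paper's actual models:

```python
import math

# Hypothetical key centers on one keyboard row (arbitrary screen units).
KEY_CENTERS = {"q": (0, 0), "w": (1, 0), "e": (2, 0), "r": (3, 0)}

def lm_prob(key, prefix):
    # Stand-in for a character-level language model P(key | typed prefix).
    # A real system would derive these from a corpus; this toy table
    # ignores the prefix entirely.
    table = {"q": 0.05, "w": 0.15, "e": 0.60, "r": 0.20}
    return table[key]

def touch_likelihood(touch, center, sigma=0.5):
    # Isotropic Gaussian likelihood of the touch point given a key center.
    dx, dy = touch[0] - center[0], touch[1] - center[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))

def predict_key(touch, prefix):
    # Bayesian combination: P(key | touch, prefix) ∝ P(touch | key) * P(key | prefix).
    scores = {k: touch_likelihood(touch, c) * lm_prob(k, prefix)
              for k, c in KEY_CENTERS.items()}
    total = sum(scores.values())
    return max(scores, key=scores.get), {k: v / total for k, v in scores.items()}

# A touch landing midway between 'w' and 'e' is resolved to 'e',
# because the language model strongly prefers 'e'.
best, probs = predict_key((1.5, 0.2), "th")
```

This illustrates why an ambiguous touchdown point need not trigger a correction dialog: the language model breaks the tie, which is the behavior the abstract credits with reducing calibration time.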

Supplementary Material

MP4 File (paper517.mp4)
Supplemental video
MP4 File (paper517p.mp4)
Preview video



      Published In

      CHI '19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
      May 2019
      9077 pages
      ISBN:9781450359702
      DOI:10.1145/3290605

Publisher

Association for Computing Machinery, New York, NY, United States



      Badges

      • Honorable Mention

      Author Tags

      1. auto-correction
      2. smartphone
      3. text entry
      4. visually impaired

      Qualifiers

      • Research-article

      Funding Sources

      • Tsinghua University Research Funding
      • Natural Science Foundation of China
      • National Key Research and Development Plan

      Conference

      CHI '19

      Acceptance Rates

      CHI '19 Paper Acceptance Rate 703 of 2,958 submissions, 24%;
      Overall Acceptance Rate 6,199 of 26,314 submissions, 24%



Cited By

• (2024) Accessible Gesture Typing on Smartphones for People with Low Vision. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1-11. DOI: 10.1145/3654777.3676447. Online publication date: 13-Oct-2024.
• (2024) Eye-Hand Typing: Eye Gaze Assisted Finger Typing via Bayesian Processes in AR. IEEE Transactions on Visualization and Computer Graphics 30, 5, 2496-2506. DOI: 10.1109/TVCG.2024.3372106. Online publication date: May-2024.
• (2023) A Review of Design and Evaluation Practices in Mobile Text Entry for Visually Impaired and Blind Persons. Multimodal Technologies and Interaction 7, 2, Article 22. DOI: 10.3390/mti7020022. Online publication date: 17-Feb-2023.
• (2023) From 2D to 3D. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, 1, 1-25. DOI: 10.1145/3580829. Online publication date: 28-Mar-2023.
• (2023) A Human-Computer Collaborative Editing Tool for Conceptual Diagrams. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-29. DOI: 10.1145/3544548.3580676. Online publication date: 19-Apr-2023.
• (2021) The practice of applying AI to benefit visually impaired people in China. Communications of the ACM 64, 11, 70-75. DOI: 10.1145/3481623. Online publication date: 25-Oct-2021.
• (2021) Facilitating Text Entry on Smartphones with QWERTY Keyboard for Users with Parkinson's Disease. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1-12. DOI: 10.1145/3411764.3445352. Online publication date: 6-May-2021.
• (2021) LightWrite: Teach Handwriting to The Visually Impaired with A Smartphone. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1-15. DOI: 10.1145/3411764.3445322. Online publication date: 6-May-2021.
• (2020) Challenges in Mobile Text Entry using Virtual Keyboards for Low-Vision Users. Proceedings of the 19th International Conference on Mobile and Ubiquitous Multimedia, 42-46. DOI: 10.1145/3428361.3428391. Online publication date: 22-Nov-2020.
• (2019) A contactless Morse code text input system using COTS wifi devices. Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, 328-331. DOI: 10.1145/3341162.3343850. Online publication date: 9-Sep-2019.
