DOI: 10.1145/3315002.3332447
Demonstration

Write-it-Yourself: Empowering Blind People to Independently Fill-out Paper Forms

Published: 13 May 2019

ABSTRACT

Filling out printed forms (e.g., checks) independently is currently impossible for blind people, since they cannot pinpoint the locations of the form fields and, quite often, cannot even figure out which fields (e.g., name) are present in the form. Hence, they always depend on sighted people to write on their behalf and to help them affix their signatures. Extant assistive technologies have focused exclusively on reading, with no support for writing. In this paper, we introduce WiYG, a Write-it-Yourself guide that directs a blind user to the different form fields, so that she can independently fill out these fields without seeking assistance from a sighted person. WiYG uses a pocket-sized, custom 3D-printed smartphone attachment and well-established computer vision algorithms to dynamically generate audio instructions that guide the user to the different form fields. A user study with 13 blind participants showed that with WiYG, users could correctly fill out form fields at the right locations with an accuracy as high as 89.5%.
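The abstract states only that WiYG pairs a 3D-printed smartphone attachment with well-established computer vision algorithms to produce audio guidance. Purely as an illustrative sketch (not the authors' implementation), the snippet below shows one plausible way such guidance could be generated with OpenCV: detect an ArUco fiducial marker standing in for the writing tip, compare its position against a known form-field location, and emit a coarse spoken direction. The FIELDS table, the coordinates in it, and the guidance_for_frame helper are all hypothetical.

```python
# Illustrative sketch only -- not WiYG's actual code. Assumes the camera frame
# has already been rectified into the form template's coordinates (e.g., via a
# homography from the detected page outline) and that an ArUco marker is placed
# near the writing tip. Requires opencv-contrib-python >= 4.7 for ArucoDetector.
import cv2

# Hypothetical form-field locations (x, y) in template pixels.
FIELDS = {"name": (420, 310), "signature": (640, 880)}

ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
DETECTOR = cv2.aruco.ArucoDetector(ARUCO_DICT, cv2.aruco.DetectorParameters())

def guidance_for_frame(frame, target_field="name", tolerance=20):
    """Return a coarse audio instruction guiding the tip toward target_field."""
    corners, ids, _ = DETECTOR.detectMarkers(frame)
    if ids is None:
        return "Marker not visible; adjust the camera or the page."

    tip_x, tip_y = corners[0][0].mean(axis=0)   # marker center ~ writing tip
    target_x, target_y = FIELDS[target_field]
    dx, dy = target_x - tip_x, target_y - tip_y

    if abs(dx) <= tolerance and abs(dy) <= tolerance:
        return f"You are on the {target_field} field; start writing."
    horizontal = "right" if dx > 0 else "left"
    vertical = "down" if dy > 0 else "up"
    return f"Move {horizontal} and {vertical} to reach the {target_field} field."
```

In a real system of this kind, the returned string would be passed to a text-to-speech engine and recomputed on every camera frame as the user's hand moves.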


Published in

W4A '19: Proceedings of the 16th International Web for All Conference
May 2019, 224 pages
ISBN: 9781450367165
DOI: 10.1145/3315002

      Copyright © 2019 Owner/Author

      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 13 May 2019


      Qualifiers

      • demonstration
      • Research
      • Refereed limited

      Acceptance Rates

W4A '19 Paper Acceptance Rate: 18 of 49 submissions (37%). Overall Acceptance Rate: 171 of 371 submissions (46%).
