Creating Accessible Online Floor Plans for Visually Impaired Readers

Published: 15 October 2020

Abstract

We present a generic model for providing blind and severely vision-impaired readers with access to online information graphics. The model supports fully and semi-automatic transcription and allows the reader a choice of presentation medium. We evaluate the model through a case study: online house floor plans. We first conducted a formative user study with severely vision-impaired users to determine what information they would like from an online floor plan and how best to present the floor plan as a text-only description, as a tactile graphic, and on a touchscreen with audio feedback. We then built an automatic transcription tool using specialized graphics recognition algorithms. Finally, we measured the quality of the system's recognition and conducted a second user study to evaluate the usefulness of the accessible graphics produced by the tool in each of the three formats. The results generally support the design of the generic model and the usefulness of the tool we have produced. However, they also reveal the inability of current graphics recognition algorithms to handle unforeseen graphical conventions. This highlights the need for automatic transcription systems to return a level of confidence in the recognized components and to present this to the end user, so that they can place an appropriate level of trust in the transcription.



Published in ACM Transactions on Accessible Computing, Volume 13, Issue 4 (December 2020), 117 pages.
ISSN: 1936-7228; EISSN: 1936-7236; DOI: 10.1145/3430472

Copyright © 2020 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 1 March 2019
• Revised: 1 May 2020
• Accepted: 1 July 2020
• Published: 15 October 2020

Published in TACCESS Volume 13, Issue 4.


        Qualifiers

        • research-article
        • Research
        • Refereed
