
Enhancing Walk-Light Detector Usage for the Visually Impaired: A Comparison of VR Exploration and Verbal Instructions

Published: 22 October 2024

Abstract

People with visual impairments (PVI) increasingly rely on camera-enabled smartphone apps for tasks like photography, navigation, and text recognition. Despite the growing use of these applications, precise camera aiming remains a significant challenge. This study explores the impact of virtual reality (VR) exploration compared to traditional text/audio (TA) instructions in the context of learning to use a walk-light detector app at traffic intersections. We developed a VR exploration tool based on insights gathered from interviews with PVI. A user study was conducted, involving 13 PVI participants divided into two groups: VR exploration and TA instructions. Following indoor training using the respective approaches, participants from both groups used the walk-light detector app outdoors. According to the participants’ subjective feedback, a higher proportion of participants in the TA group found the training easier, potentially due to shortcomings in our VR protocol and differences between the real world and VR. However, compared to the TA group, more VR participants gained insights into walk-light detection and felt they would have been unable to use the detector without the VR training.



Published In

W4A '24: Proceedings of the 21st International Web for All Conference
May 2024, 220 pages
ISBN: 9798400710308
DOI: 10.1145/3677846
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).

        Publisher

        Association for Computing Machinery

        New York, NY, United States


        Author Tags

1. blind photography
2. virtual reality
3. walk light
4. navigation
5. blindness and low vision

        Qualifiers

        • Research-article

        Funding Sources

        • NIDILRR

        Conference

        W4A '24
        W4A '24: The 21st International Web for All Conference
        May 13 - 14, 2024
        Singapore, Singapore

        Acceptance Rates

        Overall Acceptance Rate 171 of 371 submissions, 46%
