Research Article
DOI: 10.1145/3597638.3608381

Beyond Audio Description: Exploring 360° Video Accessibility with Blind and Low Vision Users Through Collaborative Creation

Published: 22 October 2023

Abstract

While audio description (AD) is a standard method for making traditional videos more accessible to blind and low vision (BLV) users, we lack an understanding of how to make 360° videos accessible while preserving their immersive nature. Through individual interviews and collaborative design workshops, we explored ways to improve 360° video accessibility with immersion and engagement in mind. Our design workshops presented a unique opportunity for participants with diverse backgrounds to build on each other's personal and professional experiences and collaboratively develop accessible 360° video prototypes. Participants included both AD creators and users, with a focus on BLV AD creators as their perspectives are underrepresented in prior work. We found that immersive video accessibility went beyond an extension of traditional video accessibility techniques. Participants valued accurate vocabulary and different points of view for descriptions, preferred a variety of presentation locations for spatialized AD, appreciated sound effects for setting the mood and subtly guiding, and wished to engage multiple senses to boost engagement. We conclude with implications for immersive media accessibility and future research directions to support disabled people as creators of access technology.
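
A minimal sketch of what "spatialized AD" can mean in practice, assuming a web-based 360° player: the TypeScript below uses the standard Web Audio API to play a description clip from a chosen direction around the viewer rather than as a flat voice-over. The function name, clip URL, and azimuth-only placement are illustrative assumptions, not the authors' implementation.

// Sketch only (assumption): spatialize an audio description clip with the Web Audio API.
async function playSpatializedAD(
  ctx: AudioContext,
  clipUrl: string,      // hypothetical URL of a narration clip
  azimuthDeg: number    // direction of the described object; 0 = straight ahead
): Promise<void> {
  // Fetch and decode the narration audio.
  const response = await fetch(clipUrl);
  const buffer = await ctx.decodeAudioData(await response.arrayBuffer());

  const source = ctx.createBufferSource();
  source.buffer = buffer;

  // Render the description from the object's direction using HRTF panning.
  const panner = new PannerNode(ctx, { panningModel: "HRTF" });
  const rad = (azimuthDeg * Math.PI) / 180;
  panner.positionX.value = Math.sin(rad);   // +x is to the listener's right
  panner.positionZ.value = -Math.cos(rad);  // -z is in front of the listener

  source.connect(panner).connect(ctx.destination);
  source.start();
}

Because participants preferred a variety of presentation locations for spatialized AD, the azimuth in a real player would typically follow the described object or the viewer's preference rather than stay fixed.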

Supplemental Material

MP4 File
A video prototype of one of the scripts created during Design Workshop 1.
PDF File
The pre-written description presented during the interviews.

    Published In

    ASSETS '23: Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility
    October 2023
    1163 pages
    ISBN: 9798400702204
    DOI: 10.1145/3597638

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 22 October 2023

    Author Tags

    1. 360° videos
    2. audio description
    3. blind and low vision
    4. co-design
    5. design workshop
    6. video accessibility

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Acceptance Rates

    ASSETS '23 Paper Acceptance Rate: 55 of 182 submissions, 30%
    Overall Acceptance Rate: 436 of 1,556 submissions, 28%
