DOI: 10.1145/3678957.3685714
Research article · Open access

Improving Usability of Data Charts in Multimodal Documents for Low Vision Users

Published: 04 November 2024

Abstract

Data chart visualizations and text are often paired in news articles, online blogs, and academic publications to present complex data. While chart visualizations offer graphical summaries of the data, the accompanying text provides essential context and explanation. Associating information from text and charts is straightforward for sighted users but presents significant challenges for individuals with low vision, especially on small-screen devices such as smartphones. Because low vision users depend on screen magnifier assistive technology, which enlarges content and therefore displays only a small portion of the screen at any instant, the visual nature of charts combined with the layout of the text makes it difficult for them to mentally associate chart data with the text and comprehend the content. To address this problem, we present a smartphone-based multimodal mixed-initiative interface that transforms static data charts and the accompanying text into an interactive slide show whose frames contain “magnified views” of relevant data point combinations. The interface also includes a narration component that delivers tailored information for each “magnified view”. The design of our interface was informed by a user study with 10 low vision participants, aimed at uncovering low vision interaction challenges and user-interface requirements for multimodal documents that integrate text and chart visualizations. Our interface was then evaluated in a subsequent study with 12 low vision participants, in which we observed significant improvements in chart usability compared to both status-quo screen magnifiers and a state-of-the-art solution.


      Published In

      ICMI '24: Proceedings of the 26th International Conference on Multimodal Interaction
      November 2024
      725 pages
      ISBN:9798400704628
      DOI:10.1145/3678957
      This work is licensed under a Creative Commons Attribution International 4.0 License.

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. Graph perception
      2. Graph usability
      3. Low vision
      4. Screen magnifier

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Conference

      ICMI '24
      ICMI '24: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION
      November 4 - 8, 2024
      San Jose, Costa Rica

      Acceptance Rates

      Overall Acceptance Rate 453 of 1,080 submissions, 42%
